Topic: Physics
News title: Researchers reveal secret of ultra-slow motion of pine cones
Citation: Feilong Zhang et al, Unperceivable motion mimicking hygroscopic geometric reshaping of pine cones, Nature Materials (2022). DOI: 10.1038/s41563-022-01391-2
Paper URL: https://dx.doi.org/10.1038/s41563-022-01391-2
News URL: https://phys.org/news/2022-11-reveal-secret-ultra-slow-motion-cones.html

Abstract

The hygroscopic deformation of pine cones, featured by the opening and closing of their scales depending on the environmental humidity, is a well-known stimuli-responsive model system for artificial actuators. However, it has not been noted that the deformation of pine cones is an ultra-slow process. Here, we reveal that vascular bundles with unique parallelly arranged spring/square microtubular heterostructures dominate the hygroscopic movement, characterized as ultra-slow motion together with the outer sclereids. The spring microtubes give a much larger hygroscopic deformation than the square microtubes along the longitudinal axis, which bends the vascular bundles and consequently drives the scales to move. The outer sclereids, with good water retention, enable the vascular-bundle-triggered deformation to proceed ultra-slowly. Drawing inspiration from this, we developed soft actuators enabling controllable yet unperceivable motion. The motion velocity is almost two orders of magnitude lower than that of same-class actuators reported to date, which makes the as-developed soft actuators applicable in camouflage and reconnaissance.

Main

Stimuli-responsive deformation is an important seed dispersal strategy for many natural organisms 1,2,3,4,5,6, and clarifying the underlying mechanisms has become increasingly non-trivial for designing stimuli-responsive materials 7,8,9,10,11,12,13,14.
The pine cone, a typical natural organism with hygroscopic deformation character, has gained a good reputation as a bionic model for constructing artificial actuators 15,16,17,18,19,20. Specifically, the pine cone is capable of humidity-responsive geometric reshaping, characterized by reversible opening and closing of the scales by alternately decreasing and increasing the environmental humidity (Fig. 1a and Supplementary Fig. 1). However, it has not been noted that the hygroscopic deformation of the pine cone is an ultra-slow process, with a velocity (normalized by the thickness) 21 much lower than that of other hygroscopic plant organisms such as wheat awn and seed capsules. Notably, such ultra-slow motion also proceeds rather steadily with a large amplitude (Fig. 1b and Supplementary Table 1) 2,5,22,23. Therefore, the pine cone opens only in a long-term dry environment, which is beneficial to the long-distance dispersal of seeds from the parent tree by wind and animals 24.

Fig. 1: The typical slow hygroscopic geometric reshaping of a pine cone and its hierarchical components of scale, skin, skeleton and VB. a, Opening process of a waterlogged pine cone. The waterlogged pine cone gradually opened over time at 35 °C with an RH of 30–40% and took over 24 hours to reach the fully open state. b, Summary of the deformation velocity and capability of various hygroscopic plant organisms, where the pine cone scale (stars) exhibits the smallest thickness-normalized velocity (brown symbols) while still having quite a large deformability (green symbols). c, Optical images of front and side views of the samples: the whole scale, skin, skeleton and VB. d, The humidity-responsive reversible bending of the whole scale, skin, skeleton and VB. Top, optical images showing the dynamic processes.
Bottom, the deformation of wet samples in air versus time, characterized by the change of the distance l between the tip (point P1) and the tail (point O) of the sample and the tilt angle θ of the tip–tail line (brown line) relative to the horizontal axis (black dashed arrow; upper left). All samples are almost straight, with the largest l0 and θ (90°), in the wet state. The solid line in the lower plot represents mean values, and the shaded region represents the standard deviation from three repeated experiments. e, The deformation capability, characterized by Δl/l0 and Δθ, and the velocity, characterized by (Δl/l0)/Δt and Δθ/Δt, of the scale, skin, skeleton and VB. The data in e are presented as mean ± standard deviation (s.d.) of n = 3 measurements.

Numerous efforts have been made to explore the bending mechanism of the pine cone scale 1,25,26,27,28,29,30. As reported by Shaw 25, the opening of pine cones is caused by a larger shrinkage of the bast tissues (sclereids) than of the vascular tissues in the scale. Further, the difference in hygroscopic expansion was found to be controlled by the microfibril orientation in these two tissue cells 1,26,27. The microfibrils in vascular tissues are oriented towards the long axis of the cell and constrain the cell deformation, while the microfibrils in sclereid cells are perpendicular to the cell's long axis, which endows the cell with a larger expansion rate and lower stiffness along this axis. For a long time, the hygroscopic movement of the pine cone has been ascribed to the uneven hygroscopic expansion of vascular bundles (VBs) and sclereids in the scale 1,30. Recently, some researchers observed that the VBs and sclereids can deform separately 28,29; however, the bending mechanism of the VBs and their contribution to the bending of the whole scale remain unexplored.
Here, we report that the hygroscopic deformation of the pine cone is an ultra-slow process dominated by VBs with a unique spring/square (○/□) microtube heterostructure. As the environmental humidity changes, the spring microtubes give a larger deformation than do the square microtubes, which drives the hygroscopic deformation of the VBs. With the VBs embedded in sclereids with good water retention, the pine cone scale exhibits a deformation velocity lower than that of other plant tissues by at least one order of magnitude. Drawing inspiration, we fabricated actuators whose motion is almost unperceivable owing to the rather low motion velocity of 10⁻⁵ BL s⁻¹, where BL is body length, which is two orders of magnitude lower than that of other reported humidity-driven actuators (around 10⁻³ to 10⁻¹ BL s⁻¹) 31,32,33,34,35,36,37,38. Moreover, such ultra-slow stimuli-responsive motion proceeds rather steadily and with a large amplitude, which offers a solution for innovative actuators with unperceivable yet controllable motion for camouflage and reconnaissance strategies 39,40,41.

Hygroscopic geometric reshaping

To explore the crucial tissue that triggers the ultra-slow hygroscopic geometric reshaping of the pine cone, in situ observation of different tissue units during the reversible movement was conducted. As shown in Fig. 1a, the scales of a waterlogged pine cone (Pinus elliottii) bend away from the centre as the water evaporates, so that the pine cone gradually opens from the bottom to the top. This typical geometric reshaping of the pine cone takes a rather long time, around 24 h, to reach the fully open state at 35 °C with a relative humidity (RH) of 30–40%. When the deformation velocity is normalized by thickness, the scales of the pine cone give the lowest value among plant organisms capable of hygroscopic deformation, as summarized in Fig. 1b.
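The thickness-normalized velocity used for the comparison in Fig. 1b is a simple ratio of deformation rate to sample thickness. A minimal sketch of the bookkeeping, where the function name, the 80° angular sweep, the awn timescale and both thicknesses are illustrative assumptions, not values taken from the paper:

```python
def thickness_normalized_velocity(delta_theta_deg, delta_t_s, thickness_mm):
    """Angular deformation velocity normalized by sample thickness
    (deg s^-1 mm^-1), the kind of metric used to compare organisms
    of very different sizes in Fig. 1b."""
    return (delta_theta_deg / delta_t_s) / thickness_mm

# Hypothetical numbers: a pine cone scale sweeping ~80 deg over the
# ~24 h opening reported in the text, with an assumed 1.2 mm thickness.
scale = thickness_normalized_velocity(80.0, 24 * 3600, 1.2)

# A faster organism (e.g. a wheat awn, assumed here to finish a similar
# sweep in ~10 min at 0.3 mm thickness) scores orders of magnitude higher.
awn = thickness_normalized_velocity(80.0, 10 * 60, 0.3)
```

Even with generous assumptions, the scale's value comes out several hundred times smaller than the awn's, which is the qualitative point of Fig. 1b.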
The scales on the cone are independent from each other, and a single scale shows a typical lamellar structure (Fig. 1c, scale). Isolated from the bulk pine cone, each scale is able to perform reversible hygroscopic movement (Fig. 1d, scale, and Supplementary Video 1), indicating that the geometric reshaping of the pine cone originates from the scales. To quantify the bending movement, two parameters are introduced: the distance between the tip (point P1, Fig. 1d) and the tail (point O, Fig. 1d) of the scale (brown line, defined as l, Fig. 1d), and the tilt angle of the tip–tail line relative to the horizontal axis (defined as θ, Fig. 1d). Note that the measured length l is not a full description of the movement but reflects the combined result of bending and expansion/shrinkage. In the wet state, the scale is almost straight, with the largest l (denoted as l0) and θ (90°). When exposed to dry air, the scale gradually bends away from its original position (point P2, Fig. 1d), as characterized by a decreased θ and l/l0 (Fig. 1d, scale). The motion of the scale proceeds rather steadily, as indicated by the smooth, monotonically decreasing curves (labelled with solid circles) shown in Fig. 1d (bottom). The hygroscopic deformation of the pine cone scales is thus featured as ultra-slow motion that remains steady over a large amplitude. As has been reported, the scale is mainly composed of two tissues, inner VBs (assembled as the skeleton) and outer sclereid tissue (figuratively called 'skin' hereafter; Extended Data Fig. 1). Here, the bending of the waterlogged skin in air fluctuates for a long time before reaching a steady state, as indicated by the curves (labelled with solid rhombuses) in Fig. 1d, Extended Data Fig. 2 and Supplementary Video 2. After removing the skin from the scale, the skeleton was clearly seen to be composed of a set of VBs (Fig. 1c, skeleton).
The skeleton itself retains the ability of reversible and steady hygroscopic movement (Fig. 1d, labelled with solid squares, and Supplementary Video 3), with an even larger deformation than that of the full scale. Furthermore, we demonstrated that a single VB also enables reversible and steady geometric reshaping (Fig. 1d, labelled with solid triangles, and Supplementary Video 4). To evaluate the contribution of different parts of the scale to its hygroscopic motion, the deformations of the whole scale, the skin, the skeleton and a single VB were characterized by measuring deformability with Δl/l0 and Δθ, and velocity with (Δl/l0)/Δt and Δθ/Δt (Fig. 1e), where Δl and Δθ indicate the changes in l and θ, and Δt the elapsed time, from the wet state to the dry state. The skeleton and the single VB show much larger deformability than the skin and the scale, as confirmed by the larger Δl/l0 and Δθ values (Fig. 1e, left), which indicates a major contribution of the VBs to the hygroscopic motion of the scales. For the motion velocity (Fig. 1e, right), both the skin and the whole scale have a much lower value than the skeleton and VBs (Extended Data Fig. 2 and Supplementary Videos 1–4). Similarly, the scale and the skin exhibit a higher water content and slower dehydration velocity than the skeleton and VBs (Extended Data Fig. 1c), indicating the great contribution of the skin to the low motion velocity of the scales. The VBs enable hygroscopic geometric reshaping with an even larger deformability and motion velocity than those of the scale, indicating their crucial role in driving the humidity-responsive motion of pine cone scales, while the skin, with its ultra-low deformation capability, decreases the overall deformation speed.

Microstructure of the VB

The microstructure of a single VB (Fig. 2a) was characterized using scanning electron microscopy (SEM).
As shown in the cross-sectional SEM image, the VB shows a typical heterostructure comprising two kinds of tube-like constituent cell walls with a clear boundary (indicated by the yellow dotted line, Fig. 2b). To make the features intuitively understandable, the term 'microtube' is used to refer to the constituent cell walls. The microtubes at the dorsal side are nearly round with spring-like structures (Fig. 2c and Extended Data Fig. 3a–e), while the microtubes at the ventral side are rectangular with a smooth inner wall (Fig. 2d). To conceptualize their features from the perspective of bionics and materials design, in this work we call these the 'spring microtube' (labelled with ○) and the 'square microtube' (labelled with □), respectively. These two microtubes are of comparable size (Fig. 2e). To further investigate their microstructures, in situ, non-destructive characterization using X-ray computed tomography (XCT) was conducted. As shown in the reconstructed three-dimensional (3D) XCT image (Fig. 2f), the VB is composed of two kinds of microtubes (those with a spring-like structure and those with a square-tube-like structure; Fig. 2g,h) in a parallel arrangement, verifying its heterostructure characteristic. The exposed inner walls at the ventral side are much smoother than those at the dorsal side. The longitudinal cross-sectional SEM images of the microtubes show a discontinuous periodic circular structure of the spring microtube, with the microfibrils oriented along the spring winding direction (Fig. 2g and Extended Data Fig. 3c–f), and a continuous, smooth structure of the square microtube, with the microfibrils oriented towards the long axis of the cell (Fig. 2h and Extended Data Fig. 3g–i). Uniaxial tension tests (Fig. 2i) indicate that the spring microtube exhibits lower tensile stiffness (0.31 ± 0.24 GPa) than the square microtube (3.67 ± 1.27 GPa; Fig. 2j).
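The stiffness contrast between the two microtube types is consistent with elementary spring mechanics: a helical wall loaded axially deforms mainly in torsion and is far more compliant than a straight wall loaded in tension. A rough order-of-magnitude sketch using the classic textbook formulas, where every dimension and modulus is an assumed placeholder rather than a measured value from the paper:

```python
import math

def helical_spring_rate(G, d, D, n_coils):
    """Axial stiffness k = G d^4 / (8 D^3 n) of a close-coiled helical
    spring (classic spring-design formula), in N/m."""
    return G * d**4 / (8 * D**3 * n_coils)

def straight_tube_rate(E, area, length):
    """Axial stiffness k = E A / L of a straight-walled member, in N/m."""
    return E * area / length

# All numbers hypothetical, chosen only to show the qualitative contrast
# for micrometre-scale elements of the same wall material.
E = 4e9                   # Young's modulus of the cell wall, Pa (assumed)
G = E / (2 * (1 + 0.3))   # shear modulus for an assumed Poisson ratio of 0.3
d = 1e-6                  # "wire" (wall) diameter of the helix, m
D = 10e-6                 # mean coil diameter, m
L = 200e-6                # element length, m
n_coils = 50
area = math.pi * d**2 / 4 * 20   # straight-wall cross-section, m^2 (assumed)

k_spring = helical_spring_rate(G, d, D, n_coils)
k_tube = straight_tube_rate(E, area, L)
# The helical geometry is orders of magnitude more compliant axially.
```

The point is purely structural: with identical wall chemistry, coiling alone produces the large axial-compliance gap that the tension tests measure.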
In summary, the VB is a heterostructured bundle comprising two kinds of microtubes with a similar chemical nature 1,42,43: spring microtubes at the dorsal side and square microtubes at the ventral side. We therefore speculate that the hygroscopic geometric reshaping of the VB is triggered by the heterostructure of the spring/square (○/□) microtubes.

Fig. 2: Microstructures of a single VB. a, Optical image of a single VB. The black line and triangles indicate the position for further observation. b, The cross-sectional SEM image shows that the VB (outlined with a pink dashed line) is an aggregate of hollow microtubes, in which the dorsal side comprises approximately round microtubes and the ventral side relatively regular rectangular microtubes. The yellow dotted line is the dividing line between the two kinds of microtubes. c,d, SEM images at increased magnification show that the inner walls of the round and rectangular microtubes are spring-like and smooth, respectively, so the two types were defined as the spring microtube (c) and square microtube (d). e, Schematic (left) and stacked bar chart of geometric parameters (right) of the spring microtube (labelled with ○) and square microtube (labelled with □) obtained from XCT. These two kinds of microtubes are of comparable size. R is the radius of the spring microtubes; d is the wire diameter of the spring wire; a1 is the half-width; a2 is the wall thickness along the width direction; b1 is the half-length; and b2 is the wall thickness along the length direction. The bars without shading indicate the mean values of R, a1 and b1, respectively; and the bars with shading indicate the mean values of d, a2 and b2, respectively. The error bars represent s.d. f–h, The inner morphologies of the spring and square microtubes.
The 3D reconstructed partial VB obtained by XCT (f) shows that the inner wall of the microtubes at the ventral side is much smoother than that at the dorsal side, as divided by the yellow dotted line. The longitudinal cross-sectional SEM images of the two microtubes confirmed their spring (g) and smooth (h) structures. i,j, Representative stress–strain curves (i) and the calculated tensile stiffness (j) of the spring and square microtubes. The square microtubes show a higher stiffness. The data in j are presented as mean ± s.d. of n = 3 measurements.

Underlying mechanism

To further explore the function of the spring and square microtubes in the hygroscopic geometric reshaping of the VB, in situ observation of the hygroscopic motion was conducted for a single spring microtube and a single square microtube (both mechanically extracted from the VB) using environmental SEM (ESEM) under controllable RH (Fig. 3a,b and Supplementary Videos 5 and 6). With increasing RH, both the spring and square microtubes expand slightly along their short axis (horizontally directed triangles, Fig. 3a,b) with an almost equal expansion rate (Fig. 3d). However, the longitudinal expansion of the two microtubes is remarkably different. The spring microtube extends (Fig. 3a, from the solid line to the dashed line, as indicated by the vertically directed triangle) with an expansion rate of around 9% (Fig. 3d and Extended Data Fig. 4a). Simultaneously, the coil pitch of the spring microtube increases when immersed in water from the dry state, as reflected by the in situ XCT results (Fig. 3e). By comparison, the square microtube shows very little deformation (Fig. 3b and Extended Data Fig. 4b), with an expansion rate of around 1% (Fig. 3d). Furthermore, we mechanically separated a side-by-side spring/square microtube couple from the VB and observed its hygroscopic motion (Fig. 3c and Supplementary Video 7).
With increasing RH, the length of the spring microtube increases, as indicated by the vertically directed arrow (from the solid line to the dashed line, Fig. 3c). At the same time, the inflection angle α at a certain point decreases to α′, as indicated by the horizontally directed triangle (from the solid tangent line to the dashed tangent line, Fig. 3c), suggesting the bending of the microtube couple towards the square microtube side. Conversely, when RH decreases, the microtube couple bends towards the spring microtube side, as shown by the decreased length of the spring microtube and the increased inflection angle. Dawson, Vincent and Rocca 1 revealed that cells in sclereids have differently oriented cellulose microfibrils relative to those in sclerenchyma fibres, which results in different hygroscopic expansions. In our observations, the spring microtubes and square microtubes are both in the sclerenchyma fibre (that is, the VB), and the microfibrils in the spring microtubes are assembled into a spring-like structure at the micro-scale. These results provide deep insight into the bending mechanism of pine cone scales. According to the above results, a simplified model of one-dimensional heterostructured spring/square microtubes was used to address the hygroscopic geometric reshaping of the VB (Fig. 3f). The model is merely heterogeneous in structure, independent of the chemical nature of the materials. With increasing RH, the spring microtube stretches and its coil pitch increases, while the square microtube, with its dense structure, shows only very limited change. To accommodate the difference in the length of the two microtubes, the couple bends towards the square microtube side. Conversely, with decreasing RH, the couple bends towards the spring microtube side. Here, the spring structure endows the couple with good environmental adaptivity for steady motion. As shown in Fig.
3g,h, unlike the continuous stress field on the square tube, the spring divides the external stress into numerous subunits, which helps minimize the fluctuation caused by sudden, local changes.

Fig. 3: Mechanisms behind the reversible hygroscopic geometric reshaping of the VB. a–c, ESEM images of the spring microtube (a), square microtube (b) and spring/square (○/□) microtube couple (c) at low, high and low RH in turn. The solid and dashed lines indicate the borders of the microtubes at low and high RH, respectively. The green translucent solid circles in b indicate reference points used to evaluate the longitudinal expansion of the square microtube. In a and b, the spring microtube obviously stretches along its long axis from low to high RH and shortens from high to low RH, while the square microtube changes only slightly. Both the spring and square microtubes change slightly along the short axis with the variation in RH. In c, the microtube couple bends towards the square microtube side with the increase in RH, as indicated by the increased length of the spring microtube (from the solid line to the dashed line) and the decreased inflection angle (from the solid tangent line α to the dashed tangent line α′). Conversely, the microtube couple bends towards the spring microtube side with a decrease in RH. d, Hygroscopic expansion rates of the spring and square microtubes along their long and short axes obtained from ESEM. The spring microtubes show an obviously larger longitudinal expansion rate than the square microtubes. The data in d are presented as mean ± s.d. of n = 3 measurements. e, In situ XCT images of the longitudinal cross-section of the spring microtube in dry (RH = 23%) and wet (waterlogged) states. The coil pitch of the spring structure in the wet state is larger than that in the dry state, indicating the expansion of the spring microtube.
f, Schematic of the simplified one-dimensional heterostructured spring/square (○/□) microtube model for the hygroscopic geometric reshaping mechanism. The spring microtube shows a larger deformation as RH changes owing to its stretchable spring structure, while the dense square microtube remains inert. The asymmetric hygroscopic deformation leads to the reversible bending of the microtube couple. g,h, Numerical simulations of the spring/square (○/□) couples in a stress field with different directions. The left and right images in g and h show the front and side views of the couples, respectively. The spring tube divides the stress field into many parts, leading to effective dissipation of the stress concentration. Arrows indicate the field direction.

Biomimetic analogues

As a proof of concept, we constructed a couple of one-dimensional heterostructured spring/square (○/□) pillars by 3D printing. As shown in Fig. 4a,b, the left side of the pillar couple is a round pillar with an elastomer spring (TangoPlus FLX930) embedded in the hygroscopic material Fullcure705, and the right side is a square pillar with an elastomer tube filled with Fullcure705. The filled hygroscopic material works like the skin in the scale, increasing the water diffusion path and decreasing the expansion velocity. With the same materials, the 3D-printed spring and square pillars exhibit different hygroscopic expansion properties, similar to the spring and square microtubes in the VB. The spring pillar exhibits a relatively lower Young's modulus (270 ± 8 kPa) and larger expansion rate (14.0 ± 0.2%), while the square pillar exhibits a higher Young's modulus (382 ± 10 kPa) and smaller expansion rate (0.9 ± 0.6%; Fig. 4c,d and Supplementary Fig. 2). The pillar couple can either bend underwater (wet state) or straighten in air (dry state, RH = 50%) as a consequence of the expansion/shrinkage of the spring pillar (Fig.
4e, top, and Supplementary Video 8). By comparison, a couple of parallel circular/square pillars without a spring structure, used as the control, stays straight regardless of changes in RH (Fig. 4e, bottom, and Supplementary Video 9). Therefore, the heterostructure plays a key role in realizing reversible bending. Also, the degree of bending of the artificial pillar couple can be finely regulated by simply adjusting the thickness ratio h○/□ of the spring pillar to the square pillar. The pillar couples with larger h○/□ show larger curvature r⁻¹ after being immersed in water for 90 min (Fig. 4f and Extended Data Fig. 5a). According to the theory of bending of a bi-metal strip 44,45, the curvature change Δr⁻¹ of a one-dimensional heterogeneous pillar couple is mainly determined by the hygroscopic expansion rates, moduli and thicknesses of the two pillars (detailed equation in the Methods). With the measured hygroscopic expansion rate and Young's modulus (Extended Data Fig. 5b–d), we drew the theoretical Δr⁻¹ as a function of the thickness ratio h○/□ and total thickness h○+□, where the theoretical results for the samples in Fig. 4f are highlighted with solid star symbols (Fig. 4g). The experimental results agree well with the theoretical values (Fig. 4h). Additionally, the experimental Δr⁻¹ values of different parts of a VB also compare well with the theoretical values (Extended Data Fig. 6). These results verify the proposed model in Fig. 3f.

Fig. 4: Artificial actuators mimicking the structure of the VB via 3D printing. a,b, Schematic (a) and optical images (b) of a VB-mimic spring/square (○/□) pillar couple from the top and side views. c,d, Young's modulus (c) and longitudinal hygroscopic expansion rate (d) of 3D-printed spring and square pillars.
e, The pillar couple bends in the wet state and recovers to straight in the dry state (top), while the control sample (a circular/square pillar couple without the spring structure) remains still in the various states (bottom). f, Optical images of the pillar couples with different thickness ratios h○/□ of the spring pillar to the square pillar. The pillar couples with larger h○/□ show larger bending after being immersed in water for 90 min. g, The theoretical curvature change Δr⁻¹ as a function of h○/□ and h○+□. The stars represent the theoretical results for the samples in f. h, The experimental Δr⁻¹ (solid circles) of the pillar couples in f agrees well with the theoretical values (stars), indicating the rationality of the proposed model in Fig. 3f. The data in c, d and h are presented as mean ± s.d. of n = 3 independent measurements.

Various actuators have been made using stimuli-responsive materials 14,46. The focus has been on actuators with high motion velocity 31,32,33,34,35,36,37, which, however, always causes a disturbance to the surrounding environment and limits their use in stealth situations 47,48. As shown by the simulation results (Extended Data Fig. 7), plates with a higher motion velocity disturb the surrounding water over a larger range and to a greater extent. By contrast, a plate with extremely low motion velocity causes no disturbance to the surrounding water during the motion process, during which the whole system stays infinitely close to an equilibrium state and the motion is unperceivable. In our system, the bending movement of the spring/square (○/□) pillars can be regulated to be very slow underwater, which makes it possible to fabricate actuators with unperceivable motion. As a proof of concept, we fabricated a table with a tabletop that moves reversibly underwater and in air to transport an object on it, using spring/square (○/□) pillars as legs (Extended Data Fig. 8).
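The bi-metal-strip relation invoked for the pillar couples can be sketched numerically with Timoshenko's classic bimetal formula. The moduli and expansion rates below are the measured pillar values quoted in the text; the equal 2 mm layer thicknesses are an assumption, and the exact equation the authors use is given in their Methods, so this is only an illustrative stand-in:

```python
def bilayer_curvature(eps1, eps2, E1, E2, t1, t2):
    """Curvature (1/m) of a two-layer strip from Timoshenko's bi-metal
    formula, driven by the mismatch strain eps1 - eps2. Layer 1 plays
    the role of the expanding spring pillar, layer 2 the square pillar."""
    m = t1 / t2          # thickness ratio
    n = E1 / E2          # modulus ratio
    h = t1 + t2          # total thickness
    d_eps = eps1 - eps2
    return (6 * d_eps * (1 + m) ** 2) / (
        h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))
    )

# Expansion rates (14.0% vs 0.9%) and moduli (270 vs 382 kPa) from the
# text; 2 mm per layer is a hypothetical geometry.
kappa = bilayer_curvature(eps1=0.140, eps2=0.009,
                          E1=270e3, E2=382e3,
                          t1=2e-3, t2=2e-3)
```

A quick sanity check: for equal layers (m = n = 1) the formula reduces to the textbook result 3Δε/(2h), and with the pillar values above it predicts a bend radius of roughly 2 cm, a reasonable scale for a few-millimetre strip.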
To investigate the motion stability of the moving table and its disturbance to the surrounding water, we put a ball on the table and hung another ball above the table (Fig. 5a). The relative motion between the ball on the table and the table was used to evaluate the moving stability of the table. The movement of the hanging ball was used to evaluate the disturbance caused by the moving table to the surrounding water. When immersed underwater, the table with spring/square (○/□) pillars as legs moved spontaneously with a low velocity of 5 × 10⁻⁶ m s⁻¹. The ball on the tabletop moved smoothly along with the moving tabletop without any relative displacement, and the hanging ball kept still in both the horizontal and vertical directions (Fig. 5b, top; Fig. 5d; and Supplementary Video 10), indicating stable motion of the table and no disturbance to the surrounding water, as verified by the simulated result (Fig. 5c, top). As a control, when the table moved quickly (0.15 m s⁻¹), the ball on the fast-moving table rolled down the tabletop, and the hanging ball rocked back and forth (Fig. 5b, bottom; Fig. 5d; and Supplementary Video 11), indicating unstable motion of the table and a large disturbance to the surrounding water, as verified by the simulated result (Fig. 5c, bottom). Thus, a low velocity is indispensable for unperceivable actuators. We further summarized the motion velocities of previously reported humidity-driven soft actuators normalized by their body length (BL; Fig. 5e and Supplementary Table 2) 31,32,33,34,35,36,37,38,49,50,51,52,53,54,55,56,57,58. The motion velocity of the actuators presented in this work (red star), especially in humid air (Supplementary Fig. 3), is almost two orders of magnitude lower than those of other actuators. Thus, with the artificial pillar couples as the holder, detectors such as a camera can achieve a largely increased monitoring scope with unperceivable motion.
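Body-length normalization is simply velocity divided by BL. A minimal sketch using the two table velocities quoted above, where the 0.1 m body length is a hypothetical value for illustration (the paper does not state the table's dimensions here):

```python
def bl_normalized(velocity_m_s, body_length_m):
    """Motion velocity expressed in body lengths per second (BL s^-1),
    the size-independent metric used to compare actuators in Fig. 5e."""
    return velocity_m_s / body_length_m

BODY_LENGTH = 0.1  # m, hypothetical

slow = bl_normalized(5e-6, BODY_LENGTH)   # spontaneous hygroscopic motion
fast = bl_normalized(0.15, BODY_LENGTH)   # table pulled quickly as control
```

The slow and fast cases differ by more than four orders of magnitude in BL s⁻¹, which is why only the former leaves the hanging ball undisturbed.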
As a proof of concept, with a small lamp representing a camera lens positioned on top of a 3D-printed spring/square (○/□) pillar couple, the light irradiation scope largely increased with the slow bending of the pillar (Fig. 5f and Supplementary Fig. 4). In summary, using a spring/square (○/□) heterostructure and selected 3D-printed materials, we fabricated a movable table and devices with unperceivable motion character.

Fig. 5: Pine-cone-inspired actuators with spring/square (○/□) pillar couples enable unperceivable motion. a, Schematic of a moving table using spring/square (○/□) pillar couples as legs, with one free ball on the table and another hanging above it, by which the stability of the table movement and the disturbance to the surrounding water were evaluated. b, Photographs of the moving table at low velocity (spontaneous motion when immersed underwater) and high velocity (pulled quickly with external force); in the former, the ball on the table stays in the same place, without observable rotation. c, Numerical simulations of the fluid flow caused by the plate at velocities of 5 × 10⁻⁶ and 0.15 m s⁻¹, corresponding to the slow and fast motion in b, respectively. d, The relative displacement of the ball on the table (solid circles) and the displacement of the hanging ball (hollow circles) in the horizontal (top) and vertical (bottom) directions during the movement of the table. e, Motion velocities (BL s⁻¹) of humidity-responsive soft actuators versus body length (BL) from the literature 31,32,33,34,35,36,37,38,49,50,51,52,53,54,55,56,57,58; the labels are the reference numbers. Supplementary Table 2 contains detailed data. The data for the movable table in this work are marked with a red star. f, A proof of concept for unperceivable, moving detectors with a largely increased monitoring scope.
Left, schematic of the moving lamp with an artificial spring/square (○/□) pillar couple as the holder; right, selected snapshots showing that the scanning scope of the lamp beam was largely increased by virtue of the moving holder.

Moreover, by taking advantage of the editability of such a one-dimensional heterostructured spring/square (○/□) pillar couple and its compatibility with the 3D-printing technique, scale-up (Supplementary Fig. 5) and various elaborate shape transformations can be realized in a controllable manner by merely regulating the structures (Extended Data Fig. 9). For example, a cornucopia-like shape with gradient curvature was obtained by a pillar couple with a gradient thickness of the square pillar (Extended Data Fig. 9a); the bending direction and the number of reversals of the pillar couple can be controlled by the alternate distribution of the spring and square pillars (Extended Data Fig. 9b). The above 3D-printed units demonstrate the promising applicability of the one-dimensional heterostructured spring/square (○/□) microtube model to more complex structure design (Extended Data Fig. 9c,d) for biomimetic actuators.

Conclusions

We have revealed that the hygroscopic geometric reshaping of a pine cone is dominated by heterostructured spring/square (○/□) microtubes in the VBs. The spring microtubes at the dorsal side show a larger deformation along the long axis than the square microtubes as the environmental humidity changes, and the spring structure endows the microtube couple with good environmental adaptivity for steady motion, which leads to the reversible bending of the VB and dominates the closing and opening of the pine cone scales. The outer sclereids, with good water retention, show very limited bending, which leads to the VB-triggered deformation proceeding ultra-slowly.
More importantly, the spring/square (○/□) tubular heterostructure is also shared by the VBs of other pine cones (Extended Data Fig. 10), indicating the generality of the VB-dominated hygroscopic geometric reshaping. Combining the heterostructure and the delicate material design, we fabricated soft actuators that enable unperceivable motion in a controllable manner, owing to the rather low motion velocity, two orders of magnitude lower than those of reported humidity-driven actuators. The motion of such actuators has a negligible effect on the surrounding environment. This study offers deep insight not only into understanding the well-known hygroscopic deformation of the pine cone but also into developing stimuli-responsive actuators that enable motion to proceed extremely slowly and unperceivably. We envision such actuators with ultra-slow motion being applicable in various camouflage and reconnaissance scenarios. Methods Sample collection and preparation The pine cones of P. elliottii, Pinus sylvestris var. mongolica Litv., Pinus armandii Franch., Picea asperata Mast., Pinus thunbergii Parl. and Pinus massoniana Lamb. were purchased from Taobao. The pine cone of Pinus tabuliformis Carr. was obtained from the Summer Palace in China. The scales were stripped mechanically from the middle part of the pine cone. The VB skeletons were obtained by scraping away the sclereid tissue from the scale. The single VBs were separated from the VB skeleton. The skin was obtained by removing the skeleton. Observation of hygroscopic geometric reshaping We used a camera (EOS 80D, Canon) to record the morphology changes of the pine cone and its hierarchical components (the scale, skin, skeleton and VB; Fig. 1d). The motion processes of the dry pine cone, scale, skin, skeleton and VB in water and of the wet samples in air were observed. The scale, skin, skeleton and VB all exhibit similar hygroscopic geometric reshaping.
Because of the different curvature changes at different positions in the samples, we defined two parameters to evaluate the total bending degree: the distance l between the tip (point P1) and the tail (point O) of the sample and the tilt angle θ of the tip–tail line relative to the horizontal axis (Fig. 1d). In the wet state, the sample was held vertically and the distance between the tip and the tail was denoted as l0; the relative tip–tail distance can thus be denoted as l/l0, with l/l0 = 1 and θ = 90° in the wet state. We further calculated the deformability and motion velocity of the waterlogged scale, skin, skeleton and VB in air at an RH of 30–40% (Fig. 1e). The deformability was characterized with Δl/l0 and Δθ. To evaluate the deformation velocity, the length change velocity ((Δl/l0)/Δt) and the tilt angle change velocity (Δθ/Δt) of the samples were calculated by dividing the relative length change (Δl/l0) and tilt angle change (Δθ) by the elapsed time to reach equilibrium (Δt). To compare the bending degree of different parts of the VB, the curvature changes Δr⁻¹ from the dry (RH = 11%) to wet (underwater) state at the top, middle and bottom parts of the VB were calculated from their optical photos (Extended Data Fig. 6a). The curvature radius r at a certain part was determined by drawing a circle through three points on the contour line (4 mm), as illustrated in Extended Data Fig. 6a. All the above parameters were measured from the corresponding photos using ToupView. Data in Fig. 1d,e and Extended Data Fig. 6c were acquired from three repeats. To compare the deformability and motion velocity of the pine cone scale and other plant organisms, BL-normalized deformability and velocity were calculated. First, the movement trace of the scale tip was recorded and measured (Extended Data Fig. 2a, motion in air).
Then, the deformation capability (TL/BL) was calculated by dividing the trace length (TL) by BL, and the motion velocity ((TL/BL)/t) was calculated by dividing the BL-normalized trace length by the deformation time t. The motion velocities of other plant organisms (Supplementary Table 1) were calculated with the same methods according to the data in the literature. Considering the effect of sample thickness (h) on the deformability and velocity, the thickness-normalized deformability (h(TL/BL)) and thickness-normalized velocity (h²(TL/BL)/t) were further calculated. Structural characterization by SEM The microstructure of the VB was observed using an SEM instrument (SU-8010, Hitachi) operated at an acceleration voltage of 5 kV after the samples were sputtered with a thin layer of gold. The cross-section of the VB along the short axis in Fig. 2b was obtained with a Leica EM TIC 3X argon ion cutter, with three ion beams hitting the sample from three different directions. The milling process was performed at an ion energy of 5.5 keV at room temperature. The longitudinal cross-sections of the spring and square microtubes in Fig. 2g,h were obtained by slightly scraping the surface of the VB along the long axis from the dorsal and ventral sides, respectively. Non-destructive observation using XCT To measure the distribution of the spring and square microtubes, we imaged the top, middle and bottom parts of the VB using a high-resolution 3D X-ray microscope (Nano 3DX, Rigaku) at a resolution of 0.53 μm per voxel with a Cu target (40 kV, 30 mA) and an X-ray camera (lens type, L0270) in a field of view (FOV) of 0.9 × 0.7 mm. The scans were made with an exposure time of 8–11 s to acquire 800 projections as the sample rotated 180°. All scans were reconstructed automatically using the Nano 3DX software. The tomograms were then converted to slice images and 3D data with ImageJ. The 3D reconstructions were created by importing the obtained 3D data into Drishti v.2.4.
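The geometric measurements described in the Methods above (the circle drawn through three contour points to obtain the curvature radius, and the BL-normalized deformability and velocity) can be sketched numerically. This is an illustrative reimplementation, not the paper's analysis code (measurements were made in ToupView and ImageJ), and the point coordinates below are hypothetical.

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three (x, y) contour points: R = abc / (4 * area)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Triangle area from the cross product of two edge vectors
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return a * b * c / (4.0 * area)

def bl_normalized_motion(trace_length, body_length, t):
    """Deformation capability TL/BL and motion velocity (TL/BL)/t."""
    deformability = trace_length / body_length
    return deformability, deformability / t

# Three points sampled along a 4 mm contour segment (hypothetical coordinates, in mm)
r = circumradius((0.0, 0.0), (2.0, 0.5), (4.0, 0.0))
curvature = 1.0 / r  # Delta r^-1 is the difference of this quantity between wet and dry states
```

The curvature change Δr⁻¹ reported in the paper is then the difference of `1/r` between the two humidity states.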
The distributions of the spring and square microtubes in Extended Data Fig. 6b were observed from the cross-sectional views with cuts approximately orthogonal to the long axis of the VB. The 3D reconstruction picture in Fig. 2f was observed with cuts parallel and orthogonal to the long axis of the VB. The dimensions of the spring and square microtubes (Fig. 2e) were measured from the slice image with ImageJ. To compare the coil pitch of the spring microtube in the dry and wet states, we scanned the VB before and after soaking it in water (Fig. 3e). To avoid the evaporation of water during observation, the waterlogged VB was encapsulated in a self-made sample chamber and sealed using UV curing glue. The scans were made at a resolution of 0.27 μm per voxel with an exposure time of 20–30 s and a FOV of 0.8 × 0.4 mm to acquire 1,000 projections as the sample rotated 180°. One cycle of observation took around 6–8 h at this resolution, which was high enough to resolve the spring and square tubes. Real-time in situ observation of the reshaping process was difficult because, over such a long period, the sample moved and defocused owing to dehydration. Therefore, the XCT scans were conducted in the dry and wet states separately. The 3D reconstruction picture of a partial scale in Extended Data Fig. 1b was obtained at a resolution of 1.08 μm per voxel with an X-ray camera (lens type, L1080) and a FOV of 3.6 × 2.8 mm to acquire 1,000 projections as the sample rotated 180°. In situ dynamic observation using ESEM The hygroscopic expansion behaviours of the spring microtube, square microtube and a couple with both were observed using ESEM (Quattro S, FEI) under controllable RH (12–100%) at an acceleration voltage of 15 kV. To ensure the chemical integrity of the microtubes, the individual microtubes were obtained by mechanical exfoliation.
In detail, we first scraped tubular aggregates from the dorsal side, ventral side and central region of the VB. We then soaked them in water and broke them apart using ultrasound for 10 s to obtain individual microtubes. Finally, the suspension of the microtubes was dropped on a substrate and dried in an oven at 60 °C for further observation. The hygroscopic expansion rates of the spring and square microtubes (Fig. 3d) were measured and calculated from the ESEM images with ImageJ. Data in Fig. 3d were acquired from three independent measurements. 3D printing of VB-mimic artificial actuators We designed parallel spring/square (○/□) pillar couples mimicking the heterostructured spring/square (○/□) microtube couple in the VB with 3D MAX and fabricated them using a 3D printer (Stratasys Eden260vs) with TangoPlus FLX930 (Stratasys Direct) as the tube material and Fullcure705 (Stratasys Direct) as the hygroscopic filling material. The mean diameter and wire diameter of the spring structure with 80 windings per 10 cm were 4.0 mm and 0.4 mm, respectively. The outside dimension and wall thickness of the square tube were 4.0 mm and 0.4 mm, respectively. The length of the pillar couple was 10 cm. As a control, a couple consisting of circular/square tubes (TangoPlus FLX930) filled with hygroscopic material (Fullcure705) was also fabricated. The control sample had the same dimensions as the spring/square (○/□) pillar couple. The 3D-printed, parallel spring/square (○/□) pillar couples can also be scaled up threefold, limited by the platform size of the 3D printer. To adjust the bending degree, we also designed pillar couples with different thickness ratios of the spring pillar (with a mean diameter of 4 mm) to the square pillar (with outside dimensions of 1, 2, 3, 4, 6 and 8 mm). Their bending processes underwater were recorded with a camera. The changes in curvature radius were analysed using ToupView. The data in Fig. 4h and Extended Data Fig.
5a were acquired from three independent tests. A spring/square (○/□) pillar couple with gradient thickness of the square pillar (from 0 to 8 mm; Extended Data Fig. 9a) and couples with alternate distribution of the spring and square pillars and different numbers of reversals (Extended Data Fig. 9b; total length of 10 cm) were also fabricated to obtain the complex shape transformation between the dry (RH = 50%) and wet (underwater) states. A dynamic table was fabricated using four pillar couples (length of 5 cm) as legs and a 3D-printed plate (Vero Clear, Stratasys) as the tabletop. Two 3D-printed balls (diameter of 8 mm) were used to assess the stability of the system. One ball was on the tabletop, with its movement used to evaluate the stability of the table movement. The other ball was hung above the tabletop to visualize the disturbance caused by the moving table to the surrounding water. The tabletop moved horizontally and smoothly underwater and returned to its original position in air (RH = 50%), as recorded with a camera (Fig. 5b, top, and Supplementary Video 10). As a control, fast motion of the tabletop was realized by mechanical pulling (Fig. 5b, bottom, and Supplementary Video 11). To measure the motion velocity, the length of the table was defined as the body length. The motion velocity (BL s⁻¹) was calculated by dividing the body-length-normalized displacement by the motion time (Supplementary Table 2). With the spring/square (○/□) pillar couple as the holder, a lamp was fixed at one end to mimic the camera. To visualize the light beam, an atomizing environment was created by a commercial atomizer. The movement of the light beam was recorded with a camera (EOS 80D, Canon; Fig. 5f and Supplementary Fig. 4). Mechanical measurement The stress–strain curves of the spring and square microtubes in the VB and the 3D-printed spring and square pillars were measured using a commercial testing machine (Mark-10 ESM301L, Copiague).
The spring and square tubular aggregates were obtained by scraping them from the dorsal and ventral sides of the VB, respectively. The cross-sections of the tubular aggregates were observed with XCT, and the section areas were measured with ImageJ. The cross-sections of the 3D-printed pillars were recorded with a camera, and the section areas were measured based on the optical images with ImageJ. All the samples were clamped at both ends and elongated at a loading rate of 1 mm min⁻¹. The data in Fig. 2j, Fig. 4c and Extended Data Fig. 5c,d were acquired from three independent tests. Hygroscopic expansion rate of the 3D-printed spring and square pillars The expansion rates of the 3D-printed spring and square pillars along the long axis (Fig. 4d and Extended Data Fig. 5b) were calculated based on the sample length in the dry (RH = 50%) and wet (immersed underwater for 90 min) states. The morphologies of the pillars were recorded with a camera, and the expansion rates were measured and calculated based on the optical images with ImageJ. Data in Fig. 4d and Extended Data Fig. 5b were acquired from three independent tests. Calculation of the equivalent thickness Since the cross-sections of the spring microtube part and the square microtube part in the VB are irregular (Fig. 2b and Extended Data Fig. 6b), the thicknesses of the two parts cannot be directly measured. Based on the theory of bending of a bi-metal strip, the equivalent thickness of the two parts can be obtained by equating the inertia moment of the irregular shape (I_ir) with that of the rectangle (I_r): $$I_{\mathrm{ir}} = I_{\mathrm{r}}.$$ The inertia moment of the irregular shape was calculated with AutoCAD software from the contour lines of the sections of the spring microtube part and square microtube part (Supplementary Table 3).
The inertia moment of a rectangle can be expressed as 59 $$I_{\mathrm{r}} = bh^3/12$$ where b is the length of the contact line between the sections of the spring microtube part and square microtube part, measured from the cross-sectional view of the XCT images with ImageJ. The equivalent thickness h can then be deduced: $$h = \left( 12I_{\mathrm{ir}}/b \right)^{1/3}.$$ The results are summarized in Supplementary Table 3. Theoretical analysis of the hygroscopic curvature change Since the motion of the heterostructured spring/square (○/□) couple is based on the asymmetric hygroscopic expansion of the spring microtubes and square microtubes, we adapt the bending theory of a bi-metal strip by replacing temperature with humidity 44. The curvature change Δr⁻¹ can be expressed as follows: $$\Delta r^{-1} = \frac{6\left( \alpha_\bigcirc - \alpha_\square \right)\left( \phi_{\mathrm{wet}} - \phi_{\mathrm{dry}} \right)\left( 1 + 1/h_{\bigcirc/\square} \right)^2}{h_{\bigcirc+\square}\left( 3\left( 1 + 1/h_{\bigcirc/\square} \right)^2 + \left( 1 + 1/\left( h_{\bigcirc/\square}E_{\bigcirc/\square} \right) \right)\left( 1/h_{\bigcirc/\square}^2 + h_{\bigcirc/\square}E_{\bigcirc/\square} \right) \right)}.$$ Here, α○ and α□ are the hygroscopic expansion coefficients of the spring microtube and square microtube, respectively; ϕ is the RH of the different states (wet and dry); h○+□ (= h○ + h□) is the total thickness of the spring microtube part and square microtube part, where h○ and h□ are the equivalent thicknesses of the spring microtube part and square microtube part, respectively, calculated according to the equivalent principle of the inertia moment; and h○/□ (= h○/h□) and E○/□ (= E○/E□) denote the thickness ratio and Young's modulus ratio, respectively.
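The equivalent-thickness deduction and the adapted bi-metal-strip formula above can be evaluated with a short script. This is a sketch of the stated equations only; the input values are placeholders, not the measured quantities of Supplementary Table 3.

```python
def equivalent_thickness(i_ir, b):
    """h = (12 * I_ir / b)**(1/3), from equating I_ir with the rectangle's b*h^3/12."""
    return (12.0 * i_ir / b) ** (1.0 / 3.0)

def curvature_change(alpha_o, alpha_s, phi_wet, phi_dry, h_o, h_s, e_ratio):
    """Delta r^-1 for a spring (o) / square (s) couple, bi-metal bending theory with
    temperature replaced by humidity; e_ratio is the Young's modulus ratio E_o/E_s."""
    h_ratio = h_o / h_s   # thickness ratio h_o/s
    h_total = h_o + h_s   # total thickness h_o+s
    num = 6.0 * (alpha_o - alpha_s) * (phi_wet - phi_dry) * (1.0 + 1.0 / h_ratio) ** 2
    den = h_total * (3.0 * (1.0 + 1.0 / h_ratio) ** 2
                     + (1.0 + 1.0 / (h_ratio * e_ratio))
                     * (1.0 / h_ratio ** 2 + h_ratio * e_ratio))
    return num / den

# Placeholder inputs: expansion coefficients per %RH, RH 11% -> 100%, equal parts
dr = curvature_change(2e-3, 1e-3, 100.0, 11.0, 1.0, 1.0, 1.0)
```

A useful check on the implementation: for equal thicknesses and moduli the expression reduces to the classic bi-metal result 3(α○ − α□)(ϕwet − ϕdry)/(2h○+□).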
At given conditions, the variations of the hygroscopic expansion coefficients and RH are constant, and thus the curvature change is decided by h○+□, h○/□ and E○/□. For the 3D-printed spring and square pillars, the diameter of the spring pillar is equal to the width of the square pillar. Thus, the equivalent thickness h○ of the spring pillar (diameter of D) was calculated to be 0.838D (ref. 59). With the measured Young's modulus (Extended Data Fig. 5), the theoretical Δr⁻¹ of the pillar couple can be drawn as a function of h○/□ and h○+□ (Fig. 4g). The h○/□ and h○+□ in Fig. 4f–h and Extended Data Fig. 5a are the corresponding equivalent values when the couples were immersed underwater for 90 min. For the spring and square microtube parts in the VB, with the measured Young's modulus (Fig. 2j) and the equivalently calculated h○+□ and h○/□ (Supplementary Table 3), the theoretical Δr⁻¹ was also calculated (Extended Data Fig. 6c). Numerical simulation of flow field To understand the stable motion of the spring/square microtube couple, we performed a numerical simulation of the spring/square tube couple in a moving flow field. The fluid field was set to move towards the right or towards the left (Fig. 3g,h). A 3D model was constructed to simulate the fluid–structure interactions between water and the spring/square tube couple. In this model, the radial dimensions of the two tubes were proportionally enlarged according to their actual size. To explore the disturbance caused by the moving plate to the surrounding medium (water), a numerical simulation of the flow field generated by the plate moving with different velocities towards the right was also performed (Extended Data Fig. 7). A two-dimensional model was constructed to simulate the fluid–structure interactions between the water and the plate. In this model, a rectangle with a length equal to that of the tabletop in Fig.
5b was used to simulate the plate and was surrounded by a water phase. All the numerical simulations were performed with COMSOL Multiphysics 5.6 in transient analysis mode. The flow field was described by the incompressible Navier–Stokes equations: $$\rho \frac{\partial \mathbf{u}}{\partial t} + \rho \left( \mathbf{u} \cdot \nabla \right)\mathbf{u} = \nabla \cdot \left[ -p\mathbf{I} + \eta \left( \nabla \mathbf{u} + \left( \nabla \mathbf{u} \right)^{\mathrm{T}} \right) \right]$$ $$\rho \nabla \cdot \mathbf{u} = 0$$ where ρ, u, p and η are the density, flow velocity, pressure and viscosity of water, respectively; I is the second-order unit tensor; ∇ is the Hamilton operator; and the superscript T denotes the transpose. All the boundaries of the water phase were set as open boundaries. Meanwhile, the load (F_T) experienced by the spring/square tube couple and the plate was described as follows: $$\mathbf{F}_T = -\mathbf{n} \cdot \left( -p\mathbf{I} + \eta \left( \nabla \mathbf{u} + \left( \nabla \mathbf{u} \right)^{\mathrm{T}} \right) \right)$$ where n is the normal vector to the boundary. This load represents the sum of pressure and viscous forces. Data availability The data supporting the findings of this study are available within the Article and its Supplementary Information. Other raw data generated during this study are available from the corresponding authors upon reasonable request. Source data are provided with this paper.
Responsive actuators have attracted extensive attention owing to their great potential in flexible robotics, sensors, energy conversion and other fields. Pine cones are a well-known bionic model for constructing artificial actuators. However, little attention has been paid to the fact that the hygroscopic motion of pine cones is an ultra-slow process. Hygroscopic deformation has long been attributed to the uneven hygroscopic expansion of vascular bundles (VBs) and sclereids, controlled by their different microfibril orientations. However, this mechanism cannot explain the observation that VBs themselves are capable of reversible hygroscopic motion. Therefore, the mechanism of ultra-slow motion in pine cones has long been unclear. In this work, the researchers revealed that VBs are composed of unique parallelly arranged spring/square microtubes forming heterostructures. The spring microtubes show much larger hygroscopic deformation than do square microtubes along the longitudinal axis direction. This deformation bends the VBs and consequently drives the scales to move as humidity changes. Pine-cone-inspired actuators with unperceivable motions. Credit: TICP In addition, the soft outer sclereids, which have good water retention, slow down the VB-triggered motion without compromising motion amplitude. Drawing inspiration from this observation, the researchers prepared one-dimensional (1D) heterostructured spring/square pillars filled with a hygroscopic polymer to mimic the skin in the scales, which increases the water diffusion path and decreases the overall expansion velocity. With the spring/square pillars as basic units, they developed soft actuators enabling controllable yet unperceivable motion, which is two orders of magnitude slower than motion enabled by other reported actuators.
"This study offers deep insight not only into understanding the well-known hygroscopic deformation of the pine cone and other plant tissues capable of moving, but also into developing stimuli-responsive actuators that enable motion to proceed extremely slowly and unperceivably," said Prof. Wang. They envision such actuators enabling ultra-slow motion for use in various camouflage and reconnaissance applications, according to Prof. Liu. | 10.1038/s41563-022-01391-2
Medicine | Data sharing uncovers five new risk genes for Alzheimer's disease | Kunkle BW et al. Meta-analysis of genetic association with diagnosed Alzheimer's disease identifies novel risk loci and implicates Abeta, Tau, immunity and lipid processing. Nature Genetics. 2019 Feb. 28 DOI: 10.1038/s41588-019-0358-2 Journal information: Nature Genetics | http://dx.doi.org/10.1038/s41588-019-0358-2 | https://medicalxpress.com/news/2019-02-uncovers-genes-alzheimer-disease.html | Abstract Risk for late-onset Alzheimer's disease (LOAD), the most prevalent dementia, is partially driven by genetics. To identify LOAD risk loci, we performed a large genome-wide association meta-analysis of clinically diagnosed LOAD (94,437 individuals). We confirm 20 previous LOAD risk loci and identify five new genome-wide loci (IQCK, ACE, ADAM10, ADAMTS1, and WWOX), two of which (ADAM10, ACE) were identified in a recent genome-wide association (GWAS)-by-familial-proxy study of Alzheimer's or dementia. Fine-mapping of the human leukocyte antigen (HLA) region confirms the neurological and immune-mediated disease haplotype HLA-DR15 as a risk factor for LOAD. Pathway analysis implicates immunity, lipid metabolism, tau binding proteins, and amyloid precursor protein (APP) metabolism, showing that genetic variants affecting APP and Aβ processing are associated not only with early-onset autosomal dominant Alzheimer's disease but also with LOAD. Analyses of risk genes and pathways show enrichment for rare variants (P = 1.32 × 10⁻⁷), indicating that additional rare variants remain to be identified. We also identify important genetic correlations between LOAD and traits such as family history of dementia and education. Main Our previous work identified 19 genome-wide-significant common variant signals in addition to APOE that influence risk for LOAD (onset age > 65 years) 1 .
These signals, combined with 'subthreshold' common variant associations, account for ~31% of the genetic variance of LOAD 2, leaving the majority of genetic risk uncharacterized 3. To search for additional signals, we conducted a GWAS meta-analysis of non-Hispanic Whites (NHW) using a larger Stage 1 discovery sample (17 new, 46 total datasets; n = 21,982 cases, 41,944 cognitively normal controls) from our group, the International Genomics of Alzheimer's Project (IGAP), composed of four consortia: the Alzheimer Disease Genetics Consortium (ADGC), the Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium (CHARGE), The European Alzheimer's Disease Initiative (EADI), and the Genetic and Environmental Risk in AD/Defining Genetic, Polygenic and Environmental Risk for Alzheimer's Disease Consortium (GERAD/PERADES) (Supplementary Tables 1 and 2, and Supplementary Note). To sample both common and rare variants (minor allele frequency (MAF) ≥ 0.01 and MAF < 0.01, respectively), we imputed the discovery datasets using a 1,000 Genomes reference panel consisting of 36,648,992 single-nucleotide polymorphisms (SNPs), 1,380,736 insertions/deletions, and 13,805 structural variants. After quality control, 9,456,058 common variants and 2,024,574 rare variants were selected for analysis. Genotype dosages were analyzed within each dataset, and then combined with meta-analysis (Supplementary Fig. 1 and Supplementary Tables 1–3). Results Meta-analysis of Alzheimer's disease GWAS The Stage 1 discovery meta-analysis produced 12 loci with genome-wide significance (P ≤ 5 × 10⁻⁸) (Table 1), all of which were previously described 1,4,5,6,7,8,9,10,11.
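Per-dataset association results of the kind described above are typically combined by fixed-effects inverse-variance-weighted meta-analysis (the standard approach in tools such as METAL; the exact software used here is described in the paper's Methods). A minimal sketch with hypothetical per-dataset effect sizes:

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effects IVW meta-analysis of per-dataset effect sizes (betas) and
    standard errors (ses); returns the pooled beta, SE and two-sided P value."""
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided P from the normal distribution
    return beta, se, p

# Hypothetical log-odds ratios for one variant in three datasets
beta, se, p = inverse_variance_meta([0.15, 0.10, 0.12], [0.05, 0.04, 0.06])
```

Datasets with smaller standard errors (larger samples) receive proportionally more weight, which is why the large new Stage 1 sample tightens the pooled estimates.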
Genomic inflation factors (λ) were slightly inflated (λ median = 1.05; λ regression = 1.09; see Supplementary Figure 2 for a quantile–quantile (QQ) plot); however, univariate linkage disequilibrium score (LDSC) regression 12,13 estimates indicated that the majority of this inflation was due to a polygenic signal, with the intercept being close to 1 (1.026, s.e.m. = 0.006). The observed heritability (h²) of LOAD was estimated at 0.071 (0.011) using LDSC. Stage 1 meta-analysis was first followed by Stage 2, using the I-select chip we previously developed in Lambert et al. 1 (including 11,632 variants, n = 18,845; Supplementary Table 4), and finally Stage 3A (n = 11,666) or Stage 3B (n = 30,511) (for variants in regions not well captured in the I-select chip) (see Supplementary Figure 1 for the workflow). The final sample was 35,274 clinical and autopsy-documented Alzheimer's disease cases and 59,163 controls. Table 1 Summary of discovery Stage 1, Stage 2 and overall meta-analysis results for identified loci reaching genome-wide significance after Stages 1 and 2 Meta-analysis of Stages 1 and 2 produced 21 genome-wide-significant associations (P ≤ 5 × 10⁻⁸) (Table 1 and Fig. 1), 18 of which were previously reported as genome-wide significant in Lambert et al. 1 . Three other signals were not described in the initial IGAP GWAS: the rare R47H TREM2 coding variant previously reported by others 7,8,14; ECHDC3 (rs7920721; NC_000010.10: g.11720308A>G), which was recently identified as a potential genome-wide-significant Alzheimer's disease risk locus in several studies 15,16,17; and ACE (rs138190086; NC_000017.10: g.61538148G>A) (Supplementary Figs. 3 and 4). In addition, seven signals showed suggestive association with P < 5 × 10⁻⁷ (closest genes: ADAM10, ADAMTS1, ADAMTS20, IQCK, MIR142/TSPOAP1-AS1, NDUFAF6, and SPPL2A) (Supplementary Figs. 5–11).
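The genomic inflation factor λ quoted above summarizes test-statistic inflation; the median-based estimate divides the median observed χ² statistic by the median of the χ² distribution with 1 degree of freedom (≈0.4549). A minimal sketch with made-up statistics (the paper's λ values came from the full GWAS summary statistics):

```python
import statistics

# Median of the chi-squared distribution with 1 degree of freedom
CHI2_1_MEDIAN = 0.4549364

def lambda_gc(chi2_stats):
    """Median-based genomic inflation factor; ~1.0 in the absence of inflation."""
    return statistics.median(chi2_stats) / CHI2_1_MEDIAN

# Under the null, the median test statistic matches the chi-squared median
lam = lambda_gc([0.1, 0.4549364, 2.7])  # -> 1.0
```

A λ well above 1 can indicate confounding (for example, population stratification), which is why the authors additionally used LDSC regression to attribute the inflation to polygenicity rather than bias.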
Stage 3A and meta-analysis of all three stages for these nine signals (excluding the TREM2 signal; see Supplementary Table 5 for the variant list) identified five genome-wide-significant loci. In addition to ECHDC3, this included four new genome-wide Alzheimer's disease risk signals at IQCK, ADAMTS1, ACE, and ADAM10 (Table 2). ACE and ADAM10 were previously reported as Alzheimer's disease candidate genes 18,19,20,21,22 but were not replicated in some subsequent studies 23,24,25. A recent GWAS using family history of Alzheimer's disease or dementia as a proxy 26 also identified these two risk loci, suggesting that while the use of proxy Alzheimer's disease/dementia cases reduces overall sensitivity and specificity for true Alzheimer's disease signals compared with clinically diagnosed Alzheimer's disease, proxy studies can identify disease-relevant associations. Two of the four other signals approached genome-wide significance: miR142/TSPOAP1-AS1 (P = 5.3 × 10⁻⁸) and NDUFAF6 (P = 9.2 × 10⁻⁸) (Table 2). Stage 3A also extended the analysis of two loci (NME8 and MEF2C) that were previously genome-wide significant in our 2013 meta-analysis. These loci were not genome-wide significant in our current study and deserve further investigation (NME8: P = 2.7 × 10⁻⁷; MEF2C: P = 9.1 × 10⁻⁸; Supplementary Figs. 12 and 13). Of note, GCTA COJO 27 conditional analysis of the genome-wide loci indicates that TREM2 and three other loci (BIN1, ABCA7, and PTK2B/CLU) have multiple independent LOAD association signals (Supplementary Table 6), suggesting that the genetic variance associated with some GWAS loci is probably underestimated. Fig. 1: Manhattan plot of meta-analysis of Stage 1, 2, and 3 results for genome-wide association with Alzheimer's disease. The threshold for genome-wide significance (P < 5 × 10⁻⁸) is indicated by the red line, while the blue line represents the suggestive threshold (P < 1 × 10⁻⁵).
Loci previously identified by the Lambert et al. 1 IGAP GWAS are shown in blue and newly associated loci are shown in red. Loci are named for the closest gene to the sentinel variant for each locus. Diamonds represent variants with the smallest P values for each genome-wide locus. Table 2 Summary of discovery Stage 1, Stage 2, Stage 3 (A and B), and overall meta-analysis results of potential novel loci We also selected 33 variants from Stage 1 (28 common and 5 rare variants in loci not well captured in the I-select chip; see Methods for full selection criteria) for genotyping in Stage 3B (including populations of Stage 2 and Stage 3A). We nominally replicated a rare variant (rs71618613; NC_000005.9: g.29005985A>C) within an intergenic region near SUCLG2P4 (MAF = 0.01; P = 6.8 × 10⁻³; combined P = 3.3 × 10⁻⁷) and replicated a low-frequency variant in the TREM2 region (rs114812713; NC_000006.11: g.41034000G>C, MAF = 0.03, P = 7.2 × 10⁻³; combined P = 2.1 × 10⁻¹³) in the gene OARD1 that may represent an independent signal according to our conditional analysis (Table 2, Supplementary Figs. 14 and 15, and Supplementary Tables 6 and 7). In addition, rs62039712 (NC_000016.9: g.79355857G>A) in the WWOX locus reached genome-wide significance (P = 3.7 × 10⁻⁸), and rs35868327 (NC_000005.9: g.52665230T>A) in the FST locus reached suggestive significance (P = 2.6 × 10⁻⁷) (Table 2 and Supplementary Figs. 16 and 17). WWOX may play a role in Alzheimer's disease through its interaction with tau 28,29, and it is worth noting that the sentinel variant (defined as the variant with the lowest P value) is just 2.4 megabases from PLCG2, which contains a rare variant that we recently associated with Alzheimer's disease 14. Since both rs62039712 and rs35868327 were only analyzed in a restricted number of samples, these loci deserve further attention.
Candidate gene prioritization at genome-wide loci To evaluate the biological significance and attempt to identify the underlying risk genes for the newly identified genome-wide signals (IQCK, ACE, ADAM10, ADAMTS1, and WWOX) and those found previously, we pursued five strategies: (1) annotation and gene-based testing for deleterious coding, loss-of-function (LOF) and splicing variants; (2) expression-quantitative trait loci (eQTL) analyses; (3) evaluation of transcriptomic expression in LOAD clinical traits (correlation with the BRAAK stage 30 and differential expression in Alzheimer's disease versus control brains 31); (4) evaluation of transcriptomic expression in Alzheimer's disease–relevant tissues 32,33,34; and (5) gene cluster/pathway analyses. For the 24 signals reported here, other evidence indicates that APOE 35,36, ABCA7 (refs. 37,38,39,40), BIN1 (ref. 41), TREM2 (refs. 7,8), SORL1 (refs. 42,43), ADAM10 (ref. 44), SPI1 (ref. 45), and CR1 (ref. 46) are the true Alzheimer's disease risk genes, although there is a possibility that multiple risk genes exist in these regions 47. Because many GWAS loci are intergenic, and the closest gene to the sentinel variant may not be the actual risk gene, in these analyses we considered all protein-coding genes within ±500 kilobases (kb) of the sentinel variant linkage disequilibrium (LD) regions (r² ≥ 0.5) for each locus as candidate Alzheimer's disease genes (n = 400 genes) (Supplementary Table 8). We first annotated all sentinel variants for each locus and variants in LD (r² > 0.7) with these variants in a search for deleterious coding, LOF or splicing variants. In line with findings that most causal variants for complex disease are non-coding 48, only 2% of 1,073 variants across the 24 loci (excluding APOE) were exonic variants, with a majority (58%) being intronic (Supplementary Fig. 18 and Supplementary Table 9).
Potentially deleterious variants include the rare R47H missense variant in TREM2 , common missense variants in CR1 , SPI1 , MS4A2 , and IQCK , and a relatively common (MAF = 0.16) splicing variant in IQCK . Using results of a large whole-exome-sequencing study conducted in the ADGC and CHARGE sample 49 ( n = 5,740 LOAD cases and 5,096 controls), we also identified ten genes located in our genome-wide loci as having rare deleterious coding, splicing or LOF burden associations with LOAD (false discovery rate (FDR) P < 0.01), including previously implicated rare-variant signals in ABCA7 , TREM2 , and SORL1 (refs. 14 , 49 , 50 , 51 , 52 , 53 , 54 , 55 ), and additional associations with TREML4 in the TREM2 locus, TAP2 and PSMB8 in the HLA -DRB1 locus, PIP in the EPHA1 locus, STYX in the FERMT2 locus, RIN3 in the SLC24A4 locus, and KCNH6 in the ACE locus (Supplementary Table 10 ). For eQTL analyses, we searched existing eQTL databases and studies for cis-acting eQTLs in a prioritized set of variants ( n = 1,873) with suggestive significance or in LD with the sentinel variant in each locus. Of these variants, 71–99% have regulatory potential when considering all tissues according to RegulomeDB 56 and HaploReg 57 , but restricting to Alzheimer’s disease–relevant tissues (via Ensembl Regulatory Build 58 and GWAS4D 59 ) appears to aid in regulatory variant prioritization, with probabilities for functional variants increasing substantially when using GWAS4D cell-dependent analyses with brain or monocytes, for instance (these and other annotations are provided in Supplementary Table 11 ). Focusing specifically on eQTLs, we found overlapping cis-acting eQTLs for 153 of the 400 protein-coding genes, with 136 eQTL-controlled genes in Alzheimer’s disease–relevant tissues (that is, brain and blood/immune cell types; see Methods for details) (Supplementary Tables 12 and 13 ). 
For our newly identified loci, there were significant eQTLs in Alzheimer’s disease–relevant tissue for ADAM10 , FAM63B , and SLTM (in the ADAM10 locus); ADAMTS1 ( ADAMTS1 locus); and ACSM1 , ANKS4B , C16orf62 , GDE1 , GPRC5B , IQCK , and KNOP1 ( IQCK locus). There were no eQTLs in Alzheimer’s disease–relevant tissues in the WWOX or ACE locus, although several eQTLs for PSMC5 in coronary artery tissue were found for the ACE locus. eQTLs for genes in previously identified loci include BIN1 ( BIN1 locus), INPP5D ( INPP5D locus), CD2AP ( CD2AP locus), and SLC24A4 ( SLC24A4 locus). Co-localization analysis confirmed evidence of a shared causal variant affecting expression and disease risk in 66 genes over 20 loci, including 31 genes over 13 loci in LOAD-relevant tissue (see Supplementary Tables 14 and 15 for complete lists). Genes implicated include CR1 ( CR1 locus), ABCA7 ( ABCA7 locus), BIN1 ( BIN1 locus), SPI1 and MYBPC3 ( SPI1 locus), MS4A2 , MS4A6A , and MS4A4A ( MS4A2 locus), KNOP1 ( IQCK locus), and HLA-DRB1 ( HLA-DRB1 locus) (Supplementary Table 12 ). To study the differential expression of genes in brains of patients with Alzheimer’s disease versus controls, we used 13 expression studies 31 . We found that 58% of the 400 protein-coding genes within the genome-wide loci had evidence of differential expression in at least one study (Supplementary Table 16 ). Additional comparisons to Alzheimer’s disease–related gene expression sets revealed that 62 genes were correlated with pathogenic stage (BRAAK) in at least one brain tissue 30 (44 genes in prefrontal cortex, the most relevant LOAD tissue; 36 in cerebellum and 1 in visual cortex). Finally, 38 genes were present in a set of 1,054 genes preferentially expressed in aged microglial cells, a gene set shown to be enriched for Alzheimer’s disease genes ( P = 4.1 × 10 −5 ) 34 .
We also annotated our list of genes with brain RNA-seq data, which showed that 80% were expressed in at least one type of brain cell, and the genes were most highly expressed in fetal astrocytes (26%), followed by microglia/macrophages (15.8%), neurons (14.8%), astrocytes (11.5%), and oligodendrocytes (6.5%). When not considering fetal astrocytes, mature astrocytes (21%) and microglial cells (20.3%), the resident macrophage cells of the brain thought to play a key role in the pathologic immune response in LOAD 8 , 14 , 60 , became the highest expressed cell types in the genome-wide set of genes, with 5.3% of the 400 genes showing high microglial expression (Supplementary Table 17 ; see Supplementary Table 18 for the highly expressed gene list by cell type). We conducted pathway analyses (MAGMA 61 ) separately for common (MAF > 0.01) and rare variants (MAF < 0.01). For common variants, we detected four function clusters including (1) APP metabolism/Aβ formation (regulation of Aβ formation: P = 4.56 × 10 −7 and regulation of APP catabolic process: P = 3.54 × 10 −6 ); (2) tau protein binding ( P = 3.19 × 10 −5 ); (3) lipid metabolism (four pathways including protein−lipid complex assembly: P = 1.45 × 10 −7 ); and (4) immune response ( P = 6.32 × 10 −5 ) (Table 3 and Supplementary Table 19 ). Enrichment of the four clusters remained after removal of genes in the APOE region. When APOE -region genes and genes near genome-wide-significant genes were removed, tau showed moderate association ( P = 0.027), and lipid metabolism and immune-related pathways showed strong associations ( P < 0.001) (Supplementary Table 20 ).
Genes driving these enrichments (that is, having a gene-wide P < 0.05) included SNCA , a Parkinson’s risk gene that encodes alpha-synuclein, the main component of Lewy bodies, which may play a role in tauopathies 62 , 63 , for the tau pathway; apolipoprotein genes ( APOM , APOA5 ) and ABCA1 , a major regulator of cellular cholesterol, for the lipid metabolism pathways; and 52 immune pathway genes (Supplementary Table 21 ). While no pathways were significantly enriched for rare variants, lipid and Aβ pathways did reach nominal significance in rare-variant-only analyses. Importantly, we also observed a highly significant correlation between common and rare pathway gene results ( P = 1.32 × 10 −7 ), suggesting that Alzheimer’s disease risk genes and pathways are enriched for rare variants. In fact, 50 different genes within tau, lipid, immunity and Aβ pathways showed nominal rare-variant driven associations ( P < 0.05) with LOAD.
Table 3 Significant pathways ( q value ≤ 0.05) from MAGMA pathway analysis for common and rare variant subsets
To further explore the APP/Aβ pathway enrichment, we analyzed a comprehensive set of 335 APP metabolism genes 64 curated from the literature. We observed significant enrichment of this gene set in common variants ( P = 2.27 × 10 −4 ; P = 3.19 × 10 −4 excluding APOE ), with both ADAM10 and ACE nominally significant drivers of this result (Table 4 and Supplementary Tables 22 and 23 ). Several ‘sub-pathways’ were also significantly enriched in the common variants, including ‘clearance and degradation of Aβ’, and ‘aggregation of Aβ’, along with its subcategory ‘microglia’, the latter supporting microglial cells’ suspected role in response to Aβ in LOAD 65 . Nominal enrichment for risk from rare variants was found for the pathway ‘aggregation of Aβ: chaperone’ and 23 of the 335 genes.
Table 4 Top results of pathway analysis of the Aβ-centered biological network from Campion et al.
64 (see Supplementary Table 12 for full results)
To identify candidate genes for our novel loci, we combined results from our five prioritization strategies in a priority ranking method similar to that of Fritsche et al. 66 (Fig. 2 and Supplementary Table 24 ). ADAM10 was the top ranked gene of the 11 genes within the ADAM10 locus. ADAM10, the most important α-secretase in the brain, is a component of the non-amyloidogenic pathway of APP metabolism 67 and sheds TREM2 (ref. 68 ), an innate immunity receptor expressed selectively in microglia. Overexpression of ADAM10 in mouse models can halt Aβ production and subsequent aggregation 69 . In addition, two rare ADAM10 alterations segregating with disease in LOAD families increased Aβ plaque load in ‘Alzheimer-like’ mice, with diminished α-secretase activity from the alterations probably the causal mechanism 19 , 44 . For the IQCK signal, which is also an obesity locus 70 , 71 , IQCK , a relatively uncharacterized gene, was ranked top, although four of the other 11 genes in the locus have a priority rank ≥ 4, including KNOP1 and GPRC5B , the latter being a regulator of neurogenesis 72 , 73 and inflammatory signaling in obesity 74 . Of the 22 genes in the ACE locus, PSMC5 , a key regulator of major histocompatibility complex (MHC) 75 , 76 , has a top score of 4, while DDX42 , MAP3K3 , an important regulator of macrophages and innate immunity 77 , 78 , and CD79B , a B lymphocyte antigen receptor subunit, each have a score of 3. Candidate gene studies have associated ACE variants with Alzheimer’s disease risk 20 , 22 , 79 , including a strong association in the Wadi Ara, an Israeli Arab community with high risk of Alzheimer’s disease 21 . However, these studies yielded inconsistent results 23 , and our work reports a clear genome-wide association in NHW at this locus.
While ACE was not prioritized, it should not be rejected as a candidate gene, as its expression in Alzheimer’s disease brain tissue is associated with Aβ load and Alzheimer’s disease severity 80 . Furthermore, cerebrospinal fluid (CSF) levels of the angiotensin-converting enzyme (ACE) are associated with Aβ levels 81 and LOAD risk 82 , and studies show ACE can inhibit Aβ toxicity and aggregation 83 . Finally, angiotensin II, a product of ACE function, mediates a number of neuropathological processes in Alzheimer’s disease 84 and is now a target for intervention in phase II clinical trials of Alzheimer’s disease 85 . Another novel genome-wide locus reported here, ADAMTS1 , is within 665 kb of APP on chromosome 21. Of three genes at this locus, our analyses nominate ADAMTS1 as the likely risk gene, although we cannot rule out that this signal is a regulatory element for APP . ADAMTS1 is elevated in Down’s syndrome with neurodegeneration and Alzheimer’s disease 86 , and it is a potential neuroprotective gene 87 , 88 , 89 or a neuroinflammatory gene important to microglial response 90 . Finally, WWOX and MAF , which surround an intergenic signal in an obesity associated locus 91 , were both prioritized for the WWOX locus, with MAF , another important regulator of macrophages 92 , 93 , being highly expressed in microglia in the Brain RNA-seq database, and WWOX , a high-density-lipoprotein cholesterol and triglyceride–associated gene 94 , 95 , being expressed most highly in astrocytes and neurons. WWOX has been implicated in several neurological phenotypes 96 ; in addition, it binds tau and may play a critical role in regulating tau hyper-phosphorylation, neurofibrillary formation and Aβ aggregation 28 , 29 . Intriguingly, treatment of mice with its binding partner restores memory deficits 97 , hinting at its potential in neurotherapy. Fig. 2: Top prioritized genes of 400 genes located in genome-wide-significant loci. 
The criteria include: (1) deleterious coding, LOF or splicing variant in the gene; (2) significant gene-based tests; (3) expression in a tissue relevant to Alzheimer’s disease (astrocytes, neurons, microglia/macrophages, oligodendrocytes); (4) a HuMi microglial-enriched gene; (5) having an eQTL effect on the gene in any tissue, in Alzheimer’s disease–relevant tissue, and/or a co-localized eQTL; (6) being involved in a biological pathway enriched in Alzheimer’s disease (from the current study); (7) expression correlated with the BRAAK stage; and (8) differential expression in 1+ Alzheimer’s disease (AD) studies. Novel genome-wide loci from the current study are listed first, followed by known genome-wide loci. Each category is assigned an equal weight of 1, with the priority score equaling the sum of all categories. Colored fields indicate that the gene meets the criteria. Genes with a priority score ≥ 4 are listed for each locus. If no gene reached a score of ≥ 5 in a locus, then the top ranked gene(s) is listed.
For previously reported loci, applying the same prioritization approach highlights several genes, as described in Fig. 2 , some of which are involved in APP metabolism ( FERMT2 , PICALM ) or tau toxicity ( BIN1 , CD2AP , FERMT2 , CASS4 , PTK2B ) 98 , 99 , 100 , 101 . Pathway, tissue and disease trait enrichment analyses support the utility of our prioritization method, as the 53 prioritized genes with a score ≥ 5 are (1) enriched in substantially more Alzheimer’s disease–relevant pathways, processes and dementia-related traits; (2) enriched in candidate Alzheimer’s disease cell types such as monocytes (adjusted P = 9.0 × 10 −6 ) and macrophages (adjusted P = 5.6 × 10 −3 ); and (3) more strongly associated with dementia-related traits and Alzheimer’s disease–relevant pathways (Supplementary Tables 25 and 26 ; see Supplementary Fig. 19 for the interaction network of these prioritized genes).
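The equal-weight scoring in this legend is simple enough to sketch directly. The criterion keys, function name, and example evidence profile below are illustrative placeholders, not anything taken from the study's pipeline or data:

```python
# Sketch of the equal-weight prioritization score: each satisfied criterion
# contributes a weight of 1, and the priority score is the sum.
CRITERIA = [
    "deleterious_variant",         # (1) deleterious coding/LOF/splicing variant
    "gene_based_test",             # (2) significant gene-based test
    "relevant_tissue_expression",  # (3) expressed in an AD-relevant cell type
    "humi_microglial",             # (4) HuMi microglial-enriched gene
    "eqtl_effect",                 # (5) eQTL effect on the gene
    "enriched_pathway",            # (6) member of an AD-enriched pathway
    "braak_correlation",           # (7) expression correlated with BRAAK stage
    "ad_differential_expression",  # (8) differentially expressed in AD studies
]

def priority_score(evidence):
    """Sum of equally weighted (weight 1) satisfied criteria."""
    return sum(1 for c in CRITERIA if evidence.get(c, False))

# Hypothetical evidence profile for a candidate gene at a locus
example = {
    "eqtl_effect": True,
    "relevant_tissue_expression": True,
    "ad_differential_expression": True,
    "braak_correlation": True,
    "enriched_pathway": True,
}
```

Under this scheme, a gene satisfying five of the eight criteria would clear the score ≥ 5 bar applied to the 53 prioritized genes discussed below.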
To further investigate the cell types and tissues the prioritized genes are expressed in, we performed differentially expressed gene (DEG) set enrichment analysis of the prioritized genes by using GTEx 102 tissues, and we identified significant differential expression in several potentially relevant Alzheimer’s disease tissues including immune-related tissues (upregulation in blood and spleen), obesity-related tissue (upregulation in adipose), heart tissues (upregulation in left ventricle and atrial appendage), and brain tissues (downregulation in cortex, cerebellum, hippocampus, basal ganglia, and amygdala). Furthermore, the 53 genes are overexpressed in ‘adolescence’ and ‘young adult’ brain tissues in BrainSpan 103 , a transcriptomic atlas of the developing human brain, which is consistent with accumulating evidence suggesting Alzheimer’s disease may start decades before the onset of disease 104 , 105 (Supplementary Fig. 20 ; see Supplementary Fig. 21 for a tissue expression heat map for the 53 genes).
Fine-mapping of the HLA region
The above approach prioritized HLA-DRB1 as the top candidate gene in the MHC locus, known for its complex genetic organization and highly polymorphic nature (see Supplementary Fig. 22 for a plot of the region of the Stage 1 results). Previous analyses in the ADGC (5,728 Alzheimer’s disease cases and 5,653 controls) have linked both HLA class I and II haplotypes with Alzheimer’s disease risk 106 . In order to further investigate this locus in a much larger sample, we used a robust imputation method and fine-mapping association analysis of alleles and haplotypes of HLA class I and II genes in 14,776 cases and 23,047 controls from our datasets (Supplementary Table 27 ). We found risk effects of HLA-DQA1*01:02 (FDR P = 0.014), HLA-DRB1*15:01 (FDR P = 0.083), and HLA-DQB1*06:02 (FDR P = 0.010) (Supplementary Table 28 ).
After conditioning on the sentinel meta-analysis variant in this region (rs78738018), association signals were lost for the three alleles, suggesting that the signal observed at the variant level is due to the association of these three alleles. These alleles form the HLA-DQA1*01:02~HLA-DQB1*06:02~HLA-DRB1*15:01 ( DR15 ) haplotype, which is also associated with Alzheimer’s disease in our sample (FDR P = 0.013) (Supplementary Table 29 ). Taken together, these results suggest a central role of the DR15 haplotype in Alzheimer’s disease risk, a finding originally discovered in a small study in the Tunisian population 107 and more recently in a large ADGC analysis 106 . Intriguingly, the DR15 haplotype and its component alleles also associate with protection against diabetes 108 , a high risk for multiple sclerosis 109 , 110 , and risk or protective effects with many other immune-mediated diseases (Supplementary Table 30 ). Moreover, the associated diseases include a large number of traits queried from an HLA-specific PheWAS 111 , including neurological diseases (for example, Parkinson’s disease 112 , 113 ) and diseases with risk factors for Alzheimer’s disease (for example, hyperthyroidism 114 ), pointing to potential shared and/or interacting mechanisms and co-morbidities, a common paradigm in the MHC locus 115 . Two additional alleles, HLA-DQA1*03:01 and HLA-DQB1*03:02 , belonging to another haplotype, show a protective effect on Alzheimer’s disease, but their signal was lost after conditioning on HLA-DQA1*01:02 , and the HLA-DQA1*03:01~HLA-DQB1*03:02 haplotype is not associated with Alzheimer’s disease (FDR P = 0.651).
Genetic correlations with Alzheimer’s disease
As described above, several of our genome-wide loci have potentially interesting co-morbid or pleiotropic associations with traits that may be relevant to the pathology of Alzheimer’s disease.
To investigate the extent of LOAD’s shared genetic architecture with other traits, we performed LD-score regression to estimate the genetic correlation between LOAD and 792 human diseases, traits and behaviors 12 , 116 (Supplementary Table 31 ). The common variant genetic architecture of LOAD was positively correlated with a maternal family history of Alzheimer’s disease/dementia ( r g for the genetic correlation of two traits = 0.81; FDR P = 2.79 × 10 −7 ), similar to the Marioni et al. family proxy analyses 26 , which found maternal genetic correlation with Alzheimer’s disease to be higher than that for paternal Alzheimer’s disease ( r g = 0.91 and 0.66, respectively). There is substantial overlap between these estimates, as the Marioni et al. analyses include the 2013 IGAP summary statistics and employed the same UK Biobank variable that we used for r g estimates with maternal history of dementia. We also find significant negative correlation between Alzheimer’s disease and multiple measures of educational attainment (for example, college completion, r g = −0.24; years of schooling, r g range = −0.19 to −0.24; cognitive scores, r g = −0.24 and −0.25) (FDR P < 0.05), supporting the theory that a greater cognitive reserve could help protect against development of LOAD 117 . The extent to which socioeconomic, environmental, or cultural factors contribute to the correlation between educational attainment and risk for Alzheimer’s disease is unknown, but research shows dementia risk to be associated with lower socioeconomic status, independently of education status 118 , 119 . 
We also found negative correlations at P < 0.05 with multiple measures of cardiovascular health (that is, family history of high blood pressure and heart disease and vascular/heart problems) and diabetes (that is, fasting proinsulin, basal metabolic rate and fasting insulin), supporting previous research suggesting that use of blood pressure and diabetic medications may reduce the risk of Alzheimer’s disease 120 . In fact, use of blood pressure medication does show a negative genetic correlation with Alzheimer’s disease in our study ( r g = −0.12; P = 0.035), although this result does not survive FDR correction. These and other top results from this analysis (for example, body mass index, height; see Supplementary Table 31 for a full list of other nominally significant correlations) have been linked to Alzheimer’s disease previously 116 , 120 , 121 , 122 , 123 , 124 , 125 , 126 , 127 , either through suggestive or significant genetic or epidemiological associations (see Kuzma et al. 128 for a recent review), but the multiple measures here support and emphasize their genetic correlation with LOAD and highlight the possible genetic pleiotropy or co-morbidity of these traits with pathology of LOAD.
Discussion
In conclusion, our work identifies five new genome-wide associations for LOAD and shows that GWAS data combined with high-quality imputation panels can reveal rare disease risk variants (for example, TREM2 ). The enrichment of rare variants in pathways associated with Alzheimer’s disease indicates that additional rare variants remain to be identified, and larger samples and better imputation panels will facilitate identifying them. While these rare variants may not contribute substantially to the predictive value of genetic findings, they will enhance the understanding of disease mechanisms and potential drug targets.
Discovery of the risk genes at genome-wide loci remains challenging, but we demonstrate that converging evidence from existing and new analyses can prioritize risk genes. We also show that APP metabolism is associated with not only early-onset Alzheimer’s disease but also LOAD, suggesting that therapies developed by studying early-onset families could also be applicable to the more common late-onset form of the disease. Pathway analysis showing that tau is involved in LOAD supports recent evidence that tau may play an early pathological role in Alzheimer’s disease 129 , 130 , 131 and confirms that therapies targeting tangle formation/degradation could potentially affect LOAD. Finally, our fine-mapping analyses of HLA and genetic correlation results point to LOAD’s shared genetic architecture with many immune-mediated and cognitive traits, suggesting that research and interventions that elucidate mechanisms behind these relationships could also yield fruitful therapeutic strategies for LOAD.
URLs
ADGC Reference Dataset: ; AlzBase: ; Brain RNA-seq Database: ; Enrichr: ; exSNP: ; NESDA eQTL catalog: ; FUMA: ; HLA-PheWas catalog: ; INFERNO: ; LD Hub: ; STRING: .
Methods
Samples
All Stage 1 meta-analysis samples are from four consortia: ADGC, CHARGE, EADI, and GERAD/PERADES. Summary demographics of all 46 case-control studies from the four consortia are described in Supplementary Tables 1 and 2 . Written informed consent was obtained from study participants or, for those with substantial cognitive impairment, from a caregiver, legal guardian, or other proxy. Study protocols for all cohorts were reviewed and approved by the appropriate institutional review boards. Further details of all cohorts can be found in the Supplementary Note .
Pre-imputation genotype chip quality control
Standard quality control was performed on all datasets individually, including exclusion of individuals with low call rate, individuals with a high degree of relatedness, and variants with low call rate. Individuals with non-European ancestry according to principal components analysis of ancestry-informative markers were excluded from further analysis.
Imputation and pre-analysis quality control
Following genotype chip quality control, each dataset was phased and imputed to the 1,000 Genomes Project (phase 1 integrated release 3, March 2012) 132 using SHAPEIT/IMPUTE2 133 , 134 or MaCH/Minimac 135 , 136 software (Supplementary Table 3 ). All reference population haplotypes were used for the imputation, as this method improves accuracy of imputation for low-frequency variants 137 . Common variants (MAF ≥ 0.01) with an r 2 or an information measure <0.40 from MaCH and IMPUTE2 were excluded from further analyses. Rare variants (MAF < 0.01) with a ‘global’ weighted imputation quality score of <0.70 were also excluded from analyses. This score was calculated by weighting each variant’s MACH/IMPUTE2 imputation quality score by study sample size and combining these weighted scores for use as a post-analysis filter. We also required the presence of each variant in 30% of cases and 30% of controls across all datasets.
Stage 1 association analysis and meta-analysis
Stage 1 single variant-based association analysis employed an additive genotype model adjusting for age (defined as age-at-onset for cases and age-at-last exam for controls), sex, and population substructure using principal components 138 . The score test was implemented on all case-control datasets. This test is optimal for meta-analysis of rare variants due to its balance between power and control of type 1 error 139 .
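The post-imputation variant filters described above can be sketched as follows. This is a minimal illustration: the function names are ours, the per-study quality scores are hypothetical, and we assume the 0.40 threshold for common variants is applied to each study's score:

```python
def global_quality(scores, sample_sizes):
    """Weight each study's imputation quality score by its sample size and
    combine into a single 'global' score for the variant (weighted mean)."""
    total_n = sum(sample_sizes)
    return sum(q * n for q, n in zip(scores, sample_sizes)) / total_n

def keep_variant(maf, scores, sample_sizes):
    """Apply the filters described above: common variants (MAF >= 0.01)
    need r2/info >= 0.40 (assumed per study); rare variants need a
    sample-size-weighted global score >= 0.70."""
    if maf >= 0.01:
        return all(q >= 0.40 for q in scores)
    return global_quality(scores, sample_sizes) >= 0.70
```

For example, a rare variant with per-study scores 0.8 (n = 100) and 0.6 (n = 300) has a global score of 0.65 and would be excluded, while the same scores would pass the common-variant filter.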
Family datasets were tested using GWAF 140 , with generalized estimating equations (GEE) implemented for common variants (MAF ≥ 0.01), and a general linear mixed effects model (GLMM) implemented for rare variants (MAF < 0.01), per our preliminary data showing that the behavior of the test statistics for GEE was fine for common variants but inflated for rare variants, while GLMM controlled this rare-variant inflation. Variants with regression coefficient |β| > 5 or P value equal to 0 or 1 were excluded from further analysis. Within-study results for Stage 1 were meta-analyzed in METAL 141 using an inverse-variance-based model with genomic control. The meta-analysis was split into two separate analyses according to the study sample size, with all studies being included in the analysis of common variants (MAF ≥ 0.01), and only studies with a total sample size of 400 or greater being included in the rare-variant (MAF < 0.01) analysis. See the Supplementary Note for further details of the meta-analysis methods.
Stage 1 summary statistics quality control and analysis
Genomic inflation was calculated for lambda (λ) in the GenABEL package 142 . In addition, we performed LDSC regression via LD Hub v.1.9.0 (refs. 12 , 13 ) to calculate the LD score regression intercept and derive a heritability estimate for the inverse-variance weighted meta-analysis summary statistics. The APOE region (Chr19:45,116,911–46,318,605) was removed to calculate the intercept. Removal of the APOE region reduced the heritability estimate slightly from 0.071 (s.e.m. = 0.011) to 0.0637 (s.e.m. = 0.009). LDSC was also employed via the LD Hub web server to obtain genetic correlation estimates (rg) 116 between LOAD and a wide range of other disorders, diseases and human traits, including 518 UK BioBank traits 143 .
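The genomic inflation factor λ mentioned above (computed here in GenABEL) is the median observed 1-df chi-square statistic divided by its expected median under the null. A standard-library sketch that recovers chi-square statistics from two-sided P values, as a generic illustration rather than GenABEL's own code:

```python
from statistics import NormalDist, median

_ND = NormalDist()
# Median of the chi-square(1) distribution (~0.4549), derived from the
# normal quantile so the sketch needs only the standard library.
CHI2_1DF_MEDIAN = _ND.inv_cdf(0.75) ** 2

def genomic_lambda(p_values):
    """Genomic inflation factor: median observed chi-square statistic
    (recovered from two-sided P values via z = Phi^-1(1 - p/2), chi2 = z^2)
    divided by the expected null median."""
    chi2 = [_ND.inv_cdf(1 - p / 2) ** 2 for p in p_values]
    return median(chi2) / CHI2_1DF_MEDIAN
```

A well-calibrated set of P values with median 0.5 yields λ = 1; systematically small P values (from stratification or confounding) push λ above 1, which is what the genomic-control correction in the meta-analysis adjusts for.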
UK BioBank is a large long-term study (~500,000 volunteers aged 40 to 69) begun in 2006 in the United Kingdom, which is investigating the contributions of genetic predisposition and environmental exposure (that is, nutrition, lifestyle, and medications) to the development of disease. While volunteers in the study are generally healthier than the overall United Kingdom population 144 , its large size and comprehensive data collection make the study an invaluable resource for researchers looking to interrogate the combined effect of genetics and environmental factors on disease. Before analyses in LD Hub, we removed all SNPs with extremely large effect sizes including the MHC (Chr6:26,000,000–34,000,000) and APOE region, as outliers can overly influence the regression analyses. A total of 1,180,989 variants were used in the correlation analyses. Statistical significance of the genetic correlations was estimated using 5% Benjamini−Hochberg FDR corrected P values. GCTA COJO 27 was used to conduct conditional analysis of the Stage 1 summary statistics, with 28,730 unrelated individuals from the ADGC as a reference panel for calculation of LD. See URLs for methods for creation of the ‘ADGC reference dataset’.
Stage 2 and 3 genotyping, quality control, and analysis
Stage 2 genotypes were determined for 8,362 cases and 10,483 controls (Supplementary Table 4 ). 1,633 variants from the I-select chip were located in the 24 genome-wide loci (defined by the LD blocks of the sentinel variants; excluding the APOE region), with an average of 68 variants per locus. The most well-covered loci were HLA-DRB1 , MS4A2 , and PICALM (763, 202, and 156 variants available, respectively); the least were MAF , ADAMTS1 , and INPP5D (0, 4, and 5 variants, respectively).
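The 5% Benjamini−Hochberg FDR correction used for the genetic correlations (and elsewhere in this paper) can be sketched as a textbook implementation; this is a generic sketch, not the study's code:

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg adjusted P values (q values), in input order.
    The adjusted value for rank k is min over j >= k of p_(j) * m / j,
    enforced here by a running minimum from the largest P value down."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(reversed(order)):
        rank = m - k
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

Hypotheses whose adjusted value falls at or below 0.05 are declared significant at a 5% false discovery rate.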
Stage 3A was conducted for variants selected as novel loci from meta-analyses of Stages 1 and 2 with P < 5 × 10 −7 (9 variants) and variants that were previously significant ( P < 5 × 10 −8 ) that were not genome-wide significant after Stages 1 and 2 (2 variants) (4,930 cases and 6,736 controls) (Supplementary Table 5 ). Variants were genotyped using TaqMan. Stage 3B, which combined samples from Stages 2 and 3A, included variants with MAF < 0.05 and P < 1 × 10 −5 or variants with MAF ≥ 0.05 and P < 5 × 10 −6 in novel loci not covered in the 2013 I-select genotyping 1 (13,292 cases and 17,219 controls) (Supplementary Table 7 ). See the Supplementary Note for details on selection of variants for Stage 3B follow-up genotyping. For Stages 1, 2, and 3, samples did not overlap. Per-sample quality checks for genetic sex and relatedness were performed in PLINK. Sex mismatches or individuals showing a high degree of relatedness (identical-by-descent value of 0.98 or greater) were removed from the analysis. A panel of ancestry-informative markers was used to perform principal component analysis with SMARTPCA from EIGENSOFT 4.2 software 145 , and individuals with non-European ancestry were excluded. Variant quality control was also performed separately in each country including removal of variants missing in more than 10% of individuals, having a Hardy−Weinberg P value in controls lower than 1 × 10 −6 or a P value for missingness between cases and controls lower than 1 × 10 −6 . Per-study analysis for Stage 2 and Stage 3 followed the same analysis procedures described for Stage 1, except for covariate adjustments per cohort, where all analyses were adjusted on sex and age apart from the Italian, Swedish, and Gr@ACE cohorts, which were also adjusted for principal components. Within-study results were meta-analyzed in METAL 141 using an inverse-variance-based model.
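The inverse-variance-based model that METAL applies weights each study's effect estimate by the reciprocal of its squared standard error. A generic fixed-effects sketch of that combination rule (an illustration of the method, not METAL itself):

```python
from math import sqrt
from statistics import NormalDist

def ivw_meta(betas, ses):
    """Fixed-effects inverse-variance-weighted meta-analysis of per-study
    effect sizes (betas) and standard errors (ses). Returns the pooled
    beta, its standard error, and a two-sided P value from the z statistic."""
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = 1.0 / sqrt(sum(weights))
    z = beta / se
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return beta, se, p
```

Two equally precise studies are averaged equally, and the pooled standard error shrinks by 1/√2 relative to a single study, which is why the staged meta-analyses gain power as cohorts accumulate.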
Characterization of gene(s) and non-coding features in associated loci
We determined the base-pair boundaries of the search space for potential gene(s) and non-coding features in each of the 24 associated loci (excluding APOE ) using the ‘proxy search’ mechanism in LDLink 146 . LDLink uses 1,000 Genomes genotypes to calculate LD for a selected population; in our case all five European populations were selected (population codes CEU, TSI, FIN, GBR, and IBS). The boundaries for all variants in LD ( r 2 ≥ 0.5) with the top associated variant from the Stage 2 meta-analysis for each region ±500 kb of the ends of the LD blocks (as eQTL controlled genes are typically less than 500 kb from their controlling variant 147 ) were input into the UCSC genome browser’s ‘Table Browser’ for RefSeq 148 and GENCODEv24lift37 149 genes at each associated locus. The average size of the LD blocks was 123 kb.
Identification of potentially causal coding or splicing variants
To identify deleterious coding or splicing variants that may represent causal variants for our genome-wide loci, we first used SNIPA 150 to identify variants in high LD (defined as r 2 > 0.7) with the sentinel variants of the 24 genome-wide loci (excluding APOE ) ( n = 1,073). The sentinel variants were defined as the variants with the lowest P value in each genome-wide locus. We then used Ensembl VEP 151 for annotation of the set of sentinel variants and their proxies. We used BLOSUM62 (ref. 152 ), SIFT 153 , Polyphen-2 (ref. 154 ), CADD 155 , Condel 156 , MPC 157 , and Eigen 158 to predict the pathogenicity of protein-altering exonic variants and MaxEntScan to predict the splicing potential of variants. Splicing variants with high splicing potential according to MaxEntScan 159 and protein-coding variants predicted to be deleterious by two or more programs were considered to be potentially causal variants for a locus.
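The r 2 thresholds above (≥ 0.5 for locus boundaries, > 0.7 for proxy variants) can be estimated, in the simplest unphased setting, as the squared Pearson correlation of 0/1/2 genotype dosages at two variants. This is only a sketch of the quantity being thresholded; LDLink and SNIPA compute LD from 1,000 Genomes haplotypes, so this dosage-based estimate is an approximation:

```python
def ld_r2(a, b):
    """Squared Pearson correlation of genotype dosages (0/1/2) at two
    variants measured on the same individuals: a standard r^2 LD estimate.
    Assumes both variants are polymorphic (nonzero variance) in the sample."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / n
    var_b = sum((y - mean_b) ** 2 for y in b) / n
    return cov * cov / (var_a * var_b)
```

A variant with ld_r2 ≥ 0.5 against the sentinel would fall inside the locus definition used above; one with ld_r2 > 0.7 would additionally count as a proxy for the coding/splicing annotation step.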
It should be noted that while we do include rare variants from imputation in our analyses, we may be missing many rare causal variants in this study.
Identification of genes with rare-variant burden via gene-based testing
We used the summary statistics results of a large whole-exome sequencing (WES) study of LOAD, the Alzheimer’s Disease Sequencing Project (ADSP) case-control study ( n = 5,740 LOAD cases and 5,096 cognitively normal controls of NHW ancestry), to identify genes within our genome-wide loci that may contribute to the association signal through rare deleterious coding, splicing or LOF variants. The individuals in the ADSP study largely overlap with individuals in the ADGC and CHARGE cohorts included in our Stage 1 meta-analysis. All 400 protein-coding genes within our LD-defined genome-wide loci were annotated with the gene-based results from this study, and the results were corrected using a 1% FDR P as a cutoff for significance. Complete details of the analysis can be found in Bis et al. 49 and the Supplementary Note .
Regulatory variant and eQTL analysis
To identify potential functional risk variants and genes at each associated locus, we first annotated a list of prioritized variants from the 24 associated loci (excluding APOE ) ( n = 1,873). This variant list combined variants in LD with the sentinel variants ( r 2 ≥ 0.5) using INFERNO 160 LD expansion ( n = 1,339) and variants with suggestive significance ( P < 10 −5 ) and LD ( r 2 ≥ 0.5) with the sentinel variants for the 24 associated loci (excluding APOE ) ( n = 1,421 variants). We then identified variants with regulatory potential in this set of variants using four programs that incorporate various annotations to identify likely regulatory variants: RegulomeDB 56 , HaploReg v.4.1 (refs. 57 , 161 ), GWAS4D 59 , and the Ensembl Regulatory Build 58 . We used the ChromHMM (core 15-state model) as ‘source epigenomes’ for the HaploReg analyses.
We used immune (Monocytes-CD14+, GM12878 lymphoblastoid, HSMM myoblast) and brain (NH-A astrocytes) cell types for the Ensembl Regulatory Build analyses. We then used the list of 1,873 prioritized variants to search for genes functionally linked via eQTLs in LOAD-relevant tissues, including various brain and blood tissue types and all immune-related cell types, most specifically myeloid cells (macrophages and monocytes) and B-lymphoid cells, which are cell types implicated in LOAD and neurodegeneration by a number of recent studies 14 , 45 , 162 , 163 . While their specificity may be lower for identifying Alzheimer’s disease risk eQTLs, we included whole blood cell studies in our Alzheimer’s disease–relevant tissue class due to their high correlation of eQTLs with Alzheimer’s disease–relevant tissues (70% with brain 164 ; 51–70% for monocytes and lymphoblastoid cell lines 165 ) and their large sample sizes that allow for increased discovery power. See the Supplementary Note for details on the eQTL databases and studies searched, and Supplementary Table 13 for sample sizes of each database/study. Formal co-localization testing of our summary Stage 1 results was conducted using (1) COLOC 166 via INFERNO and (2) Summary Mendelian Randomization (SMR)-HEIDI analysis 167 . The approximate Bayes factor (ABF), which was used to assess significance in the INFERNO COLOC analysis, is a summary measure that provides an alternative to the P value for the identification of associations as significant. SMR-HEIDI analysis, which employs a heterogeneity test (the HEIDI test) to distinguish pleiotropy or causality (a single genetic variant affecting both gene expression and the trait) from linkage (two distinct genetic variants in LD, one affecting gene expression and one affecting the trait), was also employed for co-localization analysis. 
Genes located less than 1 Mb from the GWAS sentinel variants that pass a 5% Benjamini–Hochberg FDR-corrected SMR P-value significance threshold and a HEIDI P-value > 0.05 threshold were considered significant. The Westra eQTL 168 summary data and Consortium for the Architecture of Gene Expression (CAGE) eQTL summary data were used for analysis. These datasets, conducted in whole blood, are large eQTL studies (Westra: discovery phase n = 5,311, replication phase n = 2,775; CAGE: n = 2,765), and while there is some overlap in samples between the two datasets, CAGE provides finer coverage. The ADGC reference panel dataset referenced above for GCTA COJO analysis was used for LD calculations. Human brain gene expression analyses We also evaluated gene expression of all candidate genes in the associated loci (see Supplementary Table 8 for a complete list of genes searched), using differential Alzheimer’s disease gene expression results from AlzBase 31 , brain tissue expression from the Brain RNA-seq Database 32 , 33 (see URLs ), and the HuMi_Aged gene set 34 , a set of genes preferentially expressed in aged human microglia established through RNA-seq expression analysis of aged human microglial cells from ten post-mortem brains. AlzBase includes transcription data from brain and blood from aging, non-dementia, mild cognitive impairment, early-stage Alzheimer’s disease, and late-stage Alzheimer’s disease. See AlzBase (see URLs ) for a complete list of studies included in the search. Correlation values for the BRAAK stage expression were taken from the Zhang et al. 30 study of 1,647 post-mortem brain tissues from LOAD patients and non-demented subjects. Pathway analysis Pathway analyses were performed with MAGMA 61 , which performs SNP-wise gene analysis of summary statistics with correction for LD between variants and genes to test whether sets of genes are jointly associated with a phenotype (that is, LOAD), compared to other genes across the genome. 
Adaptive permutation was used to produce an empirical P value and an FDR-corrected q value. Gene sets used in the analyses were from GO 169 , 170 , KEGG 171 , 172 , REACTOME 173 , 174 , BIOCARTA, and MGI 175 pathways. Analyses were restricted to gene sets containing between 10 and 500 genes, a total of 10,861 sets. Variants were restricted to common variants (MAF ≥ 0.01) and rare variants (MAF < 0.01) only for each analysis, and separate analyses for each model included and excluded the APOE region. Analyses were also performed after removal of all genome-wide-significant genes. Primary analyses used a 35-kb upstream/10-kb downstream window around each gene in order to capture potential regulatory variants for each gene, while secondary analyses were run using a 0-kb window 176 . To test for significant correlation between common and rare-variant gene results, we performed a gene property analysis in MAGMA, regressing the gene-wide association statistics from rare variants on the corresponding statistics from common variants, correcting for LD between variants and genes using the ADGC reference panel. The Aβ-centered network pathway analysis used a curated list of 32 Aβ-related gene sets and all 335 genes combined (see Campion et al. 64 for details). The combined dataset of 28,730 unrelated individuals from the ADGC referenced in the GCTA COJO analysis was used as a reference set for LD calculations in these analyses. Validation of prioritization method Evaluation of the prioritization of the risk genes in genome-wide loci was done using STRING 177 , and Jensen Diseases 178 , Jensen Tissues 179 , dbGAP gene sets, and the ARCHS4 180 resource via the EnrichR 181 tool. We evaluated both the 400 genes set list and a list of 53 genes with priority score ≥ 5 (adding in APOE to both lists as the top gene in the APOE locus) using the standard settings for both STRING and EnrichR. 
We used the q value, which is the adjusted P value using the Benjamini–Hochberg FDR method with a 5% cutoff, for correction for multiple hypothesis testing. We also performed ‘differentially expressed gene (DEG)’ sets analysis via FUMA 182 . These analyses were performed in order to assess whether our 53 prioritized genes were significantly differentially expressed in certain GTEx v.7 (ref. 102 ; 30 general tissues and 53 specific tissues) or BrainSpan tissues (11 tissue developmental periods with distinct DEG sets ranging from early prenatal to middle adulthood) 103 . FUMA defines DEG sets by calculating a two-sided t -test per tissue versus all remaining tissue types or developmental periods. Genes with a Bonferroni-corrected P < 0.05 and absolute log(fold change) ≥ 0.58 were considered DEGs. Input genes were tested against each of the DEG sets using the hypergeometric test. Significant enrichment was defined by Bonferroni-corrected P ≤ 0.05. HLA region analysis Non-familial datasets from the ADGC, EADI and GERAD consortia were used for HLA analysis. After imputation quality control, a total of 14,776 cases and 23,047 controls were available for analysis (Supplementary Table 27 ). Within ADGC, the GenADA, ROSMAP, TARC1, TGEN2, and a subset of the UMCWRMSSM datasets were not imputed, as Affymetrix genotyping arrays are not supported by the imputation software. Imputation of HLA alleles Two-field resolution HLA alleles were imputed using the R package HIBAG v.1.4 (ref. 183 ) and the NHW-specific training set. This software uses specific combinations of variants to predict HLA alleles. Alleles with an imputation posterior probability lower than 0.5 were considered undetermined, as recommended by the HIBAG developers. The HLA-A, HLA-B, and HLA-C class I genes, and the HLA-DPB1, HLA-DQA1, HLA-DQB1, and HLA-DRB1 class II genes were imputed. Individuals with more than two undetermined HLA alleles were excluded. Statistical analysis All analyses were performed in R 184 . 
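The Benjamini–Hochberg FDR adjustment and the hypergeometric enrichment test used above can be illustrated with a minimal, self-contained sketch. The published analyses were performed in R (`p.adjust`, FUMA); this Python version is only a compact restatement of the two procedures, and the example numbers are arbitrary.

```python
# Minimal sketch of (1) Benjamini-Hochberg adjusted p-values, matching
# R's p.adjust(method = "BH"), and (2) the upper-tail hypergeometric
# enrichment test that FUMA applies to DEG sets. Illustrative only.
from math import comb

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for rank_from_top, i in enumerate(reversed(order)):
        rank = m - rank_from_top               # 1-based rank of p-value i
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

def hypergeom_enrichment_p(N, K, n, k):
    """P(overlap >= k) when n input genes are drawn from N genes,
    K of which belong to the gene set (hypergeometric upper tail)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

print(bh_adjust([0.01, 0.04, 0.03, 0.002]))
print(hypergeom_enrichment_p(N=20000, K=300, n=50, k=5))
```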
Associations of HLA alleles with disease were tested using logistic regressions, adjusting for age, sex, and principal components as specified above for single-variant association analysis. Only HLA alleles with a frequency higher than 1% were analyzed. Haplotype estimations and association analyses with disease were performed using the ‘haplo.glm’ function from the haplo.stats R package 185 with age, sex, and principal components as covariates. Analysis was performed on two-locus and three-locus haplotypes of the HLA-DQA1, HLA-DQB1, and HLA-DRB1 genes. Haplotypes with a frequency below 1% were excluded from the analysis. Considering the high LD in the MHC region, only haplotypes predicted with posterior probabilities higher than 0.2 were considered for analysis. Meta-analysis P values were computed using an inverse-variance-based model as implemented in the METAL software 141 . For haplotype analyses, only individuals with no undetermined HLA alleles and only datasets with more than 100 cases or controls were included. Adjustments for significant HLA variants and HLA alleles were performed by introducing the variant or alleles as covariates in the regression models. Adjusted P values were computed using the FDR method with the R ‘p.adjust’ function, and applied to the meta-analysis P values. The FDR threshold was set to 10%. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Genome-wide summary statistics for the Stage 1 discovery have been deposited in The National Institute on Aging Genetics of Alzheimer’s Disease Data Storage Site (NIAGADS)—a NIA/NIH-sanctioned qualified-access data repository—under accession NG00075 . Stage 1 data (individual level) for the GERAD cohort can be accessed by applying directly to Cardiff University. Stage 1 ADGC data are deposited in NIAGADS. 
Stage 1 CHARGE data are accessible by applying to dbGaP for all US cohorts and to Erasmus University for Rotterdam data. AGES primary data are not available owing to Icelandic laws. Stage 2 and Stage 3 primary data are available upon request. Change history 15 August 2019 An amendment to this paper has been published and can be accessed via a link at the top of the paper. | Analysis of genetic data from more than 94,000 individuals has revealed five new risk genes for Alzheimer's disease, and confirmed 20 known others. An international team of researchers also reports for the first time that mutations in genes specific to tau, a hallmark protein of Alzheimer's disease, may play an earlier role in the development of the disease than originally thought. These new findings support developing evidence that groups of genes associated with specific biological processes, such as cell trafficking, lipid transport, inflammation and the immune response, are "genetic hubs" that are an important part of the disease process. The study, which was funded in part by the National Institute on Aging (NIA) and other components of the National Institutes of Health, follows results from 2013. It will be published online February 28, 2019 in the journal Nature Genetics. "This continuing collaborative research into the genetic underpinnings of Alzheimer's is allowing us to dig deeper into the complexities of this devastating disease," said Richard J. Hodes, M.D., director of the NIA. "The size of this study provides additional clarity on the genes to prioritize as we continue to better understand and target ways to treat and prevent Alzheimer's." The researchers, members of the International Genomic Alzheimer's Project (IGAP), analyzed both rare and common gene variants in 94,437 individuals with late onset Alzheimer's disease, the most common form of dementia in older adults. 
IGAP is made up of four consortia in the United States and Europe that have been working together since 2011 on genome-wide association studies (GWAS) involving thousands of DNA samples and shared datasets. GWAS are aimed at detecting variations in the genome that are associated with Alzheimer's. Understanding genetic variants is helping researchers define the molecular mechanisms that influence disease onset and progression. In addition to confirming the known association of 20 genes with risk of Alzheimer's and identifying five additional Alzheimer's-associated genes, these genes were analyzed to see what cellular pathways might be implicated in the disease process. The pathway analysis implicated the immune system, lipid metabolism and amyloid precursor protein (APP) metabolism. Mutations in the APP gene have been shown to be directly related to early onset Alzheimer's. The present study, done in late onset Alzheimer's subjects, suggests that variants affecting APP and amyloid beta protein processing are associated with both early-onset autosomal dominant Alzheimer's and with late onset Alzheimer's. In addition, for the first time, the study implicated a genetic link to tau binding proteins. Taken together, data suggest that therapies developed by studying subjects with early-onset disease could also be applied to the late-onset form of Alzheimer's. The research was led by an international team of experts including Brian Kunkle, Ph.D. and Margaret Pericak-Vance, Ph.D., from the Miller School of Medicine's John P. Hussman Institute for Human Genomics at the University of Miami, and Benjamin Grenier-Boley, Ph.D. and Jean-Charles Lambert, Ph.D., from INSERM, Lille, France. 
Once the functions of the five genes newly associated with Alzheimer's—IQCK, ACE, ADAM10, ADAMTS1 and WWOX—are understood and examined in conjunction with the functions of the 20 known genes, researchers will be in a better position to identify where the genetic hubs of Alzheimer's are clustering. Armed with these findings, researchers can look more deeply into these genetic hubs to reveal disease mechanisms and potential drug targets. A key to these discoveries was the sample size, the largest to date for this kind of Alzheimer's study. A large sample is especially important to find rare genes that might be involved with a disease. "Having more and more samples in GWAS data sets is like adding more and more pixels to a photograph—it helps researchers see details that they otherwise wouldn't and helps them decide where to focus further study," explained Marilyn Miller, Ph.D., director of the Genetics of Alzheimer's Disease program in the Division of Neuroscience at NIA. "If the genes only appear in one out of ten thousand people, you need to find several samples containing those genes for results to be statistically significant." Miller also emphasized the collaborative resources that made these discoveries possible. In addition to IGAP, the study resulted from open data sharing and coordination among the Alzheimer's Disease Research Centers (funded under a variety of awards), the National Alzheimer's Coordinating Center, the NIA Genetics of Alzheimer's Disease Data Storage Site, the National Cell Repository for Alzheimer's Disease, Alzheimer's Disease Genetics Consortium, the CHARGE Consortium, and the Late-onset Alzheimer's Disease Family Study. | 10.1038/s41588-019-0358-2 |
Physics | Fermi's ground-breaking figure: How the radial wave function transformed physics | Christopher R. Gould et al, Fermi's favorite figure: the history of the pseudopotential concept in atomic physics and neutron physics, The European Physical Journal H (2022). DOI: 10.1140/epjh/s13129-022-00042-z | https://dx.doi.org/10.1140/epjh/s13129-022-00042-z | https://phys.org/news/2022-09-fermi-ground-breaking-figure-radial-function.html | Abstract In the early 1930’s, Fermi wrote two papers in which he introduced the concepts of “scattering length” and “pseudopotential.” Since that time, these terms have become universally associated with low energy scattering phenomena. Even though the two papers are very different—one in atomic physics, the other in neutron physics—a simple figure underlies both. The figure appears many times in Fermi’s work. We review how the two papers came about and briefly discuss modern developments of the work that Fermi initiated with these two remarkable papers. Working on a manuscript? Avoid the common mistakes 1 Introduction Close to ninety years ago Fermi wrote papers defining two terms now universally used to describe scattering processes: “scattering length” and “pseudopotential.” The papers discussed phenomena that could hardly be more different. The 1934 paper [ 9 ] focused on atomic spectroscopy, explaining the puzzling pressure shift of spectral lines in highly excited alkali metal atoms [ 1 ]. The 1936 paper [ 11 ] focused on the scattering of thermal neutrons from free protons and from protons bound in materials. But as Segre notes, p706, in his commentaries to Fermi’s collected papers [ 13 ] “Fermi had a great insight for discerning analogies in phenomena which were completely unrelated.” Both papers do indeed have a common feature: a drawing, almost a sketch, of a radial wave function inside and outside a potential well. 
A version of the figure appears thirteen times in the collected works, six times in his nuclear physics notes [ 12 ] and became, as Segre notes further “… a sort of trademark in many of his (Fermi’s) later theoretical studies.” In our paper here, we discuss the history of these two papers, the way the figure was utilized by Fermi, and how his thinking evolved between the first and the second publications. We think it is a useful exercise to go back and be reminded of the creative processes of one of the twentieth century’s most innovative physicists. While both papers were first written in Italian, the second was subsequently translated into English [ 13 ] and is better known, to the extent that often the neutron paper is cited as the genesis of the idea of the pseudopotential. But primacy of the original ideas does lie with the first paper from atomic physics. The methodology he developed there for analyzing spectroscopic data continued to be cited into the 1980’s [ 20 ]. And interestingly, most of the applications today of the pseudopotential concept lie in the field of atom–atom interactions in ultracold systems. During this period, Fermi also wrote his primary papers [ 10 ] explaining beta decay. CN Yang relates [ 28 ] that Wigner considered this to be Fermi’s most important contribution to physics. The figure does not appear there of course. But it does have a feature these other two papers have: squeezing what you do not know into an increasingly smaller volume—eventually a delta-function—and characterizing your lack of knowledge by a single number, \(G_F\) in beta decay, the scattering length a , for low energy scattering processes. 2 Fermi’s paper of 1934 The goal of the paper was to explain shifts to higher energy of alkali metal absorption spectral lines due to the interaction with “foreign” neutral atoms. Of particular interest were transitions from very high lying levels, for example, 3S–30P in sodium. 
Simple Bohr model scaling puts the radius of the valence electron orbit at ~ 500 Å. At a gas pressure of one atmosphere, the electron can then collide with of order 13,000 foreign atoms in the atom volume. At this excitation energy, the valence electron is barely bound, with kinetic energy ~ 0.015 eV and de Broglie wavelength ~ 100 Å. Atomic spectroscopy was a strength of Fermi’s colleagues at Rome. In late 1933, Amaldi and Segre [ 1 ] had investigated pressure broadening of sodium spectral lines due to “perturber” atoms and had found puzzling results. They had expected the foreign gas to destroy the high terms of the absorption series, spreading the lines out so they were no longer clearly visible. That did not happen. Furthermore, they expected the spectral lines to always shift to the red. That did not happen either. In argon, the lines did shift red. But in nitrogen, they did not shift at all, and in hydrogen and helium, they shifted blue. The shift to the red was based on the idea that the foreign gas atoms would be polarized by the residual field + e of the core of the alkali metal atom (Z protons, Z-1 electrons). This would lower the energy of the outer electron to account for the energy stored in the polarization electric field. A classical analogy would be a capacitor consisting of two spherical shells, with a dielectric consisting of foreign gas atoms in between the two shells. The energy is \(U=\frac{1}{2}{Q}^{2}/C\) and with Q fixed as C increases by a factor k , the dielectric constant, the energy U of the system goes down. Fermi introduced his own version of the polarization model. Taking advantage of a calculation by his colleague Wick [ 27 ], he confirmed that while the polarization shift had about the right magnitude, the effect was always to the red, lower in energy. 
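The order-of-magnitude estimates quoted at the start of this section (orbit radius ~500 Å, kinetic energy ~0.015 eV, de Broglie wavelength ~100 Å for the n = 30 valence electron) follow from simple Bohr-model scaling. A quick numerical check, assuming hydrogen-like scaling with unit effective charge (an approximation for the sodium valence electron):

```python
# Bohr-model check of the order-of-magnitude estimates quoted above for a
# valence electron in an n = 30 state (hydrogen-like scaling, Z_eff = 1).
import math

A0 = 0.529177e-10      # Bohr radius, m
RYD = 13.6057          # Rydberg energy, eV
H = 6.62607e-34        # Planck constant, J s
ME = 9.10938e-31       # electron mass, kg
EV = 1.60218e-19       # J per eV

n = 30
radius = n**2 * A0                                  # orbit radius ~ 500 Angstrom
energy = RYD / n**2                                 # binding ~ kinetic energy, ~0.015 eV
wavelength = H / math.sqrt(2 * ME * energy * EV)    # de Broglie ~ 100 Angstrom

print(f"radius     = {radius * 1e10:.0f} Angstrom")
print(f"energy     = {energy * 1e3:.1f} meV")
print(f"wavelength = {wavelength * 1e10:.0f} Angstrom")
```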
With the failure of the polarization model, Fermi then proceeded to look for an alternate explanation, this time focusing on the behavior of the valence electron as it travels slowly in its orbit. Denoting R as the magnitude of the distance of the electron from the core of the alkali atom, and r as the magnitude of the distance of the electron from the perturbing neutral atom, he writes the Schrodinger equation for the wave function of the electron as: $$\nabla^2 \psi(r) + \left(\frac{8\pi^2 m}{h^2}\right)\left(\epsilon - U(R)\right)\psi(r) - \left(\frac{8\pi^2 m}{h^2}\right)\sum_i V_i\,\psi(r) = 0$$ (1) Here, \(V_i(r)\) is the potential well due to perturber atom i , and U ( R ) is the weak and slowly varying Coulomb attraction of the alkali atom core on the valence electron. The energy \(\epsilon\) is the full energy of the alkali atom. In principle, R and r are vectors. But he is soon going to say U is essentially constant, and therefore, he can work with a one-dimensional Schrodinger equation in r alone. To solve the equation with the unknown potential \(V_i(r)\) , Fermi takes advantage of the fact that for these very low energy electrons, only s-wave scattering is relevant. For the case of known potentials, the quantum theory of the scattering of slow electrons by atomic gases (the Ramsauer–Townsend effect) had already been studied in detail by Faxen and Holtsmark [ 8 ]. For the unknown potential, Fermi chooses to use a spherically symmetric potential of very short range ρ . He notes that because the atoms are neutral, the well is very narrow in relation to the de Broglie wavelength of the electrons. He now introduces a new function, \(\overline{\psi}\) , that is a spatial average of the true wave function \(\psi\) over a distance that is large enough to contain many perturbers, but small enough so U (the weak Coulomb potential) can be considered constant. 
He argues that \(\overline{\psi}\) reproduces the general trend of \(\psi\) but does not have the irregularities associated with the narrow but deep potential wells. Averaging yields a new equation: $$\nabla^2 \overline{\psi} + \left(\frac{8\pi^2 m}{h^2}\right)\left(\epsilon - U\right)\overline{\psi} - \left(\frac{8\pi^2 m}{h^2}\right)\sum_i \overline{V_i \psi} = 0$$ (2) To evaluate the third term, he notes that inside the well \(V_i(r)\) is large and \(\left(\epsilon - U\right)\) is negligible. For r < ρ, the Schrodinger equation therefore becomes, with \(u(r) = r\psi(r)\) , $$u'' = \left(\frac{8\pi^2 m}{h^2}\right) V_i\, u$$ (3) Far away from the origin ( r > ρ ), \(V_i = 0\) , \(u'' = 0\) , and the solution is simply \(u = c_1 + c_2 r\) . But far away, \(\psi = \overline{\psi}\) , \(u = r\overline{\psi}\) , and he can set \(c_2 = \overline{\psi}\) . At this point, the famous figure appears (Fig. 1 ). He introduces a new quantity a by writing \(c_1 = a\overline{\psi}\) . He does not give a name to a, but notes it is a “length” whose meaning is clarified by the figure. The perturber atom displaces the electron wave function by a distance a from the origin. Fig. 1 Interpretation of the scattering length in [ 13 ], page 711 With the boundary condition \(u\left(r=0\right)= 0\) , and the solution \(u = \left(a+r\right)\overline{\psi}\) for large r, the third term in Eq. 
1 can be evaluated by taking a volume integral: $$\left(\frac{8\pi^2 m}{h^2}\right)\int V\psi\, \mathrm{d}\tau = 4\pi \left(\frac{8\pi^2 m}{h^2}\right)\int V u r\, \mathrm{d}r = 4\pi \int u'' r\, \mathrm{d}r = 4\pi \left|u' r - u\right|_0^r = -4\pi a \overline{\psi}$$ (4) With n as the density of foreign gas “perturbers,” Eq. 2 becomes a Schrodinger equation in the averaged wave function with an energy shift that is linear in the scattering length a, and in the gas density n : $$\nabla^2 \overline{\psi} + \left(\frac{8\pi^2 m}{h^2}\right)\left(\epsilon + \left(\frac{h^2 a n}{2\pi m}\right) - U\right)\overline{\psi} = 0$$ (5) The sign of a is not known, so the shift could be in either direction. But the cross section for s-wave scattering from a potential well is well understood from the work of Faxen and Holtsmark. Amaldi and Segre find a cross section of about 12 Å², and from \(\sigma = 4\pi a^2\) , they get a value for the scattering length of ± 1 Å. With known values of a , n , m and Eq. ( 5 ), Fermi can explain the shifts of spectral lines in the Amaldi and Segre experiment. He publishes his results in March 1934. The term linear in a in Eq. ( 5 ) is often called the Fermi “optical potential” and with the reduced Planck constant is written as $$V = \frac{2\pi \hbar^2}{m}\, a\, n.$$ (6) Many contemporary authors also call it the Fermi “effective” potential, since it leads, approximately, to the same results as the true potential. But at this point, Fermi is not using this terminology. For him, the potential well due to the perturber atom is simply deep and narrow. It becomes a delta function only in the 1936 paper, which we now discuss. 3 Fermi’s paper of 1936 In the collected works [ 13 ], p 639, Segre notes that up to 1934, Fermi’s work had been mainly theoretical. 
But with the discovery of the neutron in 1932, and the feeling among his experimental colleagues that atomic spectroscopy had run its course, focus shifted toward nuclear physics, and particularly to experimental neutron physics. Segre further notes: “The occasion for a really new departure in the nuclear field occurred in 1934 with the discovery by I. Curie and F. Joliot of artificial radioactivity.” Fermi’s group had access to a gram of radium and could generate neutron beams by alpha bombardment of beryllium. There followed an intense period of experimental activity, much of it in collaboration with Amaldi, who notes in [ 13 ] p 808: “We had prepared a systematic plan of attack which we jokingly summarized by saying that we would measure the absorption coefficient of all 92 elements combined in all possible ways with 92 elements used as detectors.” The 1936 paper is, however, a theoretical paper, first published in Italian. Amaldi comments: “In addition to many new results, it contains some results obtained earlier, but previously unpublished, or published only in preliminary form.” Amaldi tried to convince Fermi to translate the paper into English. But apparently, Fermi replied along the lines: “I don’t want to waste the time, and if someone was interested in studying slow neutrons, he would have to read it, even if it was published only in Italian.” It was of course later translated by Temmer. Both the Italian original and the English translation appear in [ 13 ], one of only a handful of papers to get “star treatment”. Fermi proceeded in the same way as in his 1934 paper, however, without any reference to it. The goal was to introduce an average potential for the interaction of a neutron at coordinates ( x,y,z ) with a proton at coordinates ( X,Y,Z ). The proton is chemically bound in a hydrogenous medium. Various length scales are introduced. The nuclear potential range, ρ, is of order \(10^{-12}\) cm. 
The chemical bonding potential is denoted U ( X,Y,Z ) and acts over a scale of the order of the amplitude of atomic vibrations in a molecule, about \(10^{-9}\) cm. Slow neutrons have wavelengths λ of order \(10^{-8}\) cm. The true wave function is \(\psi\) ( x,y,z,X,Y,Z ). Proceeding as in 1934, he introduces a wave function \(\overline{\psi}\) that is averaged over a sphere of radius R . He gives more detail about what calculating the average means than in the previous paper, but basically the result is the same. The radius R satisfies the inequalities: $$R \gg \rho \quad \text{and} \quad R \gg a, \quad \text{but} \quad R \ll \lambda.$$ (7) The scattering length a appears again, but in a new version (Fig. 2 ). The averaged wave function is now normalized to unity so \(u = r\psi = \left(a+r\right)\) and the angle is 45°. Fig. 2 Interpretation of the scattering length in [ 13 ], p 969 Finally, Fermi obtained the desired Schrodinger equation for \(\overline{\psi}\) , this time in its time-dependent form: $$-\frac{h}{2\pi i}\frac{\partial \overline{\psi}}{\partial t} = -\left(\frac{h^2}{8\pi^2 M}\right)\left\{\frac{\partial^2 \overline{\psi}}{\partial x^2} + \cdots + \frac{\partial^2 \overline{\psi}}{\partial X^2} + \cdots \right\} + U\overline{\psi} - \left(\frac{h^2 a}{\pi M}\right)\delta_R(r)\,\overline{\psi}$$ (8) The new function \(\delta_R(r)\) he defines to be equal to \(3/(4\pi R^3)\) for r < R and to be zero for r > R . Its volume integral extended over all space is equal to 1. And since the quantities in Eq. ( 8 ) vary slowly in the region where \(\delta_R(r) \ne 0\) , Fermi wrote that, in the first Born approximation, “… when calculating the matrix elements of the interaction term, the function \(\delta_R(r)\) may be identified with the Dirac delta-function in three dimensions.” This is the essence of the Fermi pseudopotential concept. 
In modern notation [ 24 ], it is written: $$V_F = 2\pi \frac{\hbar^2}{m}\, n\, b\, \delta^{(3)}(r).$$ (9) Here, \(b = \left(\frac{A+1}{A}\right) a\) is the bound scattering length for a nucleus of mass number A, and n is the number density of scatterers. Fermi applied the first Born approximation to the problem of inelastic scattering of neutrons in paraffin, treating the hydrogen atoms as harmonic oscillators of frequency ν . He calculated, for the first time, the elastic and inelastic cross sections as a function of the neutron energy w , which are shown in Fig. 3 . The curves 1, 2, 3 correspond to excitations of the oscillator into the corresponding excited states. This theoretical prediction was confirmed experimentally only about fifteen years later. Fig. 3 Neutron elastic and inelastic cross sections in paraffin [ 13 ], p 973 We have been using the term scattering length here, but it is somewhat unclear when Fermi started using it. The first appearance in a publication is in 1947 in Fermi and Marshall [ 15 ]. He was, however, using the term in 1945 in (at the time) classified lectures. The neutron scattering sections were later declassified and are reproduced from participant notes in volume two of the collected works [ 14 ] p469. Interestingly, the definition of the sign of a changed during this time. Earlier he had taken a to be positive when the intercept fell to the left of the origin because this made the (unbound) singlet n – p scattering length positive. But as more data were accumulated, it became clear that hydrogen was an unusual case, and most scattering lengths were of the opposite sign. By the time of the Fermi and Marshall paper in 1947, he adopts the modern (negative) sign convention for the relation between the phase shift δ and the scattering length a as the wavenumber k goes to zero: \(\delta = -ka\) . 
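The modern sign convention can be illustrated numerically for a generic attractive square well (a textbook model, not Fermi's actual neutron-proton potential): the exact s-wave phase shift at small k, divided by -k, reproduces the analytic zero-energy scattering length \(a = \rho - \tan(\kappa\rho)/\kappa\).

```python
# Numerical illustration of the modern convention delta = -k a (k -> 0)
# for an attractive square well of depth V0 and range rho, in units
# hbar = m = 1. Generic textbook model, not Fermi's n-p potential.
import math

rho = 1.0
kappa = 1.2        # kappa = sqrt(2 m V0)/hbar; kappa*rho < pi/2, so no bound state

def phase_shift(k):
    """Exact s-wave phase shift for the square well at wavenumber k."""
    K = math.sqrt(k**2 + kappa**2)                 # interior wavenumber
    # match u_in = sin(K r) to u_out = sin(k r + delta) at r = rho
    return math.atan(k * math.tan(K * rho) / K) - k * rho

# zero-energy scattering length from matching log-derivatives at r = rho
a_analytic = rho - math.tan(kappa * rho) / kappa   # negative: no bound state

k = 1e-4
a_numeric = -phase_shift(k) / k                    # modern convention delta = -k a

print(a_analytic, a_numeric)
```

With these parameters the well supports no bound state, so the scattering length is negative, consistent with the remark above that most measured scattering lengths turned out to have the sign opposite to hydrogen's singlet case.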
As more and more data were accumulated, the accuracy of what became known as the “Fermi method”—Fermi potential plus first-order Born approximation—became a focus of theoretical work by others. Breit [ 4 ] noted that the original version of the pseudopotential defined over a sphere of radius R with the factor \(\delta_R(r)\) was problematic because it implied an interaction energy inside the potential well of \(V = \frac{3a}{R}\frac{h^2}{MR^2}\) . This gives a scaling law \(VR^3\) , not the expected \(VR^2\) . But he found that when used in scattering problems to first-order Born approximation, the delta-function version of the potential was very reliable. For free protons, Breit reproduced all Fermi’s results, and for bound protons, found the corrections were only of order 0.3%. The conclusion was that the application of the first-order Born approximation with the Fermi pseudopotential does not increase the typical inaccuracy of the Born method. Later, Blatt and Weisskopf [ 2 ] came to the same conclusion. 4 Modern applications of the scattering length and pseudopotential concepts In Fermi’s original thinking about a low energy scattering process, the fine details of the unknown short-range interaction potential were not relevant, and the situation could be described completely by one parameter, the scattering length, or equivalently the pseudopotential. This idea has continued to bear fruit, with remarkable consequences in systems ranging from Bose–Einstein condensates, where the scattering length can be adjusted to any value by external magnetic fields (Feshbach resonances), to few-body systems where quantum mechanics predicts the formation of bound trimers (Efimov states) many times bigger than the range of the interaction itself. Resonances in neutron reactions are a common phenomenon, arising when the energy of the incoming neutron matches the energy of a discrete state in the compound system. 
Feshbach, thinking primarily about nuclear reactions, showed [ 16 ] that this was a general feature of any collision process. Feshbach resonances in atomic physics came into play because the magnetic interactions in atoms depend on the magnetic moments of nuclei through the hyperfine splitting of the energy levels of the total angular momentum. Theorists developed two-channel zero-range potential models, calculated the scattering length, and found resonance behavior in its dependence on the external magnetic field. Inouye et al. [ 18 ] confirmed these predictions by manipulating the scattering length in a 23 Na Bose–Einstein condensate exposed to a spatially homogeneous magnetic field of variable strength. For a review, see Chin et al. [ 5 ]. The phenomenon is named after Feshbach who, as related by Kleppner [ 19 ], found it amusing that his name should be attached to such a generic phenomenon. Today, the method is widely used to tune the interactions between ultracold atoms over many orders of magnitude in atomic Bose and two-spin-component Fermi gases. An abstract search on arxiv.org lists over 1600 references to the term “Feshbach resonance.” Feshbach's technique, developed for nuclear processes, found its most fruitful applications in atomic physics. Similarly, calculations [ 6 ] aimed at understanding the structure of three-body nuclear systems—the triton, the Hoyle state in 12C—subsequently found remarkable confirmation in three-body atomic systems. Efimov considered a system in which three particles interact pairwise through short-range potentials that are not quite strong enough to bind but are nearly in resonance, so the respective scattering length is large. The singlet n–p scattering length is an example: a = 23.5 fm compared to a typical nuclear force range of a few fm. He found that when three identical bosons are present, a bound-state system emerges with size comparable to the scattering length, not the nuclear force range.
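The field dependence found in those two-channel models is usually written in the standard parametrization a(B) = a_bg [1 − Δ/(B − B0)] (see the Chin et al. review). A minimal sketch, with purely illustrative parameter values (not the 23 Na data):

```python
def feshbach_a(B, a_bg=1.0, B0=100.0, width=5.0):
    """Scattering length near a magnetically tuned Feshbach resonance.

    a_bg: background scattering length; B0: resonance position;
    width: resonance width Delta. All values here are illustrative.
    """
    return a_bg * (1.0 - width / (B - B0))

# the scattering length diverges at B = B0 and crosses zero at B = B0 + width,
# so it can be tuned to any value (sign included) by choosing B
a_below = feshbach_a(99.9)    # large and positive
a_above = feshbach_a(100.1)   # large and negative
a_zero = feshbach_a(105.0)    # zero crossing
```

This is exactly the "adjust the scattering length to any value by external magnetic fields" tunability described above: sweeping B through the resonance flips the sign and magnitude of a.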
Counterintuitively, he found a whole series of bound states n = 1, 2, 3…, with sizes growing by a factor of 22.7 from one state to the next, so that sizes scale as \((22.7)^n\) and bound-state energies as \(1/(22.7)^{2n}\). This strange numerical factor (\(e^{\pi /\left|{s}_{0}\right|}\), with \(\left|{s}_{0}\right|\approx 1.00624\)) emerges from zero-range theory, and the power-law scaling comes from the fact that the overall potential has a \(1/{r}^{2}\) behavior, well known for its paradoxical properties. It took 35 years to confirm Efimov's predictions, in part because one needs to identify both a ground and an excited state to confirm the size and energy scaling predictions. See Naidon and Endo [ 22 ] and Braaten and Hammer [ 3 ] for a full discussion and complete references. Efimov recounts the history of his prediction in “Giant trimers true to scale” [ 7 ] and notes: “it has been heartening to witness the evolution of this miracle of quantum mechanics from questionable to pathological to exotic to being a hot topic of today's ultracold physics.” Perhaps as nuclear physics explores the edges of the valley of stability with new radioactive ion facilities like FRIB, we may hope to identify more barely bound systems that link back to Efimov's original predictions and their roots in the world of nuclear structure.

5 Conclusion

From the outlined history of the Fermi pseudopotential concept one can conclude the following: The so-called effective or averaged potential, V (Eq. 6), in the Schrödinger equation for the interaction of electrons with the medium was first introduced by Fermi in 1934. About twenty years later, in neutron optics, it acquired the name optical potential. The pseudopotential with the Dirac delta function, \(V_F\) (Eq. 9), allowing calculations in the first Born approximation, appeared in the neutron physics paper published in 1936. It has been known under the notation \(V_F\) since Breit's paper of 1947.
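The geometric scaling quoted above follows directly from the zero-range constant |s0|; a quick numerical check (assuming nothing beyond the quoted value of |s0|):

```python
import math

s0 = 1.00624                       # zero-range (Efimov) constant |s0|
scale = math.exp(math.pi / s0)     # size ratio of successive trimer states, ~22.7

# sizes of the n-th state grow like scale**n; binding energies shrink like scale**(-2n)
size_ratios = [scale**n for n in range(1, 4)]
energy_ratios = [scale**(-2 * n) for n in range(1, 4)]
```

Each successive Efimov trimer is thus about 22.7 times larger and about 515 times more weakly bound than the previous one, which is why identifying even two consecutive states experimentally took decades.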
The most active use and development of the Fermi pseudopotential concept is presently in the field of ultracold atom physics, where Efimov states and Feshbach resonances are yielding fascinating new insights into the quantum world. New radioactive beam facilities like FRIB may provide more surprises as they explore nuclear structure far from the line of stability.

Notes

1. In the appendix, we review his calculation in the light of extensive subsequent work by many authors to understand pressure broadening and shifting of spectral lines, a hugely important subject for astrophysics, atmospheric physics, and plasma physics.

2. See [ 24 ] for clarification on the terminology and distinctions between the potentials in Eqs. ( 6 ) and ( 9 ).
In his neutron physics paper, published in 1936, Fermi went in a different direction, employing the scattering length concept to introduce a new idea—the pseudopotential, a potential well with a radius of zero—to correctly predict how a neutron scatters in paraffin. Gould concludes that Fermi's extraordinary intuition enabled the physicist to apply concepts to seemingly unrelated areas, and to develop ideas that impact the world of quantum physics to this day. | 10.1140/epjh/s13129-022-00042-z |
Medicine | A research team identifies how some gliomas develop chemoresistance | Barbara Oldrini et al, MGMT genomic rearrangements contribute to chemotherapy resistance in gliomas, Nature Communications (2020). DOI: 10.1038/s41467-020-17717-0 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-17717-0 | https://medicalxpress.com/news/2020-08-team-gliomas-chemoresistance.html | Abstract

Temozolomide (TMZ) is an oral alkylating agent used for the treatment of glioblastoma and is now becoming a chemotherapeutic option in patients diagnosed with high-risk low-grade gliomas. The O-6-methylguanine-DNA methyltransferase (MGMT) is responsible for the direct repair of the main TMZ-induced toxic DNA adduct, the O6-Methylguanine lesion. MGMT promoter hypermethylation is currently the only known biomarker for TMZ response in glioblastoma patients. Here we show that a subset of recurrent gliomas carries MGMT genomic rearrangements that lead to MGMT overexpression, independently from changes in its promoter methylation. By leveraging the CRISPR/Cas9 technology we generated some of these MGMT rearrangements in glioma cells and demonstrated that the MGMT genomic rearrangements contribute to TMZ resistance both in vitro and in vivo. Lastly, we showed that such fusions can be detected in tumor-derived exosomes and could potentially represent an early detection marker of tumor recurrence in a subset of patients treated with TMZ.

Introduction

The therapeutic benefits of TMZ depend on its ability to methylate DNA, which takes place at the N-7 and O6 positions of guanine and the N-3 position of adenine. Although the minor product O6-methylguanine (O6-meG) accounts for <10% of total alkylation, it exerts the greatest potential for apoptosis induction 1 . O6-meG pairs with thymine as opposed to cytosine during DNA replication.
The O6-meG:thymine mismatch can be recognized by the post-replication mismatch repair (MMR) system and, according to the futile repair hypothesis, ultimately induces DNA double-strand breaks, cell-cycle arrest, and cell death 2 . The O6-methylguanine-DNA methyltransferase (MGMT) is responsible for the direct repair of the O6-meG lesion by transferring the alkyl group from guanine to a cysteine residue. Epigenetic silencing of the MGMT gene, due to promoter methylation, prevents the synthesis of this enzyme and, as a consequence, increases the tumor's sensitivity to the cytotoxic effects induced by TMZ and other alkylating compounds 3 , 4 . As of today, MGMT promoter hypermethylation is the only known biomarker for TMZ response 4 . However, the discordance between promoter methylation and protein expression detected in a subset of patients limits the prognostic value of methylation assessment 5 , 6 . Moreover, while MGMT methylation at diagnosis predicts longer survival, this is not the case at recurrence 7 . This evidence suggests that other mechanisms, in addition to promoter methylation, could contribute to MGMT upregulation in recurrent tumors 5 , 7 . According to the 2016 WHO classification, which integrates both histological and molecular features, diffuse gliomas can be divided into IDH -wildtype or IDH -mutant by the presence of mutations in the isocitrate dehydrogenase 1 and 2 ( IDH1/2 ) genes 8 . TMZ is the standard chemotherapeutic approach in IDH -wildtype gliomas, such as glioblastomas, and, more recently, in high-risk IDH -mutant gliomas 9 . By analyzing a large cohort of IDH -wildtype and mutant recurrent gliomas treated with TMZ, we have discovered that a subset of patients carries distinct MGMT genomic rearrangements. These MGMT alterations lead to MGMT overexpression, independently of changes in its promoter methylation, and contribute to TMZ resistance both in vitro and in vivo.
Results

Identification of MGMT gene fusions in recurrent gliomas

To reveal the landscape of TMZ resistance in glioma patients, we analyzed RNA-sequencing data of 252 TMZ-treated recurrent gliomas, among which 105 (42%) were newly collected (Supplementary Fig. 1a, b and Supplementary Data 1 ). We then integrated clinical information and performed bioinformatics analysis to determine the mutational status of several key alterations (“Methods”). Overall, we found IDH1 mutation in 38.4% (94 out of 245) of patients, 1p/19q co-deletion in 9.4% (23 out of 245) of patients, MGMT promoter hypomethylation in 38% (52 out of 136) of patients, and DNA hypermutation in 10.7% (27 out of 252) of patients (Fig. 1a ). By analyzing the RNA-seq data of 252 recurrent gliomas, we identified eight different MGMT fusions in seven patients (~3% of all patients, 95% CI, 1.1–5.6%) (Supplementary Data 1 and Supplementary Table 1 ). Of note, among the seven patients who harbor MGMT fusions, six are females, which is significantly higher than expected ( P = 0.015, Fisher exact test, Supplementary Fig. 1c ). Importantly, there was significant mutual exclusivity between MGMT hypomethylation, DNA hypermutation, and MGMT fusion, as revealed by a bootstrapping method ( P < 10^−4 , see “Methods”), suggesting that these alterations carry out alternative roles during cancer progression. Fig. 1: Multiple MGMT fusions in TMZ-treated recurrent gliomas. a Landscape of MGMT hypomethylation, MGMT fusions, DNA hypermutation. b Circos plot showing the identified MGMT fusions. c Structure of the MGMT fusion proteins. Each partner gene is indicated by color, and the narrow bars in SAR1A-MGMT, RPH3A-MGMT, and CTBP2-MGMT indicate the 5′UTR. d , e Validation of the MGMT fusion genes in positive samples by PCR and Sanger sequencing. The bands in the left panel were validated by Sanger sequencing in the right panel. Owing to limited specimen availability, the validation was performed once.
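The 95% CI of 1.1–5.6% quoted above for the 7/252 fusion-positive patients is consistent with an exact (Clopper–Pearson) binomial interval. The following self-contained sketch is our reconstruction for illustration, not the authors' code; it inverts the binomial CDF by bisection using only the standard library:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided confidence interval for a binomial proportion k/n."""
    def bisect(cond):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if cond(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0
    # lower bound: p at which P(X >= k) = alpha/2 (increasing in p)
    lower = 0.0 if k == 0 else bisect(lambda p: 1.0 - binom_cdf(k - 1, n, p) < alpha / 2.0)
    # upper bound: p at which P(X <= k) = alpha/2 (decreasing in p)
    upper = 1.0 if k == n else bisect(lambda p: binom_cdf(k, n, p) >= alpha / 2.0)
    return lower, upper

low, high = clopper_pearson(7, 252)  # roughly 1.1% to 5.6%, matching the CI quoted above
```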
f The genomic rearrangement generating the FAM175B-MGMT fusion. WGS whole-genome sequencing. Source data are provided as a Source Data file. Gliomas with MGMT fusions or a hypomethylated MGMT promoter had significantly higher MGMT expression, while the DNA hypermutated patients showed the lowest MGMT expression, even lower than the MGMT-methylated tumors (Supplementary Fig. 1d , P -values calculated by Wilcoxon rank-sum test). Interestingly, we found that in IDH -wild-type glioma patients, high MGMT expression indicates worse survival ( P = 0.02, log-rank test, Supplementary Fig. 1e ), while it is associated with a trend toward better survival in IDH -mutant patients ( P = 0.04, log-rank test, Supplementary Fig. 1f ). We next performed an in-depth investigation of the eight different MGMT rearrangements: BTRC - MGMT , CAPZB - MGMT , GLRX3 - MGMT , NFYC - MGMT , RPH3A - MGMT , and SAR1A - MGMT in HGG, and CTBP2 - MGMT and FAM175B - MGMT in LGG (Fig. 1b ). Five of the eight partner genes are located on chromosome 10q, mostly close to MGMT (Fig. 1b ). Interestingly, although the left partners of the MGMT fusions were different, the transcriptomic breakpoint in MGMT was invariably located at the boundary of MGMT exon 2, which is 12 bp upstream of the MGMT start codon. In three of the rearrangements ( SAR1A - MGMT, RPH3A - MGMT , and CTBP2 - MGMT ), the MGMT coding sequence was fused to the 5′UTR of the fusion partner. Reconstruction of the chimeric transcripts showed that all fusions are in-frame and that both the methyltransferase domain and the DNA-binding domain of MGMT are intact, suggesting that the functions of MGMT might be preserved in the fusion proteins (Fig. 1c ). We validated the gene fusions using PCR and Sanger sequencing in samples with sufficient specimen available (Fig. 1d, e ).
For one patient (CGGA_1729), we performed whole-genome sequencing (WGS), and analysis of structural rearrangements in this sample revealed a deletion of about 4.8 Mb resulting in the FAM175B-MGMT fusion (Fig. 1f ).

MGMT genomic rearrangements lead to MGMT overexpression

To characterize the MGMT fusions, we sought to generate some of the identified rearrangements using CRISPR/Cas9-mediated genome editing. Co-expression of Cas9 with pairs of single-guide RNAs (sgRNAs) has been used to model a variety of chromosomal rearrangements (such as translocations, inversions, and deletions) by creating DNA double-strand breaks at the breakpoints of chromosome rearrangements, which are then joined by non-homologous end joining 10 , 11 . To generate cell lines carrying the MGMT fusions, we first transduced U251 and U87 cells, two MGMT-methylated GBM cell lines, with lentiviral vectors expressing different combinations of gRNA pairs directed to four different MGMT rearrangements: BTRC-MGMT , NFYC-MGMT , SAR1A-MGMT , and CTBP2-MGMT (Supplementary Fig. 2a–c ). The expected chromosomal rearrangements in the bulk populations were detected by PCR at the genomic level and confirmed by Sanger sequencing (Supplementary Fig. 3a, b ). The newly generated cell populations were then exposed to TMZ. Surviving clones were observed only in the bulk populations of cells carrying the different fusion events but not in the control cells (sgCtrl, non-targeting sgRNA) (Fig. 2a ). We then isolated some of the TMZ-resistant clones and further confirmed the presence of the desired gene fusion by PCR both at the genomic level (Supplementary Fig. 3c ) and at the mRNA level by reverse transcription PCR (RT-PCR) of cDNA fragments overlapping the fusion exon junctions (Supplementary Figs. 3d–f and 6a ).
However, we could not confirm at the genomic level the exact breakpoints in the CTBP2-MGMT clones, both in U251 and U87 cells, possibly due to the occurrence of larger deletions that removed the binding sites of the primers used for our initial studies in the bulk population (Supplementary Fig. 3a, b ). Nevertheless, the desired genomic rearrangements were further validated using a break-apart fluorescence in situ hybridization (FISH) assay (Supplementary Fig. 4 ). Fig. 2: MGMT fusion cells show enhanced TMZ resistance via increased MGMT expression. a Colony-forming assay performed on U251 and U87 cells expressing sgCtrl, BTRC-MGMT, NFYC-MGMT, SAR1A-MGMT, CTBP2-MGMT exposed for 12 days to TMZ (100 μM) or DMSO. b MGMT quantitative PCRs performed on mRNA from U251 and U87 TMZ-resistant single-cell clones expressing the indicated MGMT fusions. Data are from a representative experiment of n = 3 biological replicates. The centres of the bars represent the mean (technical replicates n = 3) and the error bars the standard deviations. c Analysis of MGMT promoter methylation, by methylation-specific PCR (MSP), in the TMZ-resistant cell clones expressing the indicated MGMT fusions from U251 (left panel) and U87 (right panel). M and U lanes indicate methylated and unmethylated status of the promoter, respectively. LN18 and U87 cells are shown as controls for unmethylated and methylated status, respectively. d Western blot analysis of MGMT protein levels in TMZ-resistant cell clones from U251 and U87 expressing the indicated MGMT fusions. Source data are provided as a Source Data file. Promoter exchanges are one class of gene fusions, characterized by the replacement of a gene's regulatory regions with those of another gene, often resulting in deregulation of transcription of the genes participating in the fusion event 12 , 13 , 14 .
Another class of gene fusions generates chimeric proteins with a biological function different from that of either of the partner genes from which they originated 12 , 13 , 14 . Since all the MGMT gene fusions identified had similar structures, with the 5′ gene contributing either small and diverse protein domains or just the 5′-UTR region (Fig. 1c ), we hypothesized that the TMZ resistance might be driven by increased MGMT expression due to the rearrangements that bring the MGMT gene under the control of a more active promoter. Real-time quantitative PCR showed a striking increase of MGMT expression in the clones carrying the different fusions (Fig. 2b ), as compared with control cells, without changes in MGMT promoter methylation status, as evidenced by methylation-specific PCR (MSP) (Fig. 2c ). These results are in line with what was observed in the patient cohort: patients carrying MGMT rearrangements showed elevated expression of MGMT concurrently with a methylated MGMT promoter (Fig. 1a ; Supplementary Fig. 1d ). Western blot analysis, using an anti-MGMT antibody, revealed a marked overexpression of MGMT at the protein level, especially obvious for the SAR1A-MGMT and CTBP2-MGMT fusion clones (Fig. 2d ). Moreover, we observed higher-molecular-weight protein products for BTRC-MGMT and NFYC-MGMT, consistent with the expected size of those fusion proteins. Of note, the different levels of MGMT expression might be determined by the activity of the specific gene's promoter participating in the fusion event and/or by the number of copies of the genomic rearrangement in each specific clone. To validate our results in another biologically relevant model of GBM, we also used a patient-derived cell line propagated in stem cell medium. These cells, as compared with immortalized cancer cell lines, have been shown to maintain the molecular genotype, phenotype, and heterogeneity of the original tumor both in vitro and in vivo.
We generated and confirmed at the genomic and mRNA levels the BTRC-MGMT and SAR1A-MGMT fusion events in the MGMT-negative patient-derived h543 GBM tumor spheres (Supplementary Fig. 7a–c ). Similar to what was observed for U251 and U87, h543 cells carrying the fusions showed increased expression of MGMT at the mRNA and protein levels, as compared with control cells (Supplementary Fig. 7d, e ).

MGMT gene fusions contribute to TMZ resistance

To establish whether the TMZ resistance in the clones carrying the fusions was determined by the overexpression of a fully functional MGMT protein, and not caused by other mutations acquired during TMZ treatment, we analyzed the TMZ sensitivity in the presence of O6-benzylguanine (O6-BG), a synthetic derivative of guanine that inhibits MGMT activity 15 . Clonogenic assays of two independent U251 clones per fusion showed that the TMZ sensitivity was re-established by the co-treatment with O6-BG (Fig. 3a ). By contrast, cells knocked out for the mismatch repair gene MSH6, a proposed TMZ-resistance mechanism independent of MGMT expression, were fully TMZ-resistant also in the presence of O6-BG. Similarly, cell-cycle profile analysis with propidium iodide staining and EdU incorporation assays showed that the fusion clones bypassed the TMZ-induced accumulation in the G2/M phase and that O6-BG co-treatment was able to re-establish the cell-cycle arrest (Fig. 3b-d ). We noticed that individual clones showed variable TMZ sensitivity when treated concurrently with O6-BG. Clones with higher MGMT expression (e.g., NFYC-MGMT clone 2 and SAR1A-MGMT clones) showed increased resistance to TMZ; however, in these cells increasing doses of O6-BG significantly enhanced the TMZ cytotoxic effect (Supplementary Fig. 5a, b ). The same results were obtained in U87 fusion clones (Supplementary Fig. 6b, c ) and in h543 tumor spheres (Supplementary Fig. 7f, g ). Fig. 3: MGMT fusions protect from TMZ-induced damage.
a Clonogenic survival assay of U251 clones expressing MGMT fusions exposed to O6-BG (100 μM) and/or TMZ (100 μM) for 12 days. U251 sgMSH6 cells are shown as a control for TMZ resistance independent of MGMT. b Cell-cycle distribution of U251 MGMT fusion-expressing cells in the presence of O6-BG (100 μM) and/or TMZ (100 μM) for 72 h, measured by propidium iodide (PI) staining and FACS. U251 sgCtrl and sgMSH6 are shown as controls. c High-throughput microscopy-mediated quantification of cell-cycle distribution at 48 h after treatment. See “Methods” for details. d Quantification of the percentage of cells in c . Data are from a representative experiment repeated in triplicate and presented as mean (technical replicates n = 3) and standard deviation. e , f High-throughput microscopy-mediated quantification of γH2AX intensity levels and 53BP1 foci in U251 cells expressing the MGMT fusions after 48 h of treatment with 100 μM of the indicated drugs. U251 sgCtrl and sgMSH6 were included as controls. The bottom and top of each box represent the first and third quartiles, and the line inside is the median. The whiskers correspond to 1.5 times the interquartile range. Data are representative of n = 3 biologically independent experiments. Two-sided Student's t test with Bonferroni adjustment for multiple comparisons: *** P < 0.001, ** P < 0.01, * P < 0.05, ns not significant, A.U. arbitrary unit. Source data are provided as a Source Data file. We then assessed to what extent the TMZ resistance was determined by increased MGMT activity and, therefore, by the boosted DNA repair potential of the fusion clones. Quantitative high-throughput microscopy analysis revealed that in MGMT fusion-expressing cells, similarly to what was observed in sgRNA MSH6 cells, TMZ treatment did not increase levels of γH2AX and 53BP1 foci, DNA damage markers characteristic of cells bearing DNA double-strand breaks (Fig. 3e, f ).
However, MGMT inhibition by O6-BG led to the accumulation of γH2AX and 53BP1 foci upon TMZ treatment in the fusion clones. Taken together, these data indicate that TMZ resistance induced by MGMT genomic rearrangements is mechanistically linked to MGMT activity.

MGMT gene fusions protect from TMZ treatment in vivo

Lastly, we evaluated the TMZ resistance of the MGMT fusions in vivo by establishing nu/nu mouse xenograft models with the U251 BTRC-MGMT and control cells, previously transduced with a luciferase-expressing construct. A week after intracranial transplantation, mice were intraperitoneally treated with TMZ (50 mg/kg) or DMSO (0.3%) for 5 days, and tumor growth was monitored weekly with bioluminescence imaging (BLI) for 4 weeks. Mice with MGMT fusion-bearing tumors exhibited no significant lifespan difference between the TMZ and DMSO groups, and significantly poorer survival compared with control mice upon TMZ treatment (Fig. 4a ). Similarly, while TMZ treatment significantly extended the survival of mice transplanted with h543 cells transduced with the sgCtrl, it failed to do so in those expressing the SAR1A-MGMT rearrangement (Supplementary Fig. 7h ). BLI analysis confirmed that the TMZ antitumor effect was limited to control mice (Fig. 4b ). In addition, as shown by immunohistochemistry, the BTRC-MGMT mice had increased BrdU incorporation and reduced accumulation of γH2AX compared with control mice upon TMZ administration (Fig. 4c ), confirming our in vitro proliferation and DNA repair results. Fig. 4: MGMT fusions confer TMZ resistance in vivo and serve as biomarkers at recurrence. a Top panel: scheme of the in vivo experimental design. Bottom panel: Kaplan–Meier survival curve of animals intracranially injected with U251 sgCtrl and U251 BTRC-MGMT clone 2 cells transduced with a luciferase construct, treated or not with TMZ (50 mg/kg) for 5 days: sgCtrl Vehicle n = 4, sgCtrl TMZ n = 5, BTRC-MGMT vehicle n = 4, BTRC-MGMT TMZ n = 5.
sgCtrl log-rank P -value = 0.0049, BTRC-MGMT log-rank P -value = 0.9273. b Representative luminescence images of tumor-bearing mice at the indicated time points. c Immunohistochemistry analysis against BrdU and γH2AX of tumors from mice injected with U251 sgCtrl and BTRC-MGMT clone 2 cells, treated or not with TMZ (50 mg/kg) for 3 days. Mice were sacrificed 2 h after BrdU injection. Scale bars: 100 μm. d Western blot analysis of the EXO markers Alix and TSG101 and of MGMT levels in paired samples of cells and cell-derived EXOs expressing sgCtrl and SAR1A-MGMT. e SAR1A-MGMT and MGMT mRNA expression by RT-PCR in paired RNA samples from cells and cell-derived EXOs expressing sgCtrl and SAR1A-MGMT. f Transcript levels of BTRC-MGMT by RT-PCR analysis in EXOs isolated from serum of BTRC-MGMT clone 2 tumor-bearing mice compared to sgCtrl mice. U251 sgCtrl and BTRC-MGMT clone 2 cells were included as controls. Source data are provided as a Source Data file. In clinical settings, liquid biopsies can be a powerful noninvasive technique to monitor cancer-associated genetic alterations by analyzing circulating tumor cells (CTCs), circulating free DNA (cfDNA), or tumor-derived extracellular vesicles (EVs), including exosomes (EXOs). Previous studies have already shown that (i) glioma-derived extracellular vesicles (EVs) can cross the blood–brain barrier and be detected in the peripheral blood of patients 16 , (ii) MGMT mRNA is enriched in glioma exosomes (EXOs) 17 , and (iii) other gene fusions have been identified in glioma EXOs 18 . Based on these findings, we assessed whether the MGMT fusions could be detected in EXOs. We purified EXOs from conditioned media of cells harboring SAR1A-MGMT and sgCtrl by standard ultracentrifugation. Western blot of the protein content confirmed enrichment in the EXOs of the exosome-specific markers TSG101 and Alix (Fig. 4d ) and the presence of MGMT in the cells expressing the fusion event (Fig. 4d ).
Most importantly, the mRNA of the MGMT fusion was also detected by RT-PCR in the EXOs (Fig. 4e ). Lastly, to further evaluate a clinical application of our findings, we tested whether EXOs isolated from blood serum of mice injected orthotopically with the U251 BTRC-MGMT cells would also exhibit the fusion transcript. Remarkably, RT-PCR analysis confirmed the presence of the cDNA fusion fragment in the BTRC-MGMT-derived circulating blood EXOs (Fig. 4f ).

Discussion

Currently, TMZ is the only chemotherapeutic drug established to considerably extend the overall survival of GBM patients, and it is becoming a therapeutic option also for high-risk low-grade gliomas 9 . Both intrinsic and acquired resistance might contribute to glioma tumor recurrence upon TMZ treatment. While MGMT promoter hypomethylation is undoubtedly recognized as the primary mechanism of intrinsic TMZ resistance, the genetic alterations acquired during TMZ exposure that contribute to tumor relapse still remain to be fully characterized. Defects in various components of the MMR machinery possibly represent one of the best-characterized mechanisms of acquired TMZ resistance. Though rarely detected in primary GBMs, MMR alterations have been previously described in 10–20% of recurrent tumors 7 , 19 , 20 . Changes in MGMT promoter methylation status during tumor progression have been observed only in a small subset of patients 19 . More recently, it has also been suggested that in recurrent GBMs enhancer hijacking could promote MGMT expression, despite promoter methylation, and therefore TMZ resistance; however, the clinical significance of these findings still remains to be evaluated 5 .
In this study, we demonstrated that MGMT fusions represent a previously unidentified genetic alteration that contributes to MGMT overexpression and a novel mechanism of acquired TMZ resistance that is mutually exclusive with MGMT promoter hypomethylation and the hypermutator phenotype, typically associated with MMR defects. For those patients for whom both primary and recurrent tumors were available (4 out of 7), the MGMT rearrangements were detected only in the tumor relapse. Although we cannot exclude that some of the primary tumors might express the MGMT fusion at a subclonal level, and therefore possibly below the RNA-seq detection limits, we speculate that the MGMT rearrangements were acquired during the course of TMZ treatment and then positively selected for their ability to drive TMZ resistance. Very recently, another MGMT gene fusion, ASAP2-MGMT , with similar features to the fusions that we have described here in gliomas, has been identified in a medulloblastoma patient who relapsed after TMZ treatment 21 . These data would suggest that MGMT genomic rearrangements could represent a relevant mechanism of resistance to alkylating agents across a broader spectrum of tumor types. Although the presence of the MGMT gene fusions in extracellular vesicles appears to be promising as a possible liquid biopsy approach for the identification of MGMT rearrangements, this approach still remains to be validated in the clinical setting. Early detection of MGMT genetic rearrangements in patients under treatment would eventually predict early tumor recurrence and guide therapy decisions in a subset of MGMT-methylated patients. Unlike for primary tumors, at the time of recurrence there is no standard of care available for gliomas, and TMZ rechallenge is one of the few options in glioblastomas 22 .
MGMT promoter methylation has been proposed as a prognostic marker for benefit from TMZ rechallenge in recurrent glioblastoma 23 and is used as a stratification factor in trials comprising TMZ treatment 24 . However, our current findings might limit the prognostic value of MGMT promoter methylation and would predict that a subset of patients might be assigned to the wrong treatment arm if stratification is based solely on MGMT promoter methylation analysis. In summary, here we have presented MGMT genomic rearrangements not only as a novel mechanism of resistance to TMZ in a subset of gliomas but also, to our knowledge, as a genetic alteration never previously described in response to other chemotherapeutic agents.

Methods

Patients

The newly sequenced tumors were collected from Beijing Tiantan Hospital as part of the Chinese Glioma Genome Atlas project (CGGA, ). The study was approved by the institutional review board of Capital Medical University (IRB ID: KYSB2015-023). Informed consent was obtained from each patient before surgery. For each specimen, the pathological diagnosis was reviewed by board-certified pathologists. The specimen was flash-frozen within 5 min after being resected, for subsequent RNA extraction and sequencing. We also curated RNA sequencing from four published studies. These include 72 samples from Wang et al. 7 , 42 samples from Hu et al. 25 , 28 samples from Bao et al. 26 , and 5 samples from The Cancer Genome Atlas 27 (Supplementary Fig. 1 and Supplementary Data 1 ). The most recent follow-up information of the TCGA patients was retrieved from the NCI Genomics Data Commons (GDC) data portal ( , accessed on July 18, 2019). Similarly, we used the most recent follow-up information (last follow-up in December 2018) of all patients from CGGA. For the 41 patients from Samsung Medical Center (SMC), patient follow-up continued after the publication of our last study 7 , and the updated data were used in this study.
In total, 12 of the 41 patients changed survival status and/or survival time. In addition, the MGMT methylation status of the recurrent gliomas from seven patients was newly tested and updated in this study.

RNA sequencing and gene expression quantification

RNA sequencing of the newly collected glioma samples was performed using the same protocol as in our previous research 28 . About 80 million reads were generated per sample. Cleaned RNA-sequencing reads were mapped to the Ensembl GRCh37 reference human genome (annotation version 75) using STAR 2.6.1d 29 with default parameters. Reads mapped to each gene were counted using FeatureCount 1.5.1 30 and transformed to RPKM. Because our dataset pools samples from multiple cohorts, we used the Z-score of MGMT expression among the recurrent glioma samples within each cohort for normalization, to mitigate potential batch effects.

Detection of MGMT fusion from RNA-sequencing data

RNA-sequencing data from previous publications were downloaded, and reads were extracted using samtools 1.2 31 . STAR-Fusion 1.5.0 32 was used to identify and annotate gene fusion candidates, taking the fastq files as input. Fusion candidates were then filtered by removing fusions present in normal tissues, fusions involving mitochondrial genes or uncharacterized genes, and fusions of two paralogous genes. See Supplementary Table 1 for the breakpoint information of the MGMT fusions identified.

Whole-genome sequencing and analysis

For one MGMT fusion-positive case (CGGA_1729), enough sample was available for whole-genome sequencing. Total DNA was extracted and sequenced on the Illumina HiSeq 4000 platform to a depth of about 50×. Sequencing reads were cleaned and mapped to the hg19 reference genome using bwa mem 0.7.15-r1140 33 . Duplicates were marked using the Picard MarkDuplicates 2.9.2 tool ( ).
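The per-cohort Z-scoring described above is a simple transformation; a minimal pure-Python sketch follows (the input format and cohort labels are illustrative assumptions, not the paper's actual pipeline):

```python
from statistics import mean, pstdev

def zscore_within_cohort(samples):
    """Z-score expression values within each cohort to mitigate batch effects.

    samples: list of (cohort_id, expression) pairs.
    Returns a list of (cohort_id, z) pairs in the same order.
    """
    # Group expression values by cohort.
    by_cohort = {}
    for cohort, value in samples:
        by_cohort.setdefault(cohort, []).append(value)
    # Per-cohort mean and (population) standard deviation.
    stats = {c: (mean(v), pstdev(v)) for c, v in by_cohort.items()}
    out = []
    for cohort, value in samples:
        mu, sd = stats[cohort]
        out.append((cohort, 0.0 if sd == 0 else (value - mu) / sd))
    return out
```

Because each cohort is centered and scaled independently, MGMT expression becomes comparable across cohorts even if absolute RPKM levels differ systematically between batches.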
Structural variants were identified using Manta 1.4.0 34 , and the variant related to the MGMT fusion was manually picked.

Determination of IDH , 1p/19q co-deletion, and hypermutation

The mutation status of IDH1 Arginine 132 and IDH2 Arginine 172 was determined from RNA-seq data using samtools 1.2 mpileup. At least five reads were required to cover the hotspot position; otherwise the result was marked as not available (NA). The 1p/19q co-deletion status was predicted using CNAPE 35 , a software tool that predicts large-scale copy number alterations from gene expression data using multinomial logistic regression models trained on TCGA data, with high reported sensitivity and specificity. The 1p/19q co-deletion predictions were further confirmed by the allele frequencies of common SNPs. Hypermutation was identified using a computational method based on RNA-sequencing data 36 .

A bootstrapping method to test mutual exclusivity

To test whether the three TMZ-resistance-related alterations, namely MGMT promoter hypomethylation, hypermutation, and MGMT fusion, are mutually exclusive, we reasoned that if they are mutually exclusive, then when combined they should cover significantly more patients than expected by chance (the contrapositive also holds). We therefore randomly assigned alteration status to patients and counted the number of patients carrying at least one of the three alterations. This randomized assignment was repeated 10,000 times. The P-value was calculated as (the number of randomizations in which the number of covered patients exceeded the observed number of patients carrying at least one such alteration)/10,000.

PCR validation of MGMT fusion in patient samples

Total RNA was extracted from the fusion-positive glioma samples using the RNeasy Mini Kit (Qiagen) according to the manufacturer's instructions, and RNA integrity was examined on a Bioanalyzer 2100 (Agilent Technologies).
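The randomization test for mutual exclusivity described above can be sketched in a few lines of pure Python (patient counts and the seed below are illustrative, not the cohort's actual numbers):

```python
import random

def mutual_exclusivity_test(n_patients, alteration_counts, observed_covered,
                            n_boot=10_000, seed=0):
    """Empirical one-sided P-value for mutual exclusivity by randomization.

    alteration_counts: number of patients carrying each alteration type.
    observed_covered: observed number of patients with at least one alteration.
    Returns the fraction of randomizations whose random coverage exceeds the
    observed coverage, as described in the text (small P = more exclusive).
    """
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_boot):
        covered = set()
        # Randomly assign each alteration to the stated number of patients.
        for count in alteration_counts:
            covered.update(rng.sample(range(n_patients), count))
        if len(covered) > observed_covered:
            exceed += 1
    return exceed / n_boot
```

If the observed alterations never overlap, random assignments can at best tie the observed coverage, so the P-value approaches zero; if the observed alterations pile onto the same patients, random assignments almost always cover more, and the P-value approaches one.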
Then cDNA was synthesized from 1 μg of total RNA using the RevertAid First Strand cDNA Synthesis kit (Thermo Fisher Scientific, Cat. K1622), with random hexamers as primers. The MGMT fusion gene fragments were amplified by PCR using specific primers (Supplementary Table 2). The PCR products were purified using a QIAquick PCR purification kit (Qiagen, Cat. 28104) and sequenced on an ABI Prism 3730 DNA sequencer (Applied Biosystems).

DNA constructs, design, and cloning of guide RNAs

The pKLV-U6gRNA-PGKpuro2ABFP (Plasmid #50946) and lentiCas9-Blast (Plasmid #52962) plasmids were obtained from Addgene. The HSV1-tk/GFP/firefly luciferase (TGL) triple-reporter construct was from J. Gelovani Tjuvajev 37 . The gRNA sequences targeting MGMT, BTRC, NFYC, SAR1A, and CTBP2 were designed using the Genetic Perturbation Platform web portal ( ) (Supplementary Table 3). The paired sgRNAs were sub-cloned into pKLV-U6gRNA-PGKpuro2ABFP, as previously described 11 . Briefly, oligonucleotides containing the different gRNA pairs (Supplementary Table 4) were amplified with Phusion High-Fidelity polymerase (New England Biolabs, M0530S) using primers F5 and R1 (Supplementary Table 2). PCR products were gel-purified and ligated into Bbs I-digested pDonor_mU6 plasmid (kindly provided by A. Ventura) using the Gibson Assembly Master Mix (New England Biolabs 174E2611S). The Gibson reaction was then digested with Bbs I at 37 °C for 3 h. The linearized fragment containing the gRNA pair, the mU6 promoter, and the gRNA scaffold was gel-purified and cloned into pKLV-U6gRNA-PGKpuro2ABFP. All constructs were verified by Sanger sequencing.

Cell lines, transfections, infections, and reagents

The human glioma cell line U251 (Sigma-Aldrich, 09063001) was kindly provided by Eric Holland, and U87 (HTB-14) was purchased from ATCC. The Gp2-293 packaging cell line was purchased from Clontech (Cat. 631458). Cells were cultured in DMEM (Sigma-Aldrich, Cat.
D5796) + 10% FBS (Sigma-Aldrich, Cat. F7524). All cell lines were routinely checked for Mycoplasma contamination by PCR. DNA fingerprinting was performed to authenticate the U251 and U87 cell lines (data available upon request). Human GBM tumor spheres h543, kindly provided by Eric Holland, were cultured in the human NeuroCult NS-A Proliferation Kit (Stem Cell Technologies, Cat. 05751) supplemented with 10 ng/ml recombinant human EGF (Gibco, Cat. PHG0313), 20 ng/ml basic-FGF (Sigma-Aldrich, Cat. F0291-25UG), 1 mg/ml heparin (Stem Cell Technologies, Cat. 07980), 100 U/ml penicillin, and 100 µg/ml streptomycin. Lentiviruses were generated by co-transfection of lentiviral plasmids (pKLV-U6gRNA-PGKpuro2ABFP and lentiCas9-Blast) and second-generation packaging vectors (pMD2G and psPAX2) into Gp2-293 cells using calcium-phosphate precipitation. High-titer virus was collected at 36 and 60 h after transfection and used to infect cells in the presence of 7 μg/ml polybrene (Sigma-Aldrich, Cat. H9268-5G) for 12 h. Transduced cells were selected with Blasticidin (3 μg/ml) (Gibco, Cat. A11139-03) and Puromycin (1.5 μg/ml) (Sigma-Aldrich, Cat. P8833-25MG). To isolate clones of U251 and U87 cells carrying the desired MGMT genomic rearrangements, cells transduced with the specific pKLV-dual gRNA vectors were selected with Puromycin and then exposed to Temozolomide. Single TMZ-resistant clones were recovered with cloning cylinders and expanded. Temozolomide was purchased from Selleckchem (Cat. S1237); O 6 -benzylguanine was from Sigma-Aldrich (Cat. B2292-50MG).

Immunoblotting

Cells were lysed with RIPA lysis buffer (20 mM Tris-HCl, 150 mM NaCl, 1% NP-40, 1 mM EDTA, 1 mM EGTA, 1% sodium deoxycholate, 0.1% SDS), and protein concentrations were determined with the DC protein assay kit (Biorad, Cat. 5000111). Proteins were run on homemade SDS-PAGE gels and transferred to nitrocellulose membranes (Amersham, Cat. GEHE10600003).
Membranes were first incubated in blocking buffer (5% milk, 0.1% Tween, 10 mM Tris at pH 7.6, 100 mM NaCl) and then with primary antibodies: MGMT (Biosciences, Cat. 557045, Lot. 6280927, 1:2000), Alix (Cell Signaling, Cat. 2171, Lot. 5, 1:1000), and TSG101 (BD Transduction Laboratories, Cat. 612696, Lot. 7208980, 1:2000) overnight at 4 °C; p85 (Millipore, Cat. 0619, Lot. 3009962, 1:10,000), GAPDH (Santa Cruz, Cat. Sc-365062, Lot. J1314, 1:500), and Vinculin (Sigma-Aldrich, Cat. V9131, 1:10,000) for 1 h at room temperature. Anti-mouse or anti-rabbit HRP-conjugated antibodies (Jackson Immunoresearch) were used to detect the proteins of interest by chemiluminescence with ECL (Amersham, RPN2106).

Immunohistochemistry

Tissue samples were fixed in 10% formalin, paraffin-embedded, and cut into 3-μm sections, which were mounted on SuperFrost Plus microscope slides (Thermo Scientific, Cat. 165061) and dried. Immunohistochemistry was performed on an automated immunostaining platform (Ventana Discovery XT, Bond Max II, Leica). Antigen retrieval was performed with low-pH buffer (CC1m) for p-H2AX and high-pH buffer (ER2) for BrdU. Endogenous peroxidase was blocked with 3% hydrogen peroxide, and slides were then incubated with anti-BrdU (BU-1, GE Healthcare, RPN202, Lot. 341585, 1:100) and anti-phospho-histone H2AX (Ser139) (γH2AX, JBW301, Millipore, 05-636, Lot. DAM1493341, 1:4000). After the primary antibody, slides were incubated with the corresponding secondary antibodies where needed (rabbit anti-mouse, Abcam) and with horseradish peroxidase-conjugated visualization systems (OmniMap anti-Rabbit, Ventana, Roche; Bond Polymer Refine Detection, Bond, Leica). The immunohistochemical reaction was developed using 3,3′-diaminobenzidine tetrahydrochloride (DAB) (ChromoMap DAB, Ventana, Roche; Bond Polymer Refine Detection, Bond, Leica), and nuclei were counterstained with Carazzi’s hematoxylin.
Finally, slides were dehydrated, cleared, and mounted with a permanent mounting medium for microscopic evaluation.

Colony-forming assay

Cells were seeded in six-well culture plates (5,000 per well) or 12-well plates (2,200 per well) in triplicate. Four hours after seeding, Temozolomide (100 or 200 μM) and/or O 6 -benzylguanine (100 μM) was added to the cells, and fresh medium with drugs was replaced after 6 days. Twelve days after plating, resistant colonies were either stained with 0.5 M crystal violet (Alfa Aesar, Cat. B21932) or isolated using cloning cylinders (Corning, Cat. 31666) and subsequently expanded.

Flow cytometry

Cells were seeded in six-well culture plates (100,000 per well) in duplicate and cultured in the presence of temozolomide (100 μM) and/or O 6 -benzylguanine (100 μM) for 72 h. Cells were harvested in phosphate-buffered saline (PBS), washed twice in cold PBS, fixed with cold 100% ethanol on ice for 30 min, and pelleted by centrifugation at 1,200 rpm for 10 min. The pellet was washed twice with PBS containing 1% fetal bovine serum (FBS) and stained with 200 μl of propidium iodide (PI) (50 μg/ml) overnight. Samples were acquired on a FACSCanto II (Becton Dickinson). All data were analyzed using FlowJo 9.9.4 (Treestar, Oregon). The gating strategy is described in Supplementary Fig. 8.

MTT assays

For viability assays, 10^4 h543 tumor spheres were plated per well in 96-well plates. After addition of the indicated concentrations of temozolomide or vehicle, cells were incubated for 7 days at 37 °C and 5% CO2. Then 10 μl of MTT reagent (Sigma, 5 mg/ml in PBS) was added to the medium and incubated for 4 h. After adding 100 µl of a 1% SDS, 4 mM HCl solution, absorbance at 595 nm was recorded 24 h later with a plate reader.

High-throughput microscopy

Cells (2,000 per well) were grown on μCLEAR-bottom 96-well plates (Greiner Bio-One, Cat.
736-0230) and treated with temozolomide (100 μM) and/or O 6 -benzylguanine (100 μM) in triplicate for 48 h. EdU (10 μM) (Life Technologies, S.A., Cat. A10044) was added to the medium for the last hour of incubation with the drugs. Cells were then fixed in 4% PFA for 20 min, permeabilized, and incubated for 1 h in blocking solution (3% BSA in 0.1% Triton X-100 PBS). EdU incorporation was detected using the Click-iT EdU Alexa Fluor Imaging kit (Life Technologies, S.A., Cat. C-10425). Phospho-histone H2AX (Ser139) (γH2AX, Merck, Cat. 05-363, Lot. 2310355, 1:1000) and 53BP1 (Novus Biologicals, Cat. NB100-304, Lot. A2, 1:3000) immunofluorescence was performed using standard procedures: cells were incubated with primary antibodies overnight at 4 °C, and then with secondary antibodies conjugated with Alexa 488 (rabbit) (Life Technologies, Cat. A-21206, Lot. 198155, 1:400) or Alexa 555 (mouse) (Life Technologies, Cat. A-31570, Lot. 1048568, 1:400). Nuclei were visualized by DAPI staining (Sigma-Aldrich, Cat. D8417). Images from each well were acquired automatically on an Opera High-Content Screening System (Perkin Elmer) under non-saturating conditions with ×20 (γH2AX) and ×40 (53BP1) magnification lenses. Images were segmented using the DAPI staining to generate masks matching cell nuclei, from which mean signals were calculated. Cell-cycle phases were inferred from DNA content (DAPI intensity × nuclear area) and mean EdU intensity: cells with 2n DNA content and EdU-negative were classified as G1 phase; cells with <4n DNA content and EdU-positive, as S phase; cells with 4n DNA content and EdU-low or -negative, as G2 phase.

Genomic DNA isolation, gene fusion analysis, and methylation-specific PCR

Genomic DNA was isolated as previously described 11 . Briefly, cell pellets were incubated in lysis buffer (10 mM Tris-HCl pH 8, 100 mM NaCl, 0.5 mM EDTA, 10% SDS, and proteinase K) for 4 h at 55 °C, and genomic DNA was extracted using phenol:chloroform (1:1) and Phase Lock Heavy 2-ml tubes (5PRIME, Cat. 2302830).
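The cell-cycle gating rule above amounts to simple thresholding on two per-nucleus features; a minimal sketch follows (the normalization convention, with the 2n peak at 1.0 and the 4n peak at 2.0, and the tolerance are illustrative assumptions, not values from the paper):

```python
def classify_cell(dna_content, edu_positive, tol=0.2):
    """Classify cell-cycle phase from normalized DNA content and EdU status.

    dna_content: DAPI intensity * nuclear area, normalized so that the
    G1 (2n) peak sits at 1.0 and the G2 (4n) peak at 2.0 (assumed convention).
    edu_positive: whether mean EdU intensity exceeds the positivity cutoff.
    """
    if edu_positive and dna_content < 2.0 + tol:
        return "S"    # actively replicating, <4n DNA content
    if abs(dna_content - 1.0) <= tol and not edu_positive:
        return "G1"   # 2n DNA content, not replicating
    if abs(dna_content - 2.0) <= tol:
        return "G2"   # 4n DNA content, EdU low or negative
    return "unclassified"
```

Checking the EdU-positive branch first matters: a late-S cell near 4n content would otherwise be misassigned to G2.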
In all, 0.1 M sodium acetate and 100% cold ethanol were then added to the recovered aqueous phase. Samples were centrifuged at 15,000 rpm for 25 min. After washing in 70% cold ethanol, draining, and dissolving in water, genomic DNA was quantified. For detection of gene fusion events, 100 ng of DNA was amplified with specific primers listed in Supplementary Table 2. PCR products were cloned into the pGEM-T Easy vector (Promega, Cat. A1360) and submitted for Sanger sequencing. The MGMT promoter methylation status was determined by methylation-specific PCR (MSP). In total, 2 μg of DNA was subjected to bisulfite treatment using the EpiTect Bisulfite kit (Qiagen, Cat. 59104). DNA was cleaned up following the manufacturer's instructions and quantified. Then 30 ng of DNA per sample was PCR-amplified with Platinum SuperFi DNA polymerase (Invitrogen, Cat. 12351-010) and specific primers to detect the methylated and unmethylated MGMT promoter (Supplementary Table 2). The PCR amplification protocol was as follows: 94 °C for 1 min; then 35 cycles of denaturation at 94 °C for 30 s, annealing at 60 °C for 30 s, and extension at 70 °C for 30 s; followed by a 7-min final extension.

Reverse transcription quantitative PCR and analysis of cDNA fragments

RNA from cells was isolated with TRIzol reagent (Invitrogen, Cat. 15596-026) according to the manufacturer's instructions. For reverse transcription PCR (RT-PCR), 500 ng of total RNA was reverse transcribed using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Cat. 4368814). The cDNA was used either for quantitative PCR or for Sanger sequencing: it was PCR-amplified using primers listed in Supplementary Table 2, gel-purified, ligated into the pGEM-T Easy vector (Promega, Cat. A1360), and submitted for Sanger sequencing. Quantitative PCR was performed using the SYBR-Select Master Mix (Applied Biosystems, Cat. 4472908) according to the manufacturer's instructions.
qPCRs were run, and the melting curves of the amplified products were used to verify the specificity of the amplification. The threshold cycle number for each gene analyzed was normalized to ACTIN. Sequences of the primers used are listed in Supplementary Table 2.

Fluorescence in situ hybridization (FISH)

Two sets of FISH probes were used to study the various MGMT genomic rearrangements. Bacterial artificial chromosomes (BACs) mapping to the 5′ and 3′ MGMT flanking regions (10q26 cytoband) were purchased from BACPAC Resource CHORI and labeled by nick translation with Spectrum Green (RP11-165L12 and RP11-343L20) and Spectrum Orange (RP11-960B17 and RP11-357N5) fluorochromes, respectively, to generate a break-apart locus-specific FISH probe. FISH analyses were performed according to the manufacturers' instructions on Carnoy's-fixed cells mounted on positively charged slides (SuperFrost, Thermo Scientific). Briefly, slides were first dehydrated, denatured in the presence of the FISH probe at 85 °C for 10 min, and left to hybridize overnight at 45 °C in a DAKO hybridizer. Finally, slides were washed with 20× SSC (saline-sodium citrate) buffer with Tween-20 at 63 °C and mounted in fluorescence mounting medium with DAPI. FISH signals were enumerated manually within nuclei. FISH images were captured using a CCD camera (Photometrics SenSys) connected to a PC running the CytoVision 7.4 image analysis system (Applied Imaging Ltd., UK).

Exosome isolation

To purify exosomes from cell culture, conditioned medium was collected after 72 h from 10 × 15 cm plates and centrifuged at 500 × g for 10 min, followed by centrifugation at 12,500 × g for 25 min and 100,000 × g for 80 min. The exosome pellet was then washed with cold PBS, centrifuged at 100,000 × g for 80 min, and re-suspended in 100 μl PBS.
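For the qPCR analysis above, the text states only that threshold cycles were normalized to ACTIN; one standard way to turn such Ct values into fold changes is the 2^-ΔΔCt method, sketched here as an illustration (the use of this specific formula is an assumption, not the paper's stated calculation):

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change by the 2^-ddCt method.

    ct_target / ct_ref: threshold cycles of the gene of interest and the
    normalizer (e.g. ACTIN) in the test sample; *_cal: the same values in
    the calibrator (control) sample.
    """
    d_ct_sample = ct_target - ct_ref          # normalize to ACTIN
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2 ** -(d_ct_sample - d_ct_calibrator)
```

For example, a sample whose target amplifies one cycle later than the control (after ACTIN normalization) has half the relative expression.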
Isolation of exosomes from mouse serum followed the same protocol after an initial centrifugation at 3,000 × g for 20 min and a further one at 12,000 × g for 20 min. NanoSight analysis was used to confirm the integrity and expected size of the isolated exosomes. Centrifugations were done at 10 °C in a Beckman Optima X100 ultracentrifuge with a Beckman 50.4Ti or 70.1Ti rotor. Exosome protein content was determined with the DC protein assay kit. Particle content was determined by measuring 1 µl of exosome aliquot diluted in 1 ml PBS on an NTA instrument (NanoSight; Malvern) equipped with a blue laser (405 nm).

Orthotopic GBM models, bioluminescence imaging, and in vivo treatment

U251 sgCtrl and BTRC-MGMT cells were stably transduced with the HSV1-tk/GFP/firefly luciferase (TGL) triple-reporter construct, and GFP-positive cells were purified by FACS. Four- to five-week-old immunodeficient nu/nu mice were injected intracranially with the sorted cells (5 × 10^5 cells) using a stereotactic apparatus (Stoelting). After intracranial injection, mice were imaged every week to follow tumor growth and drug response. Mice were anesthetized with 3% isoflurane before retro-orbital injection of d-luciferin (150 mg/kg) (Perkin Elmer S.L., Cat. 122796) and imaged with an IVIS Xenogen machine (Caliper Life Sciences). Bioluminescence analysis was performed using Living Image software, version 3. Beginning on the day tumors became clearly visible by IVIS, mice were randomized into two groups, and temozolomide (50 mg/kg) or vehicle (DMSO) was administered intraperitoneally daily for 5 days. For survival curves, mice were monitored until they developed symptoms of disease (lethargy, weight loss, macrocephaly). For IHC analysis, BrdU (150 µg) (Sigma-Aldrich, Cat. B9285) was administered intraperitoneally, and mice were sacrificed 2 h later.
For the h543 orthotopic model, 3 × 10^5 sgCtrl and SAR1A-MGMT cells were transplanted intracranially; after 1 week, mice were randomized into two groups, and temozolomide (100 mg/kg) or vehicle (DMSO) was administered intraperitoneally daily for 5 days. Mice were housed at 22 °C with a 12-h light/12-h dark cycle in the specific-pathogen-free animal house of the Spanish National Cancer Centre, under conditions in accordance with the recommendations of the Federation of European Laboratory Animal Science Associations (FELASA). All animal experiments were approved by the Ethical Committee (CEIyBA) and performed in accordance with the guidelines stated in the International Guiding Principles for Biomedical Research Involving Animals, developed by the Council for International Organizations of Medical Sciences (CIOMS).

Statistics and reproducibility

Data in bar graphs are presented as mean and SD, unless otherwise indicated. Results were analyzed by unpaired two-tailed Student's t tests or the Wilcoxon rank-sum test in the R programming language (version 3.5.3). Kaplan–Meier survival curves were produced either with GraphPad Prism 6 (Fig. 4a) or with the "survival" 2.44-1.1 and "survminer" 0.4.6 R packages (Supplementary Figs. 1e, f and 7g); P-values were generated using the log-rank statistic. Heatmaps, boxplots, and barplots were made with the "ComplexHeatmap" 1.2.0, "ggplot2" 3.2.1, and "ggpubr" 0.2.5 R packages, respectively. Each experiment was repeated independently (minimum n = 2) with similar results, unless specifically indicated.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
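The Kaplan–Meier curves referenced above were produced in Prism and R; for illustration, the estimator itself reduces to a product over event times, sketched here in pure Python (the input format is an assumption for this sketch):

```python
def kaplan_meier(observations):
    """Kaplan-Meier survival estimator from (time, event) pairs.

    event=True means the endpoint (e.g. death) was observed at that time;
    event=False means the subject was censored. Returns (time, S(t)) pairs
    at each time where at least one event occurred.
    """
    surv = 1.0
    curve = []
    for t in sorted({time for time, _ in observations}):
        at_risk = sum(1 for time, _ in observations if time >= t)
        deaths = sum(1 for time, event in observations if time == t and event)
        if deaths:
            surv *= 1 - deaths / at_risk   # multiply in this step's survival
            curve.append((t, surv))
    return curve
```

Censored subjects contribute to the at-risk counts up to their censoring time but never trigger a drop in the curve, which is what distinguishes this estimator from a naive fraction-alive calculation.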
Data availability

The raw sequencing data of the newly sequenced samples have been deposited in the European Genome-phenome Archive (EGA) ( ) under accession study ID EGAS00001004544, and in the Genome Sequence Archive (GSA) of the Beijing Institute of Genomics (BIG) Data Center, Chinese Academy of Sciences ( ), under BioProject ID PRJCA001580. Data from SMC are available in EGA under accession study ID EGAS00001001800. Data from TCGA were downloaded from the NCI Genomics Data Commons (GDC) data portal ( ). Previously published CGGA data are available at the GSA BIG ( ) under BioProject IDs PRJCA001746 and PRJCA001747. The reference human genome hg19 was downloaded from , and the genome annotation file from ftp://ftp.ensembl.org/pub/release-75/gtf/homo_sapiens/Homo_sapiens.GRCh37.75.gtf.gz . All other data supporting the findings of this study are available within the article and its information files and from the corresponding author upon reasonable request. Source data are provided with this paper.

Code availability

The 1p/19q co-deletion status was predicted using the custom CNAPE software available at: .
Like most chemotherapy drugs, temozolomide induces DNA damage in cancer cells. Gliomas can progress by repairing this damage through an enzyme encoded by the MGMT gene. In patients whose MGMT activity is blocked by a modification of its promoter called hypermethylation, cancer cells cannot repair the temozolomide-induced damage and collapse. Unfortunately, up to 40-50% of patients are intrinsically resistant to temozolomide: these patients express high levels of MGMT, and their tumors continue growing under treatment. Now, the recent study carried out by the CNIO and HKUST teams shows that a subset of patients acquires a specific genetic alteration that can evade the combined therapy.

Control changes hands

"Translocation of the MGMT gene was observed in a group of patients," says Massimo Squatrito. "These genomic rearrangements involve the fusion of MGMT with other genes, which means that MGMT is now regulated by the promoters it is fused with, contributing to its overexpression. When this type of rearrangement takes place, the temozolomide-induced DNA damage is repaired very efficiently and the glioma continues growing even under treatment." The team at HKUST validated the presence of the genetic rearrangement in a subset of a large cohort of recurrent tumors from different hospitals, mainly Beijing Tiantan Hospital. Using the CRISPR-Cas9 genome-editing tool, the team at CNIO replicated some of these translocations in different cell and animal models and confirmed that they can confer resistance to temozolomide. "It appears that the translocations are not present in the original tumor, only in recurrent ones, that is, tumors that emerge after the original cancer is treated," Squatrito says. "This indicates that the resistance may arise as a consequence of the treatment itself."
The findings may lead to changes in how therapy efficacy is monitored: "Currently, the only known therapeutic biomarker in gliomas is the analysis of MGMT promoter status. When methylated, the MGMT gene is silenced and the patient is predicted to respond to temozolomide. The study shows this method is no longer valid when a genetic translocation has occurred: the promoter might still be blocked, but the gene is overactivated by other promoters and hence could contribute to tumor recurrence." Another relevant finding from animal models is that the MGMT translocations are present in exosomes, the extracellular vesicles released by glioma cells into the bloodstream. "If this finding is validated in patients, it could become a useful tool for early resistance detection. A liquid biopsy, that is, a simple test using blood samples, could tell us when patients are developing resistance to temozolomide, and they could be advised to switch to other therapeutic options when these become available." The next step will be identifying novel treatment interventions for temozolomide-resistant patients. | 10.1038/s41467-020-17717-0
Medicine | High heels may enhance a man's instinct to be helpful | Guéguen, N. (2014). High Heels Increase Women's Attractiveness. Archives of Sexual Behavior. DOI: 10.1007/s10508-014-0422-z Journal information: Archives of Sexual Behavior | http://dx.doi.org/10.1007/s10508-014-0422-z | https://medicalxpress.com/news/2014-11-high-heels-instinct.html | Abstract Research has found that the appearance of women’s apparel helps increase their attractiveness as rated by men and that men care more about physical features in potential opposite-sex mates. However, the effect of sartorial appearance has received little interest from scientists. In a series of studies, the length of women’s shoe heels was examined. A woman confederate wearing black shoes with 0, 5, or 9 cm heels asked men for help in various circumstances. In Study 1, she asked men to respond to a short survey on gender equality. In Study 2, the confederate asked men and women to participate in a survey on local food habit consumption. In Study 3, men and women in the street were observed while walking in back of the female confederate who dropped a glove apparently unaware of her loss. It was found that men’s helping behavior increased as soon as heel length increased. However, heel length had no effect on women’s helping behavior. It was also found that men spontaneously approached women more quickly when they wore high-heeled shoes (Study 4). Change in gait, foot-size judgment, and misattribution of sexiness and sexual intent were used as possible explanations. Access provided by Universität des es, -und Working on a manuscript? Avoid the common mistakes Introduction Research has found that across all cultures men care more about physical features in potential opposite-sex mates while women care more about resource features (Buss, 1989 ; Kenrick, Groth, Trost, & Sadalla, 1993 ; Shackelford, Schmitt, & Buss, 2005 ). 
In a study of sex differences in human mate preferences conducted across 37 cultures, Buss ( 1989 ) reported that males valued physical attractiveness in potential mates more than females did in 34 of those cultures (mean Cohen’s d = 0.59). Women’s physical attractiveness is therefore important, and men can be influenced by various aspects of women’s appearance.

Women’s Physical Appearance and Men’s Behavior

Prior studies have indicated that different morphological features of women are associated with varied levels of attractiveness in the eyes of men. Furnham, Lavancy, and McClelland ( 2001 ), Henss ( 2000 ), Singh ( 1993 ), and Singh and Luis ( 1995 ) reported that a lower waist-to-hip ratio in women was associated with greater physical attractiveness as evaluated by men. Another important morphological factor associated with female attractiveness is breast size. Beck, Ward-Hull, and McLear ( 1976 ) found that males from the United States rated a female figure with larger-than-average breasts more favorably than others. Wildman and Wildman ( 1976 ) found that the bust was the most sexually stimulating female body part for males and that men preferred busts larger than what women possess on average. In a field study, Guéguen ( 2007a ) reported that male but not female drivers were more likely to stop to offer a ride to a female hitchhiker as her breast size increased. In Western cultures, several studies have reported that women’s hair color influences male behavior and judgments: men were more likely to approach women with blond hair for a date. Swami and Barrett ( 2011 ) and Guéguen ( 2012a ) reported that the same female confederate was approached more frequently in a bar or nightclub setting as a blond than as a redhead or brunette. Lynn ( 2009 ) observed in a survey that higher waitressing tips from men were associated with having blond hair. It has also been found that men are sensitive to other female physical cues not related to morphological appearance.
Sometimes slight modifications in women’s physical appearance are associated with variation in men’s reactions. Some studies have reported that women with visible tattoos are perceived by men as more sexually promiscuous (Swami & Furnham, 2007 ). Guéguen ( 2013 ) reported that a female confederate with a temporary tattoo on her lower back was approached more readily by men and perceived as more likely to have sex on a first date. Research has shown that men’s behavior and evaluations are also affected by women’s cosmetics. Graham and Jouhar ( 1981 ) reported that photographs of female targets wearing make-up, compared with the same photographs of female targets with no make-up, were rated by men as tidier, more feminine, and more physically attractive, as well as more secure, sociable, interesting, poised, confident, organized, and popular. Cox and Glick ( 1986 ) examined how average-looking women were perceived after a professional make-over compared with being cosmetics-free and found that cosmetics were positively associated with femininity and sexiness. Workman and Johnson ( 1991 ) instructed participants to view one of three colored photographs of a professional model wearing heavy, moderate, or no cosmetics, and found that cosmetics significantly enhanced the impression of attractiveness and femininity. Cash, Dawson, Davis, Bowen, and Galumbeck ( 1989 ) conducted an experiment in which American college students were photographed wearing their typical facial cosmetics and again after removal of their makeup; participants then rated the physical attractiveness of the women. Male judgments were more favorable when the women were photographed with cosmetics than without. Female judgments, on the other hand, were not affected by the presence versus absence of make-up.
Women’s Clothing Appearance and Men’s Behavior

Research has also found that men react differently toward women based on their clothing. Abbey ( 1987 ) found that males were more likely than females to interpret a low-cut top, shorts, tight jeans, or no bra as an indication of sexual receptiveness. Abbey, Cozzarelli, Mclaughlin, and Harnish ( 1987 ) found that female targets wearing revealing clothing were rated by men as sexier and more seductive than those wearing non-revealing clothing. This effect was confirmed by Koukounas and Letch ( 2001 ), who found that an actress wearing revealing clothing was more often perceived by male than by female observers as having greater sexual intent. Research has also reported that the color of women’s clothing is associated with variation in men’s behavior. In a study by Niesta Kayser, Elliot, and Feltman ( 2010 ), men who viewed an ostensible conversation partner in a red as opposed to a green shirt chose to ask her more intimate questions (Experiment 1) and sat closer to a woman in a red shirt than in a blue shirt (Experiment 2). Guéguen ( 2012b ) observed that female hitchhikers wearing red elicited a higher rate of stopping among male drivers offering a ride; however, no color effect was found for the behavior of female drivers. Men’s judgments are also influenced by women’s clothing color: Pazda, Elliot, and Greitemeyer ( 2012 ) found that women wearing red were perceived by men as more sexually receptive. Overall, the studies reported above show that men’s behavior toward and judgment of women are affected by a wide range of appearance cues, including morphology, cosmetics, and clothing. Our objective was to evaluate an aspect of clothing appearance that has received less attention in the literature: shoes, and more specifically heel height.
Popular magazines and ads frequently associate high-heeled shoes with female sex-appeal and attractiveness. Magazines and adult films also use a host of models wearing high-heeled shoes, thus suggesting a relation between high heels and sexiness. To our knowledge, only one study has examined the effect of women’s shoe heels on men’s judgment. Morris, White, Morrison, and Fischer (2012) recorded women walking in flat shoes and high heels, but their participants viewed only point-light videos of the women walking. Morris et al. found that participants judged the targets in the high-heel condition as significantly more attractive than those in the flat-heel condition. Morris et al. also analyzed the biomechanical changes produced by the heels and found that heels altered the women’s gait, reducing stride length and increasing pelvis tilt and hip rotation. They suggested that women probably use high heels to artificially increase the femininity of their gait and thus to become more attractive to men. If walking with high heels increases the femininity of women’s gait, it could be hypothesized that high heels increase women’s attractiveness to men and could positively influence men’s behavior. Four field studies were conducted to evaluate the relation between women’s heel height and their attractiveness to men. It was hypothesized that high heels would increase women’s attractiveness as perceived by men. Study 1 Method Participants The participants were 90 men between the ages of 25 and 50 years chosen at random while they were walking alone in the pedestrian areas of a town (around 60,000–70,000 inhabitants) situated on the south coast of Brittany in France. Procedure A 19-year-old female confederate acted as an interviewer in this study. Her clothing appearance was identical in the three experimental conditions: a black straight skirt, a white long-sleeved shirt, and a black suit jacket.
New, black leather shoes were used: one pair with flat heels (flat heels condition), a second with 5-cm heels (medium heels condition), and a third with 9-cm heels (high heels condition). They were in fashion and considered to be dress pumps that enveloped the sides of the foot, the heel, and the toes, leaving the ankles and the instep visible. The shoes had neither straps nor laces. In the two conditions with heels, except for the length, precaution was taken to use the same form of heels: the top of the heel was 4.5 × 5 cm² and tapered to 1.5 × 1.5 cm² at the bottom. The confederate stationed herself in front of a store and chose a passer-by walking in her direction. If a child, an adolescent, an older person, or a group of people passed, the confederate waited until a person corresponding to the profile (a man of roughly 25–50 years of age walking alone) walked by. The confederate made contact by saying: “Excuse me, sir. We are currently conducting a survey on gender equality. Would you agree to answer our questionnaire? It will take 3–4 minutes.” Participants who refused were thanked. Those who complied then immediately responded to the questionnaire and were thanked at the end. The confederate was instructed to change her shoes after soliciting 10 participants. The order of the shoe models worn was randomly determined. Results The number of men who complied with the confederate’s survey request was the dependent variable measured in this study, and the data are shown in Table 1. Table 1: Frequency and percentage of participants who complied with the survey request according to experimental condition. With the number of participants who complied with the survey request, a 3 (experimental condition) × 2 (compliance) chi-square test was performed and revealed a significant relationship, χ2(2, N = 90) = 8.83, p = .012, ф = .30.
Further comparison revealed that the flat heels condition was not significantly different from the medium heels condition, χ2(1, N = 60) = 1.68, p = .19, ф = .17, but was significantly different from the high heels condition, χ2(1, N = 60) = 8.86, p = .003, ф = .36. The difference between the medium heels condition and the high heels condition approached significance, χ2(1, N = 60) = 3.07, p = .08, ф = .22. Discussion The results of this first study provided evidence that the length of the shoe heels worn by the confederate influenced participants’ behavior. Men responded more favorably to the confederate’s survey request as the length of her heels increased. These results suggested that high heels increased the attractiveness of the woman confederate to men, which, in turn, increased their compliance with the survey request. However, this exploratory study had several limitations. First, the use of only one woman confederate precludes generalization of the shoe heel effect to all women. Second, the sample sizes were small (N = 30 in each condition), so replication with larger samples appeared necessary. Third, and most importantly, only male participants were solicited in this study. Thus, the question remains whether the shoe heel length effect found in this study could be explained by an interaction between the men and the shoes worn by the female confederate or whether the effect was explained by the shoes per se. If this effect is explained by shoe heel length alone, then we could expect to find a similar effect with female participants. If the effect occurs only with men, then we would expect to find this positive effect of shoe heel length only with male participants. The objective of the second study was to replicate our first study using more female confederates and larger sample sizes, and to test participants of both genders.
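The omnibus and pairwise chi-square tests reported for Study 1 can be reproduced with SciPy. The cell counts below are hypothetical, chosen only because they are consistent with the reported test statistics (the paper's exact frequencies are in Table 1, which is not reproduced here); the follow-up tests omit the Yates correction to match the values quoted in the text.

```python
# Sketch of Study 1's chi-square analyses, using hypothetical compliance
# counts (per condition, n = 30) consistent with the reported statistics.
from scipy.stats import chi2_contingency

# rows: complied / refused; columns: flat, medium, high heels
complied = [14, 19, 25]                 # hypothetical counts
refused = [30 - c for c in complied]

# Omnibus 3 (condition) x 2 (compliance) test
chi2, p, dof, _ = chi2_contingency([complied, refused], correction=False)
print(f"chi2({dof}, N=90) = {chi2:.2f}, p = {p:.3f}")

# Pairwise 2 x 2 follow-ups, without Yates correction
pairs = [((0, 2), "flat vs high"), ((0, 1), "flat vs medium"), ((1, 2), "medium vs high")]
for (i, j), label in pairs:
    table = [[complied[i], complied[j]], [refused[i], refused[j]]]
    c2, pv, _, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi2(1, N=60) = {c2:.2f}, p = {pv:.3f}")
```

With these counts the omnibus test gives χ2(2) ≈ 8.83 and the flat-vs-high comparison χ2(1) ≈ 8.86, matching the reported values.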
Study 2 Method Participants The participants were 180 men and 180 women (approximately between the ages of 25 and 50) chosen at random while they were walking alone in pedestrian areas of two towns (around 60,000–70,000 inhabitants) situated on the south coast of Brittany in France. Procedure Four young women (M = 19.1 years, SD = 0.4) served as confederates in this study. All had the same foot size and nearly the same height (167–168 cm) and weight (54–57 kg). Their clothing appearance was nearly identical to that in Study 1: a dark straight skirt, a white long-sleeved shirt, and a dark suit jacket. The same three types of smart black shoes as those used in Study 1 were worn by the confederates. The procedure was strictly the same as in Study 1, but the topic of the survey was related to local food consumption habits. To prevent possible multiple solicitations of the same pedestrian, the study was conducted at the same time in each town by the different confederates, and there was a minimum distance of 1 km between confederates. Each confederate was instructed to solicit 90 passersby (45 males and 45 females). The confederate was instructed to change her shoes after soliciting 15 participants. The order of the shoe models worn was randomly determined. Results The number of participants who complied with the confederates’ survey request was the dependent variable measured in this study, and the data are shown in Table 2. Table 2: Frequency and percentage of participants who complied with the survey request according to experimental condition and the pedestrians’ gender. A 2 (participant gender) × 3 (experimental condition) log-linear analysis using the frequency of participants who complied with the survey request as the dependent variable was used. The interaction effect between participant gender and experimental condition was significant, χ2(2) = 11.92, p = .003, ф = .18.
Additional analysis revealed that the difference among the three experimental conditions was not statistically significant for female participants, χ2 < 1, ф = .06. However, the difference for the male participants was significant, χ2(1) = 20.24, p < .001, ф = .33. Further comparison within this sub-group revealed that the flat heels condition was significantly different from the medium heels condition, χ2(1) = 4.03, p = .045, ф = .21, and the high heels condition, χ2(1) = 20.31, p < .001, ф = .48, while the difference between the medium heels condition and the high heels condition was also significant, χ2(1) = 6.81, p = .009, ф = .28. Discussion The results of this second study confirmed and extended those found in Study 1. Our results again provided evidence that the length of the shoe heels worn by the confederates influenced the participants’ behavior. However, we observed that this effect occurred only with male participants, while the height of the confederates’ heels appeared to have no effect on women’s receptivity to the survey request. Thus, it seems that the effect observed in the first study was not explained by the length of heels per se but by the fact that female confederates exhibited high-heeled shoes in front of men. These results suggest that women’s heels probably increased their attractiveness to men, which, in turn, increased the probability that men would comply with their request. It could be argued that the men in our study accepted the survey request more favorably when the woman interviewer wore shoes with high heels because they probably wanted to interact with the interviewer. The objective of the third study was to evaluate whether a greater desire to interact with a woman confederate could explain men’s behavior. Thus, in this study, we examined the effect of women’s heels on spontaneous helping behavior.
It has previously been found that spontaneous helping behavior is a way for men to easily initiate social contact with a potential opposite-gender mate (Guéguen, 2007b, 2010). Study 3 Method Participants The participants were 180 men and 180 women (approximately between the ages of 20 and 45) chosen at random while they were walking alone in pedestrian areas of two towns (60,000–70,000 inhabitants) situated on the south coast of Brittany in France. Procedure The four young women from Study 2 acted as confederates in this study. Their clothing appearance was identical to that in Study 2. The same three types of smart black shoes as those used in the two previous studies were worn by the confederates. The confederate selected a participant walking in her direction while apparently looking for something in her bag as she walked. The confederate was carefully instructed to approach men or women walking alone, aged roughly between 20 and 45. The confederate was also instructed to avoid men or women who stopped near a store. Once a participant was identified, the confederate began walking in the same direction as the participant, about three meters ahead. The confederate held a handbag and accidentally dropped a glove. The confederate continued walking, apparently unaware of her loss. Responses were recorded as help if the participant warned the confederate within 10 s after losing the object. If not, the confederate acted as if she was searching for something in her handbag, looked around in surprise, and returned to pick up the glove without looking at the participant. The confederate was instructed to change her shoes after testing 10 men and 10 women. The order of the shoe models worn was randomly determined. Results The number of times help was offered was the dependent variable measured in this study, and the data are shown in Table 3.
Table 3: Frequency and percentage of participants who helped the confederate according to experimental condition and the pedestrians’ gender. A 2 (participant gender) × 3 (experimental condition) log-linear analysis using the frequency of participants who helped the confederate as the dependent variable was used. The interaction effect between participant gender and experimental condition was significant, χ2(2) = 8.11, p = .017, ф = .15. Additional analysis revealed that the difference among the three experimental conditions was not statistically significant for female participants, χ2 < 1, ф = .07, while it was for the male participants, χ2(2) = 17.42, p < .001, ф = .31. Further comparison with only male participants revealed that the flat heels condition was significantly different from the medium heels condition, χ2(1) = 3.97, p = .046, ф = .21, and the high heels condition, χ2(1) = 17.42, p < .001, ф = .43. The difference between the medium heels condition and the high heels condition was also significant, χ2(1) = 5.51, p = .018, ф = .24. Discussion Our results showed that, in general, spontaneous help was offered more readily to women as the length of their shoe heels increased. However, the effect of shoe heels was again observed only with male participants. These new findings confirm those of our two previous studies and suggest that women with high-heeled shoes are more attractive to men. In this study and in the two previous studies, it could be argued that men helped the women more readily in the high heels condition because their wish to interact with the confederate was probably higher. Previous research has reported that spontaneous male helping behavior toward women is a good method to evaluate their motivation to interact with the target and is related to the target’s attractiveness (Guéguen, 2007b, 2010).
The objective of the fourth study was to evaluate the effect of women’s shoe heels directly on courtship approach, given that spontaneous helping behavior is perhaps a platonic way for men to interact with a woman perceived as more attractive, without any ulterior motive. Study 4 Method Participants The participants were 36 young men aged approximately between 20 and 28 years. They were tested while they were present in one of the three bars where the study was conducted. The bars were located in the center of Vannes, a medium-sized resort town (70,000 inhabitants) on the Atlantic coast of France. Procedure The study was conducted between 8:30 p.m. and midnight on six Wednesday and six Saturday nights. Three half-hour sessions were held each night: 9:30–10:00 p.m., 10:30–11:00 p.m., and 11:30 p.m.–midnight. On one day, the first session began in one bar while the second began in the second bar and the last session in the third bar; the reverse order was used the next day. As a result, 36 observational periods were obtained (2 days a week × 6 weeks × 3 sessions daily = 36). Two male observers (20 years old) were seated in the bar where the study took place. The woman confederate from Study 1 acted as a target in this study. While the study was conducted, the woman confederate wore a skirt and an off-the-shoulder, tight-fitting top. With the exception of her shoes, the woman confederate did not change her appearance in terms of make-up, hairstyle, and so on. The same three types of smart black shoes as those used in the three previous studies were worn by the confederate. The confederate was instructed to try to sit at a free table near the bar where single men usually stand. She was also instructed to cross her legs to one side so that people around her could clearly view her shoes. The two young male observers took their places in the bar 2 min before the woman confederate entered.
They were instructed to find a spot from which they could observe the bar and the tables but not to choose a table near the bar. The female confederate was instructed to sit down without exhibiting interest toward the other people present in her environment. When the woman confederate was seated, one of the observers turned on a chronometer (an Oregon Scientific chronometer, model C510) and stopped it when the woman confederate crossed her arms to signal that a man had made contact. A man’s behavior was considered to be a contact if he expressed verbal behavior toward the female confederate (sentences such as “Hello,” “Hello, I’ve never seen you here before,” “Hello, who are you waiting for?” and so on). We decided not to consider the men’s nonverbal behavior, such as a fixed gaze or a smile, although it has been found that such behaviors are expressed in courtship interaction (Grammer, Kruck, Juette, & Fink, 2000; Moore, 1985; Moore & Butler, 1989), because these behaviors are difficult to interpret, numerous, and difficult to count. Furthermore, our woman confederate was instructed not to look around her and so was not able to observe such behaviors. When verbal contact was made by a man, the woman confederate was instructed to say, “Hello, I am waiting for someone who will probably arrive in one or two minutes.” At this time, the second observer got up, came to the confederate’s table, said, “Hello Lucie, sorry for the delay,” and then sat down. This was found to stop further interaction, and the man left the woman confederate and her “friend” alone. If there was no male contact after 30 min, the female confederate was instructed to leave the bar. Results The lapse of time before a man made contact with the confederate was the dependent variable measured. Data are shown in Table 4.
Table 4: Mean time elapsed before the first man’s contact in the three experimental conditions (in minutes). A one-way between-groups ANOVA was performed with the time lapse as the dependent variable. A main effect of experimental condition was found, F(2, 33) = 7.18, p = .003, with post hoc tests revealing that the flat heels condition was not significantly different from the medium heels condition, LSD test, p = .26, but was significantly different from the high heels condition, LSD test, p = .001, and the difference between the high heels condition and the medium heels condition was also significant, LSD test, p = .015. Discussion Using a behavioral measure in a field setting, we found that a woman wearing high-heeled shoes received quicker interest from surrounding males. Thus, from a strictly behavioral point of view, we found that our woman confederate with high heels was considered “more interesting” for a courtship approach, since surrounding men decided to approach her more quickly. Studies conducted in France (Guéguen, 2011a, 2013) have reported that the latency of a man’s approach is a good way to gauge his interest in a young woman target in a bar and that this dependent variable is clearly correlated with the level of attractiveness attributed by men to the woman target. Thus, in our study, the female confederate with high heels was probably more attractive to the men in the bar, leading them to make contact with her more rapidly. General Discussion In four experimental studies conducted in several field settings, we found that the length of shoe heels worn by women exerted an effect on men’s behavior. In all four studies, men more readily engaged in social interaction with a woman wearing high heels.
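The one-way between-groups ANOVA used in Study 4 can be sketched with SciPy. The contact latencies below are entirely hypothetical (Table 4's actual means are not reproduced here); they only illustrate the design of 12 half-hour sessions per shoe condition, giving the 2 and 33 degrees of freedom quoted in the text.

```python
# Sketch of Study 4's one-way between-groups ANOVA, with entirely
# hypothetical contact latencies (minutes), 12 sessions per condition.
from scipy.stats import f_oneway

flat = [13, 14, 15, 13, 14, 15, 13, 14, 15, 13, 14, 15]    # hypothetical
medium = [11, 12, 13, 11, 12, 13, 11, 12, 13, 11, 12, 13]  # hypothetical
high = [8, 9, 10, 8, 9, 10, 8, 9, 10, 8, 9, 10]            # hypothetical

f_stat, p_value = f_oneway(flat, medium, high)
# df between = k - 1 = 2; df within = N - k = 36 - 3 = 33
print(f"F(2, 33) = {f_stat:.2f}, p = {p_value:.4f}")
```

With 36 observations in three groups, a main effect is declared when F exceeds the critical value for (2, 33) degrees of freedom (about 3.28 at α = .05).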
In Studies 1 and 2, it was found that men, but not women, accepted a survey request more often as the heel height of the female interviewer increased, while in Study 3 we found that men, but not women, spontaneously helped a female confederate more often as her shoe heel length increased. In Study 4, we found that men in a bar more readily initiated contact with a female confederate who wore shoes with high heels. Overall, it could be stated that women’s shoe heel length exerted a powerful effect on men’s behavior. To our knowledge, this is the first time that an effect of women’s shoe heels has been found on men’s behavior. Previous studies have reported that women’s clothing appearance exerted an effect on men’s judgment and behavior toward them (Abbey, 1987; Abbey et al., 1987; Guéguen, 2011b, 2012b; Koukounas & Letch, 2001; Niesta-Kayser et al., 2010; Shotland & Craig, 1988). However, none of these studies examined the effect of shoes and, more specifically, the effect of shoe heels. The question that remains is how to explain why shoes with high heels worn by women influence men’s behavior. One possible argument is that the foot size of the female confederate was perceived to be smaller as her heel length increased. In a study by Fessler et al. (2005), women presented in line drawings varying only with regard to foot size were perceived as more attractive with a small foot size. This effect was reported in nine cultures, suggesting that men possess an evolved preference for small feet in females. As children’s feet are smaller than those of adults, a small foot on a woman may be perceived as a sign of youthfulness. Research has found that men prefer women who exhibit morphological traits associated with youthfulness (Buss, 1994; Symons, 1995).
Thus, in our studies, the heels could have created a difference in the participants’ perception of the female confederate’s foot length, which, in turn, led them to perceive the female confederate as more attractive and youthful. While this explanation appears interesting and could be examined in further studies, it probably cannot serve as the only theoretical explanation for our results. Indeed, in the third study, the participants did not see the women’s feet from the front because they were walking behind the female confederate. In this situation, it was difficult for the participant to evaluate the foot size of the female target. Is it possible that men were attracted and more positively reactive to the confederate with high heels because she became taller than average in this condition? Again, while this explanation could be examined in further studies by comparing women of different heights wearing heels of different heights, it probably cannot serve as the only theoretical explanation for our studies. First, in the fourth study, we found that men were positively influenced by high heels even when the female confederate was seated. Second, several studies have reported that while women prefer taller men, men prefer shorter women (Swami et al., 2008). It has also been reported, when examining responses to lonely hearts advertisements, that tall men received more responses from women but that a woman’s height had no influence on men’s responses (Pawlowski & Koziel, 2002). It could also be argued that the female confederate’s gait changed according to the length of her shoe heels, influencing her attractiveness to men. Previous research examining female nonverbal behavior has reported that subtle variations, including gait, were associated with variations in men’s attractiveness judgments and behavior (Guéguen, 2007b, 2010; Johnson & Tassinary, 2005; Moore, 1985; Perper, 1985; Walsh & Hewitt, 1985).
In the only published study examining the effect of women’s heels, Morris et al. (2012) reported that women wearing heels were perceived as more feminine; in their study, Morris et al. recorded the targets while they were walking, and their participants viewed only point-light videos of the walking women. Nevertheless, Morris et al. reported a perceived difference in the femininity of the targets’ gait. Thus, in our studies, high heels could have influenced the women’s gait or posture, which, in turn, influenced their attractiveness to men. While this account seems well suited to explaining the results found in Studies 1–2 and particularly in Study 3, it is less relevant for explaining the effect of high heels found in Study 4, where the female confederate was seated in a bar. Again, this theoretical hypothesis is insufficient to explain the positive effect of high heels on men’s behavior found in all four of our studies. Finally, a third explanation could be put forward. Perhaps high heels increased women’s attractiveness to men because of the association of high heels with sexual content displayed in the media. Previous research has reported that some female physical characteristics are overrepresented in the media. Rich and Cash (1993) found that the proportion of blondes in three popular U.S. magazines surpassed the base rate of blondes in the population. They also found that in Playboy, a magazine with erotic content directed at a male audience, this proportion of blondes was higher than in magazines for American women such as Vogue or Ladies Home Journal. It was also found that this overrepresentation increased with time. It was concluded that the overrepresentation of blondes in the media may be sending men a message that associates blondness with sexuality. Several recent studies have reported that men approached women with blond hair more readily than their brunette counterparts (Guéguen, 2012a; Swami & Barrett, 2011).
Thus, it could be argued that high-heeled shoes exert a similar effect on men’s judgment because highly sexy female models frequently appear in the media wearing high-heeled shoes. Adult magazines and pornography also display models with high-heeled shoes. Research has found that men generally overestimate female sexual interest, especially when examining women’s clothing appearance (Abbey, 1987; Abbey et al., 1987; Guéguen, 2011b; Koukounas & Letch, 2001; Shotland & Craig, 1988). Thus, the over-association of high heels with women’s sexiness and sexual content could lead men to misinterpret the sexual intent of women with high heels. Given that men are more eager for sexual intercourse than women (Clark, 1990; Clark & Hatfield, 1989; Hatfield, 1983), they were probably more eager to approach a woman wearing high-heeled shoes. This misinterpretation of sexual intent associated with shoe appearance could explain why men were more ready to accept the women’s survey request (Studies 1 and 2), to help them spontaneously (Study 3), or to approach them in a bar (Study 4). Of course, the various explanations proposed here are still speculative, and further experiments are necessary to identify the processes activated by women’s shoe heels in men. Perhaps the theoretical explanation is multifactorial and involves several processes. It would be interesting in further studies to examine the effect of shoe heels on judgments of the target’s attractiveness and sexual intent; judgments of foot size and gait would also be interesting questions to examine. Again, the results of these studies reveal how men focus on women’s physical attributes when judging and interacting with them.
Previous research has shown that men value physical appearance in long-term (Buss, 1989; Kenrick et al., 1993; Shackelford et al., 2005) and in short-term mating (Buunk, Dijkstra, Fetchenhauer, & Kenrick, 2002; Li, Bailey, Kenrick, & Linsenmeier, 2002; Li & Kenrick, 2006; Sprecher & Regan, 2002). Research has also shown that multiple aspects of women’s physical appearance are used to evaluate their mating value. Morphological factors are not the only factors associated with this judgment. Clothing appearance (Abbey, 1987; Abbey et al., 1987; Guéguen, 2011b; Koukounas & Letch, 2001; Shotland & Craig, 1988), clothing color (Guéguen, 2012b; Niesta-Kayser et al., 2010), cosmetics (Cash et al., 1989; Jacob, Guéguen, Boulbry, & Ardicioni, 2009), and hair color (Guéguen & Lamy, 2009; Swami & Barrett, 2011) were associated with variation in men’s approach to and judgment of women. Congruent with these studies, it seems that shoes, and particularly shoe heels, also act as a cue that influences men’s approach. Change history 11 October 2019 Retraction Note to Arch Sex Behav | If it's help a woman needs, maybe she should wear high heels. That's the message from Nicolas Guéguen of the Université de Bretagne-Sud in France, after he observed how helpful men are towards women in high heels versus those wearing flat, sensible shoes. The study, published in Springer's journal Archives of Sexual Behavior, is the first ever to investigate how the height of a woman's shoe heel influences how men behave towards her. Research across various cultures has shown at length how important physical features, such as body size and the style and color of a woman's clothing, influence a man's behavior towards and judgment of a woman.
Even though a link between high-heeled shoes and sexiness is implied by the many models wearing such shoes in magazines and adult films, only one previous study has tested the effect of women's shoe heels on men's judgment. Guéguen therefore set out to conduct field experiments to test the influence of different shoe styles on men's helping behavior. He watched what happened when a woman in flat shoes asked people to complete a survey, and whether or not they complied more readily when she was wearing high heels. He also tested whether or not people's spontaneous urge to help changed when the same woman - again wearing shoes with different heel sizes - dropped a glove. The findings show that men's helpfulness increased along with the height of the heels a woman was wearing. However, heel height had no influence on other women's willingness to help. In the final experiment, Guéguen found that men in a bar were quicker to start chatting with a woman wearing heels than when she was wearing flat shoes. "Women's shoe heel size exerts a powerful effect on men's behavior," summarizes Guéguen, who argues that the results of these studies once again reveal how men focus on women's physical attributes when judging and interacting with members of the opposite sex. He believes that more research must be done to examine whether this effect depends on a woman's shoe heel size and on any change of gait due to wearing high heels. He speculates that, because sexy female models often wear such shoes in the media, men have started to associate the wearers of high-heeled shoes with those having sexual intent. | 10.1007/s10508-014-0422-z |
Earth | Coral gardening is benefiting Caribbean reefs, study finds | Stephanie A. Schopmeyer et al, Regional restoration benchmarks for Acropora cervicornis, Coral Reefs (2017). DOI: 10.1007/s00338-017-1596-3 Journal information: Coral Reefs | http://dx.doi.org/10.1007/s00338-017-1596-3 | https://phys.org/news/2017-07-coral-gardening-benefiting-caribbean-reefs.html | Abstract Coral gardening plays an important role in the recovery of depleted populations of threatened Acropora cervicornis in the Caribbean. Over the past decade, high survival coupled with fast growth of in situ nursery corals have allowed practitioners to create healthy and genotypically diverse nursery stocks. Currently, thousands of corals are propagated and outplanted onto degraded reefs on a yearly basis, representing a substantial increase in the abundance, biomass, and overall footprint of A. cervicornis . Here, we combined an extensive dataset collected by restoration practitioners to document early (1–2 yr) restoration success metrics in Florida and Puerto Rico, USA. By reporting region-specific data on the impacts of fragment collection on donor colonies, survivorship and productivity of nursery corals, and survivorship and productivity of outplanted corals during normal conditions, we provide the basis for a stop-light indicator framework for new or existing restoration programs to evaluate their performance. We show that current restoration methods are very effective, that no excess damage is caused to donor colonies, and that once outplanted, corals behave just as wild colonies. We also provide science-based benchmarks that can be used by programs to evaluate successes and challenges of their efforts, and to make modifications where needed. We propose that up to 10% of the biomass can be collected from healthy, large A. cervicornis donor colonies for nursery propagation. We also propose the following benchmarks for the first year of activities for A. 
cervicornis restoration: (1) >75% live tissue cover on donor colonies; (2) >80% survivorship of nursery corals; and (3) >70% survivorship of outplanted corals. Finally, we report productivity means of 4.4 cm yr −1 for nursery corals and 4.8 cm yr −1 for outplants as a frame of reference for ranking performance within programs. Such benchmarks, and potential subsequent adaptive actions, are needed to fully assess the long-term success of coral restoration and species recovery programs. Introduction In the last 20 yr, active restoration to mitigate declines in coral cover has increased worldwide and coral propagation for restoration is now considered an essential component of coral conservation and management plans (Rinkevich 2005 ; Precht 2006 ; Edwards and Gomez 2007 ; Lirman and Schopmeyer 2016 ). In 2012, over 60 restoration projects focusing on the threatened coral genus Acropora were identified in the Caribbean (Young et al. 2012 ). Once one of the Caribbean’s predominant reef-building coral genera, Acropora has suffered significant degradation from both biological and anthropogenic stressors (Jaap et al. 1988 ; Porter and Meier 1992 ) and is now listed as threatened under the Endangered Species Act (NMFS 2006 , 2014 ). The decline of acroporids leads to the loss of reef function and structural complexity, both of which are critical for reef growth, fisheries habitat, coastal protection, and overall reef biodiversity (Bruckner 2002 ; Alvarez-Filip et al. 2009 ). Adapted from terrestrial silviculture, “coral gardening” is one of the most commonly used coral propagation and restoration methods (Rinkevich 1995 ; Bowden-Kerby 2001 ; Epstein et al. 2003 ; Shafir et al. 2006 ; Shafir and Rinkevich 2008 ; Shaish et al. 2008 ).
This method involves removing a limited amount of tissue and skeleton (from a few polyps to small branches) from healthy wild coral populations and propagating an initial stock within ex situ or, more commonly, in situ coral nurseries. Throughout the Caribbean, in situ coral nurseries are used to propagate a renewable source of the threatened staghorn coral, Acropora cervicornis for restoration and species recovery (Johnson et al. 2011 ). Nursery-reared corals are “outplanted” from nurseries to reef restoration sites to bridge spatial gaps between existing populations (Griffin et al. 2012 , 2015 ), enhance A. cervicornis abundance, supplement genetic and genotypic diversity (Lirman and Schopmeyer 2016 ), and promote natural recovery through the creation of sexually reproductive populations (Baums 2008 ). Acropora cervicornis is considered a good candidate for use in restoration projects due to its high growth rates, natural use of fragmentation for asexual reproduction, ability to heal wounds, and high survivorship of fragments compared to other coral species (Gladfelter et al. 1978 ; Tunnicliffe 1981 ; Bak and Criens 1982 ; Highsmith 1982 ; Lirman et al. 2010 , 2014a ). The ability of coral propagation and restoration programs to create renewable sources of corals for use in restoration using low-cost, science-based methods has been well documented (Soong and Chen 2003 ; Lirman et al. 2010 , 2014a ; Rinkevich 2014 ). While long-term monitoring is required by permitting agencies for many, if not all, restoration projects, few studies have published the impacts of collection on existing wild populations used to populate coral nurseries (Epstein et al. 2001 ; Shafir et al. 2006 ; Lirman et al. 2010 ), or the success of restoration projects over the course of more than a few months (Bruckner and Bruckner 2001 ; Griffin et al. 2015 ) especially in the Caribbean where coral gardening activities generally began more recently than in the Red Sea and the Pacific. 
This information is needed to evaluate the performance of restoration efforts. These knowledge gaps within the literature can lead to criticism of restoration or population enhancement programs and call into question the ability of such programs to successfully create functioning populations (Rinkevich 2014 ). In this study, we address these gaps by evaluating the effects of fragment collection on donor colonies, documenting the success of coral propagation within in situ coral nurseries, and tracking the survival and productivity of nursery-reared corals for up to 2 yr after outplanting in Florida and Puerto Rico, USA. Based on our analyses, which combine information from thousands of staghorn corals during “normal” conditions (i.e., no disease or thermal stress) from >120 distinct genotypes from six geographical regions, we propose coral propagation and outplanting benchmarks that may be used by existing and new restoration programs to assess performance and progress towards restoration goals. Materials and methods Beginning in 2009, funding was received as part of the American Recovery and Reinvestment Act (National Oceanic and Atmospheric Administration, NOAA) for the creation or expansion of Acropora species recovery programs along the Florida Reef Tract and in Puerto Rico. In-water coral nurseries were installed or expanded by a partnership of state and federal government agencies, non-profit organizations, and universities, thus creating the largest coordinated species recovery effort in the world (Table 1 ). To populate each local nursery, partners collected only small branches or branch tips equaling <10% of the total colony size (Epstein et al. 2001 ) from healthy, wild (donor) A. cervicornis colonies using hand cutters, or collected corals of opportunity (i.e., corals found detached from the reef). Donor colonies were monitored for at least 12 months post collection to determine impacts of fragment collection. 
The coral genotypes of the donor colonies were identified by the Baums Lab (Penn State University) using microsatellite markers as described by Baums et al. ( 2009 ). Collected branches or colonies were fragmented to create smaller fragments and were secured to propagation platforms, including cement blocks (Florida nurseries only) and floating underwater coral arrays (FUCAs; Puerto Rico only) (described in detail in Johnson et al. 2011 ). Coral nurseries were maintained (monthly to quarterly) to remove coral competitors including macroalgae, hydroids, and bivalves and predators such as Hermodice and Coralliophila . Individually tagged corals were monitored for survival, growth, and condition. After allowing coral fragments to grow and create an initial stock in nurseries, corals were outplanted within each region to local reefs identified to have suitable conditions and substrate for coral survival. Corals were outplanted by securing them to the reef using nails, cable ties, and/or epoxy as described by Johnson et al. ( 2011 ). Individually tagged outplants were monitored (monthly or quarterly) for survival, condition, and growth for at least 1 yr after transplantation (2 yr in three Florida regions). Table 1 Program information for in situ Acropora cervicornis propagation nurseries and outplanting as of 2016 Full size table Growth and survival data were collected for both nursery corals and coral outplants by six nursery programs: Nova Southeastern University (Broward County, BC), University of Miami (Miami-Dade County, MIA), Florida Fish and Wildlife Conservation Commission (Middle Keys, MK), Mote Marine Laboratory (Lower Keys, LK), The Nature Conservancy (Dry Tortugas National Park, DRTO), and NOAA (Puerto Rico, PR). We compiled the most complete dataset possible from our partner restoration practitioners, but some data were not available for all metrics from all partners. 
If more than one type of propagation platform was used within a nursery, growth and productivity data were calculated separately for each platform type due to potential differences in extension rates. Nursery data were collected for at least 1 yr after corals were installed within the nurseries: BC, MIA, MK, and PR (2010–2011) and LK and DRTO (2011–2012). Outplant survival data were collected for 1 yr at multiple sites in LK and PR (2012–2013) and for 2 yr at multiple sites in BC, MIA, and MK (2012–2014). Coral growth and productivity data were collected for 1 yr following outplanting in BC, MIA, MK, and LK. Outplant data were collected in DRTO as part of a separate outplant experiment conducted during 2014–2015. Not all parameters were available for each region, and therefore, some variability within the dataset may exist due to potential differences in timing and environmental conditions. No evidence of bleaching or disease was observed in any nursery or outplant location during this study. Donor colonies Donor colonies were scouted on reefs with known presence of A. cervicornis and were selected based on healthy coloration, size (>25 cm maximum diameter), and tissue cover (90–100% live tissue cover prior to collections). Collections typically included 3–4 branches [mean fragment total linear extension, TLE (SD): MIA = 5.4 (1.9); MK = 10.3 (4.5); LK = 4.5 (2.1); DRTO = 3.3 (0.9); PR = 4.4 (1.4)] or ≤10% of the total colony (Electronic Supplementary Material, ESM, Fig. S1a). Donor colonies were monitored in five regions (BC, MIA, MK, LK, DRTO) for at least 1 yr after fragment collection to determine if fragmentation affected colony survival (all regions) and growth (MIA). The status of donor colonies was determined by estimating percent tissue mortality (0–100%) of each colony. When mortality was observed, the cause of mortality was noted if easily identified (i.e., predation, algal/sponge competition, breakage, disease). 
All donor colony data were collected during 2009–2010 (except in DRTO where data were collected in 2011). In MIA, donor colonies, as well as adjacent undamaged control colonies for comparison, were monitored at each collection site. Growth of donor ( n = 20) and control colonies ( n = 20) was calculated by measuring linear extension of branch tips marked 2 cm from the apical or fragmented tip with a small cable tie (Shinn 1966 ; Lirman et al. 2010 ). Growth was documented for three fragmented and three control (unfragmented) branches within each donor colony, as well as within undamaged control colonies (3 branch tips). Survivorship of nursery and outplanted fragments/colonies Fragment/colony survival was determined by counting the number of fragments (within nurseries) and colonies (outplanted onto wild reefs) with some live tissue (if partial mortality was <100%, the colony was considered alive). When mortality was observed, the cause of mortality was noted. Growth and productivity of nursery and outplanted fragments/colonies Growth and productivity were calculated for nursery and outplanted corals. Total linear extension (Johnson et al. 2011 ) was determined for each coral individual using a flexible ruler; measurements of all branches were calculated to the nearest cm. Annual growth was determined as change in TLE over time for each coral. Annual productivity was calculated as the amount of coral produced relative to the tissue/skeleton present at the start of the study (annual productivity = growth/initial TLE) as described by Lirman et al. ( 2014a ). Mean annual growth and productivity were calculated by pooling all nursery corals (including all genotypes) or outplants (including all genotypes and sites) for each region. Only fragments with positive growth that were alive for the 12-month period and that did not undergo partial tissue mortality or fragmentation were included within the data set. 
Thus, our approach documents maximum growth potential by excluding fragments that experienced breakage or partial mortality from the analyses as described by Edmunds ( 2007 ). If a full 12 months of TLE measurements was not available, TLE values were extrapolated linearly to calculate 12-month values and, therefore, may underestimate growth due to exponential growth associated with branch development and colony complexity (Lirman et al. 2014a ). If corals within a program were not measured using TLE but were assigned into size classes, the median length of each size class was used to calculate growth (i.e., if a colony was binned into a size class of 10–20 cm, 15 cm was used as the TLE measurement). This method was used at MK to collect outplant growth data. Restoration benchmarks Restoration benchmarks are usually established in comparison to reference or pristine conditions (Lirman and Miller 2003 ). However, in the case of Caribbean coral reefs, such conditions no longer exist due to decades of decline in A. cervicornis populations. In most cases, corals are grown in nurseries that do not replicate reef conditions and corals are often outplanted onto reefs devoid of surviving Acropora . This creates a situation in which restoration metrics such as survivorship and growth can only be compared and evaluated within and among programs, and not against historical or undisturbed conditions. The approach used here is to present regional and overall means from six large-scale programs that have used similar restoration methods (but highlighting methodological differences when appropriate) to measure survival and growth of staghorn donor colonies, nursery fragments, and outplanted nursery-reared corals. Based on these data, we propose simple benchmarks for each step of the coral gardening process that can be used by practitioners to compare their local metrics to those obtained from our extensive database. 
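As a minimal sketch of the growth and productivity calculations described above (function and variable names are my own, not from the paper; the formulas follow the Methods text and Lirman et al. 2014a):

```python
def tle_from_size_class(lower_cm, upper_cm):
    """Midpoint (median length) of a size class, used when corals were
    binned into size classes rather than measured directly,
    e.g. a 10-20 cm class is scored as 15 cm of TLE."""
    return (lower_cm + upper_cm) / 2.0

def annual_growth(initial_tle_cm, final_tle_cm, months=12):
    """Change in total linear extension (TLE, cm), extrapolated linearly
    to 12 months when a full year of measurements was not available
    (which may underestimate growth, since growth is exponential)."""
    return (final_tle_cm - initial_tle_cm) * 12.0 / months

def annual_productivity(initial_tle_cm, final_tle_cm, months=12):
    """Annual productivity = growth / initial TLE: coral produced relative
    to the tissue/skeleton present at the start, removing the confounding
    effect of initial fragment size on growth."""
    return annual_growth(initial_tle_cm, final_tle_cm, months) / initial_tle_cm

# A hypothetical 5 cm fragment that reached 27 cm of TLE after 12 months:
print(annual_growth(5, 27, 12))        # 22.0 cm per year
print(annual_productivity(5, 27, 12))  # 4.4
print(tle_from_size_class(10, 20))     # 15.0
```

The example numbers are illustrative only; they are chosen so the productivity value (4.4) matches the overall nursery mean reported in the paper.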
Admittedly, regional means collapse environmental and genetic/genotypic variability; however, we argue that the value of the proposed simple benchmarks is that they represent a large number of corals from many genotypes that were grown in different environments. Thus, while new programs will not replicate the environmental conditions or the genotypes used in the analyses, they will still be able to compare their success metrics to those proposed, and determine the relative performance of their components. Large departures from the regional means can be used as early warning signals, and closer attention can be paid to those steps of the gardening method that may need to be modified. Similar approaches based on collating large datasets and/or expert opinion have been used extensively to develop benchmarks for water quality, seagrass health (Madden et al. 2009 ), partial coral mortality (Lirman et al. 2014b ), and overall reef health (Kramer et al. 2015 ). Here, we propose a stop-light framework based on the relative performance (mean) of each region for each restoration criterion, where values within 10% of the overall mean are considered “green” (above proposed desirable benchmark: no action or improvements required), values 10–20% below the mean are considered “yellow” (caution: some adjustments should be made), and values >20% below the mean are considered “red” (action must be taken to improve methods, design, or site selection). These benchmarks are only proposed for sites and years when there have been no large-scale disturbances like temperature anomalies or hurricanes. Results Effects of fragment collection on donor colonies Donor colonies (sample sizes: BC = 20; MIA = 20; MK = 10; LK = 7; DRTO = 20 colonies) were monitored for at least 12 months after fragment collection and all metrics collected indicated limited or no significant impacts of fragment collection. 
Tissue mortality was not observed at the lesion site of fragment collection on any donor colonies in any region (ESM Fig. S1b). Only a few donor colonies (one colony at MIA and three colonies at DRTO; 5.1% of all donor colonies) experienced complete mortality. Mean percent tissue cover of surviving donor colonies for all regions combined was high (85.0 ± 5.8%; Fig. 1 ) 1 yr after fragment collection, with limited partial mortality. Partial mortality was attributed to algal overgrowth, predation ( Hermodice , Coralliophila , and damselfish), or breakage. The MIA donor colonies experienced the highest partial tissue mortality but, importantly, there were no significant differences in mean partial mortality between donor (29.8 ± 8.6%) and control (22.5 ± 7.4%) colonies ( p = 0.278), further indicating the lack of impacts of collection. In MIA (the only region where both donor and control colonies were monitored), there were no significant differences in growth or productivity between fragmented (mean growth = 6.9 ± 0.4 cm yr −1 ) or control (mean growth = 6.2 ± 0.7 cm yr −1 ) branches on the donor colony ( p = 0.337 and 0.477, respectively) or between branches on the donor colonies and the control (6.7 ± 0.8) colonies ( p = 0.207 and 0.114, respectively). Fig. 1 Percent tissue cover (± SD) of donor colonies of Acropora cervicornis 12 months after fragment collection in Broward County (BC), Miami-Dade County (MIA), Middle Keys (MK), and Lower Keys (LK). Bars were colored green if values were within 10% of the overall mean, yellow if 10–20% below the mean, and red if >20% below the mean. Overall mean indicated by black line Full size image Survival of nursery corals After installation within in situ coral nurseries, mean fragment survivorship over 12 months for all regions combined was high (90.8 ± 4.5%, range 84.8–96.0%; sample sizes BC = 86; MIA = 123; MK = 174; DRTO = 191; PR = 675 corals and 75 total distinct genotypes; Fig. 2 ). 
The common sources of tissue mortality within nurseries were breakage (4.2%) and predation (2.9%), while algal/sponge/hydroid competition (2.1%) also occurred but was lower due to nursery maintenance practices. Fig. 2 Percent fragment survival of Acropora cervicornis within coral nurseries (all nursery corals combined for each region; Broward County (BC) = 86; Miami-Dade County (MIA) = 123; Middle Keys (MK) = 174; Dry Tortugas National Park (DRTO) = 191; Puerto Rico (PR) = 675 corals). Corals were propagated on cement blocks (Florida nurseries only) and floating underwater coral arrays (FUCAs; Puerto Rico only). Bars were colored green if values were within 10% of the overall mean, yellow if 10–20% below the mean, and red if >20% below the mean. Overall mean indicated by black line Full size image Growth and productivity of nursery corals Corals propagated on cement blocks grew between 10.5 (5.9) and 29.5 (20.2) cm yr −1 and annual productivity values ranged between 2.6 (1.6) and 6.7 (3.5) among regions during the first year (Fig. 3 ). The mean growth rate was 17.8 (7.8) cm yr −1 , while mean productivity was 4.3 (0.8) ( n = 1456 corals and 85 genotypes for all regions combined). Significantly higher growth rates (52.5 ± 28.8 cm yr −1 ) and productivity values (12.3 ± 6.7) were seen for fragments propagated on FUCAs in Puerto Rico ( n = 675) compared with fragments grown on blocks attached to the bottom (sample sizes: BC = 86; MIA = 123; MK = 174; LK = 207; DRTO = 191; p < 0.001). Here, benchmarks were set based on productivity by corals propagated on blocks because all Florida nurseries were initially populated using fixed-to-bottom platforms. Productivity was selected as the benchmark metric to remove the potential confounding effect of initial fragment size on growth (i.e., mean initial fragment sizes ranged from 2.8 ± 0.1 to 10.3 ± 0.6 cm among regions; ESM Table S1; Lirman et al. 2014a ). Fig. 
3 Mean annual productivity values (± SD) of Acropora cervicornis grown on cement block (Broward County (BC) = 86; Miami-Dade County (MIA) = 123; Middle Keys (MK) = 174; Lower Keys (LK) = 207; Dry Tortugas National Park (DRTO) = 191) and floating (Puerto Rico (PR) only; n = 675) (FUCA) platforms within in-water nurseries. Annual productivity was calculated as the amount of coral produced relative to the tissue/skeleton present at the start of the study [annual productivity = growth (cm)/initial TLE (cm)] as described by Lirman et al. ( 2014a ). Bars were colored green if values were within 10% of the overall mean (calculated for corals grown on block only), yellow if 10–20% below the mean, and red if >20% below the mean. Overall mean for annual productivity ( black line ) is calculated using data collected from block platforms only Full size image Survival of outplanted corals Outplant survival was 85.2 ± 9.7% 12 months after transplantation (range 74.7–93.1%; sample sizes: BC = 75; MIA = 264; MK = 195; LK = 150; DRTO = 76; PR = 173 corals; 60 total distinct genotypes, and 25 sites total for all regions combined; Fig. 4 ). Survivorship of outplants 24 months after transplantation was documented in three regions: BC (4 sites; 66.7%), MIA (6 sites; 79.7%), and MK (4 sites; 78.7%). The most prevalent causes of tissue loss for outplanted corals were breakage (23.6%) and predation (12.5%), similar to causes of mortality of nursery corals. While most regions experienced low levels of mortality, there were important differences in survival of outplants among outplant sites. For example, in MIA, the mean survival of outplants was 82.0 ± 24.1% for 12 outplant sites ( n = 150 outplants per site) after 1 yr, but one site had significantly higher mortality after outplanting (>90% mortality; p < 0.001) due to heavy predation by Hermodice and Coralliophila (Fig. 5 ). Fig. 
4 Percent survival of nursery-reared outplants after 1 yr (Broward County (BC) = 75; Miami-Dade County (MIA) = 264; Middle Keys (MK) = 195; Lower Keys (LK) = 150; Dry Tortugas National Park (DRTO) = 76; Puerto Rico (PR) = 173 corals). Bars were colored green if values were within 10% of the overall mean, yellow if 10–20% below the mean, and red if >20% below the mean. Overall mean indicated by black line Full size image Fig. 5 Survival of outplanted Acropora cervicornis at Miami-Dade County (MIA) restoration sites ( n = 150 corals per site) over 12 months. Bars were colored green if values were within 10% of the overall mean, yellow if 10–20% below the mean, and red if >20% below the mean. Overall mean indicated by black line Full size image Growth and productivity of outplanted corals Mean growth rates of corals outplanted ranged from 25.6 (13.9) to 80.6 (45.2) cm yr −1 among regions, with a mean overall growth rate of 46.2 (20.8) cm yr −1 (sample sizes: BC = 75; MIA = 264; MK = 195; LK = 150; DRTO = 76 corals). Mean annual productivity values ranged between 2.6 (0.6) and 11.2 (7.7) among regions, with a mean annual overall productivity value of 4.9 (3.6) (Fig. 6 ). Initial outplant size ranged from 7.6 (0.3) to 21.3 (9.3) cm among regions and mean annual productivity of A. cervicornis outplants was slightly higher than observed in nursery corals (ESM Table S1). Fig. 6 Mean annual productivity values (±SD) of nursery-reared Acropora cervicornis outplants (sample sizes: Broward County (BC) = 75; Miami-Dade County (MIA) = 264; Middle Keys (MK) = 195; Lower Keys (LK) = 150; Dry Tortugas National Park (DRTO) = 76; Puerto Rico (PR) = 173). Bars were colored green if values were within 10% of the overall mean, yellow if 10–20% below the mean, and red if >20% below the mean. 
Overall mean indicated by black line Full size image Discussion In this study, we combined an extensive dataset collected by coral restoration practitioners as part of the US Acropora recovery program to document early (1–2 yr) restoration success metrics for A. cervicornis during non-stressful conditions in Florida and Puerto Rico. By reporting region-specific data on the impacts of fragment collection on donor colonies, survivorship and growth of nursery corals, and survivorship and growth of outplanted corals, we provide the basis for a stop-light indicator framework for new or existing restoration programs to evaluate the performance of the different steps of the coral gardening methodology and to make adjustments when needed. This is the first attempt to collect such baseline data at regional scales and will pave the way for the development of more detailed indicators and benchmarks that can be used to fully assess the progress and impacts of our regional coral and reef restoration efforts in the future. Donor colonies Causing damage above natural levels to severely depleted wild A. cervicornis populations for nursery propagation would represent an undesirable impact of any coral restoration program, especially those working with threatened or endangered species. Here, we show that the collection of fragments did not negatively impact donor staghorn colonies when <10% of a healthy colony is collected as: (1) no tissue mortality was recorded at lesion sites; (2) >85% live tissue coverage was recorded on donor colonies even after 1 yr; (3) donor and control colonies had similar tissue mortality rates; and (4) there was no significant difference in growth of donor and control branches. The impacts of fragment collection on donor colonies can vary among species and even within Acropora . While Epstein et al. ( 2001 ) found high mortality in Stylophora when >10% of the colony was removed, Lohr et al. ( 2015 ) found that small A. 
cervicornis colonies may be fragmented up to 75% with no significant effects. Based on our observations, we suggest the removal of <10% of coral tissue from large, healthy donor colonies as a conservative guideline for initial staghorn collections. When these guidelines are followed and fragments are collected from corals with ~100% live tissue cover, a benchmark of 76% live tissue cover (survival) 1 yr after collection is proposed. Based on this benchmark, the MIA donor colonies appear to have suffered some impacts from fragment collections (~30% mortality, classified as “yellow”). This is the type of result that would indicate that collection methods or colony selection procedures may need to be modified. However, in this case, there were no significant differences in tissue mortality between donor and control (unfragmented) colonies, indicating that all colonies in the Miami region naturally experience tissue losses that are higher on average than those observed within other regions in our study. With this knowledge, benchmarks for MIA (or elsewhere where these patterns are evident) can be adjusted as experimental data are collected. Lastly, further controlled studies such as those conducted here for A. cervicornis need to be performed for other species as new taxa are added to propagation programs in the Caribbean. Nursery survival The very high survivorship of fragments observed at all nurseries combined (only 9.8% of fragments showed 100% mortality) demonstrates that the methods used for the collection, transportation, and deployment of A. cervicornis fragments within nurseries are very efficient and do not cause excessive mortality. This is a consistent result across programs and it is not unreasonable to propose a benchmark of 80% survivorship of staghorn fragments within nurseries over the first year after collection. 
Large deviations from these survivorship values may reflect a genotype or genotypes that are particularly ill-suited to nursery conditions, a sub-optimal nursery environment, or inadequate collection, transportation, and deployment methods. In extreme cases (e.g., several cohorts classified as yellow or red), nurseries may need to be re-designed or moved to a more appropriate location (guidelines for the selection of nursery sites and recommended nursery maintenance appear in Johnson et al. 2011 ). Nursery productivity Coral growth is a true integrator of environmental conditions and, when measured within common gardens (i.e., nursery, single reef sites), provides a metric that can be easily used to assess site and genotype performance (Lirman et al. 2014a ). Growth rates measured within the staghorn nurseries were equal to or higher than growth rates reported for wild A. cervicornis (Shinn 1966 ; Gladfelter et al. 1978 ; Tunnicliffe 1983 ), reinforcing the earlier finding that in-water nurseries, even those established in non-reef habitats (e.g., on sandy channels, over seagrass beds), provide an excellent growth environment for propagated staghorn corals (Lirman et al. 2010 ). In addition, nursery methods have changed over time to promote significantly higher growth rates even between propagation platforms, as seen when comparing nursery growth of staghorn corals on fixed-to-bottom blocks versus mid-water floating FUCAs. Here we report both growth (change in colony size) and productivity (growth normalized to initial colony size) to explore the potential use of these metrics as simple benchmarks. Lirman et al. ( 2014a ) showed that growth of staghorn coral is linearly related to colony size and number of branches. Thus, when growth is compared among fragments or colonies from different cohorts or programs, it is important to note the average size of the units used to avoid using inadequate null hypotheses. Using productivity as a performance metric for A. 
cervicornis resolves the issue of the relationship between size and growth. The mean regional productivity values of staghorn corals grown on blocks attached to the bottom (Johnson et al. 2011 ) ranged from 2.6 to 6.7, with an overall average of 4.4 (>1500 corals from 85 genotypes). The mean productivity value of outplanted corals ranged from 2.7 to 11.2, with an overall mean value of 4.8. A nursery productivity benchmark value of 4 and an outplant productivity value of 4.3 can be used to rank the performance of staghorn genotypes within a program and compare performance across nurseries (that use fixed-to-bottom propagation platforms) and reef sites. Growth and productivity of staghorn corals appear to be much greater when grown on suspended platforms such as FUCAs, lines, or PVC trees (Nedimyer et al. 2011 ; O’Donnell et al. 2017 ). In fact, most Acropora restoration programs now either maintain a combination of both types of platforms or have switched completely to floating platforms. As shown for the high productivity of staghorn corals grown in FUCAs in PR (productivity value = 12.3), productivity benchmarks would need to be adjusted based on growth platform as more data on growth of suspended corals are collected. Unlike the survivorship benchmarks proposed for nurseries that may be used to modify the collection or nursery steps of the gardening program, the productivity benchmark should only be used for the identification or ranking of fast- and slow-growing genotypes within a program. However, productivity values can still be used to inform nursery operations. Slow-growing genotypes may need to be fragmented more frequently to maintain maximum growth rates (Lirman et al. 2014a ), while fast-growing genotypes may need to be outplanted sooner to prevent a given genotype from swamping nursery capacity. The goals of nursery operations are to minimize coral mortality and maximize productivity. 
However, the goals of outplanting go beyond these two nursery goals to include the establishment of genotypically diverse restored populations (Lirman and Schopmeyer 2016 ). This last goal will require the propagation of coral genotypes with both high and low productivity within nurseries. Also, because performance within a nursery is often not predictive of performance when outplanted, and genotypes may have widely different growth rates in different environments (Lirman et al. 2014a ; Drury et al. 2017 ), we suggest that practitioners not discard from their programs genotypes that consistently rank below the proposed benchmark. Slower-growing genotypes may be more resistant to disturbances such as temperature anomalies and, thus, should also be maintained within nursery broodstock. Finally, unlike survivorship, a metric that can be collected quickly, measuring growth and productivity of staghorn corals is a time-consuming process and is commonly only performed on a subset of corals of each new genotype brought into the nursery or outplanted into the wild. While researchers have proposed using coarser and less time-consuming colony measurements such as diameter and height to estimate colony growth (Kiel et al. 2012 ; Huntington and Miller 2013 ), we presently lack benchmarks for these approaches within our database. The regional productivity data provide a frame of reference for new programs but also help highlight large-scale patterns of habitat suitability that need to be further investigated. One example of the value of these data is the documentation of very high mean productivity of both nursery (6.7) and outplanted (11.2) staghorn corals in Broward County, Florida. Broward County is home to the best developed staghorn thickets in Florida (Vargas-Ángel et al. 2003 ; Walker et al. 
2012 ) and the documentation of the highest regional productivity values here suggests favorable environmental conditions (and possible refugia) for this threatened coral species in this area. Such productivity gradients can be used to develop testable hypotheses to explore growth and environment relationships in support of future restoration and management decisions. Outplant survival A crucial step in the coral gardening process is the outplanting of corals back onto natural reefs where, if successful, they will increase reef complexity, build valuable habitat, and sexually reproduce to increase genetic diversity and aid in the recovery of A. cervicornis (Lirman and Schopmeyer 2016 ). Mean survival of outplants across regions during years where no major disturbances were documented was 85.2%, representing higher survival than experimental outplanting conducted in previous A. cervicornis studies (Becker and Mueller 2001 ; Fogarty 2012 ). Thus, we propose a benchmark of 77% for the survivorship of outplanted corals during the first year. In this study, deviations from this value were attributed to predation, damselfish occupation (Schopmeyer and Lirman 2015 ), disease, and physical removal and breakage. While some of the attributes of a good outplant site can be assessed prior to outplanting (e.g., depth, coral cover, presence of living wild staghorn, no disease), the presence of small, cryptic predators like Coralliophila and Hermodice is harder to assess visually and may only be detected based on impacts recorded after outplanting, as seen in our Bowl site in MIA where 90% of corals were lost to predation (Fig. 5 ) (see Johnson et al. 2011 for a description of site-selection guidelines). Therefore, the use of small-scale pilot plots to evaluate site and genotype performance prior to full outplanting is suggested to improve overall outplant survival and success (Johnson et al. 2011 ). 
Within the field of coral reef restoration, research is presently focused on identifying site-specific biological and physical variables that may explain and predict outplanting success (Wirt et al. 2013 ). Wirt et al. ( 2015 ) developed an interactive tool to help inform A. cervicornis outplanting and restoration site prioritization based on current environmental and ecological data such as species richness, coral cover, connectivity, species interactions, and reef health. As more data on the influence of habitat (and genotype) become available, benchmarks of outplanting success will need to be adjusted accordingly (Drury et al. 2017 ). Nevertheless, even when expert judgment was used (as has been the case in the last 10 yr in the USA), the mean survivorship of >80% of outplanted staghorn corals after 1 yr (a figure that integrates a wide range of genotypes and environments) is a remarkable achievement of the Acropora recovery efforts. The three programs that measured survivorship beyond the first year showed that mean outplant survivorship after 2 yr dropped to 75% (only an additional 10% mortality). As more programs collect survivorship data over longer intervals, benchmarks beyond the first year can be developed using the proposed framework. One of the more important outcomes of our project is that, based on survivorship and growth, nursery-grown A. cervicornis colonies behave similarly to wild colonies once outplanted (ESM Fig. S1c, d). Further evidence of this is the growing number of observations of the synchronous spawning of both wild and nursery-reared staghorn corals. Nursery-reared outplants are reaching sexually reproductive sizes within 2 yr of outplanting and developed gonads have been found in nursery corals in Florida (BC, MIA, MK, and LK) 2–3 yr post collection (Authors pers. obs.). 
Outplants have been observed spawning within at least two regions of the Florida Reef Tract (BC, MK), showing clearly that outplants created through coral gardening can contribute to sexual reproduction in this species. Similar findings have been reported for the congeneric A. palmata , where colonies reared from larvae spawned only 4 yr after being outplanted onto natural reefs (Chamberland et al. 2016 ). The main goal of this study was to use our extensive regional data to propose simple propagation and outplanting benchmarks that provide practitioners and managers with useful metrics to evaluate the performance of the steps of the coral gardening framework for recovery of depleted Acropora populations in the Caribbean. A key component for the sustained success of ecological reef restoration is to develop standards to hold practitioners accountable for the responsible propagation and outplanting of corals (Lirman and Schopmeyer 2016 ). Based on analyses of a consistent, large regional dataset, we show that the coral gardening methods used to propagate and restore A. cervicornis populations are very effective, that no excess damage is caused to donor colonies, and that once outplanted, staghorn corals behave just as wild colonies do. We also provide science-based benchmarks that can be used to evaluate successes and challenges of coral gardening efforts and to make modifications where needed. Here, we propose that up to 10% of the biomass can be collected from healthy, large donor colonies for nursery propagation. We also propose the following benchmarks for A. cervicornis during the first year of activities: (1) >75% live tissue cover on donor colonies; (2) >80% survivorship of nursery corals; and (3) >70% survivorship of outplanted corals. Finally, we report productivity means of 4.4 for nursery corals propagated on fixed-to-bottom platforms and 4.8 for outplanted corals as a frame of reference for the ranking of genotype performance within programs. 
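The three first-year benchmarks proposed above lend themselves to a simple programmatic check. The sketch below is illustrative only: the metric names and the example program's values are hypothetical, and the thresholds are the ones quoted in the text (>75% donor live tissue, >80% nursery survivorship, >70% outplant survivorship).

```python
# Hedged sketch: scoring a restoration program's first-year metrics against
# the benchmarks proposed in the text. All program values are hypothetical.

BENCHMARKS = {
    "donor_live_tissue_pct": 75.0,   # >75% live tissue cover on donor colonies
    "nursery_survival_pct": 80.0,    # >80% survivorship of nursery corals
    "outplant_survival_pct": 70.0,   # >70% survivorship of outplanted corals
}

def evaluate_program(metrics: dict) -> dict:
    """Return True/False per benchmark for each metric the program reports."""
    return {
        name: metrics[name] > threshold
        for name, threshold in BENCHMARKS.items()
        if name in metrics
    }

# Hypothetical first-year data for one program
program = {
    "donor_live_tissue_pct": 82.0,
    "nursery_survival_pct": 86.5,
    "outplant_survival_pct": 68.0,
}

for name, passed in evaluate_program(program).items():
    print(f"{name}: {'meets' if passed else 'below'} benchmark")
```

A program falling below a benchmark (here, outplant survivorship) would then trigger the adaptive responses discussed below, such as predator removal or adjusted attachment methods.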
While the data described here are all from the US, other regions within the Caribbean may use these suggestions or develop benchmarks for their specific programs using similar approaches. If a project fails to meet accepted benchmarks, adaptive strategies should be used to improve performance. For example, high mortality within a nursery may indicate that the nursery should be relocated to an area with better water quality or a more sheltered location to avoid storm damage. High mortality at an outplant site may indicate poor water quality, that predator removal is necessary, or that attachment methods should be adjusted. Finally, in addition to the benchmarks proposed here, expanded benchmarks should be developed for other crucial steps in the restoration process such as the contribution of outplanted corals to reef structure and coral cover, the number of corals developing and releasing gametes, and the diversity and structure of the fish and invertebrate community supported by restored sites. Such benchmarks, and subsequent adaptive actions, are needed to fully assess the long-term success of coral restoration and species recovery programs. | A new study found that Caribbean staghorn corals (Acropora cervicornis) are benefiting from "coral gardening," the process of restoring coral populations by planting laboratory-raised coral fragments on reefs. The research, led by scientists at the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science and partners, has important implications for the long-term survival of coral reefs worldwide, which have been in decline from multiple stressors such as climate change and ocean pollution. "Our study showed that current restoration methods are very effective," said UM Rosenstiel School coral biologist Stephanie Schopmeyer, the lead author of the study. 
"Healthy coral reefs are essential to our everyday life and successful coral restoration has been proven as a recovery tool for lost coastal resources." In the study, the researchers set out to document restoration success during their initial two years at several coral restoration sites in Florida and Puerto Rico. Their findings showed that current restoration methods are not causing excess damage to donor colonies as a result of removing coral tissue to propagate new coral in the lab, and that once outplanted, corals behave just as wild colonies do. Staghorn coral populations have declined as much as 90% in the Caribbean since the 1980s. As a result, the species was listed as threatened under the U.S. Endangered Species Act in 2006 to help protect and conserve this species, which forms the foundation of biologically rich coral reef habitats. The findings, published in the journal Coral Reefs, offer a guide for successful restoration and recovery efforts of the threatened species worldwide. Thousands of corals are raised in laboratories and planted onto degraded reefs each year. This study is the first to collect baseline coral restoration survival and productivity data at regional scales, including data from thousands of individual A. cervicornis colonies and more than 120 distinct genotypes across six geographical regions, to develop benchmarks to fully assess the progress and impacts of the region's coral and reef restoration efforts. Coral reefs provide many goods and services including fisheries habitat, food for humans and other ocean species, and protection against natural hazards such as hurricanes. As a result, coral restoration is viewed as an effective and cost-efficient strategy to buffer coastlines from the effects of storm surge and sea-level rise. 
"Coral reefs are declining at an alarming rate and coral restoration programs are now considered an essential component of coral conservation and management plans," said Diego Lirman, UM Rosenstiel School professor of marine biology and ecology and a coauthor of the study. "Our findings provide the necessary scientific benchmarks to evaluate restoration progress moving forward." The study was conducted in collaboration with U.S. Acropora Recovery Program partners: Nova Southeastern University, University of Miami, Florida Fish and Wildlife Conservation Commission, Mote Marine Laboratory, The Nature Conservancy, and the National Oceanic and Atmospheric Administration (NOAA). The public can get involved in restoration through the UM Rescue-a-Reef program, where citizen scientists help plant nursery-grown corals onto depleted reefs alongside scientists. | 10.1007/s00338-017-1596-3 |
Earth | Research finds deep listening could help fight climate change | Diana Bogueva et al. Autonomous Sensory Meridian Response for Responding to Climate Change, Sustainability (2020). DOI: 10.3390/su12176947 | http://dx.doi.org/10.3390/su12176947 | https://phys.org/news/2020-09-deep-climate.html |
Application of Sustainable Development Digitalization and Sustainable Development Digitalization Leading the Way for a Sustainable Future Disaster Risk Reduction and Sustainable Development Discursive Mobilization for Green Transformation: Culture, Policy and Methodology Disruptive Technologies in Smart Systems for Sustainability: Challenges and Opportunities Distributed and Sustainable Manufacturing Divestment and Sustainability Drilling Technologies and Process Safety Drivers and Forms of Sustainability-Oriented Innovation Driving Sustainability through Engineering Management and Systems Engineering Drones for Precision Agriculture: Applications and Impacts in Developing Countries Drought Management in Semi-Arid and Arid Environments for Food Security Dust Events in the Environment Dynamic Sustainability of Small and Medium Size Towns E-business - The Perspective of Systems Thinking and Sustainability E-commerce and Sustainability E-learning, Digital Learning, and Digital Communication Used for Education Sustainability Earthquake Source Imaging and Rupture Process Study Eco-Initiatives and Eco-Attitudes at Leisure Events Eco-Responsible Use of Technologies for Sustainable Development Ecofriendly Materials and Clean Energy Ecological Footprint Indicator Ecological Restoration of Soils and Wastewater Ecologically Sustainable Transport and Other Linear Infrastructure in Asia and Europe Economic Feasibility for Sustainability Economic Growth and Sustainable Wildlife Management Economic Impact of Water and Soil Salinity Economic Profitability and Agriculture Sustainable Development Economic Sustainability: Strategy, Efficiency, Profitability, and Prediction of the Insolvency of Organizations Economic Thought, Theory and Practices for Sustainability Economics of climate change impacts on developing countries: Selected studies on Sub-Sahara Africa and South-East Asia Economics of Climate Smart Agriculture Ecosystem Approach for Sustainability - A special issue in memory 
of James J. Kay (1955-2004) Ecosystem Services and Institutional Dynamics Ecosystem Services in a Bio- and Circular Economy Ecosystem Services in Community Well-Being for a Sustainable Future Edge Artificial Intelligence in Future Sustainable Computing Systems Editorial Board Members’ Collection Series: Transport, Environment and Sustainability Education and Skills for the Green Economy Educational Spaces and Sustainability Effect of 6G and beyond Communication Technologies on Healthcare Sector Effectiveness of Sustainability Reporting Tools Effects of COVID 19 for Sustainable Education, Systems and Institutions Efficiency and Effectiveness of Universities in Achieving Sustainable Development Goals (SDGs) Efficiency and Sustainability of the Distributed Renewable Hybrid Power Systems Based on the Energy Internet, Blockchain Technology and Smart Contracts-Volume II Efficiency in Energy Storage Efficient and Non-polluting Biomass and Wastes Thermal Gasification Efficient Management of Sustainable Supply Chains Efficient Purification and Recycling of Heavy Metals in Wastewater Electric and Hybrid Vehicles in a Smart Grid Scenario Electric Vehicles: Production, Charging Stations, and Optimal Use Electronic Marketing Sustainability Embedded Entrepreneurship and Innovation: The Role of Places, Institutions, and Networks Emergence of Resource and Capability of Logistics Service Providers (LSPs) under Dynamic and Uncertain Environments Emergency Plans and Disaster Management in the Era of Smart Cities Emerging Markets’ Competitive Advantages in Sustainable Management Emerging Research on Socio-Technological Sustainability Transitions Emerging Technologies for Sustainability and Safety Emerging Trend in Achieving ‘Zero Waste’ and Sustainable Consumption Emotional Communication, Organizations, and Sustainability Enabling Sustainable IoT toward State-of-the-Art Technologies Encouraging Social and Environmental Sustainability through Public Procurement End of Life Products and 
Processes in the Emerging Circular Economy Endangered Human Diversity: Languages, Cultures, Epistemologies Energy Economics and Sustainability Energy Harvesting Communication and Computing for Sustainable IT Energy in Districts Energy Modeling Related to Sustainability Energy Policy and Sustainability Energy Sustainability after Global Fossil Energy Depletion Energy Sustainability and Power Systems in an Industry 4.0 Context Energy Sustainability and Tourism Energy Sustainable Management Energy-Efficient Scheduling in Production and Transportation Energy–Sustainable Real Estate: Challenges and Goals Engineering and Decision Support for Sustainable Development Engineering Sustainable Building Materials: Advancing the Structural Performance of Earth-based Technologies Enterprise Sustainability Entrepreneurial Ecosystems in Tourism and Events Entrepreneurial Education Strengthening Resilience, Societal Change and Sustainability Entrepreneurial Innovation and Sustainable Growth in the Era of the COVID-19 Crisis Entrepreneurial Orientation for Sustainable Development Entrepreneurial Sustainability: New Innovative Knowledge Entrepreneurship and Business Cases for a Sustainable Accounting and Financial System Entrepreneurship and Eco-Innovation Entrepreneurship and Sustainability Entrepreneurship and Sustainable Firms and Economies Entrepreneurship and Sustainable Gastronomic Tourism Entrepreneurship and the Sustainable Development Goals for the Business-Health Relationship Entrepreneurship, Competitiveness and Innovation: A Trilogy Research Environment in Sustainable Development Environmental Analysis of Water Pollution and Water Treatment Environmental and Social Sustainability in Relocations of Second Degree Environmental and Sustainability Assessment Using Simulation Modelling Environmental and Sustainability Education: Building Bridges in Times of Climate Urgency Environmental Disclosure and Global Reporting Environmental Economics. 
Contributions to Sustainability of Agricultural Ecosystems Environmental Education for Sustainability Environmental Education: High School Students’ Perception of Sustainability Environmental Geography, Spatial Analysis and Sustainability Environmental Impact Assessment and Sustainable Development Environmental Impact of Livestock Production and Mitigation Strategies Environmental Impacts of Public Transport Systems Environmental Impacts under Sustainable Conservation Management Environmental Justice and Ecosystem Co-governance Environmental Justice and Sustainability Environmental Law for Sustainability Environmental Law for Sustainability 2018 Environmental Laws and Sustainability Environmental Management of Post-Epidemic Mass Carcasses Burial Sites Environmental Management Optimization Environmental Migration and Displacement-Migration Aspirations in Response to Environmental Changes Environmental Policy for Sustainability Environmental Protection Engineering Environmental Resilience in the Pandemic Years 2020–2021 Environmental Sustainability and Strategy: Resilience, Resourcefulness, and Strategic Refresh in the Post-pandemic Era Environmental Sustainability and the Built Environment Environmental Sustainability in IR 4.0 Environmental Sustainability of Agriculture in a Changing Climate Environmental Sustainability, Planning and Energy Efficiency in Energy Communities Environmental, Social and Governance (ESG) Performance Assessment Environmentally Sustainable Diets Environmentally Sustainable Livestock Production Equality, Diversity and Inclusion in STEM for a Sustainable Future Ethics and Sustainability Ethics for a Sustainable World: Academic Integrity and Corrupt Behaviour Evaluating the Impact of Innovation and, Identifying the New Challenges in Smart and Sustainable Manufacturing Evaluation and Indicators for Sustainability: Tools for Governance and Resilience Everyday ICT Consumption and Sustainability Executive Gender Diversity and Corporate Social 
Responsibility Exobiology Studies and the Study of the History, Present, and Future of Life on our Planet Experience Economy in Times of Uncertainty Expert Systems: Applications of Business Intelligence in Big Data Environments Exploring of Sustainable Supplier Selection Exploring Sustainable Pathways: The Role of Carbon Trading for Climate Solutions in Industry 4.0 Exploring the Connection between Digital Communities, Sustainability and Citizen Science Exploring the Interoperability of Public Transport Systems for Sustainable Development Exploring the Workplace Practices that Foster a Sense of Purpose and Meaning in Life Extended Evolutionary Approaches to Sustainability Extension (and) Education for Sustainable Farming Systems External Thermal Insulation Composite Systems (ETICS): Sustainable Technology for the Growth of a Resource-Efficient and Competitive Economy Extractive Industries toward Sustainable Development Facets of Sustainability in Construction Informatics and Project Management Facing the Crisis: Sustainable Practices as Enablers of Business Resilience Fairness in Transport Faith and Sustainable Development: Exploring Practice, Progress and Challenges among Faith Communities and Institutions Family Business Model and Practices of Sustainability Farm Cooperatives and Sustainability Farming System Design and Assessment for Sustainable Agroecological Transition Fates, Transports, Interactions and Monitoring of Emerging Pollutants FDI and Institutional Quality: New Insights and Future Perspectives from Emerging and Advanced Economies Feature Paper on Sustainability Wastewater Management Finance and Agenda 2030: Building Momentum for Systemic Change—A Special Issue Coordinated by the Sustainable Finance Group of SDSN France Finance in a Sustainable Environment: Uncertainty and Decision Making Financial Implications of Sustainability. 
Linkages and Tensions between Natural, Social and Financial Capital Financial Innovation for Industrial Renewal Financial Markets in Sustainable Development Finding Common Ground. Conservation and Conflict Resolution/Prevention Fintech: Recent Advancements in Modern Techniques, Methods and Real-World Solutions Firm Responses to Sustainable Development Goals in the Context of the Digital Era Firm Size and Sustainable Innovation Management Fluid Power Components and Systems Food and Agroindustrial Waste Trends and Prospective towards Circular Bioeconomy Food Choice and Environmental Concerns Food Land Belts Food Security and Environmental Sustainability Food Security and Environmentally Sustainable Food Systems Food Sovereignty, Food Security, and Sustainable Food Production Food Systems Transformation and the Sustainable Development Goals in Africa Footprints on Sustainable Consumption and Production in Emerging Economies Forecasting Financial Markets and Financial Crisis Foresight Methodologies in Field of Sustainability Analyses Forms of Informal Settlement: Upgrading, Morphology and Morphogenesis Framework for Managing Sustainable Development From an Atomized to a Holistic Perspective for Safety Assessment of Structures and Infrastructures: Exploring Ecosystems From Eco-Design to Sustainable Product-Service Systems From Green Marketing to Green Innovation From Lean to Green Manufacturing From Rhetoric to Sustainability Research Impact: Sustainability Assessment, Methods and Techniques to Action the Sustainable Development Goals Frontier Information Technology and Cyber Security for Sustainable Smart Cites Frontiers and Best Practices in Bio, Circular, and Green Growth and Eco-Innovation Frontiers in Sustainable Agroecosystems Design and Management Frontiers in Sustainable Information and Communications Technology Future Design Future of Built Environment Seen from the Lens of Sustainability Science Gender and Rural Development: Sustainable Livelihoods in a 
Neoliberal Context Gender Diversity Across Entrepreneurial Leadership in Hospitality Gender in Sustainable Innovation Geological Storage of CO2 and Climate Control Geospatial Technologies and the 4th Industrial Revolution for Sustainable Urban Environment Geotechnical Stability Analysis for Sustainable Development of Infrastructure GIAHS and Community-Based Conservation in National Parks GIS and Linked Digitisations for Urban Heritage Global Energy System in Transition: Challenge Our Myths Global Warming Global Water Vulnerability and Resilience Globalisation in a VUCA Environment Globalization and Sustainable Growth Going Net Zero—Case Studies of How Firms Are Managing the Challenge of Ambitious Emissions Reduction Aspirations Governance, Power and Institutions and Overall Weaknesses of the SDG System: The Public Participation and the Role of Stakeholders Governing for Sustainability in a Changing Global Order Governing Green Energy Trade: Challenges and Opportunities Government Policy and Sustainability Grape Winery Waste: Sustainability and Circular Economy Gray Shades of Sustainability Issues in Organization Management Green Advertising Impact on Consumer Behavior Green and Sustainable Textile Materials Green Building Green Business: Opportunities and Challenges for Sustainability Green Chemistry and Biorefinery Concepts Green Chemistry for Environment and Health Green City Logistics Green Construction Supply Chain: Sustainable Strategy and Optimization Green Consumer Behavior, Green Products & Services, and Green Brands in the Tourism/Hospitality Industry Green Economy Research for Transformative Green Deals Green Economy, Ecosystems and Climate Change Green Energy and Low-Carbon Environment for Sustainable and Economic-Friendly Cities Green Energy and Tourism Policy for Sustainable Economic Growth Green Growth Policy, Degrowth, and Sustainability: The Alternative Solution for Achieving the Balance between Both the Natural and the Economic System Green 
Hydrogen Economics and Planning towards Carbon Neutrality Green Information Technologies Practices and Financial Performance: Emerging Issues Green Information Technology and Sustainability Green Infrastructure and Nature-Based Solutions in the Urban and Rural Context Green Infrastructure towards Sustainable Cities Green Innovations and the Achievement of Sustainable Development Goals Green IT and Sustainability Green Manufacturing Processes for Leading Industrial Sectors Green Materials in Sustainable Construction Green Practice in Data Storage Green Public Procurement in Civil Engineering at a Regional Level Green Stormwater Infrastructure for Sustainable Urban and Rural Development Green Supply Chain Management and Optimization Green Technology and Sustainable Development Green Technology Innovation for Sustainability Green Transformation of the Construction Industry through Project Management Green Transition and Waste Management in the Digital Era: Strategies, Learnings, Challenges and Future Trends Green Transition Paths under the Carbon-Neutral Targets: Policy Design, Digital Governance, and Technological Progress Green Urban Development Greening Behavior towards Carbon Neutrality Greenwashing and CSR Disclosure of Sustainability in Controversial Industries Groundwater Vulnerability and Sustainability Group Processes and Mutual Learning for Sustainability Hakka Tulou and Sustainability: The Greenest Buildings in the World Harmful Organisms and their Management for Sustainable Environment Health Geography—Human Welfare and Sustainability Healthy and Sustainable Cities by Day and Night. The Future of Research-Based Practice. 
Heating and Cooling: Mapping and Planning of Energy Systems High Precision Positioning for Intelligent Transportation System High-Strength Steels Welding—Sustainability Based Approach Household Sustainability Housing and Public Health How Consumer Behavior Patterns Change in a Pandemic Condition Human Dimensions of Conservation Research and Practice Human Exposure to Carbon Monoxide in Urban Regions of Asia and the Global South Human Oriented and Environmentally Friendly Lighting Design of Exterior Areas Human-Centered Design and Sustainability: Are They Compatible? Human-Computer Interaction and Sustainable Transportation Human-Cyber-Physical Systems (H-CPS) for Intelligent Civil Infrastructure Operation and Maintenance Hydrometallurgy of Metals from Primary and Secondary Resources in Sustainable Method and Process ICMTs for Sustainability in the Post COVID-19 Era: Revisiting Conceptual and Policy Narratives ICT Adoption for Sustainability ICT and Sustainable Education ICT Implementation toward Sustainable Education ICT4S— ICT for Sustainability IEIE Buildings (Integration of Energy and Indoor Environment) Impact of Climate Change on Urban Development Impact of COVID-19 and Natural Disasters: Energy, Environmental, and Sustainable Development Perspectives Impact of Industry 4.0 Drivers on the Performance of the Service Sector: Human Resources Management Perspective Impact of Management Changes on Seminatural Grasslands and Their Sustainable Use Impact of Social Innovation on Sustainable Development of Rural Areas Impact of the Tax Systems, Tax Administration and Administrative Burdens on Sustainable SMEs’ Performance Impacts of Climate Change on Cultural Landscapes and Strategies for Adaptation Impacts of Climate Change on Tropical Cyclone Activities: Cloud–Radiation Feedback and Dynamics Implementation of Sustainable Technologies for the Transition towards a Circular Economy Model Implementation of the Sustainable Development Goals (SDGs) Implications of the 
COVID-19 Pandemic for Future Urban and Spatial Planning Improving Governance of Tenure: Progress in Policy and Practice Improving Life in a Changing Urban Environment through Nature-based Solutions and Biophilic Design Improving Sustainability Performance of Physical Assets with Green Approaches and Digital Technologies In Quest for Environmental Sustainability: Microorganisms to the Rescue Incorporating Sustainable and Resilience Approaches in Asphalt Pavements Independence and Security of Energy Supply: A Current Issue of Vital Importance Indigenous Peoples and Sustainable Development in the Arctic Indigenous Transformations towards Sustainability: Indigenous Peoples' Experiences of and Responses to Global Environmental Changes Indoor Air Pollution and Control Inductive Charging for Electric Vehicles: Towards a Safe and Efficient Technology Industrial Automation: Realising the Circular Economy through Autonomous Production Industrial Engineering for Sustainable Industry Industrial Sustainability: Production Systems Design and Optimization across Sustainability Industry 4.0 – Implications for Sustainability in Supply Chain Management and the Circular Economy Industry 4.0 and Artificial Intelligence for Resilient Supply Chains Industry 4.0 and Industrial Sustainability Industry 4.0 Implementation in Food Supply Chains for Improving Sustainability Industry 4.0, Digitization and Opportunities for Sustainability Industry 4.0, Internet 3.0, Sustainability 2.0 - Fostering New Technological Paradigms for Resilience Industry 4.0: Quality Management and Technological Innovation Industry 4.0: Smart Green Applications Industry Development Based on Deep Learning Models and AI 2.0 Influence of Emotions and Feelings in the Construction of Digital Hate Speech: Theoretical and Methodological Principles for Building Inclusive Counter-Narratives Information Sharing on Sustainable and Resilient Supply Chains Information Society and Sustainable Development – selected papers from the 
2nd International Scientific Symposium`2015 Information Society and Sustainable Development—Selected Papers from the 5th International Conference ISSD 2018 Information Society and Sustainable Development—Selected Papers from the 6th International Conference ISSD 2019 Information System Model and Big Data Analytics Information Systems and Digital Business Strategy Information Systems and Sustainability Information Systems for Sustainable Development Information Technology in Healthcare and Disaster Management Information, Cybersecurity and Modeling in Sustainable Future Innovating in the Management and Transparency of the Sustainability of Governments to Achieve the Objectives of Sustainable Development Innovating Practice and Policy for Sustainable Pest Management Innovation and Environmental Sustainability Innovation and Governance in the Global Energy Transition Innovation and Sustainability in a Turbulent Economic Environment–Selected Papers from the 12th International Conference on Business Excellence Innovation Development and Sustainability in the Digital Age Innovation Ecosystems: A Sustainability Perspective Innovation for Sustainability Development Innovation in Engineering Education for Sustainable Development Innovation in the European Energy Sector and Regulatory Responses to It Innovation Management in Living Labs Innovation Strategies and Sustainable Development: Tensions and Contradictions on the Way to Sustainable Well-Being Innovation, Emerging Technologies and Sustainability in R&D Intensive Industries – Volume 2 Innovation, Emerging Technologies and Sustainability in R&D Intense Industries Innovations in Façade Design and Operation for Healthy Urban Environments Innovations in Small Businesses and Sustainability Innovations in the Circular Economy: Commons or Commodity? 
Innovations Management and Technology for Sustainability Innovative Advances in Monitoring, Control, and Management of Microgrids Innovative and Sustainable Design for Mechanics and Industry Innovative Design, Technologies, and Concepts of Commercial Wind Turbines Innovative Development for Sustainability in Water Constrained Regions Innovative Economic Development and Sustainability Innovative Management Practice for Resilience and Sustainability of Civil Infrastructures Innovative Solution for Sustainable and Safe Maritime Transportation Innovative Technology for Sustainable Anticipatory Computing Computing Innovative Training Sustainability in an Uncertain Information and Knowledge Society Institutions and Policies for Rural Land Conversion in the Quest for Sustainability Insurtech, Proptech & Fintech Environment: Sustainability, Global Trends and Opportunities Integrated Approaches to Biomass Sustainability Integrated Evaluation of Indoor Particulate Matter (VIEPI) Project: Study Design, Results and Open Questions Integrated Migration Management, ICTs' enhanced Responses and Policy Making: Towards Human Centric Migration Management Systems Integrated Pest Management and Risk Assessment of Biopesticides Integrated Reporting and Corporate Sustainability Integrating Green Infrastructure, Ecosystem Services and Nature-Based Solutions for Landscape Sustainability Integrating Sensors, AI, and Biodiversity Indices for Sustainable Farming and Livestock: Evaluating Environmental Impacts, Efficiency, and Life Cycle Assessment Integration and Optimization of Smart Mobility for Sustainable Rural Electrification Integration of Green ICTs and Industry into Green Governance for a Sustainable Ecosystem Integration of LCA and BIM for Sustainable Construction Intellectual Capital and Sustainability Intellectual Capital and Sustainability Intellectual Property Strategies and Sustainable Business Models for Sustainability Transitions Intelligent Algorithms and Systems for C-ITS 
and Automation in Road Transport Intelligent Knowledge-Based Models for Sustainable Spatial Planning and Engineering Intelligent Manufacturing for Sustainability Intelligent Networking for Sustainable Environment and Human-Natural Systems Intelligent Sensing for Sustainable Production Industries Intelligent Sensing, Control and Optimization for Sustainable Cyber-Physical Systems Intelligent System and Application Improving Enterprise’s Sustainable Development Intelligent Transportation Systems (ITS), Traffic Operations and Sustainability Intensification of Digitization Tools, Their Development and Applicability Intention and Tourism/Hospitality Development Interactive Learning Environments in Student’s Lifelong Learning Process: Framework for Sustainable Development Goals of the 2030 Agenda Intercropping Systems and Pest Management in Sustainable Agriculture Interdisciplinary Approaches to Sustainability Accounting and Management: Selected Papers from the 4th EMAN Africa Conference on Sustainability Accounting & Management (including related invited papers) International Business Theories and Internationalization of Emerging Economies International Entrepreneurship and Innovation International Finance and Money Market International Fisheries Policy and Economic Analysis International Migration and Sustainable Development: Globalization, Move-In Move-Out Migration and Translocal Development Internet Finance, Green Finance and Sustainability Internet of Things: Towards a Smart and Sustainable Future IoT and Computational Intelligence Applications in Digital and Sustainable Transitions IoT and Sustainability IoT Data Processing and Analytics for Computational Sustainability IoT Learning for the Future of Online Engineering Education IoT Quality Assessment and Sustainable Optimization ISMO—Sustainability in Engineering and Environmental Sciences IT-Enabled Sustainability and Development Just Food System Transformations Karst and Environmental Sustainability Knowledge 
Innovation in the Knowledge Economy/Business Revolution in the Digital Era- Selected Papers from the 13th and 14th International Conference on Business Excellence Sustainable Business Models in Tourism Sustainable Care: Facing Global Ageing More Effectively Sustainable Cities Sustainable City Logistics and Humanitarian Logistics Sustainable Clothing Consumption: Circular Use of Apparel Sustainable Clothing Industry: Production, Consumption and Recycling Systems Sustainable Coastal Development: Justice, Transitions and the Blue Economy Sustainable Communication and Networking Sustainable Communities: Economic, Social and Environmental Dimensions Sustainable Concrete Structures Sustainable Construction and Architecture Sustainable Construction and Demolition (Best Contributions of the International Conference on Sustainable Construction and Demolition, Valencia, November 2021) Sustainable Construction and Interior Comfort Sustainable Construction Engineering and Management Sustainable Construction Investments - Technical and Organizational Implications Sustainable Construction Management Sustainable Construction Project and Program Management Sustainable Consumer Behavior Sustainable Consumer Behavior and Its Role in the Future Economic System Sustainable Consumption and Consumer Socialization Sustainable Consumption, Eco-Friendly Apparel, and Environmental Beliefs Sustainable Corporate Finance and Risk Management Sustainable Country's Concept: Challenges and Development Perspective Sustainable Critical Infrastructures: Progresses, Challenges and Opportunities Sustainable Cross-Border Cooperation: Common Planning, Policies, Strategies, Methods and Activities Sustainable Cyber-Physical Production and Manufacturing Systems Sustainable Data Governance of Government Sustainable Data Usage for Predicting Consumer Behaviour Sustainable Design and Construction Sustainable Design and Construction Sustainable Design in Offsite Construction Sustainable Design Innovation 
Sustainable Development and Entrepreneurship in Contemporary Economies Sustainable Development and Innovations in the US-Mexico Transborder Region Sustainable Development and Policies: Active Ageing Policies Sustainable Development and Practices: Production, Consumption and Prosumption Sustainable Development from the Management and Social Science Perspective Sustainable Development Goals Sustainable Development Goals through Corporate Social Responsibility Sustainable Development in Natural Protected Areas Sustainable Development in Small and Medium-sized Enterprises Sustainable Development Initiatives towards Poverty Alleviation Sustainable Development of Chinese Economy—In Search of New Sources of Growth: A Sustainable Approach Sustainable Development of Energy, Water and Environment Systems (SDEWES 2021) Sustainable Development of Energy, Water and Environment Systems (SDEWES 2022) Sustainable Development of Energy, Water and Environment Systems (SDEWES 2023) Sustainable Development of Energy, Water and Environment Systems (SDEWES) Sustainable Development of Fluid Mechanics and Hydraulic Engineering Sustainable Development of Irrigated Agriculture Sustainable Development of Pavement Materials, Design, and Road Construction Technologies Sustainable Development of Rural Areas and Agriculture Sustainable Development of Seaports Sustainable Development of Social Commerce in the New Era Sustainable Development of the Bioeconomy—Challenges and Dilemmas Sustainable Development of Urban Electric Transport Systems Sustainable Development, Climate Change, and Green Finance in a Changing World Sustainable Development: The Need for Technological Change Sustainable Diet Combining Socio-Economic, Environmental, and Nutritional Objectives Sustainable Disruptive Technologies in the Built Environment: A Step towards Industry 5.0 Sustainable Drainage Systems: Past, Present and Future Sustainable Eco-Design and Environmental Analysis of Products Sustainable Ecological 
Infrastructures and Human Well-Being: Regional Landscape and Socioeconomic Contributions Sustainable Ecology and Forest Management Sustainable Economics of Biotechnology Sustainable Ecosystems and Society in the Context of Big and New Data Sustainable Education Sustainable Education in COVID-19 Pandemic in Urban and Rural Areas and its Effects on Overall Development Sustainable Emergency Management based on Intelligent Information Processing Sustainable Energy and Environmental Protection: The Role of Science in Society Sustainable Energy Conscious Design and Refurbishment of Sustainable Buildings Sustainable Energy Planning Sustainable Energy Transition Sustainable Energy Use Sustainable Enterprise Excellence and Innovation Sustainable Enterprise Resources Planning Systems: Current Status, Challenges, and Future Directions Sustainable Entrepreneurial Process and Journey: From Education and Innovation to Green Entrepreneurship Sustainable Entrepreneurship and Eco-Innovation Sustainable Entrepreneurship, Firm Performance and Innovation Sustainable Environmental Beliefs Sustainable Evaluation and Competitiveness in Food Production Sustainable Fashion and Technology: Emerging Opportunities in Research and Practice Sustainable Field Crops Sustainable Finance Sustainable Finance and Banking Sustainable Finance and the 2030 Agenda: Investing to Transform the World Sustainable Financial and Business Performance: Perspectives for Economic Development Sustainable Financial Markets Sustainable Financial Markets II Sustainable Flood Risk Management Sustainable Food Chains Sustainable Futures Sustainable Geomatics and Civil Engineering Sustainable Geotechnical & Geoenvironmental Engineering Designs Sustainable Geotechnics—Theory, Practice, and Applications Sustainable Growing Media for Agriculture Sustainable Healthcare Settings and Health and Social Care Workforce Sustainable Horticultural Practices Sustainable Hospitality and Tourism Marketing Sustainable Human Populations 
in Remote Places Sustainable Human Resource Management Sustainable Human-Computer Interaction Development Sustainable Hydropower Project Development Sustainable Implications of Anywhere Working Sustainable Industrial Engineering and Electrical Development Sustainable Industry and Innovation in the Industrial IoT Era Sustainable Information Engineering and Computer Science Sustainable Information Systems Sustainable Information Technology Capabilities Applied in Management and Education Sustainable Innovation and Transformation Sustainable Innovations and Governance in the Agri-Food Industry Sustainable Integration of Technology in Mathematic Didactics Sustainable Intensification in the Future Agriculture: Bridging the Gap between Research and Application Sustainable Interdisciplinarity: Human-Nature Relations Sustainable Investment and Finance Sustainable Investment Issues: Financial Products, Performance and Methodologies Sustainable Irrigation System II Sustainable Land Transport from the Point of View of Modern Society Sustainable Land Use and the Bioeconomy Sustainable Land Use in China Sustainable Leadership: Crossing Silos in Leadership Research and Practice Sustainable Living: An Interdisciplinary and Transdisciplinary Challenge for Researchers, Decision Makers and Practitioners Sustainable Local Economic Development of Eco-Friendly Sources Sustainable Logistics and Supply Chain Management in the Aspect of Globalization Sustainable Logistics Management Sustainable Management in Tourism and Hospitality Setting Sustainable Management of Aquatic Ecosystems Sustainable Management of Supply and Consumption Sustainable Management Practices - Key to Innovation Sustainable Manufacturing Sustainable Manufacturing Technology Sustainable Manufacturing: Digital Transformation of Production Technologies and Energy Management Sustainable Marketing, Branding and CSR in the Digital Economy Sustainable Materials: Finding Innovative Practices and Solutions for a Changing 
Economy Sustainable Mega-Events Sustainable Microgrids for Remote, Isolated and Emerging Areas: Current Trends and Perspectives in Policies, Practices and Technologies Sustainable Mining and Circular Economy Sustainable Mobility and Transport Sustainable Nanocrystals Sustainable Nuclear Energy Sustainable Operations and Supply Chain Management for Small Businesses and Multinational Corporations Sustainable Operations and Supply Chain Management: Evolution and Future Trends Sustainable Organic Agriculture for Developing Agribusiness Sector Sustainable Outdoor Lighting Sustainable Pavement Materials Sustainable Pavement Materials and Technology Sustainable Perspectives: Green Operations Management and Supply Chain Sustainable Perspectives: Green Supply Chain and Operations Management Sustainable Perspectives: Renewable Energy Policy and Economic Development Sustainable Physical Activity and Student’s Health Sustainable Physical Activity, Sport and Active Recreation Sustainable Pig Production Sustainable Planning and Preparedness for Emergency Disasters Sustainable Planning of Urban Regions Sustainable Planning, Management and Economics in Transport Sustainable Plant Responses to Abiotic and Biotic Stresses Sustainable Policy on Climate Equity Sustainable Portfolio Management Sustainable Power Supply in Emerging Countries Sustainable Power System Operation and Control Methodologies Sustainable Practices in Watershed Management and Ecological Restoration Sustainable Processes Development in BioTechSciences Sustainable Product Design and Manufacturing Sustainable Product-Service Systems (PSS) Solutions for Product Development and Management under Governmental Regulations Sustainable Product-Service Systems in Practice Interdisciplinary Perspectives Sustainable Production and Manufacturing in the Age of Industry 4.0 Sustainable Production in Food and Agriculture Engineering Sustainable Production of Renewable Bioenergy Sustainable Production, Consumption, and Policy 
Applications of Life Cycle Assessment Sustainable Productive Systems – Assessing the Past, Envisioning the Future Sustainable Project Management Sustainable Public Health: Economic and Environmental Performance of the Healthcare Industry Sustainable Public-Private Partnerships for Future-Proof Efficient Assets Sustainable Re-manufacturing Sustainable Regional and Urban Development Sustainable Regional Development: The Social, Environmental and Economic Challenges and Solutions Sustainable Research on Renewable Energy and Energy Saving Sustainable Retailing & Brand Management Sustainable Reuse of Historical Buildings Sustainable Risk Assessment Based on Big Data Analysis Methods Sustainable Rural Community Development and Environmental Justice Sustainable Rural Economics Development in Developing Countries Sustainable Rural Landscape: Study, Planning, and Design Sustainable Safety Development Sustainable Security Management and Analysis of Engineering and Information by Data-Driven Sustainable Solar Thermal Energy Use and Solar Thermal System Sustainable Solutions for Improving Safety and Security at Crossroads, Junctions and Level-Crossings Sustainable Spatial Planning and Landscape Management Sustainable Sport and Physical Activity Education Sustainable Strategic Operations and Management in Business Sustainable Strategies and Technologies for Wastewater Management Sustainable Structures and Construction in Civil Engineering Sustainable Supply Chain and Lean Manufacturing Sustainable Supply Chain and Logistics Management in a Digital Age Sustainable Supply Chain Management for Process Industry Sustainable Supply Chain Management in the Fashion Industry in the Aftermath of COVID-19 Sustainable Supply Chain System Design and Optimization Sustainable Systems Analysis for Enhanced Decision Making in Business/Government Sustainable Territorial Development Sustainable Tourism and Climate Change: Impact, Adaptation and Mitigation Sustainable Tourism and Hospitality 
Management Sustainable Tourism Experiences Sustainable Tourism Management under Challenge from Climate Change and Economic Transition Sustainable Tourism Strategies in Pandemic Contexts Sustainable Tourism: Issues, Debates and Challenges Sustainable Transformation through Information Systems Use, Design and Development Sustainable Transport and Air Quality Sustainable Urban and Regional Management Sustainable Urban Development Sustainable Urban Development and Regional Management Sustainable Urban Development and Strategic Planning Sustainable Urban Landscape Design for Well-being Sustainable Urban Mining Sustainable Urban Planning and Design Education in Practice Sustainable Urban Stormwater Management Sustainable Urban Transitions: Towards Low-Carbon, Circular Cities Sustainable Urban Transport Policy in the Context of New Mobility Sustainable Urbanization Strategies in Developing Countries Sustainable Use of Biocontrol Agents Sustainable Value Co-Creation Sustainable Venture Capital and Social Impact Investment Management Sustainable Virtual Organization: Management Challenges and Development Perspectives Sustainable Wastewater Management and Water Demand Analysis Sustainable Wastewater Treatment by Biotechnologies and Nanotechnologies Sustainable Water–Energy–Food Nexus Sustainable Wildlife Management and Conservation Sustainable Wind Power Development Sustaining Suburbia: Reassessing the Policies, Systems, and Form of Decentralized Growth Synergies between Quality Management and Sustainable Development Synergies of Soft Computing, Artificial Intelligence and Signal/Image Processing Techniques in the Advancement of Sustainable Technologies. 
System-wide Disruption of Organisations for Sustainability Systems Engineering for Sustainable Development Goals Tackling the Complexities of the Pearl Industry to Enhance Sustainability Tall Buildings Reconsidered Technological Innovation and the Effect of Employment on Green Growth Technologies and Innovations for Sustainable Growth Technologies and Innovations for Sustainable Storage and Transportation of Oil and Gas Technologies and Models to Unpack, Manage Inventory and Track Wasted Food towards Sustainability Technologies for Developing Sustaining Foods for Specialized Missions Technologies for Sustainability in Smart Cities Technology and Innovation Management in Education Technology Assessment, Responsible Research and Innovation, Sustainability Research: Conceptual Demands and Methodological Approaches for Societal Transformations Technology Enhanced Learning Research Technology, Organisation and Management in Sustainable Construction Telework and Its Implications for Sustainability Terrestrial Ecosystem Restoration Textile Technologies in Sustainable Development, Production and Environmental Protection The 1st International Conference on Future Challenges in Sustainable Urban Planning & Territorial Management—SUPTM 2022 The 4th Industrial Revolution, Financial Markets and Economic Development The Adaptive Reuse of Buildings: A Sustainable Alternative towards Circular Economy Solutions The AI-Augmented Smart Transformation for Sustainable Governance The Application of Communication Technology in Smart Residential Communities The Art and Science of Economic Evaluation and Maintenance Planning for Airports The Arts, Community and Sustainable Social Change The Challenge of Food Waste Reduction to Achieve More Sustainable Food Systems The Circular Economy as a Promoter of Sustainability The Close Linkage between Nutrition and Environment through Biodiversity and Sustainability: Local Foods, Traditional Recipes and Sustainable Diets The Competitiveness and 
Sustainability of Global Agriculture The Contribution of Sustainable Businesses to achieve the Agenda 2030 and the Sustainable Development Goals The Contribution of the Project Management to the Sustainable Development Goals (SDGs) The Contribution of the Social Economy to the Sustainable Development Goals The Current and Future Role of Public Transport in Delivering Sustainable Cities The Deployment of IoT in Smart Buildings The Eco-Philosophy of an Organic Community The Economics and Ethics of Sustained Individual and Food System Resilience The Effects of COVID-19 Pandemic on Engineering Design and the Sustainability of the New Developed Products The Energy-Sustainability Nexus The Environmental and Economic Sustainability in Building Construction The Environmental Effects from Consumer Behaviour in the Contexts of the Circular Economy and the Sharing Economy The Evolution of Social Innovation: Building a Sustainable and Resilient Society through Transformation The Future and Sustainability of Financial Markets The Future of Interior Lighting is here The Future of Maritime Industry: How Climate Change and Other Environmental Challenges Will Impact on New Market Developments The Gender Dimension in Sustainability Policies and Their Evaluation The Global Jatropha Hype—Drivers and Consequences of the Boom and Bust of a Wonder Crop The Governance of Social Innovation for a Sustainable Economy: Requirements, Actors and Approaches The Human Dimensions of Coastal Adaptation Strategies The Human Side of Sustainable Innovations The Imbalances in the Urban Growth of 21st Century Cities: Case Studies, Innovative Approaches and New Emerging Phenomena The Impact of Audio-Visual Content on Sustainable Consumer Behavior The Impact of COVID-19 Pandemic on Sustainable Development Goals The Impact of Digitalization on the Quality of Life The Impact of Economic Complexity and Trading Complex Products on Environmental Indicators The Impact of Global Change on Biological Control of 
Pest in Agriculture The Impact of Plant Genome Editing The Impacts of Climate Changes: From Sustainability Perspectives The Importance of Wetlands to Sustainable Landscapes The Influence of Covid-19 on Sustainable and Financial Analysis in Public Administrations The Informatization of Agriculture The Involvement of Crowds for Advancing Knowledge, Promoting Innovation and Nurturing New Entrepreneurial Ventures The Key to Sustainable Manufacturing Enterprise The Link between Tourism, Agriculture and Sustainability in Special Green or Ecological Zones The New Era of Sustainable Public Procurement The Nexus between Information and Communication Technologies (ICT) and Sustainability: The Current Digital Bet The Oil and Gas Industry and Climate Change: Role and Implications The Planetary Wellbeing Initiative: Pursuing the Sustainable Development Goals in Higher Education The Political Economy of Home: Settlement, Civil Society and the (Post-)Global Eco-City The Potential and Contradictions of Mega Infrastructure Development Projects in Contributing to Sustainable Development in Rapidly Growing Countries and Regions The Provision of Ecosystem Services in Response to Habitat Change The Quality of Urban Areas: New Measuring Tools and Methods, Impact on Quality of Life and Costs of Bad Design The Realistic Sustainable Management of Development Operations in Africa The Remediation and Re-qualification of Contaminated Sites The Rise of Domestic Tourism and Non-travelling in the Times of COVID-19 The Role of Artificial Intelligence in Sustainable Development The Role of Communication in Sustainable Development The Role of Corporate Mergers and Acquisitions in Enhancing Environmental Sustainability The Role of Public Policy in Managing and Ensuring the Sustainability of Public and Private Organizations The Role of Sustainable Infrastructure in Climate Change Mitigation and Community Resilience The Role of Underutilized Crops in Sustainable Agriculture and Food-Systems The 
Specific Role and Value of Accounting within the Private Firm Context The Sustainability of Fishery and the Aquacultural Sector The Sustainability of Social Media Research The Sustainability of the Welfare State The Transition to a Low-Carbon, Smart Mobility in a Sociotechnical Context The Valorization of the Cultural Heritage and Landscape as the Entrance Point for the Circular City Strategy The Value Generation of Social Farming Thermal Management of Urban Subsurface Resources Through the Lens of Telecoupling: New Perspectives for Global Sustainability Tolerance Management in Architecture, Engineering and Construction Tools for Potential Impact Analysis Due to CAVs and ITS Operation: Traffic Microsimulation, Neural Network and Fuzzy Logic Techniques Tools, Methodologies and Techniques Applied to Sustainable Supply Chains Tourism and Sustainability: Combining Tourist’s Needs with Destinations’ Development Tourism Development, Economic Prosperity and Environmental Sustainability Toward a Circular Economy in the Agro-Industrial and Food Sectors Toward a Sustainable Transportation Future Toward a Sustainable Wellbeing Economy Toward Sustainability: Design Techniques in Service Sector Toward Sustainable 6G Wireless Communication Systems Toward Sustainable Environmental Quality – Toxicology Sustainability Towards a Circular Housing Economy Towards a Sustainable Life: Smart and Green Design in Buildings and Community Towards Circular Economy: Evaluation of Waste Treatment Towards Healthy and Sustainable Built Environments Post-2020 Towards Resilient Entrepreneurship and Technological Development in Self-Sustainable Economies Towards Sustainability: Selected Papers from the Fourth World Sustainability Forum (2014) Towards Sustainability: Selected Papers from the Third World Sustainability Forum (2013) Towards Sustainable and Innovative Development in Rural Areas Towards Sustainable Building & Infrastructure Operations & Maintenance (O&M) Towards Sustainable Development 
of National and Supranational Systems—Political and Legal Challenges Towards Sustainable Engineering: New Technologies and Methodologies Towards Sustainable Tourism: Pros and Cons of Geotourism Towards True Smart and Green Cities? Traditional Knowledge, Revitalization, and Sustainability Traditional Landscapes—from the Past to the Sustainable Future (Factors and Trends of Landscape Functions and Services Provision towards the 21st Century) Tragedy or Transcendence: Reflections on 'The Tragedy of the Commons' Transboundary Sustainable Mountain Governance Transformation to Sustainability and Behavior Change Transformations for a Sustainable Future Transformative Processes for a Circular Economy Transformative Times for Food Consumption: Moving towards Sustainability Transforming Built Environments: Towards Carbon Neutral and Green-Blue Cities Transforming Materials Industries for a Sustainable Future Transition from China-Made to China-Innovation Transitioning Household Energy towards Sustainability Transportation and Sustainability Transportation Network Modelling and Optimization for Sustainability Transportation Operations and Safety Analysis for Sustainable Networks Trends and Challenges in the Management of Technological Projects for the Sustainable Development in the Digital Revolution Era Trends in Sustainable and Ethical Food Consumption Trends in Transport Sustainability and Innovation Trends in Waste Utilization in Construction Trends, Environmental Implications, Recent Obstacles and Solutions in the Sustainable Growth of the Renewable Energy Integrated Power Sector Trust Management: Key Factor of the Sustainable Organizations Embedded in Network Ubiquitous Green IT System for Sustainable Computing Uncertainty in Prospective Sustainability Assessment Understanding Arctic Sustainability Challenges from Systems Perspective Understanding Innovation and New Venture Creation in Reducing Poverty Understanding Sustainable Human Resource Management Understanding 
the Economic Value of Nature Base Solutions (NBS) towards Sustainable Built Environments Uninterruptible Power Supplies (UPS) Universities, Industries and Sustainable Development University Management Innovations toward Meeting the Sustainable Development Goals Urban Heat Island Urban Pathways: Transition towards Low-Carbon, Sustainable Cities in Emerging Economies Urban Planning and Smart City Decision Management Urban Planning and Social Well-being Urban Political Ecology: The Uneven Production of Urban Space and Its Discontents Urban Regeneration and Ecosystem Services Assessment Urban Regeneration and Sustainability Urban Sprawl and Energy Efficiency: The Relevance of Urban Form in the Environmental Sustainability of Cities Urban Sustainability and Planning Support Systems Urban Sustainability in Historic Cities Using Applied Statistics and Multivariate Data Analysis in the Challenge to Solve Current Real-World Problems Using Life Cycle Thinking and LCA Data in Interdisciplinary Research on Production and Consumption Systems Using Project Management as a Way to Sustainability Using the Psychosociocultural Approach to Academic Persistence and Educational Wellness Valorisation of Waste from Non-Dairy Plant-Based Beverages Values and Housing Vehicular Networks and Sustainability Ventilation and Air Distribution Methods to Promote above Ground and Underground Built Environment Sustainability Virtual and Augmented Reality Learning Environments for Sustainable Development Vision 2030 in Saudi Arabia: Robust Research, Sustainable Development, Limitless Innovation and Economic Boost Visions, Values and Principles for a Sustainable Circular Economy Visual Landscape Research in Sustainable Urban and Landscape Planning Visualising Landscape Dynamics Walkable living environments Warehouse 4.0: Best Practices, Opportunities, and Challenges Waste Minimization: Strategies for the Reduction and Prevention of All Forms of Waste Waste, Garbage and Filth: Social and Cultural 
| Curtin University research has found deep listening or Autonomous Sensory Meridian Response (ASMR) could be used as an effective tool to encourage pro-environmental behavior and create social bonding among young people. ASMR is a spontaneous, calming, positive feeling that occurs in response to certain stimuli including whispering and brushing sounds. This exploratory study used similar stimuli in a purpose-made video to promote positive climate change messages to high school students, and then gauged their opinions on whether the approach could be effective. Lead researcher, Curtin Adjunct Postdoctoral Fellow and University of Sydney researcher and Manager of the Center for Advanced Food Enginomics (CAFE) Diana Bogueva, who undertook the study while at the Curtin University Sustainability Policy (CUSP) Institute, said the study was focused on whether ASMR can be used to communicate positively about climate change with young people. 
"Young people are consistently bombarded with gloomy messages on climate change and the environment, including devastating pictures of bushfires and other extreme weather events. So we asked whether there were other, more optimistic ways, to talk about the issue to evoke positive feelings and empower young people to take action," Dr. Bogueva said. "To do this, we employed the relatively new and unexplored concept of ASMR. We made a video featuring an anonymous 16-year-old girl talking about what positive climate change actions can be taken in everyday life. Her performance was based on the traditional soft whispering voice used for ASMR, with some sound effects, including tapping and noises, to grab the attention of the viewers. The majority (65 percent) of high school students who participated in the study found the video would be effective to communicate positively about climate change. Importantly, some of the participants who felt overwhelmed by climate change prior to watching the video, reported feeling 'confident' and 'encouraged' to act, after the viewing." Co-author Professor Dora Marinova at the Curtin University Sustainability Policy Institute said the findings from the exploratory study could now lead to a larger-scale survey on the impact of using ASMR with young people. "Our study is just the beginning. We've shown that Generation Z is willing to be experimental, experience-focused and socially active in responding to climate change challenges, now more needs to be done on both ASMR (which is relatively untested) and the use of positive messaging, instead of focussing on the negative," Professor Marinova said. "We know climate change is a major concern for young people in Australia and around the world, and causes a range of negative feelings including anger, pessimism, stress and despair, as well as mental health issues—so any methods we can use to change the narrative, could also change lives." | 10.3390/su12176947 |
Biology | Freshwater mussels can inhibit bacterial diseases | M. Motiur R. Chowdhury et al, Glochidial infection by the endangered Margaritifera margaritifera (Mollusca) increased survival of salmonid host (Pisces) during experimental Flavobacterium disease outbreak, Parasitology Research (2021). DOI: 10.1007/s00436-021-07285-7 Mahsa Hajisafarali et al, Does the freshwater mussel Anodonta anatina remove the fish pathogen Flavobacterium columnare from water?, Hydrobiologia (2021). DOI: 10.1007/s10750-021-04769-6 Journal information: Hydrobiologia | https://dx.doi.org/10.1007/s00436-021-07285-7 | https://phys.org/news/2022-04-freshwater-mussels-inhibit-bacterial-diseases.html | Abstract Co-infections are common in host-parasite interactions, but studies about their impact on the virulence of parasites/diseases are still scarce. The present study compared mortality induced by a fatal bacterial pathogen, Flavobacterium columnare between brown trout infected with glochidia from the endangered freshwater pearl mussel, Margaritifera margaritifera , and uninfected control fish during the parasitic period and after the parasitic period (i.e. glochidia detached) in a laboratory experiment. We hypothesised that glochidial infection would increase host susceptibility to and/or pathogenicity of the bacterial infection. We found that the highly virulent strain of F. columnare caused an intense disease outbreak, with mortality reaching 100% within 29 h. Opposite to the study hypothesis, both fresh ongoing and past infection (14 months post-infection) with glochidia prolonged the fish host’s survival statistically significantly by 1 h compared to the control fish (two-way ANOVA: fresh-infection, F 1, 82 = 7.144, p = 0.009 and post-infection, F 1, 51 = 4.227, p = 0.044). Furthermore, fish survival time increased with glochidia abundance (MLR: post-infection, t = 2.103, p = 0.045). 
The mechanism could be connected to an enhanced non-specific immunity or changed gill structure of the fish, as F. columnare enters the fish body mainly via the gills, which is also the glochidia’s attachment site. The results increase current knowledge about the interactions between freshwater mussels and their (commercially important) fish hosts and fish pathogens and also emphasise the importance of (unknown) ecosystem services (e.g., protection against pathogens) potentially associated with imperilled freshwater mussels. Introduction Free-living individuals are likely to be infected by several parasitic species and pathogens, although most studies on the subject focus on a one host–one parasite interaction (Rigaud et al. 2010 ). A parasitic infection can either directly or indirectly influence any subsequent infections (Cox 2001 ; Johnson et al. 2015 ; Vaumourin et al. 2015 ; Kotob et al. 2016 ; Gopko et al. 2018 ). This influence can be positive, negative or neutral (Vaumourin et al. 2015 ; Chowdhury et al. 2017 ; Klemme and Karvonen 2017 ). In fishes, parasitic infections increase the host’s risk of secondary infections and can act as a vehicle for the transmission of bacteria to fish (Kotob et al. 2016 ). For instance, the monogenean parasite Dactylogyrus intermedius increases the susceptibility of goldfish, or Carassius auratus , to Flavobacterium columnare , resulting in higher mortality when compared to non-parasitised fish (Zhang et al. 2015 ). Similarly, rainbow trout co-infected with Diplostomum pseudospathaceum experienced higher host mortality when exposed to bacteria compared with single infections (Louhi et al. 2015 ). Flavobacterium columnare causes columnaris disease (warm water disease) in fish, including salmonids. This pathogen can cause remarkable economic losses in fish farming due to the high mortality associated with the disease (Wagner et al. 2002 ; Pulkkinen et al. 2010 ).
Flavobacterium columnare is an opportunistic fish pathogen that can also grow outside of the fish host (Kunttu et al. 2012 ). Flavobacterium columnare strains differ in their virulence (Suomalainen et al. 2006 ; Pulkkinen et al. 2018 ) and are capable of causing up to 100% mortality in juvenile salmonids (Suomalainen et al. 2005 ). Since there is no effective vaccination available for young salmonids (Sundell et al. 2014 ), the only treatment against F. columnare is antibiotics (Rach et al. 2008 ). Freshwater mussels, including the freshwater pearl mussel Margaritifera margaritifera , are critically endangered in Europe (IUCN 2019 ). They have declined worldwide due to habitat destruction, loss of fish hosts, siltation, pollution, invasive species and over-exploitation (Bauer 1986 ; Lopes-Lima et al. 2017 ). The life cycle of M. margaritifera in Europe includes an obligatory host-specific parasitic period in the gills of the Atlantic salmon Salmo salar or the brown trout S. trutta (Salonen et al. 2016 , 2017 ) that lasts for 9–11 months (Salonen and Taskinen 2017 ). Therefore, the success of restoration of the endangered M. margaritifera is entirely dependent on the success of glochidium larvae in the salmonid fish host. When matured and metamorphosed, the glochidia detach from the fish host and drop to the bottom of the river as juvenile mussels, where they begin their benthic life, which lasts up to 200 years (Helama and Valovirta 2008 ). Freshwater mussels are involved in many ecosystem functions and services such as biofiltration, nutrient cycling and storage, food web dynamics and bottom quality modification, leading to improved water quality, habitat structure and biodiversity, in addition to direct provision of food, tools and jewellery (Vaughn & Hakenkamp 2001 ; Strayer 2014 ; Vaughn et al. 2008 ; Haag 2012 ; Vaughn 2018 ).
Furthermore, filtration by the unionid mussel ( Anodonta anatina ) reduced the density of the cercarial larvae of the harmful fish parasite Diplostomum (Gopko et al. 2017b ). Therefore, the decline of M. margaritifera can potentially induce changes in ecological interactions and services in aquatic ecosystems. Infection by M. margaritifera glochidia has many adverse effects on the fish host such as hyperplasia and gill filament fusion, reduced swimming capability, increased mortality, altered thermoregulation, reduced foraging, decreased activity, lowered growth rate and social dominance and increased metabolic rate (Treasurer and Turnbull 2000 ; Taeubert and Geist 2013 ; Österling et al. 2014 ; Thomas et al. 2014 ; Filipsson et al. 2016 , 2017 ; Chowdhury et al. 2019 ; Horký et al. 2019 ). Furthermore, brown trout infected with M. margaritifera glochidia had an increased susceptibility to subsequent infection with the trematode parasite D. pseudospathaceum (Gopko et al. 2018 ). Hence, it can be expected that the susceptibility of fish to F. columnare — and the pathogenicity and virulence of F. columnare during a disease outbreak — would be higher in glochidia-infected individuals than in uninfected fish, as the gills are the primary and first site of infection by F. columnare (Declercq et al. 2013 , 2015 ). In contrast, Ziuganov ( 2005 ) proposed that M. margaritifera infection could stimulate healing of the fish host Salmo salar , e.g. from hook wounds, and provide resistance against epitheliomata and cutaneous mycoses; however, the researcher’s empirical evidence to support this idea was limited. The co-infection interactions between brown trout, M. margaritifera , and bacterial disease have not been studied. Therefore, the present study investigated the impact of M. margaritifera infection on the susceptibility of fish to — and on the pathogenicity/virulence of — F. columnare infection. The hypotheses were that (i) a previous M.
margaritifera glochidia infection would decrease brown trout survival when exposed to F. columnare , (ii) decreased survival would be dose-dependent (depending on the number of glochidia) and (iii) decreased survival would be the highest among the fish from which the metamorphosed glochidia have dropped off, as glochidia detachment involves rupturing the gill epithelium, presumably helping the bacterium to enter the gill tissues. Materials and methods In order to challenge the two groups (the glochidia attached and glochidia detached) of brown trout with F. columnare at the same time, the first experiment to study the post-infection period (glochidia detached) was started by infecting brown trout with M. margaritifera glochidia 1 year before the second experiment to study the period of fresh infection (glochidia attached). Post-infection experiment To study the effect of M. margaritifera infection on the susceptibility of fish to F. columnare — and on the pathogenicity/virulence of bacterial infection — after the glochidia had detached (the post-parasitic period), a total of 300 (0 + years old) brown trout (Rautalampi strain) were transported from the Laukaa Fish Farm (Natural Resources Institute Finland) to the Konnevesi Research Station, University of Jyväskylä, on 25 August 2016. Laukaa and Konnevesi are located in a watershed that is not inhabited by M. margaritifera , but five fish individuals were dissected and verified microscopically to ensure that they did not have glochidia. The rest of the fish were allocated randomly into two 163-L flow-through tanks. Two weeks later, fish in one of the tanks were exposed to 5.0 × 10 5 M . margaritifera glochidia (mass exposure, approximately 3300 glochidia per fish) that were collected on the same day from the River Haukioja in northern Finland. The control group in the other tank was exposed to an equal volume (1.5 L) of filtered glochidia suspension without glochidia (see Chowdhury et al. 2017 ). 
Before the 1.5-h exposure at 14.3 °C, the water volume in the tanks was reduced to 70 L, the water flow was stopped and aeration was provided. The success of glochidia infection was checked by dissecting three fish individuals 3 days after exposure. All the primed fish were infected with glochidia, the average ± SE abundance of infection being 1421 ± 210 M. margaritifera glochidia per fish. In July 2017, 9 months after infection, the fish were individually measured (length and weight), marked with passive integrated transponder (PIT) tags (7 × 1.35 mm, Loligo Systems, Denmark) and examined for M. margaritifera glochidia with the naked eye (see Salonen and Taskinen 2017 ); they were anesthetised using MS-222. After marking, mixed groups of infected and control fish were established so that 123 fish (the rest of the fish were used for another experiment) were allocated randomly into two replicate tanks (34 infected + 28 control fish; 34 infected + 27 control fish) to be maintained until they were exposed to F. columnare in November 2017. Four fish from the glochidia-infected group and two fish from the control group died before the challenge with F. columnare . Fresh-infection experiment To study the effect of M. margaritifera infection on fish’s susceptibility to F. columnare — and on the pathogenicity/virulence of bacterial infection — during the parasitic period, when the glochidia are attached to the fish’s gills, a group of fish (200 brown trout, 0 + year old, Rautalampi strain from the Laukaa Fish Farm, Natural Resources Institute Finland) was transported to the Konnevesi Research Station in late August 2017. The fish were randomly allocated into two 163-L flow-through tanks and were verified as being uninfected by glochidia by dissecting the gills of five individuals. The fish in one of the tanks were exposed to M.
margaritifera glochidia on 2 September 2017, at 14.4 °C, as described above, but with a suspension of 4.0 × 10 5 glochidia (mass exposure, approximately 4000 glochidia per fish) collected on the same day from River Jukuanoja in northern Finland. The control group in the other tank was exposed to an equal volume of filtered glochidia suspension without glochidia. Three brown trout from the Margaritifera -infection tank were examined for glochidia 3 days after exposure. All the primed fish were infected, the average ± SE abundance of infection being 1,041 ± 43 glochidia per fish. Later, on 19 September 2017, all the fish were marked via fin clipping while anesthetised using MS-222. They were then reallocated randomly, similar to the post-infection experiment, into two replicate (163 L) flow-through tanks containing both infected and control fish (47 infected + 43 control fish; 47 infected + 44 control fish). They were maintained under these conditions until the challenge with F. columnare in November 2017. One fish from each group died before the bacterial challenge. Challenge with F. columnare , survival monitoring and bacterial infection detection Even though the experiments focusing on the period when the glochidia were attached to the gills (fresh-infection experiment, see above) and on the period when the glochidia had detached (post-infection experiment, above) were separate experiments, their start was timed so that challenging the brown trout with F. columnare could be performed simultaneously. The F. columnare strain B549 used in the experiment was isolated from Lake Kuuhankavesi, Central Finland, in 2013 and stored at − 80 °C in a solution containing 10% foetal calf serum and 10% glycerol. The strain was revived by culturing it in a modified Shieh medium (Song et al. 1988 ) at room temperature under agitation (120 rpm) overnight. 
The revived culture was further sub-cultured in the same conditions three times into a larger medium volume in a ratio of 1 part bacterial culture to 10 parts fresh medium to obtain a sufficient concentration for the experimental exposures. The strain had been tested and found to be highly virulent in previous rainbow trout challenges (Aaltonen et al. unpublished). During the week before the challenge, the water temperature in the fish tanks (fresh- and post-infection experiments) was slowly increased from 4.5 to 18 °C, which was the challenge temperature, as infection with F. columnare (warm water disease) is not effective in cold water (Pulkkinen et al. 2010 ). In both the post-infection and the fresh-infection experiments, the challenge with F. columnare began on 16 November 2017. Flavobacterium challenge was performed in an isolated tank room to avoid contamination of the facility. The fish were allocated randomly into four replicate bacterium-challenge tanks and four replicate unchallenged control tanks in both experiments, for a total of 16 × 80 L tanks with a water volume of 50 L in each. In the post-infection experiment, there were 8 Margaritifera -infected and 5–7 control fish mixed per tank. In the fresh-infection experiment, there were 11–13 Margaritifera -infected and 10–11 control fish mixed per tank. The combined number of fish per tank varied between 13 and 15 (post-infection experiment) and 21 and 23 (fresh-infection experiment); the total number of bacterium-challenged/control fish was 60/57 (post-infection experiment) and 90/89 (fresh-infection experiment). Within the bacterium-challenged fish, the total number of glochidia-infected/uninfected control fish was 32/28 (post-infection experiment) and 47/43 (fresh-infection experiment). Among the unchallenged fish, the total number of glochidia-infected/uninfected control fish was 32/25 (post-infection experiment) and 46/43 (fresh-infection experiment).
Rich aeration was provided but water was not changed in the 80-L challenge tanks during the experiment. Challenge infection was started by adding 500 mL of the bacterial culture to each of the challenge tanks so that the final bacterial cell concentration was 1.0 × 10 4 CFU mL −1 (continuous challenge method, Kinnula et al. 2015 ). An equal volume of sterile modified Shieh medium was added to the control tanks. The fish were initially monitored for signs of bacterial infection and morbidity at 1-h intervals, but after the first morbid fish was detected, monitoring was continuous. Upon detecting signs of morbidity (mainly, when the fish swam dorsal side down continuously), the fish were removed, anesthetised with MS-222 and killed with a blow to the head. Bacterial samples were taken from the gills of each dead fish with a sterile loop and cultured on agar plates with modified Shieh medium and tobramycin to selectively isolate F. columnare (Decostere et al. 1997 ). The samples were then incubated for 48 h at room temperature. The plates were checked for the growth of yellow rhizoid colonies characteristic of F. columnare. The experiment ended after 29 h when all the fish from the challenge tanks had been removed as dead/moribund. After this, the remaining (unchallenged) fish were killed with an overdose of MS-222. To avoid unwanted contamination of the research station facilities with the virulent F. columnare –diseased fish, all the fish were immediately disposed of without taking further measurements (fish size) or examining the gills for glochidia. However, at the time of bacterial challenge, fish in the post-infection experiment were 1 + year of age and their mean ± SE length in July 2017 (4 months before the bacterial challenge) was 109.64 ± 0.85 mm, while fish in the fresh-infection experiment were 0 + year of age and their mean ± SE length was 69.11 ± 0.48 mm in September 2017 (2 months before the bacterial challenge).
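As a quick sanity check on the continuous-challenge dose described above (500 mL of culture added to a 50 L tank to reach 1.0 × 10 4 CFU mL −1), the concentration the stock culture must reach can be back-calculated. The sketch below is our illustration, not code from the study; the assumption that volumes are simply additive, and all variable names, are ours.

```python
# Back-calculate the bacterial culture concentration needed so that adding
# 500 mL of it to a 50 L challenge tank gives the target density.
# Figures (500 mL, 50 L, 1.0e4 CFU/mL) are from the text; volumes are
# assumed to be simply additive.
inoculum_ml = 500.0
tank_ml = 50_000.0
target_cfu_per_ml = 1.0e4

total_cfu = target_cfu_per_ml * (tank_ml + inoculum_ml)  # CFU needed in the tank
culture_cfu_per_ml = total_cfu / inoculum_ml             # required stock density

print(f"culture must reach ~{culture_cfu_per_ml:.2e} CFU/mL")
```

This works out to roughly 1.0 × 10 6 CFU mL −1, i.e. the sub-culturing steps described above had to raise the stock about two orders of magnitude above the tank target.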
Statistical analyses Practically all fish mortality was associated with the F. columnare challenge, as there was only one fish from the unchallenged Flavobacterium -control tanks that died. Therefore, only the Flavobacterium -challenged tanks were included in the statistical analyses to investigate the dependence of brown trout mortality on M. margaritifera infection. Post-infection experiment In the post-infection experiment, the effects of glochidia infection, possible tank effect and the possible effect of fish size (weight) on survival time of the fish (response variable) were analysed using two-way analysis of covariance (ANCOVA). The model assumptions of two-way ANCOVA — independence of observations, normal distribution of the dependent variable in all subpopulations, covariate linearity, regression slope homogeneity and that all subpopulations had the same variance (homoscedasticity) — were checked before the analysis. The assumption about the homoscedasticity of the subpopulations was checked using Levene’s test (Levene = 0.917, df1 = 7, df2 = 52, p = 0.501). The assumption of normality was examined graphically and by using the Shapiro–Wilk test, as the number of individuals in each subpopulation was between 10 and 13; it was met in all the subpopulations (Shapiro–Wilk, p ≥ 0.118) except for one (Shapiro–Wilk, p = 0.030). Due to the robust nature of two-way analysis of variance (ANOVA), this slight deviation from the normal distribution in one subpopulation is not problematic regarding the accuracy of the results; however, it should be considered when interpreting the results (Leik 1997 ). The assumptions of covariate linearity and regression slope homogeneity were checked graphically and by performing two-way ANCOVA; the model included all of the possible two-way interactions and a three-way interaction (infection*tank, infection*weight, tank*weight, infection*tank*weight) instead of using a full factorial model.
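The Levene's test used above for the homoscedasticity checks reduces to a one-way ANOVA on absolute deviations from each group's mean. Below is a minimal pure-Python sketch of the statistic W; it is our illustration of the textbook (mean-centred) formula, not code from the study, and it returns only the statistic (the reported p-value would come from comparing W against an F distribution with k − 1 and N − k degrees of freedom).

```python
def levene_w(groups):
    """Mean-centred Levene statistic for homogeneity of variance.

    groups: list of lists of observations, one inner list per subpopulation.
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    # absolute deviations of each observation from its own group mean
    z = [[abs(y - sum(g) / len(g)) for y in g] for g in groups]
    z_group_means = [sum(zg) / len(zg) for zg in z]
    z_grand_mean = sum(sum(zg) for zg in z) / n_total
    between = sum(len(zg) * (m - z_grand_mean) ** 2
                  for zg, m in zip(z, z_group_means))
    within = sum((zij - m) ** 2
                 for zg, m in zip(z, z_group_means) for zij in zg)
    return (n_total - k) / (k - 1) * between / within

print(levene_w([[1, 2, 3], [4, 5, 6]]))   # identical spreads -> ~0
print(levene_w([[1, 2, 3], [0, 5, 10]]))  # unequal spreads -> W grows
```

With the study's df1 = 7 and df2 = 82 (eight subpopulations across the challenged tanks), the reported W would be referred to an F(7, 82) distribution.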
The results suggested that both assumptions were also met (all p -values ≥ 0.328). Multiple linear regression analysis was performed to determine how the number of glochidia (i.e. the intensity of glochidial infection) and fish size (weight) might affect fish survival times, using the fish sizes and glochidia counts recorded during the PIT tagging. The model assumptions (normality, homoscedasticity and linearity of the residuals) were checked graphically by examining the residual plots; the independence of the residuals was checked using Durbin–Watson statistics ( d = 2.130). All the model assumptions were met, and there was no multicollinearity problem in this model. Fresh-infection experiment In the fresh-infection experiment, the effects of glochidia infection and the possible tank effect on the survival time of the fish (response variable) were analysed using two-way ANOVA. Before conducting the analysis, all the model assumptions were checked as above. According to the Shapiro–Wilk test ( W = 0.736, df = 11, p = 0.001), one of the subpopulations did not appear to be normally distributed; this also held when examined graphically. The rest of the subpopulations were normally distributed both graphically and via the Shapiro–Wilk test (all p -values ≥ 0.185). As before, this slight deviation from a normal distribution does not jeopardise the reliability of the results (Leik 1997 ). The assumption of homoscedasticity of subpopulations was met (Levene = 0.598, df1 = 7, df2 = 82, p = 0.756). Results Challenge with the bacterium resulted in a strong disease outbreak — when assessed over both experiments, the first morbidity was observed after 17 h and mortality among the F. columnare –challenged tanks reached 100% within 29 h. However, none of the fish that were not challenged with F. columnare died (four tanks in the post-infection experiment and four tanks in the fresh-infection experiment), except for one individual. While F.
columnare could be isolated from all individuals from the F. columnare –challenged tanks, the bacterium was not isolated from the single unchallenged fish that died. Post-infection experiment The effect of the glochidia infection on fish survival time was statistically significant ( F 1, 51 = 4.227, p = 0.044). The survival time of the Margaritifera -infected individuals was longer than that of the control fish by an average 1 h (Fig. 1 ). The covariate, fish weight (measured in July, 4 months before the bacterial challenge), was not statistically significant ( F 1, 51 = 1.282, p = 0.263), suggesting that the survival time of the fish was independent of fish size (weight). In contrast, there was a statistically significant tank effect ( F 3, 51 = 5.156, p = 0.003). Tukey’s test in equivalent two-way ANOVA without the covariate fish weight indicated that one tank had a higher survival rate that differed from all the other tanks ( p < 0.029 in all comparisons between tanks) (Tank 2, Fig. 1 ). However, the interaction between glochidia infection and tank was not statistically significant ( F 3, 51 = 1.330, p = 0.275), indicating that the effect of glochidia infection was parallel to increased survival time in all tanks. Fig. 1 Tank-specific mean ± SE survival times of brown trout previously infected with Margaritifera margaritifera glochidia and those of uninfected control brown trout in the ‘post-infection experiment’, where fish were challenged with Flavobacterium columnare 14 months after exposure to M. margaritifera (i.e. when the glochidia had already detached from the infected fish) Full size image Multiple linear regression indicated a statistically significant positive association between the number of glochidia (counted in July before challenge exposure) and survival time of Flavobacterium -challenged fish ( t = 2.103, p = 0.045). 
However, multiple linear regression did not indicate any association between glochidia number and fish weight ( t = 1.677, p = 0.105). The resulting regression model was $$\text{Survival time} = 1094.382 + 0.097 \times \text{Glochidia intensity} + 12.173 \times \text{Fish weight (g)}$$ with R 2 = 0.159. Thus, the higher the abundance of glochidia, the longer the survival time (Fig. 2 ). Fig. 2 Survival time of brown trout previously infected with Margaritifera margaritifera glochidia as plotted against the unstandardised predicted value of the number of glochidia in brown trout, according to results of the multiple linear regression analysis (line), in the ‘post-infection experiment’ where fish were challenged with Flavobacterium columnare 14 months after exposure to M. margaritifera . In this experiment, the numbers of glochidia were counted 4 months before the bacterial challenge (before the glochidia detached) Full size image Fresh-infection experiment Two-way ANOVA on the survival time of fish — with glochidia infection (infected, uninfected) and tank (four tanks; Flavobacterium -challenged tanks only) as fixed factors — indicated that the effect of glochidia infection on fish survival time was statistically significant ( F 1, 82 = 7.144, p = 0.009). The survival time of the Margaritifera -infected fish was longer than that of the control fish (Fig. 3 ). There was also a statistically significant tank effect ( F 3, 82 = 38.557, p < 0.001). Tukey’s test indicated that one tank had a higher mean survival time that differed from all the other tanks ( p < 0.001 in all comparisons between tanks) (tank 4, Fig. 3 ). However, the interaction between glochidia infection and tank was not statistically significant ( F 3, 82 = 0.722, p = 0.541), indicating that the effect of glochidia infection was parallel to increased survival time in all tanks. Fig.
3 Tank-specific mean ± SE survival times of brown trout previously infected with Margaritifera margaritifera glochidia and those of uninfected control brown trout in the ‘fresh-infection experiment’ where fish were challenged with Flavobacterium columnare 2 months after exposure to M. margaritifera (i.e. when the glochidia had recently attached to, and not yet detached from, the infected fish) Full size image Discussion Opposite to the study hypothesis, the survival time of the fish infected with M. margaritifera glochidia was longer than the survival time of the uninfected control fish during the F. columnare disease outbreak, both in the fresh-infection and in the post-infection experiments. In the post-infection experiment, the survival time of the fish increased with the number of M. margaritifera glochidia on the fish before the challenge with F. columnare . After the experiment, the bacterium F. columnare was isolated in every (dead) fish from the F. columnare –exposed tanks. Only one fish from one of the control tanks died, but F. columnare could not be isolated from that individual. Thus, the mortality of fish in this experiment can be specifically connected to F. columnare . This finding justifies two conclusions. First, the longer survival of M. margaritifera –infected fish, and the positive relationship between glochidia number and survival rate, indicates that the glochidia infection was associated with decreased pathogenicity (virulence) of F. columnare in the brown trout. Second, in these experiments, susceptibility differences between fish infected with M. margaritifera glochidia and uninfected control fish were not observed, as all the F. columnare –exposed trout acquired F. columnare infection regardless of glochidia infection. All the fish challenged with this virulent F. columnare strain died within 29 h, and the difference in the mean survival time between M. margaritifera –infected and control fish was about 1 h in both experiments. 
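The regression model reported in the Results (Survival time = 1094.382 + 0.097 × glochidia intensity + 12.173 × fish weight) is an ordinary least-squares fit. The sketch below shows the generic normal-equations solution in pure Python; the data points are synthetic values generated from the published coefficients purely to demonstrate that the solver recovers them, and are not the study's measurements.

```python
def fit_ols(xs, ys):
    """Least-squares fit of y = b0 + b1*x1 + ... via the normal equations."""
    X = [[1.0] + list(x) for x in xs]          # design matrix with intercept
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(p)]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]   # augmented system
    for c in range(p):                         # forward elimination w/ pivoting
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p + 1):
                A[r][k] -= f * A[c][k]
    beta = [0.0] * p
    for r in reversed(range(p)):               # back substitution
        beta[r] = (A[r][p] - sum(A[r][k] * beta[k]
                                 for k in range(r + 1, p))) / A[r][r]
    return beta

# Synthetic, noise-free fish generated from the published coefficients
# (illustration only; glochidia counts and weights are made up)
b_true = (1094.382, 0.097, 12.173)
glochidia = [0, 500, 1000, 1500, 2000]
weight_g = [10.0, 12.0, 15.0, 18.0, 20.0]
xs = list(zip(glochidia, weight_g))
ys = [b_true[0] + b_true[1] * g + b_true[2] * w for g, w in xs]
b_hat = fit_ols(xs, ys)
print([round(b, 3) for b in b_hat])
```

With noiseless data the solver returns the generating coefficients up to rounding; on the real data the fit would additionally yield residuals from which the reported R 2 = 0.159 is computed.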
The current challenge method is widely used when studying the infectivity and virulence of bacterial pathogens, including F. columnare , in fish; it is not exceptional that a virulent F. columnare strain with a relatively high bacterial dose can cause 100% mortality in juvenile salmonids within hours in experimental conditions (Kunttu et al. 2009 , 2012 ; Kinnula et al. 2017 ; Pulkkinen et al. 2018 ). Although the survival of glochidia-infected fish was only 1 h longer than that of the control fish (on average) in this experiment with a highly virulent bacterial strain, the effect could translate into a substantial survival benefit with a less virulent pathogen or with lower bacterial doses and in less stressful (natural) conditions. Thus, it is possible that M. margaritifera infection may decrease the pathogenicity/virulence of F. columnare in both natural and aquaculture conditions. The mechanism behind the longer survival of glochidia-infected fish after exposure to F. columnare is not known. It could be the enhancement of an unspecific immunity within fish due to M. margaritifera infection. In teleost fishes, unspecific immune defence (primary immune system, innate immunity) includes cellular components (i.e. phagocytotic cells (macrophages and granulocytes)), natural killer cells and humoral components (i.e. defence molecules such as cytokines, interferons and the complement system) (e.g. Jørgensen 2014 ). Margaritifera margaritifera infection has been shown to induce transitory spleen enlargement (Thomas et al. 2014 ). The spleen is the major antibody-producing organ in teleost fish (Manning 1994 ), but spleen enlargement can also be a signal of infection — rather than a signal of enhanced immunocompetence — in fish (Seppänen et al. 2009 ). Furthermore, relative spleen size can decrease due to stress in fish (Kortet et al. 2003 ). Kunttu et al. ( 2009 ) failed to create protection against F.
columnare in rainbow trout, even though the applied immunostimulant treatments raised the values of several parameters of innate immunity in the fish. However, immunostimulation as an explanation for the current results cannot be rejected. If the enhanced unspecific immune defence is behind the present results, it suggests that the immunostimulating effect of M. margaritifera glochidia is long-lasting, as the exposure to Flavobacterium in the post-infection (post-parasitic period) experiment took place 14 months after infection with glochidia and 3–4 months after the glochidia detached from the fish host. Flavobacterium columnare enters the fish body mainly through the gills (Declercq et al. 2013 , 2015 ). Therefore, an alternative mechanism for the protective effect of glochidia could be that the structure of the gills may change due to M. margaritifera infection so that the entry of the bacterium through the gills, or the establishment of the bacterium on the gills, is weakened. Margaritifera margaritifera glochidia cause hyperplasia and fusion of the gill filaments (Treasurer and Turnbull 2000 ; Thomas et al. 2014 ) and lessen the mucous cells of the gills (Thomas et al. 2014 ), but it is not known whether these changes could increase trout’s resistance to F. columnare . Furthermore, the protective effect of M. margaritifera infection after the glochidia drop-off is especially surprising. Metamorphosed glochidia rupture the gill epithelium when detaching (Waller and Mitchel 1989 ), which should increase vulnerability to secondary infections — especially to bacteria. Exposure of fish to pathogens entering the host via gills increases with ventilation rate (e.g., Mikheev et al. 2014 ). Therefore, the observed protective effect of M. margaritifera glochidia against F. columnare could partly stem from behavioural changes induced by glochidiosis — especially the decreased locomotor activity of host fish (Horký et al.
2014 , 2019 ) — which would decrease the ventilation rate of fish. In theory, the observed protective effect of glochidia may be an adaptive feature of M. margaritifera to increase its survival and fitness (Ziuganov 2005 ; Poulin 2010 ; Hughes et al. 2012 ; Gopko et al. 2015 , 2017a ). It is not unprecedented that a parasite would enhance the immune defence of its host (so-called apparent competition), or in some other way impair the ability of a second parasite/microbe to enter the host (e.g. Ashby and King 2017 ), but it remains unclear why the effect would last for several months after the glochidia are shed. Whatever the mechanism, it is possible that any parasite, not just M. margaritifera , could produce the observed decreased pathogenicity/virulence. However, several studies have found a lowered resistance to bacterial infections in fish that were pre-infected with parasites (Kotob et al. 2016 ). Fish’s increased susceptibility to bacterial infection has been shown in co-infection by monogenean gill parasites (Busch et al. 2003 ; Zhang et al. 2015 ), tissue-penetrating trematode metacercaria (Pylkkö et al. 2006 ), fish lice (Bandilla et al. 2006 ; Lhorente et al. 2014 ) and different ciliated ectoparasites (Evans et al. 2007 ; Xu et al. 2009 , 2012 ; Shoemaker et al. 2012 ). A chronic myxosporean parasite infection decreased the resistance of rainbow trout to bacterial disease even 12 months after exposure to the parasite (Densmore et al. 2004 ). Thus, M. margaritifera glochidia increasing the survival of brown trout during an F. columnare disease outbreak is a notable exception among co-infections between parasites and bacterial pathogens, urging further investigation into the relationship between fish and parasitic glochidia. The present results are especially interesting when compared to other F. columnare co-infection experiments which included pre-infection with parasites. 
Both the experiment on rainbow trout infected with the crustacean skin parasite Argulus coregoni (Bandilla et al. 2006) and the one on goldfish pre-infected with the monogenean gill parasite Dactylogyrus intermedius (Zhang et al. 2015) resulted in an increased susceptibility to F. columnare infection. Both A. coregoni and D. intermedius are ectoparasites equipped with suckers or hooks that penetrate host epithelial cells, and they also feed on the host’s blood or epithelial cells. This can damage the skin or gill epithelium, potentially creating a gateway for secondary microbial invasions. In contrast, the 70-µm diameter glochidia of M. margaritifera, after their initial penetration into the fish’s gills, are surrounded by a cyst embedded within the gill tissue; they stay and develop there for up to 11 months before detaching (e.g. Salonen et al. 2017). Thus, the association of M. margaritifera infection with decreased pathogenicity/virulence of F. columnare seems to be exceptional. This finding could increase the interest and willingness of stakeholders such as commercial fish farms, fishing authorities, fishery collectives and landowners to participate in conservation activities involving infection of salmonids with M. margaritifera. In the light of the present results, a possible protective influence against the very harmful Flavobacterium fish pathogen can be added to the list of potential beneficial services that freshwater mussels provide to their salmonid hosts. Therefore, and because F. columnare causes substantial problems for fish farms, the present results call for further studies focusing, for example, on immune parameters and gill histology, to better understand the interaction between fish, bacterium and glochidia. Change history 15 August 2022 Missing Open Access funding information has been added in the Funding Note.
Researchers from the University of Jyväskylä found that brown trout survived a Flavobacterium disease outbreak better if the fish had larvae of the freshwater pearl mussel in their gills. In another study, duck mussels were observed to filter and remove Flavobacterium from the water. Flavobacteria are a severe problem for fish farming and cause substantial economic losses. The "warm water disease" caused by Flavobacterium columnare is especially problematic since a functional vaccine against the bacterium is not available. The skin and gill damage in diseased individuals can cause high mortality in young salmonids. Larvae of the freshwater pearl mussel can give protection against bacterial infection Glochidium larvae of the freshwater pearl mussel attach to salmon or trout gills, where they develop and grow for 9 to 11 months until they detach and sink to the river bottom, starting their life as mussels. The glochidium larva is a parasite of fish gills, so it was assumed that glochidia-infested fish would be more prone to bacterial infection. Against expectations, brown trout infested with pearl mussel larvae survived an outbreak of Flavobacterium columnare better. Moreover, the protective effect of glochidia infestation against bacterial disease persisted for several months after the larvae had dropped off the fish's gills. The more pearl mussel larvae a trout carried in its gills, the better it survived. Colonies of Flavobacterium columnare on a plate. Credit: Katja Pulkkinen / University of Jyväskylä Mussels can remove flavobacteria from water Freshwater mussels are filter-feeders. They remove suspended materials efficiently, filtering tens of liters of water a day. Thus, their ability to filter and remove harmful F. columnare bacteria from water was tested. The result was clear: in aquaria, a single mussel could halve the density of bacteria within two days. Freshwater pearl mussels.
Credit: Mikko Ranta Species diversity loss will cause loss of important ecosystem services The freshwater pearl mussel is an endangered species that has disappeared from a large part of its original distribution area. The present results indicate that with the extirpation of species we may lose important and valuable ecosystem services. "Filtering freshwater mussels could be potentially utilized in water treatment applications," says professor Jouni Taskinen, Head of Konnevesi Research Station and of the LIFE Revives project. "As species disappear, we may lose ecosystem services the species provide—probably before we have even found them." The research was published in Parasitology Research and Hydrobiologia. | 10.1007/s00436-021-07285-7 |
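The aquarium result above (one mussel halving bacterial density within two days) corresponds, under an assumed first-order clearance model, to a rate constant k = ln 2 / 2 ≈ 0.35 per day. A minimal back-of-envelope sketch; the exponential model and the starting density are illustrative assumptions, not values from the studies:

```python
import math

def bacterial_density(n0, days, half_life_days=2.0):
    """Density remaining after `days` under first-order (exponential) decay.

    half_life_days=2.0 encodes the aquarium observation that a single
    mussel halved bacterial density within two days; the exponential
    model itself is an assumption for illustration, not a fitted result.
    """
    k = math.log(2) / half_life_days  # clearance rate constant, per day
    return n0 * math.exp(-k * days)

# Illustrative starting density of 1e6 cells per ml:
print(round(bacterial_density(1e6, 2)))  # 500000 (halved after 2 days)
print(round(bacterial_density(1e6, 4)))  # 250000 (quartered after 4 days)
```

Continuous mixing and a constant per-mussel clearance are simplifying assumptions; real filtration rates depend on mussel size, temperature and bacterial concentration.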
Medicine | How the immune system remembers viruses: Memory T cells are formed earlier than previously thought | Simon Grassmann et al, Early emergence of T central memory precursors programs clonal dominance during chronic viral infection, Nature Immunology (2020). DOI: 10.1038/s41590-020-00807-y Journal information: Nature Immunology | http://dx.doi.org/10.1038/s41590-020-00807-y | https://medicalxpress.com/news/2020-11-immune-viruses-memory-cells-earlier.html | Abstract Chronic cytomegalovirus (CMV) infection leads to long-term maintenance of extraordinarily large CMV-specific T cell populations. The magnitude of this so-called ‘memory inflation’ is thought to mainly depend on antigenic stimulation during the chronic phase of infection. However, by mapping the long-term development of CD8 + T cell families derived from single naive precursors, we find that fate decisions made during the acute phase of murine CMV infection can alter the level of memory inflation by more than 1,000-fold. Counterintuitively, a T cell family’s capacity for memory inflation is not determined by its initial expansion. Instead, those rare T cell families that dominate the chronic phase of infection show an early transcriptomic signature akin to that of established T central memory cells. Accordingly, a T cell family’s long-term dominance is best predicted by its early content of T central memory precursors, which later serve as a stem-cell-like source for memory inflation. Main Inflationary T cell memory is a specific form of immunological memory 1 , 2 that critically depends on presentation of defined viral antigens throughout the chronic phase of cytomegalovirus (CMV) infection 3 , 4 . It is characterized by continued maintenance of large antigen-specific CD8 + T cell populations 5 , which circulate in the blood, home to peripheral tissues and show immediate cytolytic effector function 6 . 
These unique features of inflationary T cells have spurred great interest in utilizing CMV as a vaccine vector. Indeed, recombinant CMV, engineered to express defined vaccine antigens 7 , has shown immense promise for induction of protective cytotoxic T cell immunity, directed against simian immunodeficiency virus (SIV) 8 , 9 , 10 or Mycobacterium tuberculosis 11 . Importantly, not all CD8 + T cells that recognize epitopes presented during CMV latency strongly contribute to memory inflation. Instead, inflationary T cell populations become increasingly oligoclonal throughout the course of chronic CMV persistence 1 , 12 , 13 —a feature shared by other chronic infections 14 , 15 . The clonality of these responses indicates that they originate from one or few naive T cells harboring the same T cell receptor (TCR). Starting out from these rare naive precursors, the development of clonal dominance is thought to be predetermined by defined cellular features, such as TCR specificity and affinity 16 , 17 , 18 , 19 , 20 , 21 , 22 . However, in the context of acutely resolving infections, single-cell fate mapping has shown that even T cells expressing identical TCRs display massive variation of clonal expansion, with only some strongly dominating the acute response to infection 23 , 24 , 25 , 26 . The extent to which this fundamental, early variation that emerges from TCR-identical single cells also shapes T cell immunity against chronically persisting infections remains unknown. One option is that early single-cell-derived variation is erased by the effects of long-term TCR re-stimulation during chronic infection. Alternatively, such early variation in the life histories of single-cell-derived ‘T cell families’ may leave a lasting imprint on their long-term capacity for memory inflation and immunodominance. 
To resolve which of these two scenarios is true, we monitored, in vivo, the long-term development of T cell families derived from single naive precursors harboring identical TCRs specific to a vaccine antigen expressed by recombinant murine CMV (MCMV). We utilized blood sampling or hemisplenectomy to gauge a T cell family’s initial clonal expansion, monitored its differentiation state via RNA sequencing and flow cytometry and then tracked its long-term inflationary output throughout the chronic phase of MCMV infection. In addition, we utilized single-cell RNA sequencing (scRNA-seq) and RNA velocity analyses 27 to decipher the differentiation dynamics within an established inflationary T cell population. By integrating these approaches, we find that a T cell family’s initial clonal expansion does not predict its capacity for memory inflation. Instead, we identify a fundamental single-cell-derived variation in the amount of T central memory precursors (CMPs) generated during the acute phase of chronic viral infection. This variation defines a set point for the magnitude of memory inflation and imprints lasting hierarchies of immunodominance upon TCR-identical cells. We anticipate that measuring and altering this early CMP-related set point will be of critical relevance in the design of future vaccination strategies and immunotherapeutic approaches. Results Single-cell-derived memory inflation is highly variable and not predicted by acute clonal expansion Clonal expansion and phenotypic differentiation derived from single naive T cells has been found to vary massively upon acute infection 23 , 24 , 25 , 28 , 29 . Due to ‘response averaging’ this fundamental variation goes unnoticed when monitoring the expansion and differentiation of epitope-specific T cell populations 26 . Here, we aimed to evaluate the long-term consequences of such discrepant initial T cell behavior in the context of chronic infection. 
To achieve this, we made use of a recombinant MCMV vector into which a sequence encoding the SIINFEKL peptide of chicken ovalbumin (OVA) had been inserted at the 3′ end of the immediate early 2 (ie2) gene 30 (Extended Data Fig. 1a ). The promoter of ie2 remains active during chronic MCMV latency and, thereby, guarantees lasting expression of the engineered epitope 31 . We found that infection with MCMV- ie2 -SIINFEKL reliably induced memory inflation of endogenous SIINFEKL-specific CD8 + T cells (Fig. 1a ) and of adoptively transferred monoclonal populations of OT-1 T cells, whose transgenic TCR binds with high affinity to SIINFEKL in the context of H2-K b (Fig. 1b and Extended Data Fig. 1b ). Next, we adoptively transferred single naive CD8 + T cells and mapped their acute expansion and long-term memory inflation in response to MCMV- ie2- SIINFEKL. As done previously 23 , 25 , 32 , 33 , 34 , we increased our capacity for parallel fate mapping by collecting single naive T cells from OT-1 Rag-1 −/− donors expressing distinct combinations of the congenic markers CD45.1, CD45.2, CD90.1 and CD90.2. Upon adoptive transfer, we found that both acute and long-term expansion derived from these single naive T cells varied massively, with only a few T cell families strongly dominating the overall response (Fig. 1c ). Strikingly, the largest of these T cell families encompassed more than 10% of all CD8 + T cells (Fig. 1d , blue family) and more than 2% of all leukocytes (Fig. 1e ) found in peripheral blood during the chronic phase of infection. However, such massive memory inflation was not predicted by strong acute expansion (Fig. 1d,e , red families). In fact, the magnitude of acute clonal expansion did not correlate with that of memory inflation, as measured at day 195 post infection (p.i.) (Fig. 1f ) or determined as area under the curve (AUC) between day 0 and 195 p.i. (Fig. 1g ). 
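The AUC referenced here (Fig. 1g) is the integral of a T cell family's size over time; with discrete blood-sampling time points this reduces to a trapezoidal sum. A minimal sketch with invented sampling days and trajectories (not the paper's data), contrasting an acute dominator that contracts with a late inflator:

```python
def auc_trapezoid(days, sizes):
    """Area under a family's size trajectory (trapezoidal rule).

    days  -- sampling time points in days post infection, ascending
    sizes -- family size at each time point (e.g. % of living leukocytes)
    """
    return sum(
        (t1 - t0) * (s0 + s1) / 2
        for (t0, s0), (t1, s1) in zip(zip(days, sizes), zip(days[1:], sizes[1:]))
    )

# Hypothetical trajectories over the day 0-195 window used in the paper:
days = [0, 8, 30, 60, 120, 195]
acute_dominator = [0.0, 4.0, 1.0, 0.3, 0.2, 0.2]  # expands early, contracts
late_inflator = [0.0, 0.5, 1.0, 1.5, 2.0, 2.2]    # inflates slowly but lastingly
print(auc_trapezoid(days, acute_dominator))  # 120.5
print(auc_trapezoid(days, late_inflator))    # 318.5
```

The same quantity can be computed with numpy's trapezoidal integration; the explicit sum just keeps the definition visible.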
Instead, T cell families that strongly dominated the acute phase contracted and stabilized at levels that lay up to 1,000-fold below that of the most strongly inflating T cell family (Fig. 1e ). We continued to track a cohort of T cell families until day 412 p.i., but found that the established hierarchies of immunodominance remained largely unchanged (Extended Data Fig. 1c ). Fig. 1: Single-cell-derived memory inflation is highly variable and not predicted by acute clonal expansion. a , Longitudinal quantification via multimer staining of the endogenous SIINFEKL-specific T cell population in blood of C57BL/6 mice immunized with MCMV- ie2 -SIINFEKL. Data are presented as mean ± s.e.m. b , Adoptive transfer of 100 CD45.1 + OT-1 Rag-1 − /− T cells followed by infection with MCMV- ie2 -SIINFEKL and longitudinal quantification of the response as mean ± s.e.m. c , Adoptive transfer of single congenically labeled OT-1 Rag-1 −/− T cells and quantification of resulting T cell family sizes in blood at an acute (day 8–9 p.i.) and a late (day 195 p.i.) time point after MCMV- ie2 -SIINFEKL infection. Pie charts show the contribution of individual T cell families to the cumulative size of all T cell families. Variation in size is quantified by the coefficient of variation (CV). The three largest acutely dominating and long-term dominating T cell families are labeled in red and blue, respectively. This color coding is maintained in d – g . d , Representative flow cytometry plots of an acutely dominating (top row) and a long-term dominating T cell family (bottom row). Pre-gated on CD8 T cells. e , Longitudinal quantification of T cell family sizes in blood (solid lines), measured as a percentage of living leukocytes. Mean size of immune responses derived from populations of 100 naive T cells as in b are also shown (dashed line). f , Scatter plot comparing the acute size (day 8–9 p.i.) of T cell families to their size during memory inflation (day 195 p.i.) ( P = 0.33). 
g , Scatter plot comparing the acute size (day 8–9 p.i.) of T cell families to their long-term proliferative output, assessed as AUC ( P = 0.52). The AUC was calculated as the integral of a T cell family’s size over time (day 0–195 p.i.). Data in a and b are representative of three independent experiments. Population data in e presented as mean, and error bars indicate s.e.m. Data in c , e – g are pooled from four independent experiments. r, Spearman’s correlation coefficient. For a : n = 6 mice, b : n = 15 mice, c , e – g : n = 43 T cell families. ND, not determined; NS, not significant. If chronic antigen exposure were the sole factor required for memory inflation, we would have expected that individual T cells outfitted with identical TCRs would generate the same maximum level of inflation, or at least a level proportional to their acute clonal expansion. But instead, long-term expansion of TCR-identical T cell families varied massively and early dominance even appeared to counteract lasting memory inflation. These data led us to believe that early fate decisions may not be erased by chronic antigen exposure, but instead substantially influence a T cell family’s capacity for memory inflation. A T cell family’s fate is determined during the acute phase of MCMV infection To test the above-mentioned hypothesis, we transferred single naive OT-1 T cells of distinct congenic phenotypes and infected recipient mice with MCMV- ie2 -SIINFEKL. We then collected spleens at day 6 p.i. and screened for the presence of T cell families (Fig. 2a , left). If T cell families were detected (Fig. 2b ), we divided the remaining spleen sample into three fractions of 1.5 × 10 7 splenocytes and transferred these separately to three secondary recipients (Fig. 2a , right). After infection of these secondary recipients with MCMV- ie2 -SIINFEKL, we found that fractions derived from distinct families established distinct levels of inflation (Fig.
2c , compare red lines between distinct plots). However, T cell fractions belonging to the same family behaved similarly in all three secondary recipients (Fig. 2c , compare three black lines within the same plot). Taken together, almost 80% of the observed variability of memory inflation could be assigned to variation between distinct families (interfamily variation) and only 20% to variation between fractions of the same family (intrafamily variation) (Fig. 2d ). Importantly, the differences in magnitude of memory inflation observed in secondary hosts were not due to varying sizes of T cell families at the time of transfer (Fig. 2e ). Instead, these data indicated that T cell families received an early programming that was largely independent of their acute expansion in primary hosts, but critically determined their long-term behavior upon secondary transfer and re-infection. Fig. 2: A T cell family’s fate is determined during the acute phase of MCMV infection. a , Schematic of experimental setup: single OT-1 Rag-1 −/− T cells harboring distinct congenic labels were adoptively transferred into a primary recipient, followed by infection with MCMV- ie2 -SIINFEKL. At day 6 p.i., three aliquots of 1.5 × 10 7 splenocytes each were collected from this primary recipient and transferred separately into three secondary recipients, followed by infection with MCMV- ie2 -SIINFEKL. Secondary expansion of the resulting three fractions of each T cell family was tracked by longitudinal blood sampling of secondary recipients. b , Exemplary gating strategy for detecting single-cell-derived T cell families at day 6 p.i. in the spleen of primary recipients. Plots are pre-gated on CD8 + CD45.1 + and further separated based on the expression of CD90.1, CD90.2, CD45.1 and CD45.2. c , Each plot depicts the secondary expansion of three fractions (black lines) derived from the same T cell family and the mean of this expansion (red line) as measured in the blood of secondary recipients.
d , The proliferative output of each T cell fraction between day 0 and day 60 after secondary infection was assessed as the AUC of their trajectories in c . The plot depicts the AUCs of all T cell fractions (gray circles) grouped according to their T cell family of origin. Red bars indicate mean AUCs of three fractions belonging to the same family. The pie chart depicts the percentage of overall variance attributable to intrafamily (gray) versus interfamily (red) variance (one-way analysis of variance (ANOVA)). e , Scatter plot comparing the absolute number of T cells per fraction at the time of adoptive re-transfer (day 6 p.i.) and their secondary expansion, measured as the AUC in d ( P = 0.71). Data in b – e are pooled from three independent experiments. CV, coefficient of variation. r , Spearman’s correlation coefficient. For d , e : n = 26 T cell fractions derived from nine T cell families. Long-term dominating T cell families share an early transcriptional signature with established T CM cells To further investigate the nature of this early T cell programming, we developed a new approach that combined early RNA sequencing of single-cell-derived T cell families (‘single-cell-derived RNA sequencing’, sc-derived RNA-seq) with longitudinal analysis of the same T cell families’ inflationary output. We transferred single naive OT-1 T cells and immunized recipient mice with MCMV- ie2 -SIINFEKL. We then performed operative hemisplenectomy at day 8 p.i., collected 100 T cells from each T cell family and submitted these to bulk RNA sequencing (Fig. 3a ). By this procedure, the majority of each T cell family remained in situ, allowing us to track its development via longitudinal blood sampling throughout the following months. We again found that some T cell families dominated the acute phase of the response, while others showed particularly strong memory inflation (Fig. 3b , red versus blue families).
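The inter- versus intrafamily comparison described above is a one-way ANOVA style partition: the total sum of squares splits into a between-family (interfamily) and a within-family (intrafamily) component. A from-scratch sketch; the AUC values are invented and chosen so the split mirrors the reported ~80%/20%:

```python
def variance_partition(groups):
    """Partition the total sum of squares into between-group
    ('interfamily') and within-group ('intrafamily') fractions,
    as in a one-way ANOVA.

    groups -- list of lists; each inner list holds the measurements
              (e.g. fraction AUCs) for one T cell family
    """
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand_mean) ** 2 for v in all_vals)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    ss_within = ss_total - ss_between
    return ss_between / ss_total, ss_within / ss_total

# Invented AUCs: three families, three secondary-recipient fractions each
families = [[5, 15, 25], [25, 35, 45], [45, 55, 65]]
inter, intra = variance_partition(families)
print(f"interfamily: {inter:.0%}, intrafamily: {intra:.0%}")  # 80%, 20%
```

A high interfamily fraction means fractions of the same family behave alike while distinct families diverge, which is exactly the signature of early family-level programming.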
Sc-derived RNA-seq provided us with the unique opportunity to connect these distinct long-term behaviors of individual T cell families with their early transcriptional profiles. For differential gene expression analysis, we first concentrated on the two most strongly inflating T cell families. These not only dominated the chronic phase of infection, but also showed a particularly dynamic expansion between the early and the late phase of infection (Fig. 3c ). Thereby, they segregated clearly from all other responding T cell families. When comparing the early sc-derived RNA-seq profiles of these long-term dominators with the early profiles of all other T cell families, we found a clear under-representation of transcripts related to cytolytic effector function and terminal T cell differentiation. These under-represented transcripts encoded for molecules such as granzyme A (GzmA), granzyme B (GzmB) and perforin as well as Zeb2, CX3CR1, S1pr5, KLRG-1 and various other killer-like receptors (Fig. 3d ). Accordingly, gene ontology analysis revealed terms such as ‘cytolysis’, ‘interferon gamma production’ and ‘immune effector processes’ among the under-represented transcripts (Extended Data Fig. 2a ). However, we found only very few transcripts to be significantly over-represented. To refine our differential gene expression analysis, we next compared the transcriptomes of the two long-term dominators against those of the two T cell families that most strongly dominated the acute response phase but then contracted (Fig. 3b,c , red families). We speculated that comparing short- against long-term dominators would increase the sensitivity of our differential gene expression analysis. Indeed, we now detected many over-represented genes in long-term dominators. These included transcripts encoding for molecules such as TCF-1 (encoded by Tcf7 ) 35 , 36 , Tsc1 (ref. 37 ), Egr2 (ref. 38 ), Sirt1 (ref. 39 ), Serpina3g 40 , CRTAM 41 and CXCR5 (ref. 
42 ), all of which have been associated previously with differentiation and maintenance of memory T cells in general or T central memory (T CM ) cells in particular (Fig. 3e ). This suggested that long-term dominating T cell families were enriched for memory-like, noncytolytic T cells already at day 8 p.i. Interestingly, we found that effector-associated gene ontology terms were again under-represented (Extended Data Fig. 2b ), while terms such as ‘mitochondrial respiratory chain complex’, which are indicative of metabolic processes found in established memory T cells 43 , were enriched (Extended Data Fig. 2c ). Importantly, cellular senescence did not appear to be responsible for contraction of short-term dominating T cell families. In fact, enrichment of senescence-associated genes was found in nascent long-term dominators rather than in short-term dominators (Extended Data Fig. 2d ). Taken together, these experiments revealed that long-term dominating T cell families were characterized by an early under-representation of transcripts related to cytolytic effector function and terminal T cell differentiation, and an over-representation of key transcripts related to T cell memory. Fig. 3: Long-term dominating T cell families share an early transcriptional signature with established T CM cells. a , Schematic of experimental setup: single OT-1 Rag-1 −/− T cells harboring distinct congenic labels were adoptively transferred into C57BL/6 mice that were subsequently immunized with MCMV- ie2 -SIINFEKL. Hemisplenectomy was performed at day 8 p.i. and 100 cells per T cell family were sorted for sc-derived RNA-seq. b , Longitudinal quantification of T cell family sizes in blood of hemisplenectomized mice; highlighted are the two families showing the strongest acute domination at day 7 p.i. (red lines, acute dominators) and the two families showing the strongest expansion between days 7 and 150 p.i. (blue lines).
c , Expansion coefficient of T cell families between days 7 and 150 p.i. d , e , Differential gene expression analysis based on sc-derived RNA-seq of T cell families at day 8 p.i. d , Volcano plot depicts differential gene expression in long-term dominators ( n = 2, blue in b ) compared to all remaining T cell families ( n = 14, red and black in b ). e , Volcano plot depicts differential gene expression in long-term dominators ( n = 2, blue in b ) compared to acute dominators ( n = 2, red in b ). Horizontal dashed line marks the false discovery rate cut-off of 10%. Differentially expressed genes are indicated in red. f – j , scRNA-seq performed on established inflationary T cell populations in spleen (day 60 p.i.) that originated from 100 naive OT-1 T cells (pooled from n = 2 mice). f , Dimensional reduction using uniform manifold approximation and projection (UMAP) showing Leiden clusters and expression of genes associated with cytolytic activity ( Gzmb and Gzma ) and differentiation status ( Sell , Cd27 , Klrg1 ). g , scRNA-seq data from day 60 p.i. overlaid with the early gene signature of long-term dominators (blue in b ) identified via sc-derived RNA-seq. Color code depicts the signature score on the basis of genes that are significantly up- (blue) and downregulated (red) in differential gene expression analysis of long-term dominators. Scores were computed using the FastProject tool. h , RNA velocities indicate active directional differentiation (arrows) emerging from the T CM cluster. i , CytoTRACE score marks undifferentiated and terminally differentiated states and, thereby, corroborates the general direction of differentiation shown in h . j , Normalized marker gene expression of diffusion pseudotemporally (dpt) ordered cells. The most undifferentiated cell according to the CytoTRACE score was chosen to be the root cell of the analysis. Panels f – j represent an individual dataset. 
T CM cells are situated at the origin of all differentiation trajectories during the chronic phase of memory inflation Next, we aimed to more precisely map the early gene signature of long-term dominating T cell families onto the transcriptional landscape that unfolds later during chronic MCMV infection. Therefore, we performed single-cell RNA sequencing (scRNA-seq) of OT-1 T cells found in spleen at day 60 after infection with MCMV- ie2- SIINFEKL. This analysis revealed a continuum of three interconnected clusters that aligned well with features of T CM cells (for example, Sell (encoding CD62L) and Cd27 ), T effector memory (T EM ) cells (for example, Cd27 and Gzmb ) and terminal effectors (TEs) (for example, Klrg1 and Gzma ) (Fig. 3f and Extended Data Fig. 3 ). Genes over-represented in the early transcriptional signature of long-term dominating T cell families were also strongly enriched in the T CM cluster, while genes under-represented in this signature showed the opposite pattern (Fig. 3g ). Thus, the early transcriptional signature of long-term dominating T cell families mapped closely onto a cluster of established T CM cells present at day 60 p.i. To clarify which role these established T CM cells played in maintaining the overall population of inflationary T cells, we performed RNA velocity analysis of our scRNA-seq dataset 27 . By considering the ratio of spliced versus unspliced RNA, this method allows one to extrapolate the activity and directionality of single-cell differentiation in situ. RNA velocity identified a robust stream of differentiation that emerged from the T CM cluster and fueled both the T EM and TE compartments (Fig. 3h ). We further corroborated this directionality of differentiation by using CytoTRACE 44 , a computational framework that utilizes single-cell gene expression diversity to pinpoint the origin of developmental processes (Fig. 3i ).
Aligning the expression of key genes in pseudotime, relative to the origin of differentiation identified by CytoTRACE, confirmed the successive loss of Sell , Cd27 and Il7r and the later gain of Klrg1 expression as key indicators of increasing terminal differentiation (Fig. 3j ). Thus, the early transcriptional signature of long-term dominating T cell families overlapped with that of established T CM cells, which in turn appeared to constantly replenish the shorter lived T cell subsets that make up the bulk of inflationary T cell memory. Early abundance of CD62L + CD27 + CMPs best predicts a T cell family’s later magnitude of memory inflation After having explored the early characteristics of long-term dominating T cell families retrospectively, we aimed to choose the most useful of the identified parameters to actually predict the long-term fate of nascent T cell families. To further increase our capacity for parallel fate mapping of multiple T cell families within the same host, we made use of ‘retrogenic color barcoding’ 30 . In brief, we performed simultaneous transduction of hematopoietic stem cells (HSCs), derived from OT-1 mice, with five distinct retroviral constructs encoding the fluorophores GFP, YFP, T-Sapphire, CFP and BFP. This procedure regularly yielded 20–30 combinatorial color barcodes and, thereby, substantially exceeded the code diversity provided by congenic markers alone. After transfer to sublethally irradiated primary C57BL/6 recipients, these HSCs generated color-barcoded naive OT-1 T cells (Extended Data Fig. 4a ). These were single-cell sorted from peripheral blood of primary recipients, assembled into cohorts of 20–30 cells, with each cell harboring a distinct color barcode, and then transferred simultaneously to secondary recipients (Extended Data Fig. 4b,c ). 
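With five fluorophores scored as simply present or absent, the barcoding scheme described above supports at most 2^5 − 1 = 31 codes (assuming the all-negative combination cannot be distinguished from untransduced cells), consistent with the 20–30 barcodes regularly recovered per transduction. A small enumeration sketch:

```python
from itertools import combinations

FLUOROPHORES = ["GFP", "YFP", "T-Sapphire", "CFP", "BFP"]

def color_barcodes(fluorophores):
    """Enumerate every non-empty presence/absence combination."""
    codes = []
    for r in range(1, len(fluorophores) + 1):
        codes.extend(combinations(fluorophores, r))
    return codes

codes = color_barcodes(FLUOROPHORES)
print(len(codes))  # 31 = 2**5 - 1 usable barcodes
print(codes[0], codes[-1])
```

In practice fewer than 31 codes are recovered because not every fluorophore combination is generated or reliably resolved by flow cytometry, hence the 20–30 reported.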
After infecting these secondary recipients with MCMV- ie2- SIINFEKL, we hemisplenectomized them at day 8 p.i., and measured the frequency of putative T central memory precursors (CMPs), T effector memory precursors (EMPs) and TEs on the basis of CD62L and CD27 surface expression in the excised half of the spleen. We then tracked the size of T cell families by longitudinal blood sampling throughout the chronic phase of infection (Fig. 4a ). Color barcoding enabled efficient parallel tracking of multiple T cell families within the same recipient (Fig. 4b,c ) and hemisplenectomy allowed us to acquire large enough samples to detect even minute numbers of CMPs (Fig. 4d ). Following the dynamics of T cell families within such individual recipients suggested that the early abundance of CMPs may indeed correlate with a T cell family’s capacity for long-term dominance (Fig. 4b–d ). When applying this approach to the complete dataset, we found that the early abundance of TEs and EMPs strongly correlated with the acute expansion of a T cell family, but not with its later magnitude of memory inflation (Fig. 4e and Extended Data Fig. 5a , top two rows). In fact, beginning at day 60 p.i., a T cell family’s size no longer correlated at all with the amount of TEs and EMPs detected during the acute phase of the immune response (Fig. 4f , upper two panels). The early abundance of CMPs, however, remained a highly significant predictor of T cell family sizes up until day 140 p.i. (Fig. 4e,f and Extended Data Fig. 5a , bottom row). In line with these data, the top-10 long-term dominating T cell families contained significantly more CMPs at day 8 p.i. than did the remaining T cell families (Fig. 4g , blue circles, bottom panel). In relative terms, a higher early percentage of CMPs per T cell family correlated with weaker initial expansion but stronger memory inflation (Fig. 4h,i and Extended Data Fig. 5b , bottom row).
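The r values in these comparisons are Spearman rank correlations, which depend only on rank order and are therefore robust to the heavy-tailed family sizes seen here. A from-scratch sketch without tie handling; the per-family measurements are invented:

```python
def spearman(x, y):
    """Spearman's rank correlation (no tie handling, for brevity)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Invented day-8 CMP abundances vs day-140 family sizes (% of leukocytes):
cmp_day8 = [0.01, 0.05, 0.002, 0.2, 0.08]
size_day140 = [0.1, 0.6, 0.05, 2.5, 0.9]
print(spearman(cmp_day8, size_day140))  # 1.0 (identical rank order)
```

With tied values, an implementation that assigns average ranks (such as scipy.stats.spearmanr) should be used instead.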
Accordingly, the top-10 long-term dominators also showed significantly higher initial percentages of CMPs compared to the remaining T cell families (Fig. 4j ). On the other hand, higher initial percentages of TEs correlated with decreased memory inflation (Fig. 4h–j and Extended Data Fig. 5b , top row). This is consistent with our observations on the transcriptional level, showing an early over-representation of memory-associated and an under-representation of effector-related transcripts in long-term dominating T cell families. Fig. 4: Early abundance of CD62L + CD27 + CMPs best predicts a T cell family’s later magnitude of memory inflation. a , Schematic of experimental setup: single-color barcoded OT-1 Rag-1 −/− T cells were adoptively transferred into C57BL/6 mice that were subsequently immunized with MCMV- ie2 -SIINFEKL. Hemisplenectomy was performed at day 8 p.i. to measure the size of color-barcoded T cell families and the amount of CMPs (CD62L + CD27 + ), EMPs (CD62L − CD27 + ) and TEs (CD62L − CD27 − ) contained within them. The further expansion of each T cell family was then followed up by longitudinal blood sampling of hemisplenectomized mice. b , Exemplary gating strategy for decoding color-barcoded T cell families within the same recipient at day 8 (top row) and day 120 p.i. (bottom row). Color barcodes are first separated based on the expression of GFP and YFP and then segregated further according to the expression of CFP, BFP and T-Sapphire. Examples of acute and long-term dominating T cell families are depicted in red and blue, respectively. c , Longitudinal expansion dynamics of T cell families within the same recipient (same families as in b ). The size of a T cell family is indicated as a percentage of the cumulative size of all T cell families detected in the same recipient (percentage of transferred T cells). d , Exemplary phenotypic analysis of an acutely dominating (red) and a long-term dominating T cell family (blue) at day 8 p.i. 
(top row) and day 120 p.i. (bottom row) (same families as in b ). e , Scatter plots comparing the amount of TEs (top row), EMPs (middle row) and CMPs (bottom row) in each T cell family on day 8 p.i. to the total size of the respective T cell families at day 8 p.i. (first column, spleen, TE: P < 0.0001, EMP: P < 0.0001, CMP: P = 0.001) and day 140 p.i. (second column, blood, TE: P = 0.97, EMP: P = 0.48, CMP: P = 0.0007). The amount of CMPs, EMPs and TEs and the total size of T cell families is measured as a percentage of living leukocytes. Top-10 long-term dominating T cell families (blue). f , Correlation between the amount of TEs (top row), EMPs (middle row) and CMPs (bottom row) at day 8 p.i. and the size of T cell families over time. g , Comparison of the amount of TEs (top row, P = 0.40), EMPs (middle row, P = 0.99) and CMPs (bottom row, P = 0.0003) at day 8 p.i. in the top-10 long-term dominating T cell families (blue) versus all other T cell families detectable at day 140 p.i. (‘Dataset’). Bars indicate median. h , Scatter plots comparing the relative amount of TEs (top row), EMPs (middle row) and CMPs (bottom row) in each T cell family on day 8 p.i. to the size of the respective T cell families at day 8 p.i. (first column, spleen, TE: P = 0.99, EMP: P = 0.59, CMP: P < 0.0001) and day 140 p.i. (second column, blood, TE: P = 0.002, EMP: P = 0.10, CMP: P = 0.0004). The relative amount of CMPs, EMPs and TEs is measured as a percentage of the corresponding T cell family’s overall size. i , Correlation between relative amount of TEs (top row, P = 0.009), EMPs (middle row, P = 0.28) and CMPs (bottom row, P < 0.0001) at day 8 p.i and the size of T cell families over time (Spearman’s correlation coefficient). j , Comparison of the relative amount of TEs (top row), EMPs (middle row) and CMPs (bottom row) at day 8 p.i. in the top-10 long-term dominating T cell families (blue) versus all other T cell families detectable at day 140 p.i. (‘Dataset’). 
Bar in g and j indicates median. r , Spearman’s correlation coefficient. P values in e , h were measured as approximate P values. Significances in g , j were measured using a two-sided Mann–Whitney U -test. * P < 0.05, ** P < 0.01, *** P < 0.001, **** P < 0.0001. In b , c : n = 8 T cell families in one exemplary recipient. In e , f , h , i : n = 104 T cell families tracked in 11 recipient mice (zero values are not shown and not used for determining Spearman’s correlation coefficients and respective P values). In g , j : n = 50 T cell families (excluding those that were undetectable at day 140 p.i.). Generally, so-called memory precursor effector cells (MPECs), identified by the absence of KLRG-1 and the presence of CD127 or CD27, are considered as the main precursors of lasting immunological memory 45 . MPECs, however, have also been described to harbor strong cytolytic function 46 —a feature absent from the early transcriptional signature that we identified for long-term dominating T cell families (Fig. 3d,e and Extended Data Fig. 2a,b ). Reconciling these data, we found that CD62L + CMPs constituted only a minute subset of all MPECs present during the acute phase of MCMV infection (Extended Data Fig. 6a,b ). Thus, their high content of GzmB – cells did not substantially affect the overall cytolytic phenotype of MPECs (Extended Data Fig. 6c,d ). Importantly, noncytolytic T cells were also over-represented among CMPs versus MPECs found during acutely resolving infection with Listeria monocytogenes (Extended Data Fig. 6e,f ). Moreover, when subsuming CMPs and EMPs, detected in single-cell-derived T cell families at day 8 after MCMV infection, the size of this CD27 + memory precursor compartment served only as a weak predictor of inflationary capacity (Extended Data Fig. 6g,h ). 
Taken together, these data show that, although CMPs are exceedingly few and may previously have gone unnoticed, their abundance critically predicts a T cell family’s capacity for memory inflation. T cell families without detectable CMPs are at higher risk of attrition To further explore the relevance of early CMP abundance, we divided the observed T cell families into those that contained and those that lacked detectable CMPs at day 8 p.i. (Fig. 5a,b ). Compared to CMP-containing families, a significantly higher percentage of CMP-lacking families dropped below the limit of detection during the chronic phase of infection (Fig. 5c , left panel). In addition, CMP-lacking families generated significantly lower inflationary output than their CMP-containing counterparts (Fig. 5d , left panel). CMP-containing and -lacking families each made up approximately half of all T cell families (51% versus 49%). While the top versus bottom 50% of EMP-containing families showed no significant difference in their expansion behavior (Fig. 5c,d , second panel from the left), the top 50% of TE-containing families even expanded significantly less than the bottom 50% (Fig. 5c,d , second panel from the right). In line with these data, those 50% of T cell families that were largest at day 8 p.i. did not show greater inflation or persistence compared to their 50% smallest counterparts (Fig. 5c,d , right panel). Taken together, not only did the early abundance of CMPs predict the magnitude of memory inflation, but their absence also increased the risk of T cell family attrition. Fig. 5: T cell families without detectable CMPs are at higher risk of attrition. a , Line plot shows long-term persistence (full lines) or attrition (dashed lines) of T cell families that contained (blue) or lacked (red) detectable CMPs at day 8 p.i. b , Exemplary flow cytometry data depicting T cell families that contained (top row) or lacked (bottom row) detectable CMPs at day 8 p.i. 
c , Kaplan–Meier plots compare the percentage of detectable T cell families grouped according to the following characteristics found at day 8 p.i: CMP-containing versus CMP-lacking T cell families (left plot, P < 0.0001); top versus bottom 50% of T cell families defined by EMP content (second from left plot); top versus bottom 50% of T cell families defined by TE content (second from right plot); top versus bottom 50% of T cell families defined by family size (right plot). Of note: CMP-containing and CMP-lacking T cell families amounted to 51% and 49% of all T cell families. d , Mean size of T cell families over time grouped as in c . P values in CMP-containing versus CMP-lacking T cell families (left plot) are P = 0.003 (day 60), P = 0.01 (day 90), P = 0.002 (day 120) and P < 0.0001 (day 140). P values for top versus bottom 50% TE content (second from right plot) are P = 0.02 (day 120) and P = 0.0003 (day 140). Significance in c was assessed using a log-rank test. Data points in d show mean, error bars indicate s.d. Significance in d was measured using two-way ANOVA with Šidák multiple comparisons test. * P < 0.05, ** P < 0.01, *** P < 0.001, **** P < 0.0001. For c and d : n = 104 T cell families. Single T CM cells show a stem-cell-like capacity for reconstitution of memory inflation The aforementioned data suggested an exquisite importance of CMPs and of established T CM cells for the generation and maintenance of strong inflationary T cell memory. To further explore these processes, we first collected T CM , T EM and TE cells derived from populations of OT-1 T cells during the chronic phase of MCMV infection (Fig. 6a ). When 100 T cells of each subset were transferred to C57BL/6 mice, which were subsequently infected with MCMV- ie2- SIINFEKL, we found that T CM populations showed superior capacity for memory inflation (Fig. 6b ). 
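As an aside regarding the Fig. 5c comparisons above: those detectability curves are Kaplan–Meier estimates. Purely for illustration (the study reports that no custom code was used), the underlying product-limit estimator can be sketched in a few lines, with hypothetical follow-up data in place of the real families:

```python
def kaplan_meier(durations, event_observed):
    """Product-limit estimate of the fraction of T cell families still
    detectable over time. `durations` holds, per family, the day it dropped
    below detection (event) or the last sampling day (censored);
    `event_observed` marks which of the two applies. Returns the step curve
    as (time, survival) pairs."""
    curve = []
    survival = 1.0
    for t in sorted(set(durations)):
        at_risk = sum(1 for d in durations if d >= t)
        events = sum(1 for d, e in zip(durations, event_observed) if d == t and e)
        if events:
            survival *= 1 - events / at_risk
            curve.append((t, survival))
    return curve
```

Each returned step gives the estimated fraction of families still detectable just after day t; censored families (still detectable at their last sampling) leave the risk set without producing a step.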
This is well in line with the stem-cell-like potential that has been found to reside within single CD62L + T CM cells generated upon acutely resolving infection 32 . To test whether stemness could also be maintained within single T cells upon chronic infection, we adoptively transferred individual naive color-barcoded OT-1 T cells and exposed them to infection with MCMV- ie2 -SIINFEKL. More than 100 days after infection, we collected the ensuing primary T cell families and again sorted single T cells from the T CM , T EM and TE compartment (Fig. 6c ). After adoptive transfer and re-infection, we found that single T CM cells taken from an inflationary memory population generated detectable secondary T cell families at roughly the same rate as previously found for single T CM cells taken from a resting memory population 32 . Moreover, they far surpassed the reconstitution capacity of single T EM cells and TEs (Fig. 6d ) and showed a superior capacity to re-inflate upon renewed MCMV infection (Fig. 6e ). Thus, single T CM cells showed a stem-cell-like capacity for reconstitution of inflationary memory in response to chronic MCMV infection. Fig. 6: Single T CM cells show a stem-cell-like capacity for reconstitution of memory inflation. a , Schematic of experimental setup: adoptive transfer of congenically labeled OT-1 T cells followed by immunization with MCMV- ie2 -SIINFEKL. Collection of splenic T CM cells, T EM cells and TEs at day 120 p.i. and re-transfer of 100-cell populations of each subset into secondary recipients, followed by infection with MCMV- ie2 -SIINFEKL. Longitudinal analysis of population-derived responses in peripheral blood. b , Purity of T CM cells, T EM cells and TEs sorted via flow cytometry. Expansion of sorted T CM , T EM and TE populations in secondary recipients ( n = 18) (right). 
c , Schematic of experimental setup: day 0: adoptive transfer of single-color-barcoded naive OT-1 Rag-1 −/− T cells and subsequent immunization with MCMV- ie2 -SIINFEKL. Day 120: detection of single-cell-derived T cell families, followed by sort and re-transfer of single-color-barcoded T CM cells, T EM cells and TEs and immunization with MCMV- ie2 -SIINFEKL. d , Rate at which detectable progeny were recovered from transferred single T CM cells, T EM cells or TEs at day 8 p.i. Of note: cohorts of seven color-barcoded single T cells of T CM , T EM or TE phenotype were transferred into n = 13, 11 and 11 secondary recipients, respectively. T CM cells were recovered at significantly higher rates ( P < 0.0001) than T EM and TE cells. No significant difference was observed between recovery rates for T EM and TE cells ( P = 0.6). e , Inflationary expansion derived from single T CM cells, T EM cells and TEs until day 60 p.i. as measured in the blood of secondary recipients. Expansion in b is measured as mean + s.e.m. Bars in d show mean, and error bars indicate s.d. Significance in d is measured using a two-sided Mann–Whitney U- test. **** P < 0.0001. Data in d are pooled from two independent experiments. Discussion Inflationary T cell immune responses to CMV infection are extraordinarily large and highly oligoclonal 1 , 12 . Throughout the course of chronic CMV infection, dominant versus subdominant T cell clones are thought to emerge due to differences in their TCRs’ specificity and affinity for defined viral epitopes 6 . However, in response to acutely resolving infections, single epitope-specific T cells show a remarkable variation of clonal expansion, despite harboring identical TCRs 23 , 24 , 25 , 26 . In this study, we addressed whether and how such affinity-independent variation in the early programming of single-cell-derived T cell families also determined their long-term capacity for memory inflation. 
Remarkably, we found that strong acute expansion did not predict a T cell family’s capacity for memory inflation. However, by extracting emerging T cell families from their original hosts and re-exposing them to new infectious stimuli, we found that hierarchies of later immunodominance were programmed already during the first few days of infection. In the search for hallmarks of this early programming, we traced back the origins of those rare T cell families that later dominated the inflationary immune response and characterized their early gene expression profiles. We found that already at day 8 p.i. transcripts related to cytolytic effector function were strongly under-represented in these T cell families, while transcripts related to the development of T CM cells, such as TCF-1 (refs. 35 , 36 ), Tsc1 (ref. 37 ) and Sirt1 (ref. 39 ), were clearly over-represented. In addition, the early gene expression profiles of long-term dominating T cell families mapped stringently onto a cluster of established T CM cells identified by scRNA-seq at day 60 p.i. This is surprising since T CM cells or their committed precursors were long considered to arise only gradually upon transition into the memory phase. However, more recent evidence, gathered in the context of acutely resolving infections, suggests that CMPs already segregate from the bulk of shorter lived T cells at the peak of expansion 23 , 47 , or even days before that 34 , 48 . Our work now establishes the early abundance of CMPs as the key predictor for long-term memory inflation upon chronic viral infection. On the basis of RNA velocity analyses and single-cell adoptive transfers, we further show that T CM cells appear to constantly replenish inflationary T cell populations. This finding is further supported by recent data showing that diphtheria toxin-mediated depletion of T CM cells (identified by TCF-1 expression) reduces the magnitude of memory inflation 49 . 
Moreover, we demonstrate that these chronically active T CM cells possess a stem-cell-like capacity for single-cell-derived regeneration of inflationary memory, akin to that previously found in resting T CM cells 32 . Conceptually, these data are most compatible with a model of T cell differentiation in which classical immunological memory and memory inflation are not derived from the bulk of effector T cells present at the peak of acute expansion, but rather from a minute subset of self-renewing CMPs that segregate early from the rest of the responding T cells. We find that CMPs arising during both chronic MCMV and acutely resolving Listeria monocytogenes infection share a weak cytolytic profile. Such low cytolytic activity has also been found for T cells expressing TCF-1 at day 5 after acutely resolving infection with lymphocytic choriomeningitis virus. Importantly, these weakly cytolytic T cells were shown to preferentially seed the mature T CM compartment 48 . Our work now extends these observations to the level of individual T cells responding to a chronic viral infection, and provides the methodological advances that allow us to directly connect a T cell family’s early programming to its long-term capacity for memory inflation. Our data show that only 1 in 10 naive T cells that harbor the same high-affinity TCR will generate large amounts of CMPs and, thereby, develop into long-term dominating T cell families. This probabilistic nature of single-cell-derived immune responses indicates that reliable development of memory inflation not only requires long-term antigen presentation and optimal TCR affinity, but also a certain redundancy of the naive TCR repertoire. Interestingly, so-called ‘public’ TCRs, which are present in multiple individuals and strongly over-represented among inflationary T cell clones, are also more likely to arise in the naive TCR repertoire than their nonpublic counterparts. 
It is tempting to speculate that this redundancy increases the chances for at least one T cell family harboring this TCR to display long-term immunodominance. Computational modeling of CD8 + T cell responses to human Epstein–Barr virus infection 50 as well as recent work on CMV-specific T cell immunity 20 suggest that affinity-dependent changes in the composition of an inflationary T cell response occur in two phases. During the first phase, T cell populations expressing high-affinity TCRs are preferentially selected to expand. During the second phase, which occurs late during chronic latent infection, T cell populations expressing lower affinity TCRs increasingly contribute to memory inflation, and high-affinity T cell populations begin to show evidence of proliferation-induced senescence 20 . However, sc-derived RNA-seq profiles of short-term dominating T cell families showed little evidence for senescence just days before their contraction began. Instead, the differential behaviors of short- versus long-term dominating T cell families seem to be driven by probabilistic variation in the early abundance of CMPs versus shorter lived subsets. This probabilistic variation in the early life histories of individual T cell families likely provides a separate driver for the establishment of immunodominance hierarchies, which acts in parallel to the deterministic effects of TCR affinity 20 , 50 . Taken together, the experiments outlined here show the immense developmental potential of a single naive CD8 + T cell responding to CMV-based vaccination—with more than 10% of all peripheral CD8 + T cells originating from a single naive precursor. Currently, we have to rely on chance and a sufficient number of epitope-specific naive T cells to find an individual cell that lives up to this potential. 
However, our work demonstrates that a better understanding of how to guide single-cell-derived immune responses toward optimal, early generation of CMPs may be a way to unlock this potential and harness it for the design of future vaccines and cell therapeutic strategies. Methods Mice C57BL/6JOlaHsd mice were purchased from Envigo. SIINFEKL peptide-specific TCR transgenic OT-1 (C57BL/6-Tg(TcraTcrb)1100Mjb/J), CD45.1 (B6.SJL-Ptprca Pepcb/BoyJ), CD90.1 (B6.PL-Thy1 a /CyJ) and CD11c-DTR (B6.FVB-Tg(Itgax-DTR/EGFP)57Lan/J) mice were originally obtained from The Jackson Laboratory and bred under specific pathogen-free conditions at our mouse facility at the Technical University of Munich. Different combinations of congenic markers CD45.1/.2 and CD90.1/.2 were derived from in-house breeding as previously described 23 . Recipient mice were used at an age of 6–12 weeks and donor mice at an age from 6 to 24 weeks. Mice were housed at an ambient temperature of 22 °C ± 2 °C, a humidity of 40–60% and a light/dark cycle of 12 h. All animal experiments were approved by the District Government of Upper Bavaria (Department 5—Environment, Health and Consumer Protection). Murine cytomegalovirus The MCMV- ie2 -SIINFEKL strain was generated by ‘en passant’ mutagenesis as described before from the BAC-derived mouse cytomegalovirus clone pSM3r 3.3 (ref. 31 ). Infections C57BL/6 mice were infected by intraperitoneal injection of MCMV- ie2 -SIINFEKL (5 × 10 5 to 1 × 10 6 plaque-forming units) or intravenously with recombinant Listeria monocytogenes expressing chicken ovalbumin ( Lm -OVA, 5 × 10 3 colony-forming units). Tissue culture The Platinum-E packaging cell line was grown in cDMEM, that is, DMEM (Life Technologies) supplemented with 10% FCS, 0.025% l -glutamine, 0.1% HEPES, 0.001% gentamycin and 0.002% streptomycin. 
Generation of retroviral color barcodes For retrovirus production, Platinum-E packaging cells were transfected via calcium phosphate precipitation with retroviral vectors (mp71, a kind gift from W. Uckert) encoding five fluorescent proteins (GFP, YFP, CFP, BFP and T-Sapphire). The supernatant of the Platinum-E cells was collected at 48 and 72 h after transfection and purified from the remaining cells by centrifugation at 1,500 r.p.m. at 4 °C for 7 min. The supernatant was stored at 4 °C and used within 4 weeks of collection. Retrogenic mice Bone marrow was collected from the tibia and femur of 8–20-week-old CD45.1 + Rag-1 −/− OT-1 mice. After red blood cell lysis, the cells were brought into single-cell suspension and stained with anti-mouse Ly6A/E (Sca-1) and anti-mouse CD3/CD19 antibodies. Propidium iodide was used for live/dead discrimination. Sorted Sca-1-positive CD3/19-negative cells were incubated at 37 °C in cDMEM, supplemented with 20 ng ml −1 murine IL-3, 50 ng ml −1 murine IL-6 and 50 ng ml −1 murine SCF, for 3–4 d in a tissue culture-treated 48-well plate (250,000–300,000 cells per 400 µl). Retroviral transduction of the expanded stem cells with a fluorescent barcode was achieved by spinoculation. In brief, 400 µl of the combined Platinum-E supernatants were centrifuged at 3,000 g and 32 °C for 2 h in a tissue culture-untreated 48-well plate coated with RetroNectin according to the manufacturer’s instructions. Afterward, 200 µl were discarded and 200 µl of stem cells were added in 2× stimulation medium containing transduction enhancers (40 ng ml −1 of mIL-3, 100 ng ml −1 of mIL-6, 100 ng ml −1 of SCF, 1:100 Lentiboost solution A, 1:100 Lentiboost solution B) at a final concentration of 300,000 cells per 400 µl per well. Cells were then spinoculated at 800 g at 32 °C for 1.5 h. After 2 d in culture the transduced stem cells were suspended in FCS at 500,000–1,000,000 cells per 100 µl and injected i.v. 
into irradiated C57BL/6 recipient mice (two times 4.5 Gy, with a resting period of 4 h). Flow cytometry Spleens and lymph nodes were collected and mashed through a 70-µm cell strainer to generate a single-cell suspension, followed by red blood cell lysis. Peripheral blood samples were lysed directly. Cells were incubated with EMA and anti-CD16/CD32 for 30 min, exposed to light at 4 °C and then washed with FACS buffer (PBS with 0.5% BSA and 2 mM EDTA). Afterwards, cells were stained with a panel of fluorophore-conjugated antibodies assembled from the following list and used at the indicated dilution: anti-Ly6A/E (Sca-1) (1:100), anti-CD3 (1:250), anti-CD4 (1:250), anti-CD8 (1:200), anti-CD19 (1:300), anti-CD27 (1:100), anti-CD44 (1:300), anti-CD45.1 (1:100), anti-CD45.2 (1:100), anti-CD62L (1:100), anti-CD90.1 (1:1,000), anti-CD90.2 (1:100), anti-CD127 (1:100), anti-GzmB (1:100), anti-KLRG-1 (1:100) for 30 min at 4 °C in the dark (for multimer staining, see below). After washing with FACS buffer, the cells were resuspended in FACS buffer containing 1% paraformaldehyde and analyzed using a Cytoflex S (Beckman Coulter) or CyAn ADP 9 color (Beckman Coulter) flow cytometer. Summit (v.4.3; Beckman Coulter) and FlowJo (v.9.6 and v.10.4; Becton Dickinson) were used for data acquisition and analysis, respectively. Multimer staining Biotinylated pMHC molecules for the generation of nonreversible multimers were refolded according to established protocols. For multimer staining 0.4 μg of biotinylated pMHC I molecule, 0.5 μg of fluorophore-conjugated streptavidin and 50 µl of FACS buffer for every 5 × 10 6 cells, were preincubated for at least 30 min for multimerization. After incubation with EMA and anti-CD16/32, cells were incubated with the multimer mix for 30 min in total. After 10 min of incubation the respective antibody panel was added. 
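The color barcodes described in the Methods above combine five fluorescent proteins (GFP, YFP, CFP, BFP and T-Sapphire), and the decoding for Fig. 4b gates on their expression patterns. Assuming each barcode corresponds to one distinct presence/absence pattern of these proteins (an assumption consistent with that gating, not a statement of the authors' exact scheme), the available code space can be enumerated as a sketch:

```python
from itertools import combinations

FLUOROPHORES = ("GFP", "YFP", "CFP", "BFP", "T-Sapphire")

def possible_barcodes(fps=FLUOROPHORES):
    """Enumerate every nonempty co-expression pattern of the fluorescent
    proteins; each pattern is one candidate color barcode."""
    return [combo for r in range(1, len(fps) + 1)
            for combo in combinations(fps, r)]
```

This yields 2^5 − 1 = 31 nonempty combinations, comfortably covering the up to 24 distinctly color-barcoded T cells transferred per host.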
Sorting of naive single OT-1, adoptive transfer and tracking of single-cell-derived T cell families For adoptive transfer of single Rag-1 −/− OT-1 T cells, peripheral blood mononuclear cells from retrogenic or congenic mice were stained following red blood cell lysis with antibodies directed against CD8 and CD44. Propidium iodide was added during sorting for live/dead discrimination. Multiple single naive T cells of distinct congenic phenotype or distinct color barcode (PI − , CD8 + , CD44 low ) were sorted into a 96-well V-bottomed plate containing a cell pellet of 400,000 C57BL/6 splenocytes in FCS. Cells were injected i.p. into C57BL/6 or C57BL/6-CD11c-DTR-GFP mice. Adoptive transfers for single T cell fate mapping were performed in a multiplexed fashion. This means that up to 6 × 1 T cells harboring distinct congenic marker profiles or up to 24 × 1 distinctively color-barcoded T cells were sorted into the same well and transferred in parallel into the same host (see also Extended Data Fig. 5 ). Peripheral blood or splenocytes of recipient mice were analyzed as described below. Single-cell-derived T cell families were discriminated by congenic markers and/or retrogenic color barcodes. Re-transfer experiments At day 6 after single-cell adoptive transfer and infection, spleens of primary recipient mice were collected, mashed and lysed to generate a single-cell suspension. A representative sample of each spleen was analyzed via flow cytometry to identify spleens containing single-cell-derived T cell families. From spleens that contained detectable T cell families, three aliquots of 1.5 × 10 7 splenocytes were separately re-transferred into three secondary recipients that were subsequently infected with MCMV- ie2 -SIINFEKL. The re-transferred fractions of each T cell family were further tracked via longitudinal blood sampling. Hemisplenectomy Laparotomy was performed on anesthetized mice at day 8 p.i. 
with MCMV- ie2 -SIINFEKL by a left subcostal incision of the skin and the peritoneum. The spleen was mobilized and approximately one-third of the spleen was ligated and removed. The remaining spleen was cauterized. Peritoneum and skin were closed by surgical stitches. The obtained spleen sample was placed in PBS until further processing. Single-cell-derived RNA sequencing Of each single-cell-derived T cell family recovered via hemisplenectomy at day 8 p.i., 100 cells were sorted into 10 µl of PBS + RNase inhibitor. Samples were immediately frozen in liquid N 2 and stored at −80 °C until sequencing. RNA was isolated using the Arcturus Pico Pure Isolation Kit according to the manufacturer’s recommendations. Subsequently, complementary DNA was generated using the SMART Seq V4 Ultra low input RNA Kit (TaKaRa), followed by library preparation with the Nextera XT DNA Kit (Illumina) according to the manufacturer’s recommendations. Samples were then sequenced on a HiSeq 4000 (Illumina) and data were acquired using the HCS (v.3.4.0; Illumina) software. Single-cell RNA sequencing OT-1 T cells were sorted at day 60 p.i. and directly used for droplet-based scRNA-seq using the Chromium Single Cell 3′ Library & Gel Bead Kit v2 (10X Genomics). Sequencing was performed after library preparation with a Chromium Single Cell 3′ Library Construction Kit on a HiSeq 4000 (Illumina) and data were acquired using the HCS software. Statistical analysis Parameters of single-cell-derived T cells were not found to be normally distributed. Therefore, significances were determined using a two-sided Mann–Whitney U -test and correlations were measured as Spearman correlations, unless otherwise indicated. Where appropriate, ‘0’ values were excluded for calculating Spearman correlations. Differential gene expression analysis We used the DESeq2 R package 51 (v.1.26.0) for differential gene expression analysis. 
Since the single-cell-derived RNA-sequencing data stemmed from two different experiments, we corrected for potential batch effects by incorporating a batch factor in the generalized linear model of DESeq2. An absolute log-fold change greater than one and a false discovery rate cut-off of 10% were employed to select significantly over- and under-represented genes in long-term dominating versus acutely dominating or versus all remaining T cell families. Gene ontology term enrichment analysis To interpret the collective signature of the significantly over- and under-represented genes in long-term dominating T cell families, we performed gene ontology (GO) term enrichment analysis using the goseq R package 52 . We selected the top 10 GO terms, with respect to P value of the Biological Process category, that were significantly enriched among over- or under-represented genes (Extended Data Fig. 2a–c ). In the comparison of long-term dominators versus all remaining T cell families, we only performed this analysis with respect to the downregulated genes, as there were only three significantly upregulated genes. P values were calculated as follows: by default, the goseq package uses the Wallenius noncentral hypergeometric distribution to approximate the true null distribution for GO category membership, that is, the true distribution of the number of category members among differentially expressed genes. This distribution assumes that within a category all genes have the same probability of being chosen. Having established a null distribution, each GO category was then tested for over- and under-representation amongst the set of differentially expressed genes and the null was used to calculate a P value for under- and over-representation. Gene-set enrichment analysis We performed gene-set enrichment analysis (GSEA) for the up- and downregulated genes of the small–large group using the fgsea R package (Extended Data Fig. 2d ). 
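fgsea scores each gene set with a running-sum enrichment statistic. For illustration, the sketch below implements the classic unweighted form of that statistic; fgsea itself uses a weighted variant and an efficient permutation scheme, so this is not the package's algorithm:

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted running-sum statistic: walk down the list of genes ranked
    from most up- to most down-regulated, stepping up at gene-set members and
    down otherwise; the ES is the maximal deviation from zero reached by this
    walk. Assumes the ranked list contains both members and non-members."""
    hits = [g in gene_set for g in ranked_genes]
    n_hit = sum(hits)
    n_miss = len(ranked_genes) - n_hit
    running, best = 0.0, 0.0
    for h in hits:
        running += 1.0 / n_hit if h else -1.0 / n_miss
        if abs(running) > abs(best):
            best = running
    return best
```

A gene set concentrated at the top of the ranked list drives the running sum toward +1 (enrichment among upregulated genes); one concentrated at the bottom drives it toward −1.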
Data from sc-derived RNA-seq were compared to a set of genes associated with cellular senescence ( GO:0090398 ). GSEA yields an enrichment score (ES) and a corresponding P value for every gene set. A positive ES means enrichment of the corresponding gene set in the upregulated genes and, similarly, a negative ES means enrichment of the gene set in the downregulated genes. P values were calculated as follows: for every gene set P , an empirical null distribution was obtained by sampling n random gene sets of the same size as P . The empirical, nominal P value of the observed ES was then calculated relative to this null distribution. Generation of gene expression count matrix The software Cell Ranger (v.2.1.0; 10X Genomics) was used for demultiplexing and alignment (mouse reference genome GRCm38 release 84) using default parameters. The resulting bam file served as input for the software tool velocyto 27 (v.0.17.17), which performed the counting of spliced and unspliced RNA molecules. Preprocessing and quality control Preprocessing and quality control of the data was carried out using the Python software package SCANPY 53 (v.1.4.4). A total of 73 cells were removed that expressed fewer than 200 genes, showed more than 3% mitochondrial counts or highly expressed genes indicative of B cells or erythrocytes (for example, Cd79a/b, Hba-a1). Gene filtering thresholds lay at 20 counts for spliced and at 10 counts for unspliced messenger RNA molecules. Gene counts were per-cell normalized and (log + 1)-transformed and the R package scater 54 (v.1.12.2) was utilized to identify 830 highly variable genes to be used in downstream analysis. Dimensionality reduction, clustering and visualization SCANPY was also used to perform dimensionality reduction and clustering. 
The neighborhood graph was based on n = 30 principal components and k = 30 neighbors. Clustering was performed using the Leiden algorithm with resolution r = 0.6. UMAP dimensionality reduction was computed using SCANPY’s default parameters. Signature scores were calculated using the FastProject tool 55 (v.1.1.4). The signature stems from the differential gene expression analysis of the sc-derived RNA-seq data of the T cell families following hemisplenectomy at day 8 p.i., comparing the two most strongly inflating T cell families with the rest of the families. Over-represented genes contributed positively to the signed signature list, while under-represented genes contributed negatively. Trajectory inference The software tool scvelo 56 (v.0.1.24) was used to compute RNA velocities within a deterministic model 27 . Mean expression values for spliced and unspliced mRNA counts were calculated by averaging over nearest neighbors ( k = 30 cells, n = 30 principal components). The R package CytoTRACE 44 (v.0.1.0) was used to compute a relative differentiation score for each cell. The cell with the largest CytoTRACE score (most undifferentiated) served as the root cell for calculating the diffusion pseudotime 57 using SCANPY. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability RNA sequencing data generated for this study have been deposited in the Gene Expression Omnibus under accession number GSE157644 . All other data generated or analyzed during this study are included in the published article (and its supplementary information files) or are available from the corresponding author upon reasonable request. Source data are provided with this paper. Code availability No custom code or algorithms were used in this study. | For a person to acquire immunity to a disease, T cells must develop into memory cells after contact with the pathogen. 
Until now, the number of cells that do this was believed to depend above all on the magnitude of the initial immune response. A team of researchers at the Technical University of Munich (TUM) has now called this into question. When a virus enters the body, it is picked up by certain cells of the immune system. They transport the virus to the lymph nodes, where they present its fragments, known as antigens, to CD8+ T cells, which are responsible for the control of viral infections. Each of these cells carries a unique T cell receptor on its surface that can recognize certain antigens. However, only very few T cell receptors match a given viral antigen. To bring the infection under control and maximize the defenses against the virus, these few antigen-specific T cells start dividing rapidly and develop into effector T cells. These kill virus-infected host cells and then die off themselves once the infection is cleared. According to the generally accepted theory, some of these short-lived effector cells turn into memory T cells, which persist in the organism long term. If the same pathogen enters the body again, memory T cells are already present and ready to fight the invader more swiftly and effectively than during the first encounter.

Memory cells and their origin
"Prevailing scientific opinion says that activated T cells first become effector cells and only then gradually develop into memory cells," says Dr. Veit Buchholz, a specialist in microbiology and working group leader at the Institute for Medical Microbiology, Immunology and Hygiene at TUM. "In our view, however, that isn't the case. It would mean that the more effector cells are formed after contact with the pathogen, the more numerous the memory cells would become." However, Buchholz and his colleagues observed a different course of events and have now published their results in the journal Nature Immunology.
"We investigated the antiviral immune responses resulting from individual activated T cells in mice and traced the lineage of the ensuing memory cells using single-cell fate mapping," reports first author Dr. Simon Grassmann. "Based on these experiments, we were able to show that certain 'T cell families' descended from individual cells form up to 1000 times more 'memory' than others. However, these long-term dominating T cell families only contributed little to the magnitude of the initial immune response, which was dominated by effector cells derived from other shorter-lived T cell families." At the level of individual cells, it therefore became evident that development of effector and memory cells segregates at a much earlier stage than previously believed: "Already in the first week after the confrontation with the pathogen, we saw major differences in the transcriptomes of the detected T cell families," says Lorenz Mihatsch, also a first author of the study. "Normally at this time of the immune response CD8+ T cells are enriched in molecules that help to kill virus infected cells. However, we found no indication of these cytolytic molecules in the long-term dominating T cell families. Instead, they were already geared exclusively towards memory development at this early stage." Optimization of vaccines These results could help to improve vaccine development in the future, says Veit Buchholz: "To generate an optimal immune response through vaccination, the body needs to produce as many memory cells as possible. For that purpose, it is important to have a precise understanding of how individual T cells are programmed." Buchholz's study might also prove useful in helping to recognize sooner whether a new vaccine is effective. "To determine the long-term strength of an immune response, it could be helpful to measure the number of memory precursors within a few days of administering a vaccine," says Buchholz. | 10.1038/s41590-020-00807-y |
Medicine | New treatment options for the deadliest of cancers | Katarzyna Z. Haza et al, RAS-inhibiting biologics identify and probe druggable pockets including an SII-α3 allosteric site, Nature Communications (2021). DOI: 10.1038/s41467-021-24316-0 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-021-24316-0 | https://medicalxpress.com/news/2021-06-treatment-options-deadliest-cancers.html | Abstract RAS mutations are the most common oncogenic drivers across human cancers, but there remains a paucity of clinically-validated pharmacological inhibitors of RAS, as druggable pockets have proven difficult to identify. Here, we identify two RAS-binding Affimer proteins, K3 and K6, that inhibit nucleotide exchange and downstream signaling pathways with distinct isoform and mutant profiles. Affimer K6 binds in the SI/SII pocket, whilst Affimer K3 is a non-covalent inhibitor of the SII region that reveals a conformer of wild-type RAS with a large, druggable SII/α3 pocket. Competitive NanoBRET between the RAS-binding Affimers and known RAS binding small-molecules demonstrates the potential to use Affimers as tools to identify pharmacophores. This work highlights the potential of using biologics with small interface surfaces to select unseen, druggable conformations in conjunction with pharmacophore identification for hard-to-drug proteins. Introduction The RAS family of small GTPases consists of four members, KRAS4A, KRAS4B, HRAS, and NRAS, which act as bi-directional molecular switches that cycle between an inactive GDP-bound form, and an active GTP-bound form 1 . Mutations in RAS are the most common oncogenic drivers, with KRAS being the most frequently affected member; especially in pancreatic, lung, and colon cancer 1 . This makes RAS a strong therapeutic target, but despite having been identified as a drug target for over 30 years, only recently have compounds been developed that show promise in pre-clinical trials 2 . 
This paucity of agents has been caused by the lack of clearly druggable pockets on the surface of RAS. However, recent work has identified two pockets that may be amenable to drug binding 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 . The first of these, the SI/II-pocket, exists between the Switch I and Switch II regions of RAS in an area involved in the binding of the nucleotide exchange factor, Son of Sevenless (SOS). Several groups have independently developed compounds that bind this pocket with varying affinities and efficacies, predominantly in the micromolar range 5 , 6 , 7 , 8 , except for BI-2852, which has nanomolar binding affinity and efficacy 11 . The second, the SII-pocket, is located under the Switch II loop and was identified using a tethered warhead approach relying on the reactive nature of the cysteine in the G12C mutant 9 , 10 . This pocket is not fully formed in published KRAS structures in the absence of the inhibitor; however, a groove is evident in some structures and it has been identified as a potential allosteric site computationally 3 , 4 . The development of tethered compounds targeting this pocket has led to the only series of RAS inhibitors currently in clinical trials 12 , 13 . This compound series is limited to cancers harboring G12C mutations; however, a cyclic peptide, KRpep-2d, with a preference for G12D mutations has been identified that binds in a similar pocket 14 , 15 , showing that biologics can also probe pockets in KRAS. It would be interesting to determine whether this pocket can be non-covalently exploited in other RAS isoforms and mutants, giving wider application to RAS-driven cancers. Besides the cyclic peptide KRpep-2d, targeting of RAS has also been explored using scaffold-based biologics.
Antibodies and their alternatives have been developed that bind RAS with nanomolar affinities, inhibiting nucleotide exchange, interactions with RAF, and activation of downstream pathways concurrent with negative impacts on RAS-induced cell growth and transformation 16 , 17 , 18 , 19 , 20 , 21 . Amongst others, these include scFvs, DARPins, and monobodies 16 , 17 , 19 , 21 , 22 . Although to date some of these have been used to assist the identification of small molecules 5 , 7 , 23 , none have directly probed druggable pockets on RAS, as the majority of these scaffold-based biologics tend to bind over large protein interfaces 16 , 17 , 19 , 22 that are difficult to mimic with small molecules 24 . As some scaffold-based biologics form smaller interfaces 21 , 25 , 26 there emerges the tantalizing prospect that biologics could be used as tools to identify druggable pockets and novel conformers, and could also have the potential to act as pharmacophore templates for in silico-informed drug discovery. Here, we explore the possibility of using Affimer proteins, an established biologic with a small probe surface formed by two variable regions 26 known to bind at protein interaction ‘hotspots’ 25 , 27 , 28 , to probe RAS for druggable pockets and conformers that might be amenable to small-molecule inhibition. Such a direct approach utilizing small probe surfaces has not previously been used with scaffold-based biologics and could revolutionize drug discovery, by exemplifying a pipeline for small molecule design that has the potential to unlock the currently ‘undruggable’ proteins. Here, we demonstrate the use of Affimer proteins, a scaffold-based biologic, to identify and directly probe two druggable pockets on wild-type KRAS associated with inhibition of nucleotide exchange and effector molecule binding. 
The Affimer that binds to the SI/SII pocket actually mimics the current small molecule inhibitors that target the pocket as seen in a competitive NanoBRET assay, providing a proof-of-principle for using Affimer-target interfaces as pharmacophore templates. The Affimer that binds the SII region selects a previously unseen conformer of the pocket present in wild-type KRAS. The Switch II region adopts a more open position, demonstrating that selecting and targeting this site via non-covalent binding is possible. Our work supports two important concepts in the use of biologics: firstly, they can be used to select for, and stabilize, conformations that are only present as a small fraction of the conformations of the target protein in solution, particularly those that may not be present in extant crystal structures; and secondly, scaffold-based biologics can act as drug discovery tools and/or pharmacophore templates for identification and development of small-molecule inhibitors. This approach is likely to be applicable to other important therapeutic targets and presents the exciting potential for an alternative pipeline for drug discovery. Results Identification and biochemical characterization of anti-RAS Affimers Seven unique Affimer proteins that bind wild-type KRAS in both the inactive GDP-bound form and the active form, bound to the non-hydrolyzable GTP analog-GppNHp, were isolated by phage display 29 (Supplementary Table 1 ). To identify inhibitors of RAS, these Affimer proteins were screened at 10 µM for their ability to inhibit SOS1-mediated nucleotide exchange, the primary process in RAS activation. Three of the Affimers, K3, K6, and K37, showed clear inhibition of this process (Fig. 1a ), the remaining four Affimers that bound to KRAS showed partial (Affimers K19 and K68) or no inhibition (Affimers K69 and K91) of nucleotide exchange. The latter four Affimers are not discussed further, besides K69 which was used as a control in the Nano-BRET assays. 
The dose-dependency of nucleotide exchange inhibition by the 3 selected Affimers was then assessed (Fig. 1b ). Affimer K3 displayed the greatest inhibition of nucleotide exchange on wild-type KRAS with an IC 50 of 144 ± 94 nM, with Affimers K6 and K37 also displaying strong inhibition with IC 50s of 592 ± 271 nM and 697 ± 158 nM, respectively (Supplementary Table 2 ). Next, the abilities of the inhibitory RAS-binding Affimers, K3, K6, and K37 to disrupt the interaction of RAS with its effector protein RAF were determined by KRAS:RAF immunoprecipitation experiments using purified proteins (Fig. 1c and d ). Again, all three Affimer proteins caused a significant reduction in the amount of KRAS immunoprecipitated, with K3 being the most potent with a 79% reduction compared to a control Affimer in which the variable regions are AAAA and AAE, respectively, while K6 and K37 showed 40% reductions ( p < 0.0001, p = 0.0307 and p = 0.0439, respectively; One-way ANOVA with Dunnett’s post hoc test). Fig. 1: Biochemical analysis of RAS-binding Affimers. a Nucleotide exchange assay shows 3 Affimers, K3 (green triangles), K6 (turquoise triangles), and K37 (yellow triangles), inhibit SOS1-mediated nucleotide exchange, whilst K19 (orange diamonds) and K68 (magenta triangles) show inhibition of intrinsic nucleotide exchange and K69 (purple hexagons) and K91 (navy stars) do not inhibit nucleotide exchange at 10 µM. KRAS alone is shown as black squares and in the presence of SOS1 cat as red circles. b Affimers K3, K6, and K37 demonstrate dose-response inhibition of KRAS WT nucleotide exchange (Data fitted to the Hill Model ([Affimer] vs. response (three parameters)), n = 3 independent experiments for K3 and K6 and n = 5 for K37). c Immunoprecipitation of KRAS with GST-RAF-RBD is inhibited by the RAS-binding Affimers, K3, K6, and K37 compared to control Affimer (Variable regions of AAAA and AAE) which does not differ from the no Affimer (PBS). 
GST alone does not pull down RAS (Quantification using ImageQuantTL; n = 3 independent experiments). d A representative Western blot of the pull-down experiment from c . Data are mean ± SEM, One-way ANOVA with Dunnett’s post-hoc test * p = 0.0307 K6, p = 0.0439 K37, *** p = 0.0002 GST, **** p < 0.0001 K3. SOScat catalytic domain of SOS1 (Son of Sevenless). Con. Control Affimer, RBD RAS Binding Domain, IP immunoprecipitation, GST Glutathione-S-Transferase.

Next, as the Affimer proteins had been selected against KRAS, their specificity for distinct RAS isoforms was assessed by nucleotide exchange assays. Whilst K6 and K37 showed no isoform specificity (Supplementary Table 2 ), Affimer K3 demonstrated a degree of isoform specificity, with weaker inhibition of HRAS and no measurable inhibition of NRAS (IC 50 values of 144 ± 94 nM for KRAS and 2585 ± 335 nM for HRAS; not obtainable for NRAS). The effects of the Affimer proteins on mutant forms of KRAS were also evaluated by nucleotide exchange assays with recombinant G12D, G12V, and Q61H KRAS mutants. Only Affimer K3 displayed a distinct mutant profile, with 20-fold weaker inhibition of Q61H (IC 50 = 3005 ± 865 nM), suggesting specificity towards wild-type KRAS and G12 mutations.

Affimer proteins bind to intracellular RAS and inhibit downstream signaling
We then examined whether the Affimer proteins retained their ability to interact with, and inhibit, RAS in human cells by transiently transfecting HEK293 cells with plasmids expressing His-tagged Affimer proteins. Affimers K3, K6, and K37 all showed the ability to pull down endogenous RAS, while the control Affimer showed no such activity (Fig. 2a ), demonstrating binding within live cells. To understand the effects of Affimer protein binding to endogenous RAS on downstream signaling, activation of the MAPK pathway was explored.
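The IC 50 values above come from fitting dose-response data to a three-parameter model (Fig. 1b legend: "[Affimer] vs. response (three parameters)"). A minimal pure-Python sketch on synthetic data, assuming a Hill slope fixed at 1 and using a coarse log-spaced grid search in place of the nonlinear least-squares fitting the study's analysis software would perform:

```python
def hill(conc, top, bottom, ic50):
    """Three-parameter dose-response model with the Hill slope fixed at 1:
    the response falls from `top` (no inhibitor) towards `bottom`."""
    return bottom + (top - bottom) / (1.0 + conc / ic50)

def fit_ic50(concs, responses, grid):
    """Crude least-squares estimate of IC50 over a log-spaced grid, with
    top/bottom pinned to the extreme observed responses (a real analysis
    would refine all three parameters simultaneously)."""
    top, bottom = max(responses), min(responses)
    def sse(ic50):
        return sum((r - hill(c, top, bottom, ic50)) ** 2
                   for c, r in zip(concs, responses))
    return min(grid, key=sse)

# synthetic exchange-rate data generated with a "true" IC50 of 150 nM
concs = [0.0, 10.0, 30.0, 100.0, 300.0, 1000.0, 1e6]   # nM
responses = [hill(c, 100.0, 0.0, 150.0) for c in concs]
grid = [10.0 * 1.1 ** i for i in range(80)]             # ~10 nM to ~19 uM
estimate = fit_ic50(concs, responses, grid)             # lands near 150 nM
```

By construction, the response at the IC 50 is halfway between top and bottom, which is the defining property of the fitted parameter.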
HEK293 cells were transiently co-transfected with plasmids encoding turbo-GFP (tGFP)-tagged Affimers and FLAG-tagged ERK1 constructs and were then stimulated with epidermal growth factor (EGF); this stimulation normally induces phosphorylation of ERK1 via the MAPK pathway. A co-transfection approach was used to ensure assessment of ERK1 phosphorylation in Affimer-expressing cells only; a similar approach has been used previously 20 . All three Affimer proteins significantly reduced phosphorylation of the recombinant ERK1 (One-way ANOVA with Dunnett’s post-hoc test p = 0.0002 K3, p < 0.0001 K6, and p < 0.0001 K37, respectively). However, the effect of Affimer K3 was lower in magnitude than those of K6 and K37, with K3 showing only a 31% reduction, compared with an 85% reduction for K6 and a 69% reduction for K37 (Fig. 2b and c ).

Fig. 2: Affimers bind to intracellular RAS and inhibit downstream signaling. a Ni-NTA immunoprecipitation of transiently expressed, intracellular His-tagged Affimers with endogenous RAS from HEK293 cells. RAS-binding Affimers K3, K6, and K37 pulled down endogenously expressed RAS, whilst the control Affimer did not. A representative blot from 3 independent experiments is shown. b , c HEK293 cells were co-transfected with FLAG-ERK1 plasmid and pCMV6 encoding tGFP-tagged Affimers. Twenty-four hours post-transfection, cells were serum-starved and treated with EGF for 5 min. FLAG-ERK1 was precipitated from cell lysates and analyzed for phosphorylation. b shows a representative blot from 3 independent experiments, quantified in ( c ), showing that Affimers K6 and K37 significantly reduced ERK phosphorylation by over 60% while Affimer K3 reduced it by 30% (One-way ANOVA with Dunnett’s post-hoc test ** p = 0.0002 K3, **** p < 0.0001 K6, and p < 0.0001 K37).
d RAS-binding Affimers reduce EGF-induced phosphorylation and nuclear translocation of endogenous ERK in HEK293 cells, as measured by immunofluorescence as a percentage of the control Affimer, with Affimers K6 and K37 showing inhibition of over 80% whilst Affimer K3 inhibits by 50% in GFP-expressing cells over 1500 arbitrary units (One-way ANOVA with Dunnett’s post-hoc test **** p < 0.0001, n = 3 independent experiments). e Representative images of the effects of RAS-binding Affimers, K3, K6, and K37, and the control Affimer on EGF-stimulated upregulation of pERK in HEK293 cells. A selection of GFP-positive cells (green) expressing RAS Affimers (arrowed) show reduced staining for pERK (yellow). Scale bars are 50 μm. f Assessment of the effect of Affimer expression level on pERK inhibition as determined by immunofluorescence; increased GFP expression, and thus Affimer expression, shows a reduction in pERK nuclear translocation ( n = 3 independent experiments). g Inhibition by RAS-binding Affimers of EGF-induced phosphorylation and nuclear translocation of endogenous ERK in mouse embryonic fibroblasts (MEFs) expressing single human RAS isoforms, as measured by immunofluorescence as a percentage of the control Affimer. Affimers K6 and K37 showed inhibition in all RAS isoforms, whilst Affimer K3 inhibited KRAS, and HRAS to a lesser degree, with no inhibition of NRAS (Two-way ANOVA with Tukey’s post-hoc test * p = 0.0237 HRAS vs. NRAS and ** p = 0.0096 KRAS vs. NRAS, n = 3 independent experiments). Data are mean ± SEM. Con. Control Affimer (Variable regions of AAAA and AAE), EGF epidermal growth factor, tGFP turbo green fluorescent protein, WCL whole cell lysate, IP immunoprecipitation, Empty transfection reagents only.

To further study the impacts of RAS-binding Affimer proteins on ERK phosphorylation, we developed an immunofluorescence assay to allow the phosphorylation levels and nuclear translocation of endogenous ERK to be examined.
HEK293 cells were transiently transfected with tGFP-tagged Affimer-expressing constructs, stimulated with EGF, fixed and stained with an anti-phospho-ERK (pERK) antibody, and analyzed for alterations to nuclear pERK levels. In accordance with results from the immunoprecipitation experiments, expression of all three Affimer proteins resulted in significant reductions in pERK levels (One-way ANOVA with Dunnett’s post-hoc test p < 0.0001 for all three Affimers), with K3 having a less pronounced effect (ca. 50% reduction for K3, compared with 90% for K6 and 85% for K37) (Fig. 2d and e ). There were no significant differences ( p = 0.429, One-way ANOVA) in the percentage of tGFP-positive cells between the different Affimer constructs. We assessed whether the decrease in pERK nuclear translocation correlated with increased Affimer expression at the cellular level by measuring pERK inhibition at a number of different tGFP intensities. Increased tGFP expression, and thus Affimer expression, showed a reduction in pERK nuclear translocation, with a plateau reached at 1000 arbitrary units (Fig. 2f ). We speculated that the lower level of inhibition observed in K3-transfected cells was due to the RAS variant specificity of this Affimer: HEK293 cells express all RAS genes, but predominantly HRAS 30 , against which K3 is 20-fold less active. To test this hypothesis, the Affimer proteins were transfected into Mouse Embryonic Fibroblasts (MEFs) that have been engineered to express single human RAS genes (gift from William Burgen at Frederick National Laboratory for Cancer Research, Maryland, USA). All three Affimer proteins showed robust inhibition of ERK phosphorylation in KRAS-expressing MEFs, with K6 and K37 showing a similar level of inhibition in MEFs expressing HRAS and NRAS.
By contrast, K3 showed variant specificity, with inhibition of NRAS significantly ablated, whilst the degree of inhibition for HRAS was reduced, but not significantly, when compared with that of KRAS (Two-way ANOVA with Tukey’s post hoc test p = 0.0237 for NRAS vs. HRAS and p = 0.0096 for NRAS vs. KRAS) (Fig. 2g ). Thus, the cellular data support the biochemical data that Affimer K3 has a preference for KRAS over NRAS, with HRAS values being intermediate. To evaluate the impact of the Affimer proteins on MAPK signaling in the presence of mutant, oncogenic forms of KRAS, the following cancer cell lines were utilized: Panc 10.05 (KRAS G12D ), SW620 (KRAS G12V ), and NCI-H460 (KRAS Q61H ). As anticipated from the biochemical data, all three Affimer proteins showed robust inhibition of FLAG-ERK1 phosphorylation in Panc 10.05 and SW620 cells ( p < 0.0001 for all Affimers in Panc 10.05 cells and p = 0.0026 K3, p = 0.0054 K6 and p = 0.0088 K37 in SW620 cells, respectively, One-way ANOVA with Dunnett’s post-hoc test) (Fig. 3a and b ). However, in the Q61H-mutant NCI-H460 cells, despite all three Affimers showing inhibition of FLAG-ERK1 phosphorylation, the magnitude was reduced by 30–40% for K3 and K37 compared to the other mutant cell lines ( p = 0.0269 K3, p = 0.0062 K6, p = 0.0210 K37, One-way ANOVA with Dunnett’s post hoc test) (Fig. 3c ), supporting the biochemical data that the Affimer proteins show mutant specificities. It is possible that the differences in responses to Affimers between cell lines are a result of variations in Affimer-expression levels relative to RAS. The impacts of the Affimer proteins on mutant KRAS were therefore also tested using an immunofluorescence assay in conjunction with MEF cells expressing the KRAS mutants G12D, G12V, and Q61R, allowing analysis to be restricted to cells with high Affimer expression, in which RAS is probably saturated.
Only Affimer K3 in the Q61R background showed a significant lack of inhibition of pERK nuclear intensity ( p = 0.0221 Two-way ANOVA with Tukey’s post-hoc test) (Fig. 3d ), supporting the observation that Affimer K3 distinguishes between KRAS Q61 mutants and other KRAS variants. Together, these cellular data confirm that RAS-binding Affimer proteins are functional in cells, reducing ERK phosphorylation and that Affimer K3 demonstrates RAS variant and mutant specificity in cells, as well as in vitro. Given the similarities in biochemical profile and cellular activities between K6 and K37, together with a similar amino acid sequence motif (Supplementary Table 1 ), we postulate that they are likely to bind the same epitope although K37 showed less potency in the nucleotide exchange assays. We, therefore, focused on Affimer proteins K3 and K6 in our subsequent structural studies. Fig. 3: RAS-binding Affimers show different mutant specificities. a Panc 10.05 (KRAS G12D ), ( b ) SW620 (KRAS G12V ), and ( c ) NCI-H460 (KRAS Q61H ) cells were co-transfected with FLAG-ERK1 plasmid and pCMV6 encoding tGFP tagged Affimers. Twenty-four hours post transfection cells were serum-starved for 1 h. FLAG-ERK1 was precipitated from cell lysates using anti-FLAG beads and analyzed for phosphorylation by immunoblotting with anti-ERK and anti-phospho-ERK antibodies. Representative blots are shown together with quantification graphs. All three RAS-binding Affimers, K3, K6, and K37, inhibit ERK phosphorylation in Panc 10.05 cells ( a ) One-way ANOVA with Dunnett’s post-hoc test compared to control Affimer, **** p < 0.0001) and SW620 cells ( b ) One-way ANOVA with Dunnett’s post-hoc test compared to control Affimer **p = 0.0026 K3, p = 0.0054 K6, and p = 0.0088 K37). The magnitude of inhibition by Affimers K3 and K37 is reduced in NCI-H460 cells ( c ) One-way ANOVA with Dunnett’s post-hoc test compared to control Affimer *p = 0.0269 K3, **p = 0.0062 K6, *p = 0.0210 K37). 
d RAS-binding Affimers inhibit EGF-induced phosphorylation and nuclear translocation of endogenous ERK in mouse embryonic fibroblasts (MEFs) expressing single human KRAS mutants (G12D, G12V, Q61R), as measured by immunofluorescence as a percentage of the control Affimer. Only Affimer K3 shows weaker inhibition in the Q61R-expressing cell line (Two-way ANOVA with Tukey’s post-hoc test compared to KRAS WT for each Affimer, * p = 0.0221). Data are mean ± SEM, n = 3 independent experiments for all cell lines. Con. Control Affimer (Variable regions of AAAA and AAE), EGF epidermal growth factor, tGFP turbo green fluorescent protein, IP immunoprecipitation, Empty transfection reagents only, WT wild type.

K6 binds the SI/II hydrophobic pocket on KRAS
The crystal structure of Affimer K6 in complex with GDP-bound wild-type KRAS was determined at 1.9 Å resolution, revealing that Affimer K6 binds to a shallow hydrophobic pocket on KRAS between the switch regions (Fig. 4a and Supplementary Table 3 ). The Affimer K6 binding site overlaps that of SOS1, providing structural evidence that K6 acts as a SOS1-competitive inhibitor (Supplementary Fig. 1a ). The binding site further overlaps with that of the RAS-binding domain (RBD) of RAS-bound RAF (Supplementary Fig. 1b ), supporting the RAS:RAF immunoprecipitation results. Affimer residues 40–45 from variable region 1 are important in the binding interface between KRAS and Affimer K6 (Fig. 4b and c ). The Affimer tripeptide motif formed of P42, W43, and F44 binds the shallow hydrophobic pocket of KRAS (Fig. 4b –top left panel). The P42 residue forms no interactions with KRAS; however, it is critical for function, suggesting a geometric and structural role in facilitating the interactions formed by W43 and F44. W43 of Affimer K6 and V7/L56 from KRAS form a hydrophobic cluster, strengthened by Affimer residues T41 and Q45 forming hydrogen bonds with KRAS switch region residues D38/S39 and Y71, respectively (Fig.
4b –top right panel). The importance of these amino acid residues for K6 function was confirmed by mutational analysis. Individual replacement of P42, W43, F44, or Q45 with alanine reduced Affimer-mediated inhibition of nucleotide exchange (Fig. 4d ). These data also revealed the importance of residues F40, N47, and R73 for the inhibitory function of Affimer K6. Indeed, complete removal of the second variable region abolished the inhibitory ability of K6 (Fig. 4d ∆VR2). This effect is most likely a result of F40, N47, R73, and Q45 forming intra-Affimer hydrogen bonds that stabilize the tripeptide P42, W43, F44 (Fig. 4b –bottom panel). These data suggest that the functional Affimer motif responsible for binding and inhibition of KRAS is small, in agreement with the total interacting interface estimated by PISA analysis 31 of 478.3 Å 2 for the K6:KRAS complex, a substantially smaller area than most common protein-protein interaction surfaces. The combination of a functional motif and a small interaction interface provides a strong basis from which to consider the development of small-molecule inhibitors.

Fig. 4: Variable region 1 of Affimer K6 binds between Switch I and Switch II of KRAS. a Affimer K6 (green) was co-crystallized with KRAS GDP (slate) and solved to a resolution of 1.9 Å. The switch I (magenta), switch II (orange) and α3 helix (black) are depicted, showing their relative positioning around variable region 1 of Affimer K6. b Intramolecular and intermolecular interactions in the KRAS:Affimer K6 co-crystal structure are depicted; black dotted lines represent the hydrogen bonds that stabilize the critical hydrophobic contacts. c Affimer K6 (VR1) and KRAS interactions shown in planar form. Hydrogen bonds are shown as black, dotted lines between the contributing atoms; additional hydrophobic interactions are represented by arcs, their spokes radiating towards the residues they contact.
(Data were generated using PDBePISA (CCP4i) 31 and verified in MacPyMOL). d Alanine scanning of the variable regions of Affimer K6 highlights the Affimer residues important for inhibition of nucleotide exchange, and the importance of VR2; that is, the residues important for both KRAS:K6 interactions and the intra-Affimer interactions that stabilize the conformation of Affimer K6. Unaltered K6 is shown in black, variable region 1 residues are shown in dark gray, variable region 2 residues in light gray, and removal of variable region 2 (ΔVR2) in white ( n = 3 independent experiments). e Comparison of the Affimer K6 tripeptide, P42, W43, F44 (green), with the small molecules (yellow) that bind the same SI/SII pocket. Data are mean ± SEM, One-way ANOVA with Dunnett’s post hoc test * p = 0.0224 Q45A, ** p = 0.0002 F40A, **** p < 0.0001 P42A, W43A, F44A, R73A, and ΔVR2. Images were generated in MacPyMOL v1.7.2.3 and ChemDraw Prime 16.0. VR variable region.

Affimer K6 binds the SI/SII pocket that has been previously documented 3 , 4 , 5 , 6 , 7 , 8 , 11 , and alanine scanning has suggested that a functional pharmacophore from K6 is responsible for the observed inhibition. This PWFQxN peptide motif is also present in Affimer K37, and it is thus likely that K37 interacts in a similar manner to K6, but this remains to be confirmed. This binding pocket has previously been identified, and a number of small molecules exist to target it, including DCAI 6 , compound 13 8 , Abd-7 7 , and BI-2852 11 (Supplementary Table 3 ). We measured the affinity of K6 for both GDP-bound and GppNHp-bound KRAS by SPR to test whether this was comparable to these small molecules. Affimer K6 bound both forms of KRAS with low nanomolar affinities and showed a significant preference for GDP-bound KRAS ( K D = 1.36 ± 0.87 nM for GDP and K D = 7.88 ± 1.09 nM for GppNHp, Student t -test p = 0.0095) (Fig. 5a and b ).
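The SPR sensorgrams were fitted with a Langmuir 1:1 model (Fig. 5). As an illustration of what the reported K D values imply at equilibrium (a back-of-the-envelope sketch, not the fitting procedure), fractional occupancy under 1:1 binding is θ = [L]/( K D + [L]):

```python
def occupancy(conc_nm, kd_nm):
    """Equilibrium fractional occupancy for a 1:1 Langmuir binding model,
    assuming the free analyte concentration is approximately the total."""
    return conc_nm / (kd_nm + conc_nm)

# K_D values for Affimer K6 measured by SPR (nM)
kd_gdp, kd_gppnhp = 1.36, 7.88

# at 10 nM KRAS, K6 would be ~88% occupied by the GDP form vs ~56% by GppNHp
theta_gdp = occupancy(10.0, kd_gdp)
theta_gppnhp = occupancy(10.0, kd_gppnhp)
```

At [L] = K D the occupancy is exactly 50%, which gives a concrete reading of the GDP/GppNHp preference reported above.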
Thus, Affimer K6 has a 10-fold higher affinity for KRAS than Abd-7 7 , the strongest-binding small molecule, with a K D of 51 nM (cf. 750 nM for BI-2852 11 , 1.1 mM for DCAI 6 and 340 µM for compound 13 8 ; see Supplementary Table 3 ). We also inspected the SI/SII-binding small molecules for structural similarities to the K6 pharmacophore (Fig. 4e ). All of the small molecules have an aromatic ring that inserts into the pocket and which is reproduced by the side chain of W43 of Affimer K6. However, only K6 and BI-2852 appear to interact across the whole of the pocket surface. This suggests that additional points of interaction may underlie efficacy, as BI-2852 is the most potent of the compounds to date 6 , 7 , 8 , 11 . Affimer K6 shows a similar degree of potency to BI-2852, both showing IC 50 values in the nanomolar range for inhibition of nucleotide exchange (IC 50 = 592 ± 271 nM for Affimer K6 and IC 50 = 490 nM for BI-2852 11 ).

Fig. 5: Affimers K3 and K6 bind KRAS with nanomolar affinity. SPR-measured binding activities for Affimer K6 with GDP-bound KRAS ( a ), GppNHp-bound KRAS ( b ) and Affimer K3 with GDP-bound KRAS ( c ), GppNHp-bound KRAS ( d ). Affimers were immobilized on streptavidin-coated CM5 sensor chips via C-terminal biotin, and differing concentrations of GDP- or GppNHp-bound KRAS were flowed over. Representative curves of 3 replicate experiments are shown, with experimental data in color and Langmuir 1:1 fitting curves in black.

K3 locks wild-type KRAS in a conformation harboring a SII/α3 pocket
Affimer K3 lacks the PWFQxN motif of Affimers K6 and K37 and has a different biochemical profile in terms of mutant and isoform specificities. The underlying reasons for this were confirmed by determining the crystal structure of Affimer K3 in complex with the GDP-bound form of KRAS to 2.1 Å resolution (Fig.
6a and Supplementary Table 3 ), however, crystallization of this complex was difficult, leading to higher than anticipated Rfactor and FreeRfactor values (see “Methods” section for details). Affimer K3 binds KRAS with high affinity irrespective of the nucleotide bound ( K D = 59.4 ± 15 nM for GDP-bound and K D = 44.4 ± 0.8 nM for GppNHp-bound) (Fig. 5c and d ). Variable region 1 binds KRAS between SII and the α3 helix, with residues 41–46 being crucial for interaction (Fig. 6b and c ) and inhibition. Indeed, the functional importance of these Affimer residues was highlighted by mutational analysis, as individual alanine substitutions abolished the inhibition of SOS1-mediated nucleotide exchange (Fig. 6d ). Affimer K3 residue D42 bridges the gap between SII and α3 helix, forming hydrogen bonds with R68 and Q99, respectively (Fig. 6b –top left panel). The KRAS:K3 binding is strengthened by Affimer K3 residue D46, binding both Q99 and R102 of the α3 helix (Fig. 6b –top left panel), as well as Y45 of K3, forming a hydrogen bond with E62 of SII (Fig. 6b –top right panel). This forms a larger binding surface allowing Affimer K3 residues I41 and I43 to form hydrophobic interactions with V103 and M72, and V9 of KRAS, respectively (Fig. 6b –bottom left panel). In addition, the backbone carbonyl of W44 forms a hydrogen bond with H95 of the α3 helix, as well as orientating its indole side chain towards KRAS so that it is packed against the residues of Q61, H95, and Y96 (Fig. 6b –top right panel). The involvement of Affimer K3 residue W44 in binding H95 may explain the specificity of K3 for KRAS seen in the biochemical and cellular data, as H95 is a unique residue only present in KRAS and not in HRAS and NRAS. The importance of H95 in mediating Affimer K3 selectivity for KRAS was confirmed by immunoprecipitation assays of mutants H95Q and H95L that mimic the corresponding residues in HRAS and NRAS.
Mutating H95 affected the ability of Affimer K3 to immunoprecipitate KRAS, with mutation to glutamine (as found in HRAS) reducing the amount of RAS pulled down by 45% and mutation to a hydrophobic leucine residue (as found in NRAS) abolishing RAS immunoprecipitation completely, thus giving support to our previous observations (Fig. 6e ). Fig. 6: Variable region 1 of Affimer K3 explores a druggable pocket between the switch II region and α3 helix of KRAS. a Affimer K3 (magenta) was co-crystallized with KRAS GDP (cyan) and solved to a resolution of 2.1 Å. The switch I (deep blue), switch II (orange) and α3 helix (dark gray) are depicted, showing their relative positioning around variable region 1 of Affimer K3. b Intramolecular and intermolecular interactions in the KRAS:Affimer K3 co-crystal structure are depicted; black dotted lines represent hydrogen bonds. c All intermolecular interactions are shown in planar form. Hydrogen bonds are shown as short dotted black lines between the contributing atoms; electrostatic interactions are shown as long dashed black lines; additional hydrophobic interactions are represented by arcs, their spokes radiating towards the residues they contact (Data was generated using PDBePISA (CCP4i) 31 and verified in MacPyMOL). d Alanine scanning data of the variable regions of Affimer K3 highlights Affimer residues important for inhibition of nucleotide exchange. Unaltered K3 is shown in black, variable region 1 residues are shown in dark gray, and variable region 2 residues in light gray (One-way ANOVA with Dunnett’s post hoc test **** p < 0.0001, n = 3 independent experiments). e Mutation of KRAS H95 affects the ability of Affimer K3 to bind; H95Q and H95L represent the residues in HRAS and NRAS, respectively; a representative blot is shown ( n = 3 independent experiments). f Binding of Affimer K3 causes a conformational shift to Switch II compared to WT-KRAS GDP .
The KRAS molecule (cyan) from the KRAS:Affimer K3 co-crystal structure was overlaid with WT-KRAS GDP (deep blue; PDB code: 4OBE). Conformational shifts were observed in the switch II region (red-dotted box). g Alterations in the conformation of the Switch II region (orange) and α3 helix (black) (top row) and the corresponding alterations in the electrostatics (bottom row). Residues 41–45 of Affimer K3 (green) shown with KRAS GDP (left-hand panels), overlaid with the co-crystallized KRAS:ARS1620 structure (middle panels, PDB: 5V9U) and KRAS:AMG510 structure (right-hand panels, PDB: 6OIM) (ARS1620 is shown in yellow and AMG510 is shown in magenta). Data are mean ± SEM, n = 3 independent experiments. Images were generated in MacPyMOL v1.7.2.3, and ChemDraw Prime 16.0. VR variable region. Full size image The inter-molecular interactions described above cause significant conformational shifts in the effector lobe of KRAS, most notably, the α2 helix of SII being forced further from the α3 helix, as compared to the WT-KRAS GDP crystal structure (PDB code: 4OBE 32 ) (Fig. 6f ). The observed conformational shift originates from the flexible glycine residue (G60) of the DxxGQ motif of SII. The N-terminal loop of SII is further rearranged, orienting itself over the K3 binding motif towards the α3 helix. These changes in SII conformation not only generate a larger binding surface but also facilitate a number of intramolecular hydrogen bonds in KRAS, distinct to WT-KRAS GDP (PDB code: 4OBE 32 ). We postulate that the binding of Affimer K3 residue D42 to KRAS R68, by a salt bridge interaction, shifts the R68 residue into an orientation necessary to facilitate a hydrogen-bonding network between E37 of SI, A59, G60, S65, and R68 of SII (Fig. 6b –bottom right panel), thereby stapling the SII region to the SI site. This hydrogen bonding network is not seen in WT-KRAS GDP (PDB Code: 4OBE 32 ), or WT-KRAS GppNHp (PDB code: 6GOD 5 ).
It is possible that these global conformational shifts, together with specific KRAS residue changes, such as E37, and even M67, whose side chain abrogates the RAF-RBD interface, may explain the ability of K3 to abolish the KRAS-RAF interaction (Supplementary Fig. 1d ). Furthermore, as SII acts as the main anchor point for SOS1 binding 33 , 34 , the significantly reduced flexibility of this site may, together with occlusion of the Cdc25 domain of SOS1 forming a steric clash (Supplementary Fig. 1c ), underlie the K3-mediated inhibition of SOS1-mediated nucleotide exchange. We postulate that K3 binding locks KRAS in an inactive conformation by stapling the switch regions together through induced hydrogen bonding, reducing the conformational dynamics required for its activity. The dynamic freedoms of SII are further reduced by the folding of the N-terminal loop of SII over the K3 binding motif resulting in a hydrogen bond interaction between Q61 of SII and Y96 of the α3 helix. This locks Q61 in a position distal to the active site 33 , 35 . This involvement of Q61 supports the biochemical and cellular data, showing the loss of inhibition of nucleotide exchange and a reduction in EGF-stimulated pERK nuclear translocation in NCI-H460 cells and Q61R MEFs when Q61 is replaced with a histidine or arginine residue (KRAS Q61H/Q61R ). Thus, Affimer K3 has identified a conformer of wild-type KRAS that generates a druggable pocket, with an estimated interface area of 790.6 Å 2 (PISA (EMBL-EBI) 31 analysis), buried between the SII region and the α3 helix, not previously observed in wild-type KRAS. A similar pocket has previously been reported in the KRAS G12D mutant in a complex with a cyclic peptide, KRpep-2d 15 . However, there are critical differences between the pocket when bound to K3 and when bound to KRpep-2d.
Notably, K3 binding involves the KRAS-specific residue H95, giving rise to the KRAS-specificity seen in our cellular assays; the isoform specificity of KRpep-2d has not, to our knowledge, been assessed. In addition, K3 binding induces the KRAS intra-molecular bonds between Q61 and Y96 without the involvement of residue 12, whereas KRpep-2d requires an aspartic acid residue to coordinate the same intramolecular bonding network 15 . Furthermore, a comparable groove has also been documented in the KRAS G12C mutant, together with a small molecule series that inhibits RAS function via binding at this pocket 9 , 10 , 12 (Fig. 6g ). Of this series, the most recently published AMG510 compound has reached clinical development 12 . The compound is covalently tethered to the C12 residue and explores the same SII/α3 helix pocket. We hypothesize that AMG510 induces a similar mode of inhibition to that seen with K3, whereby the switch regions are stabilized by the ligand. However, it is clear that although the AMG510 compound and Affimer K3 explore the same cryptic groove, the conformations of SII lead to distinct pocket conformations with widely different electrostatics. Affimer K3 stabilizes a more open conformation compared to the closed conformation seen with AMG510 (Fig. 6g ). Further to this, binding of K3 to KRAS decreases flexibility by inducing hydrogen bond interactions between SI/SII, and SII/α3 helix, which are not present in the KRAS G12C :AMG510 structure (PDB code: 6OIM 12 ). The shape, size, and physicochemical composition of the pocket identified by Affimer K3 suggest a potentially druggable site 36 , 37 . The K3 data show that we have isolated a non-covalent KRAS binder and have identified a druggable pocket and pharmacophore combination through which to inhibit KRAS preferentially over other RAS isoforms.
Disruption of the intracellular RAS:Affimer interactions can be used to identify compounds that bind in the pockets Having shown that Affimers can identify and probe pockets on RAS, we explored the potential of utilizing these interactions as tools to identify compounds that bind in the pockets. To achieve this, a KRAS:Affimer NanoBRET system was developed, initially demonstrating that both K6 and K3 interact with KRAS within cells (Fig. 7a and b ) and that a greater NanoBRET signal was seen with an increased Affimer to KRAS ratio, whilst the control Affimer showed no evidence of an interaction. Subsequently, the impacts of small molecule inhibitors that bind in the SI/SII and SII pockets, respectively, on the NanoBRET signal were assessed. Increasing concentrations of the SI/SII-pocket binding compound, BI-2852 11 , reduced the NanoBRET signal from the Affimer K6:KRAS interaction. This reduction in signal is compatible with BI-2852 displacing Affimer K6 from KRAS, and the NanoBRET signal was completely abolished with a dose of 60 μM of BI-2852 (Fig. 7c ). Similarly, increasing concentrations of the SII-pocket binding compound, ARS-1620, reduced the K3:KRAS G12C NanoBRET signal. KRAS G12C was used as ARS-1620 requires a disulfide linkage to C12 to bind and access the SII pocket 12 . Affimer K3:KRAS G12C showed an increased NanoBRET signal with an increased Affimer:KRAS ratio that was comparable to K3:KRAS WT (Fig. 7a and b ). ARS-1620 disrupted the K3:KRAS G12C interaction at lower concentrations than BI-2852 did for K6 (0.0468 µM for ARS-1620 vs. 0.468 µM for BI-2852), however complete signal abolition was not achieved with ARS-1620 even at the top dose of 60 μM (Fig. 7d ). Neither BI-2852 nor ARS-1620 disrupted the NanoBRET signal from Affimer K69 (Fig. 7c and d ), which binds at a site distal to both pockets, between helices 4 and 5 of the allosteric lobe (Fig. 7e , PDB code 7NY8 ) and does not inhibit nucleotide exchange (Fig. 1a ).
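The Fig. 7 legend notes that such displacement curves were fit to a three-parameter [inhibitor]-vs-response model in Prism. A minimal SciPy equivalent is sketched below on synthetic data; the dose series, plateau values, and IC 50 are invented for illustration and are not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibitor_response(dose_uM, top, bottom, ic50_uM):
    """Three-parameter [inhibitor] vs. response model (Hill slope fixed at 1):
    y = bottom + (top - bottom) / (1 + dose / IC50)."""
    return bottom + (top - bottom) / (1.0 + dose_uM / ic50_uM)

# Hypothetical compound doses (µM) and synthetic NanoBRET ratios generated
# with top = 12, bottom = 2, IC50 = 0.5 µM (illustration only).
dose = np.array([0.015, 0.047, 0.15, 0.47, 1.5, 4.7, 15.0, 47.0, 60.0])
ratio = inhibitor_response(dose, 12.0, 2.0, 0.5)

popt, _ = curve_fit(inhibitor_response, dose, ratio, p0=[10.0, 1.0, 1.0])
top_fit, bottom_fit, ic50_fit = popt
print(f"IC50 = {ic50_fit:.2f} uM")
```

The incomplete displacement seen with ARS-1620 would appear in such a fit as a bottom plateau well above the fully-displaced baseline.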
Thus, we have demonstrated that the inhibitory RAS-binding Affimers that bound in pockets on RAS can be used as tools to identify compounds that bind in the same pockets. Using this assay, it may be possible to find RAS inhibitors that target KRAS WT in the SII/α3 pocket specifically. Fig. 7: Affimer:KRAS NanoBRET can be used to identify small molecules which bind in the SI/SII or SII/α3 pocket. Increased Affimer (acceptor) to KRAS (donor) ratio increases the NanoBRET signal as measured by NanoBRET Ratio for Affimers K3, K6, and K69 with KRAS WT ( a ) and Affimers K3 and K69 with KRAS G12C ( b ). Small molecule BI-2852 binds in the SI/SII pocket and increasing concentrations displace Affimer K6, reducing the NanoBRET Ratio with no impact on NanoBRET signal from Affimer K69 that binds between helix 4 and helix 5 ( c ). ARS-1620 covalently tethers to C12 in KRAS G12C and occupies the SII pocket, and increasing concentrations displace Affimer K3, reducing the NanoBRET Ratio with no impact on NanoBRET signal from Affimer K69 ( d ). e Affimer K69 (blue) binds the allosteric lobe between helices 4 and 5 on the opposite side of KRAS (gray) to Affimers K3 (magenta) and K6 (green). Data are mean ± SEM, fitted to a 3-parameter [agonist]/[inhibitor] vs. response model in Prism, n = 3 independent experiments for all assays. Images were generated in MacPyMOL v1.7.2.3 from PDB codes 6YXW (K3), 6YR8 (K6), and 7NY8 (K69). Control Affimer (variable regions of AAAA and AAE). Full size image Discussion We have isolated RAS-binding Affimer reagents that inhibit RAS both in vitro and in cells. The Affimer proteins generated show nanomolar affinities for KRAS together with IC 50 values in the nanomolar range for inhibition of SOS1-mediated nucleotide exchange. Furthermore, they are functional intracellularly, demonstrating inhibition of the MAPK pathway as assessed by ERK phosphorylation levels.
Structural analyses showed that the Affimer proteins interact with RAS within druggable pockets, notably identifying a pocket between the Switch II region and the α3 helix, with a non-covalent binder. Thus, we have exemplified a site for the development of compounds to inhibit both wild-type and G12 mutant forms of KRAS, together with a pharmacophore as a starting point for this approach. The biochemical and cellular profiles of the Affimer proteins used in this study are comparable with scaffold-based biologics that have previously been identified that also inhibit RAS, again with nanomolar affinities and IC 50 values 16 , 17 , 19 , 20 , 21 , 22 , 38 . However, the majority of these do not distinguish between RAS variants, and/or mutants, and structural analyses reveal that these pan-RAS inhibitors are binding in the Switch I/II region, except for the NS1 monobody that binds the α4-β6-α5 dimerization domain 20 , and the DARPins K13 and K19 16 (Supplementary Fig. 2 ) as discussed below. The binding positions of the scFV, iDab6, and the DARPins K27 and K55 all span the SI/SII pocket 17 , 22 , which is the location of Affimer K6 binding; however, none of these other biologics have been shown to protrude into the pocket. Indeed, structural analysis of the K6:KRAS complex showed that a tripeptide motif, P42, W43, and F44, inserts into this SI/SII pocket in a manner that mimics the binding of known small molecules targeting this pocket. The aromatic indole ring of W43 extends into the pocket, a motif seen with all the compounds targeting this site. Beyond this shared motif, however, the compounds' interactions diverge from those of Affimer K6 (Fig. 4e ).
It would be interesting to determine if compounds based on the K6 pharmacophore were more potent than the current compounds, as K6 binds with a higher affinity than any published reagent and shows comparable inhibition 5 , 6 , 7 , 8 , 11 , and indeed requires micromolar concentrations of the highest affinity inhibitor, BI-2852, to displace it from KRAS. Thus, Affimer K6 demonstrates that the use of biologics with small interaction interfaces can not only bind and inhibit difficult-to-drug proteins but also identify and probe druggable pockets on such proteins, potentially acting as templates for small molecules. Importantly, the K3 Affimer shows inhibition of RAS but also demonstrates a preference for KRAS over the HRAS and NRAS variants. To our knowledge, the only other biologics to express such RAS variant specificity are DARPins K13 and K19 16 . This preferential behavior is underpinned by the involvement of the H95 residue unique to KRAS; mutation of this residue abolished binding of both Affimer K3 and the DARPins K13 and K19 16 . This ablation was more complete with mutation to glutamine for the DARPins but was still significant with Affimer K3. These differences may in part be due to the distinct binding locations of the DARPins K13 and K19 on the allosteric lobe side of H95, whereas K3 binds on the effector lobe side and locks KRAS in a conformation where a pocket is revealed 16 . The residues involved in this pocket, specifically Q61, underlie the mutational preferences of Affimer K3 for wild-type/G12 mutants vs. Q61; this selectivity is not seen with DARPins K13 and K19 16 . The specificity of DARPin K19 for KRAS has been exploited as a macro drug fused with an E3 ligase and, whilst proteolysis of both mutant and wild-type KRAS occurred, only cells expressing mutant KRAS were killed both in vitro and in vivo 39 . 
Thus, this demonstrates the importance of being able to target the KRAS-isoform specifically, but irrespective of its mutant status, for utility as a potential cancer therapy. The interaction between Affimer K3 and KRAS provides an insight into how this may be achieved with small molecules further advancing this area of research. The pocket revealed in wild-type KRAS by Affimer K3 binding is a previously unseen conformer of the SII pocket 3 , 4 , 9 , 10 , 12 , 13 , 14 , 15 , which we have termed the SII/α3 pocket, and it coincides with a cryptic groove identified computationally 4 , 12 . A similar conformer of the SII pocket has previously been targeted by a cyclic peptide, KRpep-2d, in the KRAS G12D mutant 15 . However, whilst showing nanomolar inhibition in nucleotide exchange assays, KRpep-2d has only shown micromolar efficacy in cells and was deemed not sufficiently efficacious for in vivo studies 14 , 15 . In contrast, the covalently-tethered KRAS G12C inhibitors, the ARS series, and the most recent iterations are in clinical trials 9 , 10 , 12 , 13 demonstrating the clinical importance of this pocket. Whilst this compound series has yielded the most clinically promising RAS-inhibitors to date, its dependence on covalent tethering to C12 restricts its utility to KRAS G12C mutant cancers only. Affimer K3 binds to an SII-derived pocket non-covalently and demonstrates a similar degree of in vitro potency to AMG510 with IC 50 values for nucleotide exchange of 0.15 µM and 0.09 µM, respectively 12 . We, therefore, suggest that targeting this region is a strong approach for allosteric inhibition of RAS. As Gentile et al. 40 noted, for binding to the SII pocket, a substituted phenolic ring is required for insertion within the subpocket formed by V9, R68, D69, and M72, and to form hydrogen bonds with R68 and D69 residues. 
Affimer K3 fulfills these criteria, with the aromatic ring of W44 extending into this subpocket and its surrounding residues, S40 and D42, forming the necessary hydrogen bonds; thus the SII/α3 pocket shares a key subpocket with the SII pocket. Indeed, K3 also forms hydrogen bonds with H95 and Q99 in common with AMG510, albeit with different orientations of H95. Nevertheless, the positioning of the α2 helix of SII is significantly different, with K3 inducing a conformation where the α2 helix is distal to the α3 helix, and AMG510 a more closed conformation where the α2 helix is semi-distal to the α3 helix, but the loop region of SII is held across the pocket due to hydrogen bonding between AMG510 and KRAS E63. ARS-1620, the most potent ARS compound, in contrast leaves the helices proximal to one another, as seen in WT-KRAS GDP (PDB: 4OBE 32 ) (Fig. 6g ) 10 , 12 . These differences suggest that there may be an extended pocket area for small molecules based on the K3 pharmacophore, as identified by mutational analysis, to exploit. This is supported by the competitive NanoBRET, as the highest dose of ARS-1620 could not fully abolish the K3:KRAS NanoBRET signal. Providing small molecules based on the K3 pharmacophore engage the H95 that governs the KRAS-specificity of Affimer K3 and the DARPins K13/K19, it may be possible to achieve the first non-covalent small molecule inhibitors of KRAS via the SII/α3 pocket that may have similar properties to the E3-ligase fused DARPin K19, which possesses the ability to selectively kill mutant KRAS cells 39 . This is an exciting avenue to be explored in future studies. Our work presented here demonstrates the concept of using biologics that bind with a relatively small interface as precursors for the development of small-molecule inhibitors for difficult-to-drug proteins.
This has the potential to add an alternative strategy for drug discovery to commonly used methods such as computational analysis, covalent tethering that is dependent on suitable residues, and experimental screening approaches. The Affimer proteins identified in this study inhibited RAS by binding to shallow pockets previously identified, or pockets derived from those previously identified, with comparable affinities and in vitro efficacies to the best small molecules available that target these pockets. This highlights the ability of Affimer proteins to select conformers of target proteins and reveal druggable regions on protein surfaces concurrent with pharmacophore identification. Indeed, it will be interesting to use the pharmacophore motifs identified in this study as templates for novel series of RAS-binding small molecules, and for potential hit-to-lead optimization using the Affimer-RAS NanoBRET system, as has been previously achieved with RAS-binding biologics 5 , 7 . The approach utilized in this study is likely to be applicable to other important therapeutic targets and provides a useful pipeline for drug discovery. Methods Protein production The human wild-type KRAS (Isoform b), HRAS, and NRAS gene sequences (residues 1–166, with an N-terminal His-tag and C-terminal BAP-tag) were synthesized by GenScript (Piscataway, USA) and cloned into pET11a (all primers used in this study are detailed in Supplementary Table 5 ). RAS mutants G12D, G12V, Q61H, H95Q, and H95L were produced by Quikchange TM site-directed mutagenesis using the wild type as a template. RAS proteins were produced in BL21 STAR TM (DE3) E. coli induced with 0.5 mM IPTG and grown overnight at 20 °C at 150 rpm. Cells were harvested by centrifugation at 4816× g for 15 min at 4 °C and resuspended in 20 mM Tris, pH 7.5, 500 mM NaCl, 10 mM Imidazole, 5% Glycerol, supplemented with EDTA-free protease inhibitor, 0.1 mg/ml lysozyme, 1% Triton X-100 and 10 U/ml Benzonase nuclease.
Proteins were purified from the supernatant by Ni-NTA chromatography and size exclusion chromatography into RAS buffer (20 mM Tris, 100 mM NaCl, 10 mM MgCl 2 , 1 mM DTT, 5% Glycerol, pH 7.5). GST-thr-RAF1-RBD was provided as a gift from Dominic Esposito (Addgene plasmid # 86033) and produced as previously described 41 . GST-thr-RAF1-RBD supernatants were used for RAS:RAF immunoprecipitation. The human SOS1 catalytic domain (SOS1 cat ) coding region (residues 564–1059) with an N-terminal His-tag in pET11a was produced in BL21 Star TM DE3 E. coli following 0.5 mM IPTG induction and grown overnight at 25 °C at 150 rpm. Cells were harvested by centrifugation at 4816× g for 15 min at 4 °C. Cell pellets were lysed in 20 mM Tris-HCl pH 8; 300 mM NaCl; 20 mM imidazole; 5% glycerol supplemented with 1% Triton-X100, EDTA-free protease inhibitor, 0.1 mg/ml lysozyme and 10 U/ml Benzonase nuclease. The lysate was centrifuged at 12,000× g for 20 min then applied to Ni-NTA resin. Proteins were eluted using 20 mM Tris-HCl pH 8; 500 mM NaCl; 300 mM Imidazole; 5% Glycerol and dialyzed into 50 mM Tris-HCl pH 7.5; 100 mM NaCl; 1 mM DTT. RAS nucleotide loading RAS was desalted into nucleotide loading buffer (25 mM Tris-HCl, 50 mM NaCl, 0.5 mM MgCl 2 pH 7.5) using a Zeba spin column according to the manufacturer’s instructions (ThermoFisher). MANT-GDP (mGDP, SigmaAldrich) or GppNHp (SigmaAldrich) was added in a 20-fold excess over RAS together with DTT and EDTA to a final concentration of 1 mM and 5 mM, respectively, in a volume of 130 µl and incubated at 4 °C for 1 h. MgCl 2 was then added in a 9-fold excess over EDTA and incubated for a further 30 min at 4 °C. Loaded RAS was then desalted into nucleotide exchange buffer (20 mM HEPES pH 7.5, 150 mM NaCl, 10 mM MgCl 2 ) using a Zeba spin column. Nucleotide loading was confirmed by native mass spectrometry. Phage display and Affimer protein production Affimers against GDP-bound KRAS were isolated by phage display 26 .
Biotinylated KRAS (1–166) GDP (EZ-Link® NHS-Biotin, Thermo Scientific) was immobilized on blocked (2× blocking buffer, Sigma, containing 10 mM MgCl 2 ) streptavidin wells. The Affimer phage library was applied for 2 h and unbound phage removed by PBS-T washes (27 times). Bound phage was eluted in a two-phase step, firstly with 0.2 M glycine pH 2.2 neutralized with 15 ml of 1 M Tris-HCl, pH 9.1 and then 7.18 M triethylamine, pH 11 neutralized with 1 M Tris-HCl, pH 7. Three panning rounds were undertaken and, after the final panning round, 24 randomly picked colonies were used in phage ELISA with positive clones sent for sequencing 26 . The seven unique sequences were cloned into pET11 using Affimer-His primers (Supplementary Table 5 ). RAS-binding Affimers were produced in BL21 STAR TM (DE3) E. coli (Life Technologies, Invitrogen) and affinity purified using Ni-NTA resin. The cross-reactivity against GppNHp-bound KRAS was determined by ELISA 26 . Guanine nucleotide exchange assays Nucleotide exchange buffer was supplemented with 0.4 mM GTP (SigmaAldrich) and 0.5 μM SOS1 cat for experiments involving WT RAS, or 2 μM SOS1 cat for experiments involving mutant RAS proteins. The Affimer proteins were then diluted with the GTP-SOS1 cat supplemented nucleotide exchange buffer to make 20 μM stock solutions, which were then used for a 2-fold serial dilution series using the supplemented nucleotide exchange buffer. A 1 μM stock of the WT/mutant RASmGDP protein was prepared by diluting the stock RAS in nucleotide exchange buffer supplemented with 2 mM DTT. Solutions were incubated at 37 °C for 10 min before the assay. The reaction was initiated by the addition of Affimer/SOS1 cat /GTP solution to the RAS/DTT containing solution. Changes in fluorescence were measured by a fluorescence spectrometer (Tecan Spark) in a Corning black, flat-bottomed, non-binding 384 well plate at 440 nm every minute for 90 min.
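Fluorescence traces collected in this way are typically normalized to the first reading and fit to a single-exponential decay to extract an observed exchange rate. The paper used OriginPro for this step; the SciPy sketch below runs on a synthetic trace whose amplitude, rate, and offset are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t_min, amplitude, k_obs, offset):
    """Single-exponential decay of mant-GDP fluorescence during
    SOS1-catalyzed nucleotide exchange."""
    return amplitude * np.exp(-k_obs * t_min) + offset

# Synthetic 90-min trace sampled once per minute; parameter values are
# illustrative only, not values from the paper.
t = np.arange(0, 91, dtype=float)
trace = single_exp(t, 0.6, 0.05, 0.4)
trace = trace / trace[0]               # normalize to initial fluorescence

popt, _ = curve_fit(single_exp, t, trace, p0=[0.5, 0.1, 0.5])
k_obs = popt[1]
print(f"k_obs = {k_obs:.3f} per min")
```

The observed rates across an Affimer dilution series would then be normalized and fit to a Hill-type dose-response to obtain an IC 50.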
The data were then normalized to the initial fluorescence reading and fitted to a single exponential decay using OriginPro 9.7.0 software (OriginLab, Massachusetts). The derived rates were normalized to those of the RAS-SOS1 minus RAS-only samples and fitted to the Hill equation ( y = START + (END − START)·x^n/(k^n + x^n)), from which the IC 50 values were calculated. RAS interaction assays The interaction of KRAS with RAF-RBD or Affimer K3 was assessed by immunoprecipitation using a KingFisher Flex (ThermoFisher Scientific). Glutathione magnetic agarose beads were blocked overnight at 4 °C with 2× blocking buffer (Sigma, B6429) then incubated with RAF-RBD-GST supernatants for 1 h at room temperature. Simultaneously, 1 μg/μl of KRAS-GppNHp was incubated with 0.6 μg/μl of Affimer or PBS (no Affimer control) in a total volume of 100 μl PBS. Beads were washed 3 times with assay buffer (125 mM Tris, 150 mM NaCl, 5 mM MgCl 2 , 1 mM DTT, 0.01% Tween-20, pH 8.0) and the KRAS:Affimer solutions added and incubated for 1 h at room temperature. Beads were washed 4 times (15 secs/wash) with assay buffer and proteins were eluted with SDS-PAGE sample buffer (200 mM Tris-HCl, 8% SDS, 20% glycerol, 20% mercaptoethanol, 0.1% (w/v) bromophenol blue, pH 7). RAS:Affimer immunoprecipitation was as for RAS:RAF immunoprecipitation except that His Mag Sepharose™ Ni 2+ -NTA beads (GE healthcare ® ) were used and incubated with 20 µg of Affimer K3 for 1 h with agitation. After washing 3 times, the beads were mixed with 100 µl of KRAS/KRAS(H95Q)/KRAS(H95L) lysate. For elution of K3:KRAS, SDS-PAGE sample buffer was supplemented with 500 mM imidazole. Proteins were analyzed by immunoblot with anti-GST-HRP (1:5000, GeneTex, GTX114099) or anti-6X His tag antibody (HRP) (1:10,000, Abcam, ab1187) and KRAS + HRAS + NRAS (1:1000, Abcam, ab206969) antibodies. Immunoblotting Protein samples were separated on 15% SDS-PAGE and transferred onto nitrocellulose membranes.
These were blocked with 5% milk (SigmaAldrich) in TBS-0.1% Tween 20 (TBST), incubated overnight at 4 °C with primary antibody as detailed, washed 3 times with TBST, and incubated for 1 h at room temperature with HRP-conjugated secondary antibody (1:10,000 goat anti-rabbit HRP, Cell Signaling Technology, CST7074S) if required. Following 3× TBS-T washes, the membranes were developed with Immunoblot Forte Western HRP (Millipore), according to the manufacturer’s instructions, and imaged using an Amersham TM Imager 600 (GE Healthcare). Uncropped images of all membranes used in this manuscript are shown in Supplementary Fig. 3 . Cell culture HEK293, Panc 10.05, and NCI-H460 cells were purchased from ECACC, UK. RAS-expressing mouse embryonic fibroblasts (MEFs) were from William Burgen at Frederick National Laboratory, Maryland, USA. SW620 cells were from Professor Mark Hull, University of Leeds, UK. HEK293 and MEF cell lines were maintained in Dulbecco’s Modified Eagle Medium (SigmaAldrich) supplemented with 10% fetal bovine serum (FBS) (Gibco), and 4 µg/ml blasticidin S (SigmaAldrich) (MEFs expressing KRAS and NRAS isoforms) or 2.5 µg/ml puromycin (SigmaAldrich) (MEFs expressing HRAS). NCI-H460 and SW620 cell lines were maintained in RPMI-1640 media (SigmaAldrich) supplemented with 10% FBS. Panc10.05 cells were maintained in RPMI-1640 supplemented with 15% FBS and 10 U insulin (SigmaAldrich). All cells were maintained at 37 °C in 5% CO 2 and were mycoplasma-free. RAS immunoprecipitation 4 × 10 5 HEK293 cells/well were plated in 12 well plates and incubated at 37 °C, 5% CO 2 for 24 h before transient transfection with plasmids encoding Affimer-His constructs using Lipofectamine 2000 (ThermoFisher), as per the manufacturer’s instructions. After 48 h, cells were lysed in NP-40 buffer (50 mM Tris, 150 mM NaCl, 1% NP-40 (v/v), 1× Halt TM protease inhibitor cocktail (ThermoFisher), 1× phosphatase inhibitor cocktail 2 (SigmaAldrich), pH 7.5)
and cleared lysates incubated overnight at 4 °C with Ni-NTA resin. After washing, proteins were eluted in SDS sample buffer and analyzed by immunoblotting with the anti-KRAS + HRAS + NRAS or anti-6X His tag-HRP antibody. FLAG-ERK pull-down assays Cells were plated into 12-well plates (1 × 10 5 cells/well for HEK293 cells, 2 × 10 5 cells/well for SW620 and NCI-H460 cells and 4 × 10 5 cells/well for Panc10.05 cells) and incubated at 37 °C, 5% CO 2 for 24 h before transfection with a 4:1 DNA ratio of pCMV6-Affimer-tGFP and FLAG-ERK plasmids using Lipofectamine 2000 (SW620, HEK293, NCI-H460) or X-tremeGENE 9 (Roche; Panc10.05). After 24 h, cells were serum-starved for 1 h; HEK293 cells were then stimulated with 25 ng/ml EGF (Gibco) for 5 min (other cell lines were not stimulated). Cells were washed with ice-cold DPBS then incubated for 10 min on ice with NP-40 lysis buffer. Cleared lysates were incubated overnight at 4 °C with 20 μl anti-FLAG M2 magnetic beads (SigmaAldrich). The beads were washed 3× with TBS before protein elution by incubation at 95 °C for 5 min in SDS sample buffer. Levels of ERK and pERK were then analyzed by immunoblotting with anti-ERK antibody (1:2000, Abcam, ab184699) and phospho-ERK antibody (1:1000, Abcam, ab76299). Densitometry analysis used ImageJ software v.1.52 (NIH, Maryland). pERK immunofluorescence assay Cells were plated into 96 well plates (1 × 10 5 cells/ml for HEK293 cells, 2–8 × 10 4 cells/ml for MEFs) and incubated at 37 °C, 5% CO 2 for 24 h before transfection with pCMV6-Affimer-tGFP plasmids using Lipofectamine 2000 as per the manufacturer’s instructions. After a further 24 h, cells were serum-starved for 1–18 h before stimulation with EGF (25 ng/ml) for 5 min, rinsed with PBS, and fixed in 4% paraformaldehyde (VWR) for 15 min.
Cells were rinsed with PBS and permeabilized with methanol at −20 °C for 10 min, before rinsing with PBS, blocking (1% milk (Sigma-Aldrich) in PBS) and incubating with anti-pERK antibody (1:150, Cell Signaling Technology 4370) in blocking solution for 1 h at room temperature, followed by three PBS rinses and incubation with Hoechst 33342 (1 µg/ml, Molecular Probes) and anti-rabbit AlexaFluor 546 or 568 (1:1000, Molecular Probes) in blocking solution for 1 h at room temperature. Following a final set of PBS washes, plates were scanned and images collected with an Operetta HTS imaging system (PerkinElmer) or ImageXpress Pico (Molecular Devices) at ×20 magnification. Images were analyzed with Columbus 2.7.1 (PerkinElmer) or MetaXpress 6.7 (Molecular Devices) software.

Crystallization, data collection, and structure determination

Purified Affimer proteins were incubated with KRAS lysates overnight at 4 °C and the complexes purified by Ni-NTA affinity chromatography and size-exclusion chromatography using a HiPrep 16/60 Sephacryl S-100 column (GE Healthcare). Representative chromatographs and SDS-PAGE of the complexes are shown in Supplementary Fig. 4. The complexes were concentrated to 24 mg/ml (KRAS:K3), 12 mg/ml (KRAS:K6) and 11 mg/ml (KRAS:K69) in 10 mM Tris-HCl pH 8.0 containing 50 mM NaCl, 20 mM MgCl₂ and 0.5 mM TCEP (K3 and K6), or 20 mM HEPES pH 8.0, 150 mM NaCl, 5 mM MgCl₂, 0.5 mM DTT (K69), respectively. KRAS:K3 crystals were obtained in 2 M (NH₄)₂SO₄, 0.2 M K/Na tartrate and 0.1 M tri-sodium citrate pH 5.6 by sitting-drop vapor diffusion. Crystals were flash-cooled in a mixture of 75% mother liquor and 25% ethylene glycol. KRAS:K6 crystals were obtained in 0.1 M sodium acetate pH 5, 25% w/v PEG 4K, 0.2 M (NH₄)₂SO₄, 5% MPD by hanging-drop vapor diffusion. Crystals were flash-cooled in 30% w/v PEG 4K, 0.1 M sodium acetate pH 5, 0.2 M (NH₄)₂SO₄, 20 mM MgCl₂, 5% PEG 400, 5% MPD, 5% ethylene glycol and 5% glycerol.
KRAS:K69 crystals were obtained from the Morpheus screen: 0.12 M alcohols mix (0.2 M each of 1,6-hexanediol, 1-butanol, 1,2-propanediol, 2-propanol, 1,4-butanediol and 1,3-propanediol), 0.1 M Buffer System 1 (1.0 M imidazole, MES monohydrate) pH 6.5, 30% Precipitant Mix 1 (20% v/v PEG 500 MME, 10% w/v PEG 20000), by sitting-drop vapor diffusion. Crystals were flash-cooled in 75% mother liquor and 25% glycerol. X-ray diffraction data for the KRAS:K6 and KRAS:K69 complexes were recorded on beamline I04-1 (wavelengths 0.9159 Å and 0.9795 Å, respectively) at the Diamond Light Source, with data for KRAS:K3 recorded on beamline ID30A-1 (wavelength 0.9660 Å) at the European Synchrotron Radiation Facility, at 100 K. Data collection statistics are reported in Supplementary Table 4. Diffraction data were processed and scaled with the Xia2 suite of programs 42. The KRAS:Affimer structures were determined by molecular replacement in the program Phaser 43, using the KRAS-GDP structure (PDB 4OBE) and an Affimer structure (PDB 4N6T), excluding the variable regions, as the initial search models. Structures were refined using REFMAC5 44, followed by iterative cycles of manual model building using COOT 45. Whereas the final model of the KRAS:K6 structure contains all the residues of the variable regions, the electron density maps for residues 75–80 of both KRAS:K3 complexes in the asymmetric unit cell were highly disordered, with incomplete connectivity even when contoured at low sigma level. This is reflected in the final statistics for our refined KRAS:K3 structure being higher than those of other deposited structures of similar resolution in the Protein Data Bank, as judged by the R-factor and free R-factor values. Crystallization of the KRAS:K3 complex was very difficult, yielding only small needle clusters. However, a dataset was processed from one such crystal, although its diffraction patterns were multi-lattice and anisotropic.
The data were processed accounting for the anisotropy and, for comparison, without. Model building and refinement rounds were undertaken separately with each dataset, but there was little difference in the final refinement statistics. The deposited structure contains two KRAS:K3 complexes. Overall, the quality of the electron density for both KRAS molecules is good, including the density for the GDP molecules. However, both Affimer K3 molecules make few intermolecular contacts within the crystal lattice owing to the crystal packing, so their density is poor, with little to no density for many side-chain atoms and some main-chain atoms. In addition, the two Affimer K3 molecules face each other across a two-fold axis, with poor main-chain connectivity density for residues 74–83. Numerous attempts were made to model this region; in the best model, presented in the deposited structure, residues S77, H78 and T79 are included but as polyalanine. Hence, although the final model statistics could be better for the deposited structure, the electron density maps clearly show the structure of KRAS and Affimer K3 (apart from the 74–83 region), with very clear density for the residues that interact between the molecules. Model validation was conducted using the MolProbity server 46, with Ramachandran statistics of 94.8% in the favored region and 2 outliers for KRAS:K3, 96.8% and 0 for KRAS:K6, and 97.9% and 0 for KRAS:K69. Molecular graphics were generated using MacPyMOL version 1.7.2.3. Surface-area calculations were performed using the PDBePISA 31 protein–protein interaction server. The KRAS:K6, KRAS:K3 and KRAS:K69 structures have been deposited with the PDB codes 6YR8, 6YXW and 7NY8, respectively.

Affimer affinity measurements

Affimer affinities for KRAS were determined by surface plasmon resonance (SPR) using a BIAcore 3000 (GE Healthcare Europe GmbH).
Affimer proteins with a C-terminal cysteine residue were biotinylated with biotin-maleimide (Sigma-Aldrich) as previously described 26 and immobilized onto streptavidin-coated CM5 sensor chips (Biacore). Biacore experiments were performed at 25 °C in HEPES buffer (20 mM HEPES, pH 7.5, 150 mM NaCl, 10 mM MgCl₂, 0.1% Tween 20, 0.1% Triton X-100). KRAS (bound to GppNHp or GDP) was injected at 6.25, 12.5, 25, 50, 100, 200, 400 and 800 nM at a flow rate of 5 μl min⁻¹, followed by 3 min stabilization and 10 min dissociation. The on-rates, off-rates and KD parameters were obtained from a global fit to the SPR curves with a 1:1 Langmuir model, using the BIAevaluation software. Quoted KD values are the mean ± SEM of three replicate measurements.

Alanine scanning by site-directed mutagenesis

To assess the importance of each residue in the Affimer variable regions, point mutations encoding sequential alanine substitutions were introduced by QuikChange™ site-directed mutagenesis. Reactions contained 1× KOD polymerase reaction buffer, 0.2 mM dNTPs, 2 mM MgSO₄, 0.3 μM each of forward and reverse primers, 10 ng DNA template and 1 U KOD polymerase. PCR amplification consisted of 30 cycles of 20 s at 98 °C, 10 s at 68 °C and 3.5 min at 70 °C. Samples were digested with DpnI for 1 h at 37 °C and introduced by transformation into XL1-Blue supercompetent cells. DNA was extracted using the QIAprep Spin Miniprep Kit as per the manufacturer's instructions, and mutagenesis was confirmed by DNA sequence analysis (Genewiz).

Construction of the K6ΔVR2 mutant

To generate the Affimer K6ΔVR2 mutant, the nine residues of the K6 VR2 were replaced with AAE. Affimer K6 VR1 and control Affimer VR2 (AAE) were amplified and subjected to splice overlap extension (SOE) PCR. The spliced product was subcloned into pET11a and Affimer K6ΔVR2 produced as described above 26.
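The 1:1 Langmuir model used above to fit the SPR sensorgrams has a simple closed form. A minimal sketch follows; the rate constants and Rmax are hypothetical placeholders, not the fitted Affimer:KRAS values:

```python
import math

# Sketch of the 1:1 Langmuir binding model used to fit SPR sensorgrams.
# kon, koff and rmax below are hypothetical, not values from the study.

def association(t, conc, kon, koff, rmax):
    """Response (RU) during analyte injection for 1:1 binding."""
    kobs = kon * conc + koff                  # observed rate constant
    req = rmax * conc / (conc + koff / kon)   # equilibrium response
    return req * (1.0 - math.exp(-kobs * t))

def dissociation(t, r0, koff):
    """Response after the injection ends: simple exponential decay."""
    return r0 * math.exp(-koff * t)

kon, koff, rmax = 1e5, 1e-3, 100.0   # M^-1 s^-1, s^-1, RU (hypothetical)
kd = koff / kon                      # dissociation constant, here 10 nM

# At an analyte concentration equal to KD, the equilibrium response
# approaches Rmax/2 (here 50 RU):
req_at_kd = association(1e6, kd, kon, koff, rmax)
```

A global fit, as performed in BIAevaluation, adjusts kon, koff and Rmax so that these two phases jointly reproduce the sensorgrams at all eight analyte concentrations; KD then follows as koff/kon.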
NanoBRET assays

The NanoBRET proximity-based assay was carried out using the NanoBRET Nano-Glo detection system (Promega) according to the manufacturer's instructions. Briefly, HEK293 cells were co-transfected with Nluc-KRAS and HaloTag-Affimer plasmids using the FuGENE HD transfection reagent (Promega). Forty-eight hours post-transfection, cells were detached and reseeded in white 384-well plates at a density of 1.2 × 10⁴ cells/ml in HyClone DMEM without phenol red (GE Life Sciences), supplemented with 0.1% FBS. Reseeded cells were incubated with the HaloTag NanoBRET 618 ligand (100 nM final concentration) for an additional 16–24 h at 37 °C. The NanoBRET signal was determined 5 min after the addition of the NanoBRET furimazine substrate (10 µM final concentration) using a Tecan Spark multimode microplate reader (λEm = 460 ± 40 nm and 618 ± 40 nm; Life Sciences). Raw NanoBRET ratios were calculated as RawBRET = (618 nm Em/460 nm Em) × 1000. Corrected NanoBRET ratios (milliBRET units; mBU) were calculated by subtracting the raw NanoBRET ratio of the donor-only control. For competition analysis using BI-2852 and ARS-1620, the compounds were titrated at a final concentration range of 0–60 µM alongside the HaloTag NanoBRET 618 ligand and incubated for 16–24 h prior to substrate addition and NanoBRET signal measurement. For compound analyses, the following donor:acceptor ratios were used: 1:2 for K3 and K6, and 1:10 for K69, with the NanoBRET signal expressed as a ratio relative to the DMSO-only control.

Statistical analysis

Data were analyzed in Prism v9.1.0 (GraphPad Software). Normality was tested using the Shapiro–Wilk test. Data presented are mean ± SEM unless otherwise stated.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
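The NanoBRET ratio arithmetic defined in the assay description above can be sketched directly; the emission readings below are hypothetical plate-reader counts, not data from the study:

```python
# Sketch of the NanoBRET ratio calculation described above.
# Emission readings are hypothetical plate-reader counts.

def raw_bret(em_618, em_460):
    """Raw NanoBRET ratio: acceptor/donor emission x 1000."""
    return em_618 / em_460 * 1000.0

def milli_bret(sample_618, sample_460, donor_618, donor_460):
    """Corrected ratio (mBU): subtract the donor-only control's raw ratio."""
    return raw_bret(sample_618, sample_460) - raw_bret(donor_618, donor_460)

# Sample reads 130 raw units, donor-only control reads 100, leaving 30 mBU
# of acceptor signal attributable to the KRAS:Affimer interaction:
mbu = milli_bret(5200.0, 40000.0, 4000.0, 40000.0)
```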
Data availability

The X-ray crystal structures generated and analyzed during this study are available in the PDB repository [ ] with the following codes: 6YXW, 6YR8 and 7NY8. Source data are provided with this paper. The Affimer constructs generated during this study are available under a standard Material Transfer Agreement (MTA) from the University of Leeds via the corresponding author (DCT).
The study was funded by the Wellcome Trust, the Medical Research Council, the Technology Strategy Board and Avacta, and is published today (30 June 2021) in the journal Nature Communications. Dr. Tomlinson added: "This work opens up the door for the hundreds of other disease targets. We could effectively probe any protein involved in any disease for druggable pockets in the future." Co-first author of the report and Ph.D. student Amy Turner, from the School of Molecular and Cellular Biology, said: "Because it causes 20–30% of all known cancers, RAS really is the Holy Grail of therapeutic targets. The fact that it has previously been termed 'undruggable' has allowed us to demonstrate the huge impact that our Affimer technology can have when it comes to treating challenging pathologies. We have already identified small molecules that bind to RAS, so it will be very exciting to be involved in developing these over the next few years." The researchers say work on expanding more ways to target RAS is still in its early stages but they believe their discovery could lead to new treatments, putting Leeds at the forefront of the fight against cancer. | 10.1038/s41467-021-24316-0
Biology | Put eggs all in one basket, or spread them around? Birds know best | Social parasitism as an alternative reproductive tactic in a cooperatively breeding cuckoo, Nature (2019). DOI: 10.1038/s41586-019-0981-1 , www.nature.com/articles/s41586-019-0981-1 Journal information: Nature | http://dx.doi.org/10.1038/s41586-019-0981-1 | https://phys.org/news/2019-02-eggs-basket-birds.html | Abstract Cooperatively nesting birds are vulnerable to social parasites that lay their eggs in host nests but provide no parental care 1 , 2 , 3 , 4 . Most previous research has focused on the co-evolutionary arms race between host defences and the parasites that attempt to circumvent them 5 , 6 , 7 , 8 , 9 , but it remains unclear why females sometimes cooperate and sometimes parasitize, and how parasitic tactics arise in cooperative systems 10 , 11 , 12 . Here we show that cooperative and parasitic reproductive strategies result in approximately equal fitness pay-offs in the greater ani ( Crotophaga major ), a long-lived tropical cuckoo, using an 11-year dataset and comprehensive genetic data that enable comparisons of the life-histories of individual females. We found that most females in the population nested cooperatively at the beginning of the breeding season; however, of those birds that had their first nests destroyed, a minority subsequently acted as reproductive parasites. The tendency to parasitize was highly repeatable, which indicates individual specialization. Across years, the fitness pay-offs of the two strategies were approximately equal: females who never parasitized (a ‘pure cooperative’ strategy) laid larger clutches and fledged more young from their own nests than did birds that both nested and parasitized (a ‘mixed’ strategy). Our results suggest that the success of parasites is constrained by reproductive trade-offs as well as by host defences, and illustrate how cooperative and parasitic tactics can coexist stably in the same population. 
Main Conspecific brood parasitism—in which a female lays her egg in the nest of another female of the same species, but provides no parental care—is a widespread reproductive tactic that has been documented in over 250 species of birds 13 and many insects 14 . This form of social parasitism has been the subject of much research over the past thirty years, which has primarily focused on quantifying the costs suffered by hosts 15 , 16 and the benefits gained by parasites 2 , 17 , 18 , 19 . In some species, the co-evolutionary arms race between brood parasites and hosts has resulted in an array of specialized adaptations, including distinctive egg and juvenile phenotypes 6 , 7 and the cognitive abilities to detect parasitic offspring by offspring number, phenotype or timing of laying 3 , 8 , 20 , 21 . However, a major unresolved question is why some females care for their own offspring whereas other females in the same population act as parasites 5 , 12 . Three hypotheses have previously been proposed to explain the coexistence of independent nesting and conspecific brood parasitism 22 . First, parasitism may be a ‘best-of-a-bad-job’ tactic: females lay parasitically when they are unable to reproduce independently (owing to poor environmental conditions or individual quality), or after their own nest is destroyed during laying. This hypothesis predicts that parasitism should be a conditional tactic, which yields lower fitness benefits than independent nesting but higher benefits than failing to reproduce entirely. Second, parasitism may be a discrete behavioural polymorphism, such that some females act as specialized parasites throughout their lives. This hypothesis predicts that the frequencies of hosts and parasites in the population should be maintained in a mixed evolutionarily stable state by negative-frequency-dependent selection, and that hosts and parasites should have equal fitness at equilibrium. 
Third, parasitism may be an opportunistic tactic to increase total fecundity, in which females nest independently but lay additional eggs in host nests if possible. This hypothesis predicts that the fitness benefits of adopting both tactics are greater than either parasitism or independent nesting, but that the opportunities for parasitism are limited (either by female body condition or by host availability 14 ). There is currently no empirical evidence to support the hypothesis that parasitism is a discrete behavioural polymorphism, whereas there is limited support for the best-of-a-bad-job and opportunistic hypotheses 12 , 14 , 15 , 16 , 17 , 18 , 19 , 23 , 24 . Even with the recent development of molecular techniques to genotype eggs, it is difficult to identify parasitic eggs in host clutches, trace these eggs to the parasites that laid them and track the reproductive behaviours of these females across years. As a result, there is a critical lack of empirical data on the long-term fitness costs and benefits of alternative tactics, on the individual life-histories of parasitic females and on whether these females consistently adopt the same tactics over time. Here we address these questions in a long-term study of a tropical cuckoo, the greater ani ( C. major ), in Panama. The cooperative breeding system of anis provides a rare opportunity to examine the selective pressures that maintain alternative reproductive tactics: nesting groups typically consist of two or three genetically unrelated pairs that together build a single nest, lay approximately equal numbers of eggs and share parental care of the mixed clutch 25 . Previous studies have found that cooperative nesting reduces the probability of nest predation: reproductive success increases with group size and lone nesting is extremely rare 25 , 26 . Conspecific parasitism—in which extra-group females lay their eggs into host nests but provide no parental care—affects between 10% and 30% of nests each year 3 . 
Female anis therefore have three options for reproduction: (1) nesting cooperatively to share parental care; (2) acting as a parasite and laying eggs in another nest; or (3) nesting both cooperatively and parasitically. Here we show that although ‘pure’ parasites do not occur in this population, some females consistently lay parasitically if their first clutch is destroyed by predators (a mixed strategy) whereas others never lay parasitically (a pure cooperative strategy). Females that practiced the latter strategy laid larger clutches and fledged more young from their own nests than did females that practiced the former strategy, such that the two strategies yielded approximately equal fitness returns across years. We used 12 polymorphic microsatellite loci 27 to genotype 1,776 eggs laid by 210 females in 240 clutches from 2007–2017. We obtained an average of 3.5 consecutive years of reproductive data from each female (range = 1–11 years, s.d. = 2.8) and determined the total number of eggs laid, incubated, hatched and fledged by each female for each year (745 female-years). Cooperative nesting was the most common reproductive tactic; 189 (78.8%) groups consisted of 2 pairs, 45 (18.8%) groups consisted of 3 pairs, 4 groups (1.6%) consisted of 4 pairs, and there were only 2 (0.8%) instances of lone pairs. However, brood parasitism was also widespread, as 61 host clutches (25.4%) were parasitized by a total of 33 females. Overall, 65 of 1,776 eggs (3.7%) laid were parasitic. By matching the genotypes of parasitic eggs to those of nesting females in the population, we found that brood parasitism was a tactic that was primarily used by cooperatively nesting females that had their own clutches depredated early during the nesting cycle (hereafter referred to as ‘failed nesters’). Of the 65 parasitic eggs, 55 were laid by failed nesters. 
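Matching parasitic eggs to the females that laid them, as described above, amounts to comparing multilocus microsatellite genotypes. A minimal sketch of exclusion-style matching follows; the locus names, allele sizes and female IDs are invented for illustration (the study used 12 loci, and real pipelines must tolerate missing data and genotyping error):

```python
# Sketch of matching an egg's maternal genotype to candidate females
# across microsatellite loci. All genotypes here are illustrative.

def genotypes_match(egg, female, max_mismatch=0):
    """Compare unordered allele pairs locus by locus; None = failed locus."""
    mismatches = 0
    for locus, egg_alleles in egg.items():
        fem_alleles = female.get(locus)
        if egg_alleles is None or fem_alleles is None:
            continue                      # skip loci that failed to amplify
        if sorted(egg_alleles) != sorted(fem_alleles):
            mismatches += 1
    return mismatches <= max_mismatch

def assign_mother(egg, females, max_mismatch=0):
    """Return IDs of candidate females whose genotype matches the egg."""
    return [fid for fid, g in females.items()
            if genotypes_match(egg, g, max_mismatch)]

egg = {"L1": (122, 126), "L2": (88, 88), "L3": None}   # locus L3 failed
females = {
    "F07": {"L1": (126, 122), "L2": (88, 88), "L3": (201, 205)},
    "F12": {"L1": (122, 130), "L2": (88, 92), "L3": (201, 201)},
}
matches = assign_mother(egg, females)   # only F07 matches at every locus
```

Allowing one or two mismatches via max_mismatch is a common guard against genotyping error, at the cost of weaker exclusion power.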
Eight parasitic eggs were laid by females that did not have their own nests in the same year that they parasitized (but did have their own nests in other years), and two were laid by cooperative nesters that had already successfully raised offspring. A mixed-effects logistic regression model that controlled for year and female identity revealed that the stage of nest failure and the date of nest failure were the only significant predictors of the likelihood of a female acting as a parasite. Females that had nests that failed early in the year, and early during the nesting cycle, were the most likely to lay parasitically (Table 1, response variable 1). The inclusion of female age and other variables did not improve model fit, and these variables were therefore removed from the best-fit model (Extended Data Table 1). No females adopted a purely parasitic strategy, but a minority of females in the population (33 of 210) adopted a mixed strategy of cooperative nesting combined with parasitism. Across years, females showed a high level of individual consistency in whether they adopted a pure non-parasitic strategy or a mixed strategy. Within-female repeatability was significant when estimated for the entire population (intra-class correlation coefficient = 0.39; R² = 0.53, F140,460 = 3.74, P < 0.0001) and—importantly—for the subset of failed nesters, which indicates that repeatability in the tendency to parasitize was not due solely to repeatability in the risk of nest depredation (intra-class correlation coefficient = 0.39; R² = 0.59, F132,264 = 2.91, P < 0.0001).

Table 1 Best-subset mixed-effects logistic regression models

Parasites did not preferentially target or avoid host groups that contained relatives: the average genetic relatedness between hosts and parasites did not differ from population-wide dyadic relatedness between adult breeders (mean host–parasite r ± s.e. = −0.000269 ± 0.002, P > 0.05; Extended Data Figs. 1, 2).
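The within-female repeatability reported above is an intra-class correlation from a one-way ANOVA over females. A minimal sketch of that calculation follows; the 0/1 "parasitized this year" records are invented for illustration, not data from the study:

```python
# Sketch of the one-way ANOVA intra-class correlation (repeatability).
# Each inner list holds one female's yearly 0/1 parasitism records
# (illustrative values only).

def icc_oneway(groups):
    """ICC(1): (MSbetween - MSwithin) / (MSbetween + (n0 - 1) * MSwithin)."""
    a = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (a - 1)
    ms_within = ss_within / (n_total - a)
    # Effective group size n0 corrects for unbalanced group sizes
    n0 = (n_total - sum(len(g) ** 2 for g in groups) / n_total) / (a - 1)
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

# Perfectly consistent females (always vs never parasitizing) give ICC = 1:
icc_perfect = icc_oneway([[1, 1, 1], [0, 0, 0], [0, 0, 0]])
```

Values near 1 indicate that variation lies mostly between females (a consistent individual tendency), whereas values near 0 indicate that a female's behaviour varies as much across her own years as across the population.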
Host-group size, territory quality or reproductive success did not influence the likelihood of being parasitized (Extended Data Table 2). However, nests that were closer to the failed nest of the parasite were more likely to receive parasitic eggs than were more distant nests (Extended Data Figs. 3, 4). The linear distance between potential host nests and the failed nests of parasites was the only term that was retained in the best-fit model for predicting the likelihood that a nest would be parasitized (Table 1, response variable 2; Extended Data Table 2). Parasitic eggs were typically laid asynchronously with respect to the host clutch, which is consistent with findings from a previous study 3. Whereas the majority of host eggs were laid before the onset of incubation, parasitic eggs were more likely to be laid after the host clutch was completed and incubation had already begun (Fig. 1a; χ² = 122.06, P < 0.0001; n = 1,686 host and 65 parasitic eggs). Parasitic eggs were significantly smaller than eggs laid in a female's own nest, both in comparisons across different females (pure cooperators versus females practicing a mixed strategy; t1,305 = 5.47, P < 0.0001) and between eggs laid by the same female using different strategies (Fig. 1b; eggs laid parasitically versus eggs laid in the female's own nest; z = −5.93, P < 0.0001; n = 1,264 host and 43 parasitic eggs). Overall, parasitic eggs were significantly less likely to survive to fledging than were host eggs (Fig. 1c; z = −2.69, P = 0.007; n = 844 host and 45 parasitic eggs, excluding entirely depredated clutches). This difference was due both to the asynchronous timing of laying (z = 3.85, P < 0.001) and to the small size of parasitic eggs (z = 2.2, P = 0.03).

Fig. 1: Attributes of eggs laid by hosts and parasites. a, Laying synchrony. Per cent (±95% confidence intervals) of host eggs (n = 1,686) and parasitic eggs (n = 65) laid before the onset of incubation at the host nest.
Two-tailed Pearson χ² test, P < 0.0001. b, Mean egg mass ± s.e. of host eggs (n = 1,264) and parasitic eggs (n = 43). Mixed-effects generalized linear model controlling for female identity (n = 193 females), two-tailed Wald test, P < 0.0001. c, Survival probability. Per cent (±95% confidence intervals) of host (n = 844) and parasitic (n = 45) eggs that survived to fledging. Mixed-effects logistic regression controlling for female identity (n = 155 females), two-tailed Wald test, P = 0.007.

To determine the mean annual fitness pay-offs of the two strategies, we combined data on fecundity with data on survival rates of host and parasitic eggs, while controlling for female identity and group size. On average, females that never parasitized laid more eggs in their own nests than did females that practiced a mixed strategy (blue bars, Fig. 2a; z = −2.59, P = 0.01). This difference remained significant when the analysis was restricted to females with nests that failed after clutch completion, which suggests that the difference in clutch size was not due to predation (failed nesters that never parasitized, mean clutch size ± s.e. = 4.49 ± 0.10 eggs; failed nesters that subsequently parasitized, mean clutch size ± s.e. = 3.76 ± 0.19 eggs; z = −3.0, P = 0.003). When parasitic eggs were included, females that practiced a mixed strategy laid more eggs overall than did females that never parasitized, but this difference was not significant (stacked bars, Fig. 2a; z = 1.91, P = 0.06). However, owing to the relatively low survival of parasitic eggs, this did not translate into higher reproductive output: there was no difference in the total number of offspring that hatched (Fig. 2b; z = 1.32, P = 0.68) or fledged (Fig. 2c; z = −1.58, P = 0.11).

Fig. 2: Mean annual reproductive output of females practicing pure cooperative versus mixed strategies.
Blue bars indicate eggs or offspring laid in the female's own nest (non-parasitically); pink bars indicate eggs or offspring laid parasitically. Error bars indicate s.e.m. a, Total number of eggs laid per year by females that always cooperated (n = 504 females), mixed-strategy females in their own (cooperative) nests (n = 50 females) and mixed-strategy females laying parasitically (n = 54 females). Mixed-effects generalized linear model controlling for female identity and group size, two-tailed Wald tests. P = 0.01 for comparison of eggs laid in own nests by pure cooperative- versus mixed-strategy females (blue bars); P = 0.06 for comparison of total number of eggs laid by pure cooperative- versus mixed-strategy females (stacked bars). b, Total number of eggs hatched per year by females that always cooperated (n = 504 females), mixed-strategy females in their own (cooperative) nests (n = 45 females) and mixed-strategy females laying parasitically (n = 54 females). Mixed-effects generalized linear model controlling for female identity and group size, two-tailed Wald test, P = 0.68 for comparison of total number of offspring hatched. c, Total number of offspring fledged per year by females that always cooperated (n = 504 females), mixed-strategy females in their own (cooperative) nests (n = 48 females) and mixed-strategy females laying parasitically (n = 54 females). Mixed-effects generalized linear model controlling for female identity and group size, two-tailed Wald test, P = 0.11 for comparison of total number of offspring fledged.

These findings suggest that a mixed strategy of cooperative nesting and social parasitism is an evolutionarily stable alternative to a pure cooperative strategy, in that the two strategies yield approximately equal fitness returns over time and some females consistently use one strategy over another 11.
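The fitness comparison above amounts to weighing eggs laid under each tactic by that tactic's survival-to-fledging probability. A minimal sketch follows; all clutch sizes and survival probabilities are hypothetical illustrations, not the study's estimated values:

```python
# Sketch of the expected annual payoff comparison between a pure
# cooperative strategy and a mixed (nest + parasitize) strategy.
# All numbers below are hypothetical, chosen only to illustrate how
# lower per-egg survival of parasitism can offset extra eggs laid.

def expected_fledged(tactics):
    """Sum of (eggs laid x survival probability) across tactics used."""
    return sum(eggs * p_survive for eggs, p_survive in tactics)

p_own, p_parasitic = 0.30, 0.20        # parasitic eggs survive less well

pure_cooperative = [(5.0, p_own)]                    # larger own clutch
mixed_strategy = [(4.0, p_own), (1.5, p_parasitic)]  # smaller clutch + parasitic eggs

payoff_pure = expected_fledged(pure_cooperative)   # 5.0 * 0.30
payoff_mixed = expected_fledged(mixed_strategy)    # 4.0 * 0.30 + 1.5 * 0.20
```

With these illustrative numbers the two strategies tie, mirroring the paper's finding that the extra eggs laid by mixed-strategy females are offset by their smaller clutches and the lower survival of parasitic eggs.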
These behaviours may represent discrete phenotypes or points on a continuum of a single, flexible life-history strategy in which females have different behavioural reaction norms within the population. It is not yet known whether the tendency to parasitize (that is, the threshold of this reaction norm) has a genetic component or is primarily influenced by female body condition or quality. Our results suggest that the reproductive output of parasites may be, in part, limited by energetic constraints, as parasitic females laid fewer eggs in their own nests, small eggs when parasitizing and a relatively small number of parasitic eggs per attempt (58 attempts, mean ± s.d. = 1.1 ± 0.3 eggs). However, females that parasitized laid slightly more eggs overall than did females that always cooperated (Fig. 2a ); it is therefore also possible that the small size of parasitic eggs reflects an adaptive reduction in investment rather than limited resources. Female quality could also influence the ability to provide parental care, and potentially favour brood parasitism as a conditional tactic for low-quality—but not high-quality—females 28 . Finally, the high intra-individual repeatability of parasitic behaviours is also consistent with a heritable genetic polymorphism. We are not aware of any evidence that brood parasitism is heritable in wild birds, but previous studies have found parasitism to be individually repeatable in both wild 29 and captive 30 populations, which suggests that the tendency to parasitize may be less flexible than is often assumed. The average genetic relatedness between parasitic female anis and their hosts did not differ significantly from zero, which is a pattern that is shared by many birds 5 . However, in other species, females may preferentially parasitize or avoid related hosts, depending on the magnitude of the costs imposed on hosts and the potential kin-selected benefits of accepting parasitic offspring from a relative 2 , 31 . 
These non-random associations between kin could be the result of recognition or of spatial kin structuring within populations, such that potential parasites are more likely to be related to nearby hosts than would be expected by chance 5. Neither mechanism is likely in this population, as adults are apparently unable to recognize kin—possibly because nestlings are typically reared alongside unrelated nest-mates and lack opportunities to learn phenotypic cues of kinship 32—and the study population lacks detectable kin structuring 25. Therefore, the ability to select hosts on the basis of kinship is probably highly constrained. Finally, these results shed light on the evolutionary stability of the cooperative breeding system of the greater ani, which is unusual among social birds in that members of breeding groups are unrelated, lay in a shared clutch and share reproduction 33. Previous research has found that the predation rate at solitary nests is so high that cooperative nesting is essentially obligate in the study population 22. This study extends these findings by showing that nest predation is the key selective pressure that favours social parasitism as well as cooperation. Our data suggest that although the success rate of parasitism is lower than that of cooperative nesting, it is still higher than that of solitary nesting, such that parasitism is primarily used as a conditional tactic following the failure of cooperative nests.

Methods

No statistical methods were used to predetermine sample size. Data collection was not randomized. Field workers were blinded to the status of eggs collected in the field (parasitic or nonparasitic) because genotyping was performed post hoc.

Field sampling and study population

Greater anis range from central Panama to northern South America and nest in low vegetation along forested waterways, typically in tree branches overhanging the water or in emergent shrubs in shallow water 26.
We studied a genotyped population of greater anis in the Barro Colorado Nature Monument, Panama (9° 9′ 16.3512′′ N, 79° 50′ 43.944′′ W), on the shoreline of Barro Colorado Island and the mainland peninsulas protected by the Barro Colorado Nature Monument (Gigante, Frijoles, Peña Blanca, Bohio and Buena Vista). The vegetation at this study site is described in ref. 26 and monitoring protocols for nests are available in refs. 25 , 34 , 35 , 36 . In brief, we used small motorboats to search for nests at the beginning of the breeding season, monitoring 40–60 nesting attempts per year (representing >90% of nesting attempts within the study area 37 ). Nesting groups are typically composed of two or three socially monogamous pairs (rarely four or more pairs, or lone pairs). Between 15 and 20% of groups in the population also include at least one non-reproductive helper, which may be related to one of the breeding pairs (that is, non-dispersed offspring from a previous brood) 25 or an unrelated immigrant (C.R. and M.J.S., unpublished data). Ani groups may therefore contain co-breeding females—sometimes referred to as ‘communal’ nesting—as well as non-breeding helpers, which is sometimes referred to as ‘cooperative’ nesting in the literature. In this paper, we use cooperative as an umbrella term to include co-breeding by multiple females as well as helping by non-breeders. Because we did not record any instances in which helpers reproduced (either as parasites or as cooperative nesters), they were excluded from the analyses in this paper, and group sizes were coded simply as the number of breeding pairs in the group. Breeding groups or pairs typically use the same territory for several consecutive years 35 and nests are locally clustered into ‘neighborhoods’ 34 , which enables repeated sampling from the same individuals across years. 
Nests were checked daily before laying, daily during the laying period, every 2–3 days during the incubation period and daily during the first 6 days after hatching. To determine the reproductive output and success of individual females, each egg was numbered with a non-toxic black marker according to its position in the laying order of the shared clutch, and a non-invasive genetic sample was taken within 24 h of laying to determine egg maternity (see ‘Genetic analyses’). The fate of each egg was recorded as depredated, hatched or ejected by other group members. Each egg was weighed (at an accuracy of 1 g) with Pesola spring scales, and the length and width were measured at an accuracy of 0.1 mm with dial calipers. Nestlings were genetically sampled by brachial venipuncture at 3 days of age (see ‘Genetic analyses’), and banded with a unique combination of coloured leg bands at 4–5 days of age. Genetic analyses We used four sources of genetic material to genotype eggs and cross-validate maternal identity: (1) non-destructive swabs of maternal DNA taken from the shell of freshly laid eggs; (2) destructive sampling of egg membranes taken from un-incubated eggs that were ejected from the shared nest by female group members; (3) blood samples taken from nestlings; and (4) blood samples taken from adults. First, maternal genomic DNA was non-destructively sampled from freshly laid eggs using previously described protocols 38 , modified and validated for this study population 3 . In brief, a Q-tip was dipped in lysis buffer and swiped over the surface of the shell of the egg within 24 h of laying, focusing on areas with visible blood stains. The head of the Q-tip was stored in lysis buffer and maternal DNA was subsequently extracted using Omega E.Z.N.A. Forensic DNA spin columns (catalogue no. D3591) and eluted in 200 μl elution buffer to maximize total yield. 
Initial elutions were low concentration (<6 ng/μl) and were then cleaned and concentrated to smaller volumes (final target concentration > 30 ng/μl). Second, all ejected eggs were collected and membranes were destructively sampled using previously described techniques 39 . DNA was extracted from membranes using Omega E.Z.N.A. Forensic DNA spin columns and concentrated as described above. Finally, from 2007 to 2011, 25–50 breeding adults per year were captured in mist nets, individually colour-banded and genetically sampled by brachial venipuncture ( n = 225). From 2007 to 2017, blood samples were taken from nestlings between 3 and 6 days of age ( n = 824 nestlings recorded, n = 405 nestlings sampled in this dataset). Genomic DNA from whole blood samples of adults and nestlings was extracted using either Qiagen DNEasy Blood and Tissue Kits (samples from 2006 to 2010) or Omega Bio-Tek E.Z.N.A. Tissue Kits (samples from 2011 to 2017), following the manufacturer’s protocols. All genetic samples were genotyped with 12 polymorphic microsatellite markers developed for this species 27 . The mean number of alleles per locus was 6.2 (range = 4–10), with a mean heterozygosity of 0.58 (range 0.22–0.80) 27 . Combined nonexclusion probability for maternity was 0.048 37 . Genetic egg maternity was assigned to females in shared clutches by using the ‘identity check’ function in CERVUS 3.0, a maximum-likelihood-based program for parentage assignment that can also identify repeat samples from the same individual. For repeat PCRs from the same extraction, the measured typing error rate for samples from blood and eggshell membranes was 0.7%, whereas the measured typing error rate for samples from eggshell swabs was 2.5% (presumably owing to the poorer quality of DNA from shell swabs). For genotyping of different eggs from the same individual, the estimated error rate was higher (sample sizes of cross-validation are given below). 
We cross-validated maternity assignments by comparing genotypes from different sources of genetic material within the same nest (egg membranes, nestling blood, adult blood and eggshell swabs). For example, when we obtained both shell swabs and membranes from the same egg, we compared the two genotypes to ensure that they matched at all loci (see below). When we obtained maternal DNA and nestling DNA from the same egg, we used CERVUS 3.0 to verify that the maternal genotype was consistent with parentage of the nestling genotype. We recorded the presence of 2,065 eggs during the study. We were able to genotype 1,709 of these directly from genetic material from the egg, and we combined this information with data on the timing of laying within each clutch to infer the maternity of another 67 eggs. Inferences based on the timing of laying are possible in some greater ani clutches because each female lays one egg every two days, and the timing of egg laying for a particular female appears to be inflexible 26 . For example, in a group containing two females, eggs may be laid on the same day (two eggs appear in the nest every other day) or on alternating days (one egg appears in the nest each day). In a group containing three females, eggs may be laid on the same day (three eggs appear in the nest every other day), or two females may lay on the same day with the third female laying on alternate days (two eggs appear in the nest every other day and one egg appears in the nest on the days in between). Therefore, if some but not all eggs in a clutch were successfully genotyped, it was sometimes possible to infer maternity of the remaining eggs if they were laid on alternating days. Including both the 1,709 genotyped eggs and the 67 for which maternity was inferred on the basis of the timing of laying, we assigned maternity to 1,776 of 2,065 eggs (86%). 
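The parity-based inference described above can be sketched in a few lines. The helper below is hypothetical (the study's actual inference was made by hand from laying records), assuming egg records arrive as (laying day, genotyped female or None) pairs; an ungenotyped egg is assigned only when exactly one known female laid on days of the same parity.

```python
def infer_maternity(eggs):
    """Assign ungenotyped eggs by laying-day parity. Each female lays one
    egg every two days, so all of her eggs fall on days of the same parity;
    an ungenotyped egg is assigned only if exactly one known female laid
    on that parity (otherwise, e.g. when two females lay on the same days,
    maternity stays unknown)."""
    parity_females = {0: set(), 1: set()}
    for day, female in eggs:
        if female is not None:
            parity_females[day % 2].add(female)
    out = []
    for day, female in eggs:
        if female is None:
            candidates = parity_females[day % 2]
            female = next(iter(candidates)) if len(candidates) == 1 else None
        out.append((day, female))
    return out
```

As in the text, clutches where two females share a laying parity remain ambiguous and those eggs stay unassigned.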
The remaining 289 eggs (14%) were discarded from the dataset because they could not be sampled (because the nests were too high to reach or otherwise inaccessible, but the presence of eggs was documented by using a camera mounted on a pole; n = 94, 4.5%), they were ejected from the nest before sampling and could not be retrieved ( n = 18, 0.87%), they were located after incubation had already begun ( n = 109, 5.3%), a genetic sample was taken but DNA could not be extracted or amplified for unknown reasons ( n = 45, 2.2%) or conflicting genotypes were obtained from different types of genetic material from the same egg ( n = 23, 1.1%). Of the 1,776 eggs for which we assigned maternity, 1,302 assignments (73.3%) were made on the basis of one source of genetic material, 377 (21.2%) on two sources and 30 (1.7%) on three sources; 67 (3.7%) assignments were inferred from laying pattern alone. We obtained genotypes from more than one source of genetic material for 430 eggs. For these, we cross-validated the maternity assignments using the following criteria. For comparisons between different sources of maternal DNA (eggshell swabs and membranes, n = 103), we considered the maternity assignment to be cross-validated if genotypes were identical ( n = 82), mismatched at just one allele ( n = 8) or mismatched at two alleles at different loci by two or fewer base pairs ( n = 5). For comparisons between maternal and nestling genotypes ( n = 327), we considered the maternity assignment to be cross-validated if all of the nestling alleles were consistent with the maternal genotype (none of the nestling alleles excluded the mother’s genotype as one parent; n = 299) or if the nestling genotype was inconsistent with the maternal genotype at only one locus by two or fewer base pairs ( n = 13). Overall, therefore, 407 of 430 eggs with multiple sources were successfully cross-validated according to the above criteria (94.7%). 
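The cross-validation rule for pairs of maternal genotypes can be expressed compactly. The sketch below is an illustration only; representing a genotype as a dict of locus name to a pair of allele sizes in bp is an assumed layout, not the CERVUS data format.

```python
def cross_validated(g1, g2, max_bp=2):
    """Compare two microsatellite genotypes (dict: locus -> pair of
    allele sizes in bp). Accept the maternity assignment if the genotypes
    are identical, mismatch at a single allele, or mismatch at two
    alleles at *different* loci by at most `max_bp` base pairs."""
    mismatches = []  # one (locus, bp difference) entry per mismatching allele
    for locus in g1:
        for a1, a2 in zip(sorted(g1[locus]), sorted(g2[locus])):
            if a1 != a2:
                mismatches.append((locus, abs(a1 - a2)))
    if len(mismatches) <= 1:
        return True
    if len(mismatches) == 2:
        (l1, d1), (l2, d2) = mismatches
        return l1 != l2 and d1 <= max_bp and d2 <= max_bp
    return False
```

Small allele-size discrepancies are tolerated because microsatellite typing error typically shifts alleles by one or two repeat-adjacent base pairs.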
The remaining 23 eggs (5.3%) were discarded from the analysis. In most cases, these involved mismatches between eggshell swabs and other sources of material, and could not be re-run because the small quantity of DNA extracted from eggshell swabs allowed just one attempt at PCR for each locus. Sample sizes for the cross-validation of egg maternity from different sources of genetic material are given in Extended Data Table 3 . Instances of nest parasitism were detected when (1) the number of unique female genotypes in the clutch exceeded the number of breeding females in the social group, and (2) inconsistencies in the timing of laying indicated that an additional female had laid in the clutch 12 . Because each female ani lays one egg every two days, irregularities in the laying sequence were typically easy to detect (for example, the appearance of two eggs in the nest in less than two days indicated that they were laid by different females). Instances of parasitism were almost always associated with irregularities in the laying sequence—either with the appearance of more eggs in the nest than would be predicted by group size or with the appearance of eggs in the nest after host females had completed their clutches and incubation had begun. To confirm an instance of parasitism, we required that (1) the putative parasitic genotype mismatched that of any of the breeding females in the social group by more than one locus; (2) the mismatches were of more than two base pairs; and (3) the timing of laying was consistent with an extra-group female. From 2007 to 2009, we genotyped all putative parasitic eggs ( n = 30) and eggs from their host clutches a second time to check for typing error, and we used a PCR-based sexing test to confirm that the foreign genotype was from an extra-group female rather than from one of the males in the social group. 
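One reading of the genotype-based confirmation criteria (leaving aside the laying-timing check, which requires field records) can be sketched as follows; the function names and data layout are hypothetical, not the authors' code.

```python
def locus_mismatches(g1, g2):
    """Loci at which two genotypes differ, with the largest bp difference
    per locus (genotypes: dict of locus -> pair of allele sizes in bp)."""
    out = []
    for locus in g1:
        a, b = sorted(g1[locus]), sorted(g2[locus])
        if a != b:
            out.append((locus, max(abs(x - y) for x, y in zip(a, b))))
    return out

def is_extra_group(egg_genotype, group_genotypes):
    """Flag an egg as laid by an extra-group female only if it mismatches
    every breeding female in the group at more than one locus, with at
    least one mismatch of more than two base pairs (small mismatches are
    treated as possible typing error)."""
    for g in group_genotypes:
        mm = locus_mismatches(egg_genotype, g)
        if len(mm) <= 1 or all(d <= 2 for _, d in mm):
            return False  # consistent with this group female
    return True
```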
Because we detected no instances of contamination by male DNA, and because all instances of typing error involved one- or two-base-pair mismatches, we did not perform these additional checks on samples from 2010 to 2017, but instead used the criteria outlined above. Finally, the genotype of the parasitic egg was cross-checked against all of the female genotypes from the study population to assign maternity of parasitic eggs to known females. Genetic relatedness between host and parasitic females was estimated from microsatellite genotypes using the program COANCESTRY 40 . This program estimates several different coefficients of relatedness between dyads of individuals. Here we present analyses using the previously published estimator of Queller and Goodnight 41 ; we ran the same analyses using previously published estimators of Ritland 42 and Wang 43 , and obtained qualitatively identical results (data not shown). We calculated pairwise relatedness between all known host–parasite dyads, and used bootstrapping to calculate 95% confidence intervals. Bootstrapping (1,000 bootstraps) was also used to estimate the distribution of the mean difference in relatedness between host–parasite dyads and randomly chosen dyads in the population, and to determine whether the observed difference was statistically significant at α = 0.05. Spatial mapping Spatial coordinates were recorded for each nest each year with handheld Garmin GPS units. We used ArcMap 10.3.1 to calculate pairwise distances between all possible dyads of nests in the study area each year using previously described techniques 34 . Anis occasionally build a second nest and attempt to re-nest if the first clutch fails; however, the second nest is usually in the same bush or shrub as the first nest (often <1 m from the first nest; C.R. and M.J.S., unpublished data). Therefore, when re-nesting attempts were included in the analyses, the same location was used for both nests. 
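The percentile bootstrap used for the relatedness confidence intervals is standard; a generic sketch of the resampling scheme (not the COANCESTRY implementation) looks like this:

```python
import random

def bootstrap_ci(values, n_boot=1000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean of `values`,
    e.g. pairwise relatedness estimates for host-parasite dyads."""
    rng = random.Random(seed)
    n = len(values)
    # Resample with replacement n_boot times and collect the means
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

The same machinery, applied to the difference in means between host-parasite dyads and random dyads, yields the significance test described in the text.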
Pairwise distances between each potential host nest in the population and each parasite’s nest were calculated for each year (for example, if females from seven different nests acted as parasites in a given year, then seven pairwise distances were calculated for each potential host nest in the population that year). The distance between the potential host nest and the nearest parasite’s nest was then used as a predictor in the model that predicts the probability of a host nest’s being parasitized (see below). This predictor was used in preference to other metrics (such as the number of parasite’s nests within a given radius of the focal host nest) because parasite’s nests were widely distributed throughout the study area and rare compared to potential host nests, such that the density of parasite nests did not vary sufficiently across the study area to be a useful predictor. Statistical analyses We constructed mixed-effects logistic-regression models to identify factors that predict (1) the likelihood that a female would act as a parasite and (2) the likelihood that a potential host nest would be parasitized. Both response variables were binary. The goal of these models was to ask whether the past history of a female influenced the likelihood that she would act as a parasite, and whether parasites preferentially targeted host nests with particular characteristics. Year and either female identity (for the first set of models) or group identity (for the second set of models) were included as random effects to account for repeated sampling of females and groups across years. For both analyses, best-fit models were selected using a best-subsets approach, in which initial models included all terms and were compared with all possible models using subsets of the terms. Models were evaluated with Akaike’s information criterion corrected for finite sample size (AIC c (ref. 44 )). 
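Pairwise nest distances were computed in ArcMap; as a stand-in, great-circle distances from the recorded GPS coordinates yield the same nearest-parasite predictor. A minimal sketch with hypothetical helpers:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius of 6371 km

def nearest_parasite_distance(host_nest, parasite_nests):
    """Distance from a host nest to the nearest parasite's nest - the
    predictor used in the host-side model."""
    return min(haversine_m(host_nest, p) for p in parasite_nests)
```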
Models within two AIC c units of the top model (ΔAIC c = 0) were candidates of potential explanatory value; however, models within two AIC c units of the top model that differed from a higher-ranking model by the addition of one parameter were rejected as uninformative 45 . Full model results and overall significance tests for models are presented in Extended Data Tables 1 , 2 ; inferences from models were made only when the overall model was significant. Analyses were conducted in STATA 14. For models that predict the likelihood of a female acting as a parasite, initial models included seven candidate predictors (group size, female age, reproductive status, termination stage, end date, number of eggs laid and mean egg mass). Termination stage was defined as the stage in the nesting cycle at which the female’s initial (non-parasitic) reproductive attempt ended (1 = laying, 2 = incubation, 3 = nestling and 4 = fledgling). For example, a female with an initial (non-parasitic) nest that was depredated during the incubation period was coded as a 2, whereas a female that successfully fledged offspring was coded as a 4. End date was the ordinal date recorded when the reproductive attempt terminated (when the female’s clutch was depredated or young fledged). Number of eggs laid referred to the number of eggs laid by the female in her non-parasitic reproductive attempt (not including parasitic eggs that were laid subsequently), and female age was given in years. Reproductive status was coded as a binary variable (0 = did not reproduce successfully, 1 = successfully fledged offspring) and was therefore partially redundant with termination stage; both were included in initial models to determine which predictor better fit the data. The top candidate model included termination stage; including reproductive status as well did not improve model fit, so this latter variable was removed from the best-fit model. 
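The AICc calculation and the "uninformative parameter" rejection rule can be made concrete. The sketch below assumes each fitted model is summarized by its log-likelihood, parameter count and set of terms; it illustrates the selection logic, not the STATA workflow actually used.

```python
def aicc(log_lik, k, n):
    """Akaike's information criterion corrected for finite sample size."""
    return -2 * log_lik + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

def candidate_models(models, n):
    """Rank models by AICc and keep candidates within 2 units of the best,
    dropping any model that differs from a better-ranked one only by the
    addition of a single parameter (the 'uninformative parameter' rule).

    `models` maps name -> (log_likelihood, k, frozenset of terms)."""
    scored = sorted(
        (aicc(ll, k, n), name, k, terms) for name, (ll, k, terms) in models.items()
    )
    best = scored[0][0]
    kept = []
    for score, name, k, terms in scored:
        if score - best > 2:
            continue  # outside the candidate set
        uninformative = any(
            k == k2 + 1 and terms > terms2 and score >= score2
            for score2, _, k2, terms2 in kept
        )
        if not uninformative:
            kept.append((score, name, k, terms))
    return [name for _, name, _, _ in kept]
```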
This dataset included 210 females across 11 years ( n = 609 female-years). Group identity was also not a significant predictor and was removed from the best-fit model, which indicates that—controlling for other variables—the likelihood of a female acting as a parasite was not influenced by the reproductive tactics of her co-breeding females. For models that predict the probability of a nest being parasitized, initial models included five candidate predictors (group size, initiation date, nest-site type, nest success and distance to nearest parasite). Nest-site type was coded as a binary variable (0 = nest in shoreline vegetation, 1 = nest in emergent vegetation) and was included as a proxy for nest-site quality, as previous studies have shown that nests in emergent vegetation are less vulnerable to terrestrial predators and are more likely to fledge offspring than those located in shoreline vegetation 25 . Nest success was also a binary variable (0 = did not fledge offspring, 1 = fledged offspring). Distance to nearest parasite was calculated using ArcMap 10.3.1 (see above). This dataset included 235 clutches across 11 years, of which 61 were parasitized. We used generalized linear mixed models with a binomial error structure to estimate the within-female, between-year repeatability for parasitic nesting behaviour 46 . Repeatability was first calculated for all females in the population for which we had at least two years of data ( n = 601 female-years), and then for the subset of females that experienced failure of first nesting attempts in at least two years ( n = 398 female-years), because results from the earlier analyses indicated that females rarely act as parasites if their first nesting attempt is successful. The repeatability of the tendency to parasitize among failed nesters indicated that some females never parasitized following depredation of their own nests, whereas others consistently did parasitize. 
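For a binomial GLMM with a logit link, repeatability on the latent scale is commonly approximated as the individual-level variance over the total latent variance, with the link-scale residual variance fixed at π²/3. A sketch of this standard approximation (not necessarily the exact estimator of ref. 46):

```python
from math import pi

def latent_repeatability(var_individual, var_other=0.0):
    """Between-year repeatability on the latent (logit) scale: the
    individual-level variance over total latent variance, taking the
    logit-link residual variance as pi**2 / 3."""
    return var_individual / (var_individual + var_other + pi ** 2 / 3)
```

Under this formulation, high repeatability of the tendency to parasitize corresponds to a large individual-level variance component relative to the fixed logit residual variance.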
We compared several different characteristics of eggs laid in females’ own nests (host eggs) versus eggs laid parasitically (parasitic eggs). First, we used a χ 2 test to evaluate differences in laying synchrony. For this analysis, laying synchrony was coded as a binary variable (0 = laid before the onset of incubation, 1 = laid after the onset of incubation) and we used the entire dataset with information on laying and incubation dates ( n = 1,686 host and 65 parasitic eggs). Second, we used two analyses to determine whether host and parasitic eggs differed in mass, using a subset of the data for which egg mass was recorded ( n = 1,264 host and 43 parasitic eggs). We used a two-tailed t -test to directly compare host and parasite egg masses, without controlling for female identity. We then used a mixed-effects generalized linear model with egg mass as the response variable and egg type (host versus parasitic) as the sole predictor, with female identity included as a random effect (as above, n = 1,264 host and 43 parasitic eggs, 193 females). Both analyses showed parasitic eggs to be significantly smaller than host eggs, and we therefore present both results in the main text. It is important to recognize that the size difference between parasite and host eggs could reflect either adaptive reduction in allocation to parasitic eggs (because those eggs are less likely to survive than are host eggs) or energetic constraints on parasitic females (because these females may be in poor condition relative to non-parasitic females, or because parasitic eggs may be laid at the end of the female’s clutch when her energy reserves have already been depleted). Distinguishing between these hypotheses requires further analyses that are beyond the scope of the present paper. Finally, we used logistic regressions to evaluate differences in hatching rates between host and parasitic eggs. 
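The two-tailed comparison of host and parasitic egg masses (without controlling for female identity) is a standard pooled-variance Student's t-test; a self-contained sketch of the statistic:

```python
from math import sqrt

def two_sample_t(x, y):
    """Two-sample Student's t statistic with pooled variance, as would be
    used to compare host and parasitic egg masses (illustration only)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)  # pooled variance
    return (mx - my) / sqrt(sp2 * (1 / nx + 1 / ny))
```

The mixed-effects version described in the text additionally absorbs repeated eggs from the same female into a random effect, which the plain t-test ignores.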
For these analyses, we used a restricted dataset that excluded all clutches that were entirely depredated ( n = 844 host and 45 parasitic eggs, 155 females). We excluded depredated clutches because—by definition—these eggs cannot hatch; and we were primarily interested in determining whether laying asynchrony and egg mass (two aspects that differ between parasite and host eggs) affect the hatching probability of eggs that were not depredated. To determine whether parasitic and host eggs differ in hatching probability, we used a mixed-effects logistic regression with egg fate as the binary response variable (0 = failed to hatch, 1 = hatched), egg type as the binary predictor (0 = host, 1 = parasitic) and female identity as a random effect. To determine whether egg mass and laying asynchrony explained the observed difference in hatch success, we ran a second mixed-effects logistic regression that included egg mass as a continuous predictor and laying asynchrony as a binary predictor. To compare the overall reproductive output of females that either never parasitized (pure cooperators) or parasitized following nest failure (mixed strategy), we first compared the numbers of eggs laid by each type of female in their own nests. We ran three separate mixed-effects generalized linear models with different response variables (number of eggs laid, number of eggs hatched and number of offspring fledged) and the same set of predictor variables (reproductive strategy, 0 = pure cooperator, 1 = mixed strategy, and group size). Group size was included as a predictor because the number of eggs laid per female tends to increase with group size, to compensate for within-group competition 25 . Female identity was included as a random effect in all models. These analyses were conducted on the full dataset for which information was available ( n = 197 females in 547 nesting attempts). 
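The second hatching model combines egg mass (continuous) and laying asynchrony (binary) in one linear predictor; on the logit scale the fitted model maps to a hatching probability as below. The coefficients here are invented placeholders for illustration, not estimates from the paper.

```python
from math import exp

def hatch_probability(egg_mass, laid_after_incubation,
                      b0=-2.0, b_mass=0.3, b_async=-1.5):
    """Inverse-logit of the hatching model's linear predictor.
    `laid_after_incubation` is the binary asynchrony indicator."""
    eta = b0 + b_mass * egg_mass + b_async * laid_after_incubation
    return 1 / (1 + exp(-eta))
```

With any negative asynchrony coefficient, an egg laid after incubation has begun receives a lower predicted hatching probability at the same mass, mirroring the pattern reported for parasitic eggs.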
To ensure that differences in clutch size were not simply due to differences in nest depredation (by definition, females that parasitize following nest failure must have experienced nest failure, so depredation during the laying period could result in apparently smaller clutches for parasites), we re-ran the analysis of clutch size using a restricted dataset of females with first clutches that were depredated after clutch completion ( n = 159 females in 311 nesting attempts). Finally, we ran the same three mixed-effects generalized linear models (predicting number of eggs laid, number of eggs hatched and number of offspring fledged), but this time used a dataset that included eggs laid parasitically as well as eggs laid in the female’s own nest. One outlier data point has been excluded from Fig. 2a (a pure cooperator that laid 12 eggs in her own nest) to improve the readability of the figure, but this data point was not excluded from statistical analyses. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Source datasets for this manuscript are available in the Dryad Digital Repository . | In the tropical jungle of Central America where predators abound, a species of cuckoo has found safety in numbers by building communal nests guarded by two or three breeding pairs. Why then do these agreeable avians sometimes ditch the collaborative lifestyle and instead deposit eggs into nests outside the communal group, acting like social parasites, in the hopes that other females will raise the chicks as their own? In a paper published online in the journal Nature, Princeton researchers show that the cuckoos, known as greater anis (Crotophaga major), act collectively for the most part but can become social parasites after their communal nest is destroyed. 
They start the breeding season placing all their eggs in one basket, but if predators intervene, the birds switch to a strategy of spreading the eggs around in other nests. "When different females in a population pursue different reproductive tactics, like cooperative and parasitic nesting, this poses an evolutionary puzzle. We wondered why some females lay their eggs into other groups' nests, whereas other females never do," said Christina Riehl, assistant professor of ecology and evolutionary biology. She and co-author Meghan Strong, a research specialist in the Department of Ecology and Evolutionary Biology, conducted the study with help from undergraduates, graduate students and interns, including several from the Princeton Environmental Institute (PEI). "We found the parasitic strategy is more of a second-line option," Riehl said. "It's as if the birds say, 'If our cooperative attempt fails, let's go to Option B.'" The team concluded that the two reproductive strategies can coexist in the population because the mixed strategy of first cooperating and then parasitizing yields about the same number of offspring as the pure cooperative strategy does. If one or the other were clearly better, then all the birds would most likely adopt the most successful strategy. The greater ani is one of the few bird species in which unrelated females come together to raise their young communally. In contrast, social parasitism is quite common, found in over 250 bird species. To find out why the normally collaborative anis resort to parasitism, the scientists observed the birds and their nesting behaviors over an 11-year period at Barro Colorado Nature Monument in Panama. The birds build the nests in branches that overhang the Panama Canal, requiring the researchers to reach the nests by boat. Under constant threat from predators such as snakes and monkeys, nearly all of the anis start the breeding season in communal groups, the researchers found. 
However, social parasitism is also widespread, the researchers found, with about 25 percent of the nests being parasitized by a female who was not a member of the group. Communal nest of Greater Anis containing nestlings from two breeding pairs. Barro Colorado Island, Panama. Credit: Christina Riehl The team found that females whose nests were destroyed by predators early in the nesting cycle were most likely to lay parasitically, typically in nests close to their own failed nest. The researchers also asked whether switching mid-season from collective nesting to social parasitism might actually lead to more offspring. Although laying as many eggs as possible sounds like a good strategy, social parasitism has drawbacks. The eggs tend to be slightly smaller and thus less likely to thrive. And due to the risk of being found out, these eggs have a lower chance of survival to the stage where the chicks leave the nest. In contrast, the cooperative females lay fewer eggs but they put more effort into tending them, so more chicks survive to leave the nest. The take-away: Both of the reproductive strategies—communal breeding only versus the mixed approach of starting as communal breeders and switching to parasitism—are viable reproductive strategies. Mark Hauber is an ornithologist and behavioral ecologist of the University of Illinois at Urbana-Champaign who was not involved in the study. "The study explains why cooperative breeding is maintained evolutionarily in this species—solitary breeding or parasitism alone are simply not productive enough," Hauber said. "But either cooperative breeding with large clutches or failed cooperative breeding followed by parasitism are equivalent strategies fitness-wise. Thus parasitism continues to be present in the population and so is cooperative breeding." The team also found that the same females turned to parasitism after the failure of their nests year after year. 
"Previously in my own work on captive zebra finches I found that some individuals are more likely to become brood parasites than others," Hauber said. "This is now demonstrated and confirmed firmly in a wild population of free-living tropical birds." The ongoing study monitors roughly 40 to 60 nests per year. Overall, for the current finding, the team studied genetic data from 1,776 eggs laid by 210 females in 240 nests from 2007-17. Each year the team begins the breeding season in June by searching for nests in the vegetation along the shoreline. The researchers check the nests daily, swabbing each new egg to collect cells and blood left on the egg with the goal of using genetics to identify each egg's mother. They weigh and measure every egg, and collect blood from the chicks to confirm their parentage. "Getting into the nest is actually often the trickiest part of collecting measurements," said Luke Carabbia, Class of 2019, who conducted field work as a PEI intern. "While some groups of birds cooperate with us very well and nest right at an accessible chest height, others build their nests all the way at the top of their trees or in flimsy reeds deep in grass, and the birds often have a real knack for picking spots that are either incredibly thorny or right next to a nest of angry wasps." | 10.1038/s41586-019-0981-1 |
Medicine | The 'mutational signatures' of many genes hold the key to better cancer therapies | Jurica Levatić et al, Mutational signatures are markers of drug sensitivity of cancer cells, Nature Communications (2022). DOI: 10.1038/s41467-022-30582-3 Journal information: Nature Communications | https://dx.doi.org/10.1038/s41467-022-30582-3 | https://medicalxpress.com/news/2022-05-mutational-signatures-genes-key-cancer.html | Abstract Genomic analyses have revealed mutational footprints associated with DNA maintenance gone awry, or with mutagen exposures. Because cancer therapeutics often target DNA synthesis or repair, we asked if mutational signatures make useful markers of drug sensitivity. We detect mutational signatures in cancer cell line exomes (where matched healthy tissues are not available) by adjusting for the confounding germline mutation spectra across ancestries. We identify robust associations between various mutational signatures and drug activity across cancer cell lines; these are as numerous as associations with established genetic markers such as driver gene alterations. Signatures of prior exposures to DNA damaging agents – including chemotherapy – tend to associate with drug resistance, while signatures of deficiencies in DNA repair tend to predict sensitivity towards particular therapeutics. Replication analyses across independent drug and CRISPR genetic screening data sets reveal hundreds of robust associations, which are provided as a resource for drug repurposing guided by mutational signature markers. Introduction Cancer precision medicine draws on the presence of somatically acquired changes in the tumor, which serve as predictive markers of response to drugs and other therapies. Commonly these markers are individual genetic changes, such as driver mutations affecting oncogenes or tumor suppressor genes, or copy-number alterations thereof. 
Many commonly employed cancer drugs act by interfering with DNA synthesis or maintenance or by damaging DNA. Therefore, the altered capacity of cancer cells to repair and/or replicate DNA is the basis of many classical therapies, such as platinum-based agents, and also recently introduced or upcoming therapies, such as PARP inhibitors or ATR inhibitors (reviewed in refs. 1 , 2 , 3 ). It is paramount to identify predictive markers that are associated with failures of DNA maintenance in cancer cells. However, while DNA repair is often deficient in tumors, many DNA repair genes such as MLH1 , MGMT , BRCA1, or ATM do not commonly bear somatic mutations. Instead, they are commonly inactivated epigenetically 4 , 5 , 6 , or by alterations in trans-acting factors 7 , and so their deficiencies are difficult to predict from the gene sequence. Additionally, germline cancer-predisposing variants commonly affect DNA repair genes 8 , 9 , 10 , however, pathogenicity of such variants is often challenging to predict. Because of the above, other types of molecular markers may be more useful to infer about failed DNA repair. This is exemplified in ”BRCAness” – a gene expression signature that suggests a deficient homologous recombination (HR) pathway, even in the absence of deleterious genetic variants in the BRCA1/2 genes. In addition to gene expression, mutational signatures–readouts of genome instability–can characterize DNA repair deficiencies. One common type of signature describes relative frequencies of somatic single-nucleotide variants (SNV) across different trinucleotide contexts. Certain mutational signatures were found to be associated with failures in DNA mismatch repair (MMR) and HR pathways 11 as well as DNA polymerase proofreading 12 , 13 and base excision repair (BER) 14 , 15 , 16 and nucleotide excision repair (NER) 17 failures. Inducing DNA repair deficiencies in cancer cell lines is able to reproduce some of these signatures 18 , 19 , 20 , 21 . 
Other types of mutation signatures based on small insertions and deletions (indels) 9 and on structural variants 22 are also starting to be introduced. Because mutational signatures describe the state of the DNA repair machinery of a cancer cell, they may be able to serve as a drug sensitivity marker. This is exemplified by a mutational signature associated with pathogenic variants in BRCA1 and BRCA2 genes 11 , 23 , thus identifying HR deficient tumors. The signature is common in ovarian and breast cancers, but genomic analyses have detected it across other cancer types 24 , 25 , suggesting the potential for broad use of drugs that target HR-deficient cells, such as PARP inhibitors. To this end, genomics-based predictors that draw on mutational signatures of HR deficiency have been developed 26 , 27 . We propose that this principle may extend to other types of mutational processes, potentially revealing tumor vulnerabilities. Human cancer cell line panels provide an experimental model for the diversity in tumor biology that is amenable to scaling-up. Drug screens and genetic screens on large cell line panels 28 , 29 have identified correlations between the sensitivity to a drug (or to a genetic perturbation), and the genetic, epigenetic, or transcriptomic markers in the cell lines. Encouragingly, genetic markers known to have clinical utility (e.g. BRAF mutations for vemurafenib, EGFR mutations for gefitinib, BCR-ABL fusion for imatinib sensitivity) are also evident in cell line panel data analyses 30 , suggesting potential for discovery of further useful genomic markers. Here, we used large-scale cell line data to investigate the hypothesis that mutational signatures in cancer genomes constitute markers of drug sensitivity. Quantifying somatic mutational signatures in cell line genomes is however difficult, because a matched normal tissue from the same individual is typically not available and thus cannot be used to remove the abundant germline variation. 
After filtering the known germline variants listed in population genomic databases 31 , 32 , somatic mutations are still greatly outnumbered by the residual germline variants (Fig. 1a, b ), which may confound downstream analyses such as the inference of mutational signatures. We introduce a method to infer somatic mutational spectra from cancer genomes without a matched control sample, while adjusting for the residual germline variation. We apply this to infer trinucleotide mutation signatures in cancer cell line exomes, and identify associations with sensitivity to drugs and to genetic perturbation across cell line panels. Replication analyses across independent data sets indicated that mutational signatures are broadly applicable markers of drug sensitivity, matching or exceeding common genomic markers such as oncogenic driver mutations or copy number alterations. Fig. 1: Evaluation of the ancestry-matching method to infer somatic mutation spectra on exomes without a matched normal control. a , b Germline variants greatly outnumber somatic mutations in exomes of various tumor types ( n = 52 BRCA, 33 KIRC, 53 GBM, 19 BLCA, 15 LUSC, and 67 LUAD cancer exomes) ( a ), also after attempting to filter out germline variants according to the minor allele frequency (MAF) of variants listed in the gnomAD database ( n = 450 cancer exomes) ( b ). The center line of box plots denotes medians of data points and the box hinges correspond to the 1st and 3rd quartiles, while whiskers extend to 1.5× IQR from the hinges. Data points beyond the end of the whiskers are shown individually. c Error between the real somatic 96 tri-nucleotide profiles and the profiles obtained with the ancestry-matching procedure, after various numbers of clusters (based on principal components of common germline variants; see Methods) are considered ( n = 450 cancer exomes). 
d Comparison of the ancestry-matching method (with the number of clusters set to 13), the baseline procedure (variant filtering by population MAF<0.001%), regressing out mutational signatures reported as related to germline variants (signatures 1 and 5, and SNP signature 31 , 110 ), and the error expected by chance (estimated by bootstrapping mutations). P -values by two-sided Wilcoxon rank-sum test ( n = 450 cancer exomes). e A schematic representation of the ‘ancestry matching’ procedure. For compactness, the X-axes on the mutation spectra illustrations list only a subset of mutation types. PCA, principal components analysis. Error bars in panels b – d are the standard error of the mean. Source data are provided as a Source Data file. Full size image Results An ancestry-matching approach removes subpopulation-specific trinucleotide spectra to accurately infer mutation signatures A substantial amount of the germline variation in a cell line exome cannot be removed by filtering based on minor variant frequency in population databases (Fig. 1b ). Therefore we devised an approach to measure the somatic trinucleotide mutation spectrum – the input for the inference of mutational signatures 33 – while rigorously adjusting for the contamination by the residual germline mutation spectrum. Because mutational processes differ across human populations 34 , there is potential for this to confound analyses of the somatic mutation spectrum. Given the high number of residual germline variants post-filtering (Fig. 1b ), even slight differences in the germline spectrum can cause large deviations in the observed spectrum, which is a mix of somatic and germline variation. To address this, we implemented an ancestry-matching procedure, looking up the individuals with a similar ancestry to each cell line’s ancestry. 
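The 96-channel trinucleotide spectrum mentioned above (the input representation for signature inference) can be tallied from SNV calls as in the following sketch. The channel convention (pyrimidine-centred contexts, 6 substitution classes × 16 flanking contexts) is the standard one; the helper names and the two example variants are ours, for illustration only.

```python
# Sketch: tally a 96-channel trinucleotide mutation spectrum from SNV calls.
# Example variants are invented for illustration.
BASES = "ACGT"
PYR = ("C", "T")  # spectra are conventionally pyrimidine-centred
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

# the 96 canonical channels: 6 substitution classes x 16 flanking contexts
CHANNELS = [f"{five}[{ref}>{alt}]{three}"
            for ref in PYR for alt in BASES if alt != ref
            for five in BASES for three in BASES]

def spectrum(variants):
    """variants: iterable of (trinucleotide context, alternate base)."""
    counts = dict.fromkeys(CHANNELS, 0)
    for ctx, alt in variants:
        ref = ctx[1]
        if ref not in PYR:  # fold purine-centred calls onto the other strand
            ctx, ref, alt = revcomp(ctx), revcomp(ref), revcomp(alt)
        counts[f"{ctx[0]}[{ref}>{alt}]{ctx[2]}"] += 1
    return counts

counts = spectrum([("ACG", "T"), ("TGA", "C")])  # "TGA">C folds to T[C>G]A
```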
In particular, we clustered the cell line exomes together with germline exome samples from the TCGA data set, grouping by principal components derived from common germline variation (Fig. 1e ; Methods section). The TCGA individuals clustered with a cell line provided a baseline germline mutational spectrum, which can be subtracted from the observed mutation spectrum to estimate the somatic mutation spectrum. We benchmarked our ancestry-matching procedure for the accuracy of reconstructing the correct somatic mutation spectrum in a cancer cell line exome. To this end, we used SNV calls from TCGA cancer exomes where the matched normal was ignored, thus simulating the mutation calls that would be obtained from cell line genomes (see Methods section). We then compared the reconstructed somatic SNV mutation spectrum to the true somatic spectrum, obtained by contrasting tumor exomes with the matched healthy tissue exomes from the same individuals. Ancestry-matching improves over the commonly used strategy, that is simply filtering out known germline variants according to population genomic databases (Fig. 1 c, 1d and Supplementary Fig. 1d ); error in somatic trinucleotide frequency spectrum (Methods section) is 68.8 versus 124.1 across all tissues, while for comparison, the error expected by ‘self-similarity’ via a bootstrap-resampling of mutations from the same tumor samples would be 67.5, close to that obtained via ancestry-matching. We considered various numbers of population clusters according to the error in reconstruction of the correct somatic trinucleotide spectrum. Selecting three clusters, expectedly, recovers the major ethnicity groups (European, Asian and African, Supplementary Fig. 1a ) and further increasing the number of clusters to 13 minimizes the error in reconstructing true somatic trinucleotide mutation spectra (Fig. 
1c; error is 68.8 for 13, versus 71.5 for 3 clusters; this improvement is modest and so the 3-cluster solution may also provide a satisfactory baseline for downstream mutational spectra analyses). Encouragingly, comparing the 13 ancestry clusters sorted by self-reported ethnicity (Supplementary Fig. 1b), intra-ethnicity trinucleotide mutational profiles are more similar than the inter-ethnicity profiles (Supplementary Fig. 1c), and a PC analysis of the trinucleotide spectra of the rare variants separates the major ethnicity groups (Supplementary Fig. 1d). This is consistent with reports of differential mutagenic processes in the human germline across ancestral groups: for example, HCC>HTC (H = not G) variants were reported to be increased in Europeans, NCG>NTG mutations in Native Americans, and NAC>NCN and TAT>TTT in some East Asians 34, 35, 36. These reports, together with our benchmarking using simulations, support the use of ancestry-specific baselines in inferring somatic mutational spectra of unmatched cancer genomes, such as cell line genomes. We applied the ancestry-matching methodology (Fig. 1e) to exome sequencing data of 1071 cancer cell lines 37, yielding their somatic trinucleotide spectra. On these data, we performed de novo signature discovery using an NMF approach, broadly as described by Alexandrov et al. 33 (with certain modifications, see Methods section), where we extracted those NMF solutions that resembled previously reported tumor mutational signatures 9 of single base substitutions (SBS). We tested a number of variations on the data filtering and the mutation extraction methodology (Supplementary Data S1) to improve agreement with the known set of SBS signatures 9 and their known distribution across tissues, as well as to improve the power of the set of mutational signatures to predict drug responses in the cell lines (Methods section; Supplementary Data S1).
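The baseline-subtraction idea behind ancestry matching can be illustrated with a small numpy sketch. This is a deliberate simplification of the published procedure: here each cell line is matched to its k nearest germline donors in principal-component space rather than to a cluster, and the scaling of the baseline is left as a free parameter; all arrays are random stand-ins, not real spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
germline_pcs = rng.normal(size=(200, 10))   # donors' germline-variant PCs
germline_spectra = rng.random((200, 96))    # donors' 96-channel germline spectra
germline_spectra /= germline_spectra.sum(axis=1, keepdims=True)

def estimate_somatic(observed_counts, cell_pcs, n_residual_germline, k=20):
    """Subtract an ancestry-matched germline baseline from the observed
    (somatic + residual germline) spectrum of one unmatched cell line."""
    dist = np.linalg.norm(germline_pcs - cell_pcs, axis=1)
    baseline = germline_spectra[np.argsort(dist)[:k]].mean(axis=0)
    est = observed_counts - n_residual_germline * baseline
    return np.clip(est, 0, None)  # truncate negative channels at zero

somatic = estimate_somatic(np.full(96, 50.0), rng.normal(size=10), 1000)
```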
To further demonstrate the utility of the ancestry matching approach in combination with NMF signature extraction, we again used the set of simulated cell line exomes as above, where the true somatic mutation signatures are known because the matched-normal was available. The ancestry-matching significantly improves the cosine similarities towards true NMF signature spectra, compared with the usual approach of filtering population variants ( p = 0.021, Wilcoxon test; Supplementary Fig. 1g ) and similarly so for the signature exposures ( p = 0.017; Supplementary Fig. 1g ). We conclude that our implementation of ancestry-matching benefits NMF mutation signature extraction in unmatched cancer samples; we recognize that future variations on this methodology might bring improvements. We jointly inferred trinucleotide (or SBS) signatures together with a set of indel mutational features. Examining the SBS part of the spectrum, this yielded 30 cell line mutational signatures that very closely match (at a cosine similarity cutoff ≥0.95) the known tumor SBS signatures, and a further 22 cell line signatures that match known SBS tumor signatures (at a stringent cosine similarity ≥0.85 and <0.95; a randomization test estimated that a ≥0.85 cosine threshold corresponds to a 1.8% FDR in matching the correct SBS, Supplementary Fig. 4b ). The former group was labeled with the name of the corresponding SBS signature, while the latter similarly so plus the suffix “L” (for “like”). In some cases, our cell line signatures were similar to more than one previous tumor SBS (Supplementary Fig. 4a ) and they were named such as to make this evident, for instance our SBS26/12L matches the DNA mismatch repair (MMR) failure signature SBS26 and a possible MMR failure signature SBS12 38 (more similar signature listed first). Note that a comparable degree of ambiguity is also observed among some of the known tumor SBS mutational signatures (Supplementary Fig. 5 ). 
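The cosine-similarity labelling rule described above can be written compactly. The thresholds (0.95 for an exact name, 0.85–0.95 for the "L"/like suffix, SBS-CL otherwise) follow the text; the reference spectra here are random placeholders standing in for the reported tumor SBS catalogue.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_signature(inferred, catalogue):
    """Name an inferred signature after its closest catalogue entry:
    exact name at cosine >= 0.95, name + 'L' ('like') at 0.85-0.95,
    otherwise cell-line-specific ('SBS-CL')."""
    name, best = max(((n, cosine(inferred, s)) for n, s in catalogue.items()),
                     key=lambda item: item[1])
    if best >= 0.95:
        return name
    if best >= 0.85:
        return name + "L"
    return "SBS-CL"

rng = np.random.default_rng(1)
catalogue = {"SBS6": rng.random(96), "SBS15": rng.random(96)}
label = label_signature(1.5 * catalogue["SBS15"], catalogue)  # cosine is scale-invariant
```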
The full set of 52 mutational signatures we inferred and their ‘exposures’ across cell types are visualized in Supplementary Figs. 2 and 3 , and corresponding data is provided as Supplementary Data S2 and S3 . Additionally, there were five mutational signatures that appeared specific to cell lines (SBS-CL), meaning they did not closely match one of the signatures from current tumor catalogs (Supplementary Fig. 2 and Supplementary Data S2 ). These mutational processes may be evident only in rare tumor types or they may be active predominantly in cultured cells rather than in tumors. Some might originate from the incomplete separation of other signatures (Supplementary Fig. 6 shows examples). Finally, some SBS-CL may reflect contamination with residual germline variation, as well as with sequencing artifacts, similarly as was recently reported for many SBS signatures recovered from tumor genomes 9 . Mutational signatures predict cell line drug response more accurately than oncogenic mutations or copy number alterations Genetic and epigenetic alterations in cancer cell lines are often investigated as markers of sensitivity to chemical compounds 29 , 30 . We hypothesized that mutational signatures in a cell line genome can serve as similarly informative markers of drug sensitivity or resistance. We compared their predictive ability to that of the markers commonly used to predict drug response in cell lines: oncogenic mutations (in 470 cancer driver genes 30 ), recurrent focal copy number alterations (CNAs at 425 genes 30 ), and DNA methylation data at informative CpG islands (HypMet at 378 genes 30 ). Additionally, we examined gene expression patterns (mRNA levels of 1564 genes that are either represented in the L1000 assay 39 or are known drug target genes 40 ), because gene expression can be highly predictive of drug response 30 , 41 , possibly because it reflects differences between various cancer types and subtypes. 
We predicted the sensitivity (log IC50 concentration) of a panel of 930 cell lines (separately for 29 cancer types that had a sufficient number of cell lines available) to a set of 518 drugs from the GDSC database 37 . In particular, we used Random Forest (RF) regression applied to the complete set of genetic or epigenetic markers (listed above) in an individual cell line as features (Fig. 2a ). In addition to mutational signatures inferred herein, we also considered the cell line mutational signatures reported by two recent studies 31 , 32 , obtained using approaches that did not account for ancestry and that have moreover fit the data to pre-existing sets of SBS signatures, rather than extracting signatures de novo from cell line genomes (see Methods section). Fig. 2: Prediction of drug response with mutational signatures and other molecular data types. a Predictive performance (RRMSE, relative root-mean-square error) of drug response prediction with mutational signatures (“MSigs") reported here and previously 31 , 32 and other data types (oncogenic mutations (“Mut”), copy number alterations (“CNAs”) and DNA hypermethylation (“HypMet”)). P-values of paired one-sided Wilcoxon signed rank test are reported on the plots. Dashed line denotes the diagonal. Bottom right panel shows a schematic of how RRMSE for each tissue was estimated, where “Sig” is mutational signature or other marker (CNA etc.), “Cell” is cell line, “XV” is crossvalidation, and “Predict” implies a Random Forest model. b Average rank diagrams for the performance (RRMSE) of the drug response prediction from various sets of markers, using Random Forest. “This work” refers to the mutational signatures inferred here, while “ref. 31 .” and “ref. 32 .” refers to prior sets of mutational signatures. 
Each graph shows: the ranking among the different marker sets (those at the left-hand side are the best performing) and the significant differences between pairs of marker sets (if their ranks are at least critical distance (CD) apart, the difference in predictive performance is statistically significant at p < 0.05, by the Nemenyi post-hoc test, two-sided). The groups of marker sets for which there is no significant difference are connected by black lines. c Tests shown separately for four tissues-of-origin with the highest number of cell lines in our panels. d The percentage of the Random Forest models that are predictive of drug response, defined as having a predictive error lower than the one of an uninformative default model (predicts the average log IC50 for every cell line). Expressed relative to the total number of testable drug-tissue pairs. e The percentage of models that are predictive with gene expression but not another feature type (“exp_only”), by other feature type but not with gene expression (“other_only”), by either model (“either”), or only by a combination of both (“combination”). Source data are provided as a Source Data file. Full size image Firstly, mutational signatures predicted drug sensitivity significantly better than all other tested types of alterations: the increase in accuracy of RF models over CNAs, DNA hypermethylation, and oncogenic mutation features is significant (at p < 0.05, by corrected Friedman test followed by the post-hoc Nemenyi test on ranks; Fig. 2b ; Methods section; average rank for mutational signatures was 3.75 while for other types of (epi)genetic features it was 4.20–4.49, considering all RF models). Secondly, mutational signatures found by our ‘ancestry matching’ approach perform significantly better than other cell line signatures recently reported 31 , 32 , comparing across the set of cell lines that overlap between the publications (Fig. 2b, c ). 
Moreover, these previous sets of cell line mutational signature exposures were less predictive of drug sensitivity than were CNAs, oncogenic mutations and DNA methylation, suggesting the utility of adjustment for germline spectra contamination prior to mutational signature inference (Fig. 2b–d ). Next, we applied a different test that considers the average error in predicting the drug sensitivity profile (relative RMSE of a RF model in crossvalidation; Fig. 2a ), averaged across all drugs in a given tissue. In most cancer types, the mutational signatures obtained herein were better predictors of drug response, than all the usual genomic and epigenomic features (13 out of 16 tested cancer types compared with DNA hypermethylation, 10 out of 16 for CNAs, and 10 out of 16 for oncogenic mutations). Also, in most cancer types (Fig. 2a ) our cell line signatures significantly outperformed recent methods to infer mutational signatures 31 , 32 , 42 naive to germline mutational spectra (in 22 and 25 out of 27 cancer types for the two previous methods that used the same set of cell lines, Fig. 2a , and in 11 out of 16 cancer types on a set of overlapping cell lines for a third method (Supplementary Fig. 7a ); p < 0.0001, p < 0.0001, and p = 0.041, respectively, Wilcoxon test for decrease of relative RMSE). Gene expression was overall very highly predictive of drug response, (Fig. 2b–d ), consistent with recent reports 30 , 41 . We asked if the predictive power of gene expression can be complemented by additionally including mutational signatures and/or various sets of genetic markers. We predicted the drug response profile in RF models as above (Fig. 2a ), but here by using combinations of marker types with gene expression, and tallying the predictive RF models (drug-tissue pairs with better-than-baseline RRMSE in crossvalidation; Methods section). 
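The better-than-baseline criterion used to call a model "predictive" can be sketched as follows: the crossvalidated error of a Random Forest is divided by the error of the uninformative default model that predicts the mean log IC50 for every cell line, so RRMSE < 1 means the markers carry signal. The data are simulated and the model settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))                  # marker features per cell line
y = 2.0 * X[:, 0] + rng.normal(size=120) * 0.5  # simulated log IC50 with signal

pred = cross_val_predict(
    RandomForestRegressor(n_estimators=100, random_state=0), X, y, cv=5)
rmse = np.sqrt(np.mean((y - pred) ** 2))
baseline_rmse = np.sqrt(np.mean((y - y.mean()) ** 2))  # default-model error
rrmse = rmse / baseline_rmse  # < 1: better than the uninformative default
```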
Notably, gene expression is complemented by mutational signatures and also by other types of features, yielding a higher percentage of predictive RF models when markers are combined than with gene expression alone (Fig. 2d, e ). If gene expression markers are unavailable, the mutational signatures were still complementary to oncogenic mutations, CNAs, or DNA methylation (Fig. 2d ). Next, we considered the complementarity analyses at the level of individual drugs, asking if the profile of drug sensitivity that can be predicted by a combined RF model (e.g. gene expression and mutational signatures) could also have been predicted by the two RF models drawing on the individual sets of features – on gene expression only, or on mutational signatures only (Fig. 2e ). The number of RF models where gene expression by itself is not predictive but mutational signatures are predictive is substantial (891 drug-tissue pairs, plus 432 where only a combination of signatures and expression is predictive). This is higher than the number of drug-tissue pairs where gene expression is not predictive but driver mutations are (563 plus 352), and similarly so for CNA (633 plus 347). In addition to gene expression, using DNA methylation as a baseline also supports that response profiles for many drug-tissue combinations can be predicted only by mutational signatures (Supplementary Fig. 7b ). This suggests that the predictive signal in mutational signatures does not simply reflect cancer subtype or cell-of-origin, at least to the extent that subtype can be identified via gene expression or DNA methylation patterns. Overall, mutational signatures, considered collectively, can complement gene expression and other types of markers in predicting drug sensitivity profiles of cancer cells. 
Associations with drug response that replicate in independent data sets Because of reproducibility concerns in large-scale drug screens 43 , 44 , 45 that might stem from technical reasons or from cancer cell line evolution during culture, we asked if the associations between mutational signatures and drug responses replicate across data sets. To this end, we tested for associations involving various markers, with the additional condition that the associations also replicate in an independent data set. We implemented a randomization-based procedure that tests that the smallest effect size (Cohen’s d statistic) across both datasets is above chance (Fig. 3a ; Methods section). A value of d ≥ 1, typically considered a large effect size, implies that a difference in mean drug sensitivity (log IC50) between the cell lines positive for a marker and those negative for a marker is greater than the pooled standard deviation (of the log IC50) of the two sets of cell lines. These replication tests were performed on binarized mutational signatures i.e. the signature present/absent indicator variables (Supplementary Fig. 15b ), considering different cancer types individually (Supplementary Data S4 ). Additionally the same tests were performed on the usual markers in cell line screening analyses, including oncogenic driver mutations (Fig. 3 b– e ), CNAs (Fig. 3e ), and promoter DNA methylation 30 . Fig. 3: Detecting robust associations between genetic or epigenetic markers and drug sensitivity by replication across measurements. a A schematic of the randomization test methodology to detect replicating associations using three different tests: (i) consistent effects of a drug between two screening assays (GDSC/PRISM), (ii) effects of a drug consistent with effects of knockout of the target gene (GDSC/PSCORE), and (iii) effects consistent across different drugs that share the same molecular target (GDSC/GDSC or “same target”). Box plots and scatterplot are illustrative. 
b – d Examples of replicated associations of a known example of oncogene addiction (to BRAF , b) and of additional cancer vulnerabilities associated with mutations in tumor suppressor genes ( ARID1A , c ; TP53 d ). Y-axes show a Z-score derived from either the ln IC50 value (i.e. drug sensitivity, in columns labeled “GDSC” or “PRISM”) or from the CRISPR essentiality score (in the column labeled “PSCORE”). Horizontal brackets show FDR for replicated significant difference between wild-type and mutant genotypes, obtained via a randomization test in panel a , where color denotes the type of the replication test (“GDSC”, “PRISM” or “PSCORE”). The center line of box plots denotes medians and the hinges correspond to the 1st and 3rd quartiles, while whiskers extend to 1.5× IQR from the hinges. e, Effect sizes of markers that associate with drug response in the GDSC drug screen (X-axes) and with response drug target gene knock-out in the Project SCORE genetic screen (Y-axes), shown separately for copy number alterations (CNA) and mutations in cancer genes (Muts). Gray points represent all tested associations, while colored points denote the statistically significant associations that also meet an effect size threshold. Blue lines are the contours of the 2D kernel density estimates. Representative points are labeled. Drug names on grouped labels (“Muts” sub-panel) are ordered by their appearance on the plot from left to right. Source data are provided as a Source Data file. Full size image We considered three different types of replication analyses: an external replication in an independent drug screening data set, internal replication with multiple drugs affecting the same target, and an external replication using CRISPR/Cas9 gene knockout fitness screening data. 
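The replication statistic described above can be sketched as follows: Cohen's d (a pooled-standard-deviation effect size) for a binary marker is computed in each of two datasets, and the smaller of the two is compared against a marker-shuffling null. The simulated sensitivity data and the permutation count are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def cohens_d(pos, neg):
    # pooled-standard-deviation effect size; positive d = marker sensitizes
    n1, n2 = len(pos), len(neg)
    pooled = np.sqrt(((n1 - 1) * pos.var(ddof=1) + (n2 - 1) * neg.var(ddof=1))
                     / (n1 + n2 - 2))
    return (neg.mean() - pos.mean()) / pooled

def replication_test(y_a, m_a, y_b, m_b, n_perm=2000, seed=0):
    """Smallest effect size across two datasets, with a permutation p-value."""
    rng = np.random.default_rng(seed)
    stat = lambda ya, ma, yb, mb: min(cohens_d(ya[ma], ya[~ma]),
                                      cohens_d(yb[mb], yb[~mb]))
    obs = stat(y_a, m_a, y_b, m_b)
    null = np.array([stat(y_a, rng.permutation(m_a), y_b, rng.permutation(m_b))
                     for _ in range(n_perm)])
    return obs, (np.sum(null >= obs) + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
marker = np.array([True] * 15 + [False] * 25)
ic50_a = np.where(marker, -2.0, 0.0) + rng.normal(size=40) * 0.5  # dataset A
ic50_b = np.where(marker, -2.0, 0.0) + rng.normal(size=40) * 0.5  # dataset B
d_min, p = replication_test(ic50_a, marker, ic50_b, marker)
```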
Randomization p-values from these three replication methods (across various tissues) were rarely inflated for mutational signatures, and in fact commonly exhibited deflation (mean lambda across tissues 0.68–1 for different replication methods, Supplementary Fig. 8 ) suggesting an overall conservative bias in the replication test as implemented. The few tissue-method combinations that did exhibit inflation in p -values (lambda >1.3; Supplementary Fig. 8 ) were omitted from further analyses of the associations; the full set of associations are nonetheless included in the Supplementary data, for completeness. Firstly, we performed a replication analysis where the drug association data from the GDSC was tested against another drug screening data set: PRISM (derived from an experimental methodology based on pooled barcoded screens 46 ; 348 cell lines and 178 drugs overlap with the GDSC set). In total, 290 drug-mutation signature associations were robustly supported across both GDSC and PRISM ( d ≥ 0.5 and same direction of effect in both datasets and additionally requiring randomization test FDR<15%; adjustment using the q -value method 47 ), observed across diverse tissues and diverse signatures (Fig. 4a, b and Supplementary Fig. 12a, b ). This exceeds the number of drug associations replicated in PRISM involving driver mutations (37), copy-number changes (55), or DNA methylation (64) in the same test. We list the associations in Supplementary Data S5 . Fig. 4: Tally of the significantly replicated associations of drug sensitivity or resistance with mutation signatures and other markers. 
a Comparison of the number of statistically significant associations (FDR<15% by randomization test, additionally requiring an effect size d > 0.5 for GDSC/PRISM and GDSC/PSCORE tests, and d > 1 for the GDSC/GDSC (same-target) test) per feature, among mutational signatures (“Signatures”), oncogenic mutations (“Muts”) and copy number alterations (“CNAs”) in the three types of replication tests (see Fig. 3a). Features are ranked by the total number of significant associations, either for drug sensitivity (negative side of X-axis) or resistance (positive side of X-axis). b The number of different mutational signatures that have statistically significant associations across various cancer types (at FDR<15%; we consider signatures that have >1 significant association per cancer type), in the three replication tests. Source data are provided as a Source Data file. Full size image Given that the number of cell lines available to the replication analysis is smaller and statistical power is therefore limited, particularly for some tissues (Supplementary Fig. 13b), we suggest that some associations at permissive thresholds (here, nominal p < 0.005) might be of interest for use as supporting evidence, corroborating other associations (see below). Secondly, we performed an internal replication analysis within GDSC, enforcing that associations must be detected with two or more drugs that share the same molecular target. In total, 228 drugs in the GDSC data could be tested in this “same-target” analysis. Effectively, multiple drugs serve as pseudoreplicates, and additionally this test may help discard associations due to off-target effects, which are more likely to differ between two drugs than their on-target effects. Here, we identified 971 significant associations for mutational signatures, 206 for driver mutations, 288 for copy-number changes, and 762 for promoter DNA methylation (at effect size d > 1 and FDR < 15%) (Fig.
4a, b; data in Supplementary Data S7 and S8 for the associations at the default FDR<15% threshold and at the permissive threshold of p < 0.005, respectively). Some associations overlapped between the same-target and the GDSC-PRISM replication analyses, suggesting more robustness: Supplementary Data S8 contains those replicated associations that were seen either across multiple cancer types, and/or across multiple drugs that target the same pathway, and/or across different replication methods (including also CRISPR genetic screens, see below). We suggest that this ‘silver set’ of 3911 associations, where each replication at FDR < 25% is further supported by one or more additional replications at a suggestive (nominal p < 0.005) threshold, may be suitable for further analyses or follow-up. Integrating drug screening data with genetic screening data to obtain robust associations As a third type of replication analysis, we prioritized cancer vulnerabilities by intersecting the drug sensitivity data with genetic screening data sets. Our rationale was that a biological process may be targeted similarly by pharmacological inhibition of a protein, or by editing the genes that encode the corresponding protein. Specifically, we examined sensitivity to CRISPR/Cas9-mediated knockouts that target the protein-coding genes, across a panel of 517 cell lines 48 that overlapped the GDSC cell lines. We identified many associations (at effect size d > 0.5 and FDR<15%) with conventional markers – oncogenic driver mutations (n = 123), copy number alterations (n = 100), and DNA methylation (n = 86) – that replicated across drug and genetic data (Supplementary Data S6, Figs. 4 and 3e, and Supplementary Fig. 9b, c).
Demonstrating the utility of this test, we recovered well-known examples of the oncogene addiction paradigm, such as the breast and esophageal/gastric cell lines with ERBB2 ( HER2 ) amplification being sensitive to inhibitors of EGFR and ERBB2 (afatinib and 4 other drugs), and also to the ERBB2 gene knockout (replication FDRs all <15%). Similarly, we recapitulated the known associations between amplifications of a chromosomal segment 7q31 including the MET oncogene, and sensitivity of esophageal/gastric cancer to crizotinib 49 , 50 and also 3 other MET inhibitors (FDRs < 6%; Fig. 3E ). The BRAF mutations in skin and colorectal cancer likewise sensitize to different BRAF inhibitors in both the GDSC and PRISM drug data, and also to BRAF gene disruption (Fig. 3b and Supplementary Fig. 9d ). Conversely, we note that some oncogene mutations can also confer drug resistance e.g. NRAS -mutant leukemia cells (Supplementary Fig. 10 , n = 38 hits in Fig. 4a ), consistent with prior reports (discussed in Supplementary Note 1 ). These and other striking associations with gene mutations, CNA, and promoter DNA methylation that replicated in the genetic screening data are highlighted in the global data overview in Fig. 3e and Supplementary Fig. 9c . In addition to oncogene addiction, replicated associations can suggest ways to target mutated tumor suppressor genes via synthetic lethality. One example is a CDKN2A deletion that sensitizes brain cancer cells to palbociclib (a CDK4/6 inhibitor) and to knockouts in CDK4 and CDK6 genes. In contrast, RB1 mutations were associated with resistance, consistent with the biological roles of these genes, as well as prior preclinical studies (details in Supplementary Note 1 ). This demonstrates the power of the joint analyses of drug and genetic screening data here and elsewhere 51 , suggesting that the other associations we identified here (Supplementary Data S6 ) provide cancer dependencies promising for follow-up. 
For example, our analysis identifies vulnerabilities of TP53-mutant cells to manipulating the activity of the CHK2/CDC25A/CDK2 axis across five different cancer types (Fig. 3d and Supplementary Fig. 9e), echoing prior work on therapeutic interventions on CHKs in TP53-deficient cells (Supplementary Note 1). These examples also illustrate how an integrated analysis of drug screening data with genetic screening data can reveal drug effects exerted via secondary drug targets (e.g. likely CHK2 for the MK-8776 inhibitor of CHK1; Fig. 3d, see discussion in Supplementary Note 1). We highlight a robustly supported synthetic lethality example involving mutations in the ARID1A tumor suppressor gene and the inhibition of the AKT2 gene or protein (Fig. 3c). In particular, ARID1A-mutant colorectal cell lines are more sensitive to the knockout of the AKT2 gene by CRISPR, as well as to the pan-AKT inhibitors GSK690693 and capivasertib/AZD5363 (FDR = 6% and 12% in the replication test, respectively). The same is observed in ovarian cancer cell lines, again involving AKT2 knockout and the same two inhibitors (at FDR = 9% and 12%, respectively). This is supported by additional AKT inhibitor drugs: afuresertib (FDR = 6%), AKT inhibitor VIII (FDR = 21%), and uprosertib (FDR = 5%) in colon, and MK-2206 (FDR = 9%) in ovary (Supplementary Data S6). Further evidence for an interaction between these genes is found in tumor genomic analysis. The AKT2 oncogene can be amplified in ovarian, endometrial, pancreatic and other cancer types, while the ARID1A tumor suppressor commonly bears truncating mutations in many cancers. In tumor genomes, AKT2 alterations significantly co-occur with ARID1A alterations (OR = 2.0, FDR<0.1% in the MSK-IMPACT cohort of 10,945 samples 52; replicated at OR = 1.4, FDR<0.1% in an independent TCGA pan-cancer cohort of 10,967 samples; analysis via cBioPortal 53).
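Co-occurrence statistics of the kind quoted here (an odds ratio with a significance estimate) can be computed from a 2×2 alteration table with Fisher's exact test. The counts below are invented for illustration and are not the actual MSK-IMPACT or TCGA tallies.

```python
from scipy.stats import fisher_exact

# rows: AKT2 altered / wild-type; columns: ARID1A altered / wild-type
# (invented counts, for illustration only)
table = [[60, 240],
         [800, 9900]]
# one-sided test for co-occurrence (OR > 1)
odds_ratio, p_value = fisher_exact(table, alternative="greater")
```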
These tumor-genomic co-occurrence associations support the notion that AKT2 amplifications may bring a selective benefit to ARID1A-mutant tumors. Overall, our analyses solidify the notion that inhibition of PI3K/AKT/MTOR signaling is a vulnerability of ARID1A-mutant cells 54,55,56,57, as reported before for individual examples of cell lines sensitized to AKTi drugs upon silencing of ARID1A 56, and we further suggest AKT2 specifically as an opportune point of intervention. Next, we applied the same statistical methodology (Fig. 3a; Methods section) to identify replicated drug sensitivity associations involving mutational signatures in cell line genomes.

Mutational signatures associated with sensitivity to both pharmacological and genetic perturbations

As a positive control in a study of mutational signatures as markers, we considered a recently reported vulnerability of microsatellite-instable (MSI) cell lines, which are deficient in the DNA mismatch repair (MMR) pathway and do not tolerate the loss of the WRN gene 48,58,59,60. MMR deficiencies in tumors are known to associate with MSI and with the trinucleotide mutational signatures SBS6, 15, 21, 26, and 44 9 (and additionally SBS14 and 20, which result from MMR failure concurrent with deficiencies in replicative DNA polymerases). In a joint analysis of MSI-prone cancer types (colorectal, ovary, stomach, uterus), we found links between the MMR SBS signatures that we inferred in the cell line exomes and sensitivity to WRN knockout. However, levels of statistical support were variable across the MMR signatures (FDRs <0.01%, <0.01%, <0.01%, 11%, 18%, 21%, and n.s. [associated with resistance] for SBS20, 15, 6, 26/12L, 14, 21, and 44L, respectively; Supplementary Fig. 11a). Additionally, we noted that some signatures with a high weight on the indel components – and thus possibly MMR-related: SBS33L, SBS54, and SBS-CL1 (Supplementary Fig. 2) – also predicted sensitivity to WRN loss (Supplementary Fig.
11), in the case of SBS33L with a high effect size. Thus, some MMR-associated signatures are more robust markers for WRN inhibition (particularly the C > T-rich SBS15 and SBS6, as well as SBS20) than the other MMR failure-associated signatures (such as the T > C-rich SBS26, 21, or 44). Conceivably, this might be because these signatures reflect different types of MMR failure that confer differential requirements for WRN activity. Overall, the ability to recover the known WRN dependencies of MMR-deficient cell lines, estimated via trinucleotide mutational signatures, supports the utility of our methodology for inferring mutational signatures in cell line genomes. Beyond WRN disruption, the MMR signatures, as well as other mutational signatures, can predict sensitivity to many perturbations, including those that neither the MSI status nor the other genetic markers would predict (Supplementary Fig. 9a; we note that the converse is also true, at least for the few cancer types where MSI labels are available). Next, we systematically examined all mutational signatures for the overlap between associations in the GDSC drug screen and the Project SCORE genetic screen. This yielded 130 associations (at a randomization FDR < 15%, additionally requiring an effect size of d > 0.5 in both the genetic and the drug screens) that replicated across data sets – a higher number than for oncogenic driver mutations, CNAs, and DNA methylation (123, 100, and 86, respectively, at the same FDR threshold). These associations (Fig. 4, Supplementary Fig. 12c; full list in Supplementary Data S6) involved k.o. of 64 different genes, indicating that mutational signatures associate with a variety of target genes and suggesting potential points of intervention for follow-up.
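The replication criterion just described (randomization FDR < 15% plus effect size d > 0.5 in both screens) can be sketched as below; the use of Cohen's d with a pooled standard deviation is our assumption about the exact effect-size definition:

```python
from statistics import mean, stdev

def cohens_d(marker_pos, marker_neg):
    """Standardized mean difference (pooled SD) between the response
    Z-scores of marker-positive and marker-negative cell lines."""
    na, nb = len(marker_pos), len(marker_neg)
    va, vb = stdev(marker_pos) ** 2, stdev(marker_neg) ** 2
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (mean(marker_pos) - mean(marker_neg)) / pooled_sd

def replicates(fdr, d_drug, d_genetic, max_fdr=0.15, min_d=0.5):
    """Replication filter: FDR below threshold and effect size above
    threshold in both the drug screen and the genetic screen."""
    return fdr < max_fdr and abs(d_drug) > min_d and abs(d_genetic) > min_d
```

As a sanity check, `cohens_d([1, 2, 3], [3, 4, 5])` gives -2.0 (marker-positive lines more sensitive when lower Z-scores mean lower IC50).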
The number of replicated associations involving mutational signatures highly ranked by this replication analysis (such as the chemotherapy-associated SBS25L, the haloalkane exposure-associated SBS42L, and the signature possibly related to NER deficiency 61, SBS8L/4L; Fig. 4b, Supplementary Fig. 12a; n = 12, 8, and 7 replicated associations at FDR 15%, respectively) broadly matches the number of replicated associations involving common driver mutations such as EGFR, TP53, or KRAS (n = 18, 17, and 13, respectively), or known copy number change events such as ERBB2 gain (n = 17 replicated associations in Project SCORE) (Fig. 4a and Supplementary Data S6). We also show the tally of associations at a more permissive 25% FDR in Supplementary Fig. 12, further supporting that mutational signatures provide markers as commonly associated with drug response as the usual markers based on driver mutations and CNA. We note that in this and further analyses we have, conservatively, stratified colorectal cancer cell lines into MSI and MSS (association counts without stratification are in Supplementary Fig. 13a and Supplementary Data S9), because MSI is common in colorectal cell lines and MSI status is strongly associated with mutational signatures 9,38,62.

Robust associations involving mutational signatures replicate across multiple cancer types

We next focused on those drug associations involving mutational signatures that recurrently replicated across more than one replication method (see Fig. 3a) and/or more than one cancer type ('silver set', Supplementary Data S10). We noted that some associations in this set recurred in three or more methods and/or tissues; thus, we introduced a more stringent tier of hits with a higher priority for follow-up. These 'golden set' hits recurred in at least three different tissues or in three different tests with effect size d > 0.5 and significant at p < 0.005, and at least once at FDR < 25%.
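The 'golden set' criteria can be written as a small filter over per-tissue, per-test association records. A sketch follows; the record field names are hypothetical, and the assumption that the FDR < 25% requirement applies to the same set of strong hits is ours:

```python
def is_golden_set(hits, min_recurrence=3):
    """'Golden set' filter for one marker-perturbation association:
    recurrence in >= 3 tissues or >= 3 tests with d > 0.5 and
    p < 0.005, and at least one such hit at FDR < 25%."""
    strong = [h for h in hits if h["d"] > 0.5 and h["p"] < 0.005]
    tissues = {h["tissue"] for h in strong}
    tests = {h["test"] for h in strong}
    recurrent = (len(tissues) >= min_recurrence
                 or len(tests) >= min_recurrence)
    return recurrent and any(h["fdr"] < 0.25 for h in strong)
```

A marker recurring in three tissues with one hit at FDR 10% passes; the same association seen in only two tissues and one test type does not.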
This resulted in 995 higher-priority associations (tallying both the mutational signature associations and the driver mutation and CNA associations; Supplementary Data S8). A common occurrence in this higher-confidence association set was the involvement of mutational signatures that were associated with DNA repair failures in previous analyses of tumor genomes. This included: MMR failures (various SBS; listed in Fig. 5a, b), BER failures (SBS36/56L 15,63,64, SBS30L/7bL/11L 14,65), likely NER failures (SBS8L/4L), and replicative DNA polymerase failures (particularly SBS14 and SBS20; additionally, SBS56/10aL/36L may be in this group). As tentatively DNA repair-associated signatures, we additionally considered SBS18/36L 6,9,66, based on the similarity of its spectrum to SBS36/18 and because it was found in MUTYH-variant patients 63,67, as well as SBS33L, SBS54, and SBS-CL1, because they have prominent indel components and were associated with sensitivity to WRN loss (Supplementary Fig. 11). Those DNA repair-associated signatures encompass 278 of the 701 associations involving mutational signatures in this high-priority set; some individual examples are discussed below. Therefore, mutational signatures resulting from DNA repair failures often result in drug vulnerabilities. Fig. 5: Highlighted examples of robustly supported associations involving mutational signatures. a All tested associations between AKT inhibitors and DNA mismatch repair mutational signatures, across all three replication tests (by the “two-way” randomization test, see Methods section). Each bar represents the p-value of one association. For associations with effect size <0.2, p-values were not calculated in the randomization procedure and are here shown as having p = 0.5. b Associations having p < 0.005 between AKT inhibitors and individual DNA mismatch repair signatures. c The tally of significant associations at FDR < 25% across all three replication tests.
The * and ° symbols denote groups of mutational signatures; see the key embedded within the panel. d The average log2 ratio of observed vs. expected frequencies of occurrence of drug target pathways in resistance associations (p-value < 0.005) with signatures of chemical exposure, over the top 6 signatures by the number of resistance associations (SBS25L, SBS18/36L, SBS42L, SBS11, SBS22L, and SBS4/45L). Source data are provided as a Source Data file. When inspecting the overall balance of sensitivity versus resistance associations, we noted that driver mutations and CNA present a mix of sensitizing and resistance associations. These may be unevenly distributed across genes: see the NRAS example mentioned above, which is biased towards resistance, while EGFR is biased towards sensitivity (perhaps because mutant EGFR is a target for many approved drugs). By analogy to this, we also identified two opposing trends in the tally of mutational signature associations. Firstly, the signatures associated with DNA repair failures, as listed above, tend to be more often sensitizing (considering the relative frequencies of sensitivity to resistance associations shown in Fig. 5c; also see the top of the plot in Supplementary Fig. 12a for a breakdown by type of test). Secondly, there is a group of mutational signatures tending towards resistance associations; these signatures also exhibit a higher overall number of associations (Fig. 5c; bottom of the plot in Supplementary Fig. 12a). In this group, 6 of the top 7 signatures were previously linked to exposure to mutagenic chemicals: SBS25L (unspecified chemotherapy), SBS18/36L (reactive oxygen species), SBS42L (haloalkane exposure 68), SBS11 (a DNA methylating agent, the drug TMZ), SBS22L (an agent generating bulky DNA adducts, aristolochic acid 69), and SBS4/45L (a mix of agents from tobacco smoke, where the mutagenesis likely results mainly from DNA adducts by polycyclic aromatic hydrocarbons).
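The log2 ratio of observed vs. expected pathway frequencies among resistance associations (as in the Fig. 5d legend) can be sketched as follows; the pathway labels in the usage example are illustrative rather than the actual GDSC drug annotations:

```python
from collections import Counter
from math import log2

def pathway_log2_enrichment(resistance_pathways, all_tested_pathways):
    """log2(observed/expected) frequency of each drug-target pathway
    among resistance-associated drugs, relative to its frequency
    among all tested drugs."""
    obs = Counter(resistance_pathways)
    exp = Counter(all_tested_pathways)
    n_obs, n_exp = len(resistance_pathways), len(all_tested_pathways)
    return {p: log2((obs[p] / n_obs) / (exp[p] / n_exp)) for p in obs}
```

If a pathway makes up half of the resistance-associated drugs but only a quarter of all tested drugs, its enrichment is log2(2) = 1; averaging such values over the top exposure signatures gives the Fig. 5d quantity.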
Of note, among these exposure-associated signatures, the oxidative damage signature SBS18/36L does show some sensitizing associations as well (Fig. 5c). These six signatures of mutagen exposure were associated with resistance to various drugs; these associations are overall enriched in drugs targeting e.g. chromatin histone modification, DNA replication, the p53 pathway, JNK and p38 signaling, and ABL signaling (Fig. 5d; see Supplementary Fig. 15 for a breakdown per signature). These data suggest that, overall, mutational signatures of prior chemical exposures in cancer cells commonly predict resistance to future drug exposures.

Mutational signatures associated with various DNA repair failures predict drug sensitivity

A manual curation of the sets of robust associations (Supplementary Data S8) reveals that several MMR signatures associate with sensitivity to AKT serine/threonine kinase inhibitors (Fig. 5a, b). This is seen consistently across many tissues: in colorectal, skin, lung (small cell), and brain cancers (associations at FDR < 15%), and additionally in prostate, ovary, and stomach/esophagus cancers (associations at permissive FDR thresholds, all with p < 0.005); see Fig. 6a–c for examples involving SBS26/12L, SBS14, and SBS20, respectively. The associations involve 9 different AKTi drugs including uprosertib, MK-2206, and ipatasertib. Several of these drugs have undergone clinical trials showing varying outcomes in unselected patients 70,71, highlighting the need for identifying predictive biomarkers of response to AKT inhibitors. We considered the possibility that different MMR signatures have varied utility as AKTi markers; indeed, the MMR signature SBS26/12L was most commonly associated with sensitivity across different AKTi drugs, with a lower utility of the other signatures (Fig. 5a, b). These associations between MMR signatures and AKTi sensitivity may be mechanistically related to the associations between ARID1A mutations and AKTi sensitivity that we described above (Fig. 3c).
Such a link would be consistent with a reported loss of MMR activity in ARID1A-mutant cells 72, and with correlations between ARID1A loss in tumors and MMR deficiencies reported in multiple cancer types 73,74,75. Fig. 6: Associations with drug sensitivity or resistance that replicate in independent datasets. a–f Examples of associations of mutational signatures with drug sensitivity that replicated (using tests in Fig. 3a) multiple times, across different cancer types and/or different types of replication tests. Y-axes show a Z-score derived from either the ln IC50 value (drug sensitivity: “GDSC” or “PRISM” columns) or from the CRISPR essentiality score (“PSCORE” columns). Horizontal brackets show FDR for replicated associations with the presence/absence of a given mutational signature, obtained via a randomization test (Fig. 3a), where color denotes the type of the test (see legend at top of plot). The center lines of box plots denote medians and the box hinges correspond to the 1st and 3rd quartiles, while whiskers extend to 1.5 × IQR from the hinges. Source data are provided as a Source Data file. In addition to MMR, signatures resulting from failures in other DNA repair pathways may yield sensitivity associations (Fig. 6d–f and Supplementary Fig. 14). An example is the signature SBS36/56L, possibly indicating failed BER since SBS36 was previously associated with loss-of-function in MUTYH. In five cancer types, SBS36 was associated with sensitivity to inhibition of EGFR or of ERBB2 via e.g. afatinib or AST-1306 drugs. These associations were additionally supported in CRISPR k.o. of the EGFR or ERBB2 genes in skin, liver, and head-and-neck cancer (Supplementary Fig. 14c). The related signature SBS18/36L also predicts sensitivity to these agents in three of the five cancer types (Supplementary Fig. 14d).
Overall statistical support for sensitivity associations with SBS36/56L and SBS18/36L was higher for the EGFR-targeting drugs than for all other classes of drugs in these cancer types (Supplementary Fig. 14e). Prior studies suggested that EGFR activity can control various DNA repair mechanisms 76,77,78. We further highlight an example involving the signature SBS18/36L, associated with DNA damage by reactive oxygen species, and possibly also with certain deficiencies in the BER pathway, given the similarity of its spectrum to signature 36. This signature was associated with sensitivity to two inhibitors of sirtuin (SIRT) proteins, selisistat and tenovin-6, in pancreatic adenocarcinoma, lung squamous cell carcinoma, sarcoma, and lymphoid leukemia (all tissues at FDRs ≤ 30%; Fig. 6e). In three out of four tissues, the associations with SIRTi replicated in the CRISPR k.o. phenotype of the SIRT1 gene (Fig. 6e). This adds confidence that SIRT1 may be a promising vulnerability of tumor cells that are undergoing and/or have previously undergone oxidative damage to their DNA, and/or have a lowered ability to repair such damage (the current analysis does not distinguish between these scenarios). Another example of a sensitizing mutational signature is interesting due to its occurrence across multiple tissues. The SBS30L/7bL/11L signature, which is ambiguous but possibly linked to base excision repair failures (since SBS30 was previously associated with NTHL1 loss-of-function 14,65), associates with sensitivity to two related classes of drugs (Supplementary Fig. 14a, b) that converge onto the cytoskeleton. Firstly, there are inhibitors of Aurora kinase A, a protein regulating mitotic spindle assembly and stability, including the drugs ZM447439, Genentech Cpd10, and GSK1070916 (an Aurora B/C inhibitor with some Aurora A activity). These associations are also replicated in the k.o. of the AURKA gene (Supplementary Fig. 14a).
Secondly, the signature SBS30L/7bL/11L associates with sensitivity to the vinca alkaloids vinblastine, vincristine, and vinorelbine, which interfere with the assembly of microtubules and the formation of the mitotic spindle (Supplementary Fig. 14b). These associations were observed across AML and CML leukemia and liver cancer at FDR ≤ 30%, as well as in colorectal cancer and multiple myeloma at more permissive FDR thresholds (with notable effect sizes, however; Supplementary Fig. 14a, b). Some associations were found involving mutational signatures recovered from cell line genomes that did not closely match a known SBS spectrum from tumor genomes. An interesting example is the set of associations involving SBS-CL1, an indel-rich signature seen in various cancer types (Supplementary Figs. 2 and 3). Our analysis suggests that it associates with sensitivity to DNA damage signaling drugs. Firstly, we identified associations of SBS-CL1 exposure with sensitivity to the PARP inhibitors olaparib, rucaparib, and veliparib, and to PARP1 gene k.o., observed across three cancer types (Fig. 6d). Secondly, we identified associations with sensitivity to the ATR inhibitors AZD6738, VE-822, and VE-821, or to k.o. of the ATR gene, in four cancer types (Fig. 6f). This example suggests the utility of indel signatures in predicting drug response, in this case to a category of DNA repair drugs trialed in the clinic, the ATRi. Additional associations were recurrently observed across multiple cancer types. Some examples highlighted by a manual curation of these 'golden set' associations include: the SBS17aL signature with the ZSTK474 drug and the PIK3CA/PIK3CB genes; the SBS3L signature with fedratinib and the JAK2 gene; SBS8L with midostaurin; and SBS17b with fludarabine (and possibly, more generally, DNA antimetabolites).
These associations are representative of many other examples with a similar degree of confidence (based on the FDRs, and on recurrence across independent tissues/replication tests) in the 'golden set' of associations (Supplementary Data S8). In addition to our manual curation, we also provide a prioritization based on a p-value pooled across tissues and different replication tests, highlighting the top 10 associations with mutational signatures, and additionally with driver mutations/CNA/DNA hypermethylation, in Supplementary Data S11.

Discussion

A classical way to treat tumors is to employ DNA damaging drugs and ionizing radiation to target the lessened or overwhelmed capacity for DNA repair in cancer. Since this may manifest as a mutator phenotype, we asked whether mutational signatures observed in cancer cells can serve as markers for treatment by drugs or by gene editing. This systematic study generalizes the known individual examples of mutational patterns stemming from deficient HR 24,26,27 or MMR, which can guide therapeutic strategies 58,59,60. Cancer cell line panels that were screened for drug response 37,79 and for gene loss effects 48,80 provided a resource to test our hypothesis. However, the lack of matched healthy tissues means that extracting somatic mutational signatures using existing methods 31,32,42 is a challenge. Since germline variation is abundant compared to somatic mutations, even slight variations of germline spectra between populations 34,36 can affect the trinucleotide-context mutation tally. We thus subtracted the expected germline spectrum given the ancestry, and further integrated indel features into mutational signature inference from the cell line WES. Future refinements in the methodology, as well as the availability of WGS of cancer cell lines, will improve its accuracy.
For instance, this will permit a more detailed set of indel descriptors, as applied in recent tumor WGS mutational signature studies 9, in contrast to the limited set of only four indel features we were able to apply in our WES study. Among the mutational signatures that we inferred, the number of associations with drug response was comparable to that of other types of commonly used genomic markers. Hundreds of such associations significantly replicated in independent data and across multiple tissues. Thus, mutational signatures appear to be predictors as robust as driver mutation or CNA markers. We note that the associations we identified could not have resulted from tissue-specific variation in drug response, since tissues were considered individually in the association analyses (the only exception was the merging of some cancer types known to be genomically similar, such as esophagus with stomach cancer, and glioma with glioblastoma). An important caveat to our study is that the discovered associations do not necessarily imply a causal relationship: the mutational signature-generating process and the drug phenotype may be only indirectly associated. In other words, the mutational signature might serve as a marker for another alteration (e.g. a driver mutation), which may be the proximal cause of the drug sensitivity or resistance. This is similarly possible with mutational signatures as with other genetic markers (mutations or CNA in cancer genes), and also with gene expression markers. Because these various sets of genomic/transcriptomic features can correlate across tumors, prioritizing the likely causal relations for further follow-up is a challenge that remains to be addressed; larger cell line panels will be helpful, as will the use of isogenic cell panels.
Furthermore, another issue is false negatives in identifying associations – again similarly so with mutational signatures as with other markers – due to sparse data, where often only a few cell lines from a cancer type bear each marker. Thus, statistical power may be limited for many markers in current cell line screens. Additionally, because drug sensitivity measurements can be noisy (as evidenced by the less-than-ideal agreement between different screening data sets 43,44,45), replication analyses across diverse datasets will be conservatively biased. It bears mentioning that the absence of a significant association in our lists does not imply that the marker is not associated with the drug; rather, the analysis at current sample sizes may be underpowered to detect the association (see e.g. Supplementary Fig. 13b). A key question to be addressed in future work is the clinical relevance of the mutational signature drug markers, which we here identified in cancer cell line panel data. Performing association studies on tumor genomic datasets (for which clinical data about treatments and patient response are sometimes available) is complicated by the diversity of therapeutic regimes: most patients are treated with multiple overlapping sets of drugs and possibly radiotherapy, which makes it challenging to identify the effects of individual drugs by retrospective analysis. Additionally, the drug assignment may be confounded by demographics and by cancer stage/grade or subtype, further complicating analysis. Large controlled randomized trials with treatment and control arms, for which tumor genomic data are also available, would facilitate identifying various types of genomic markers (mutational signatures or otherwise) relevant to drug response and patient survival. With respect to mechanistic insight, future improvements in methodology will refine the mutational signature markers and clarify the underlying mechanisms.
For instance, a known limitation of various mutational signature extraction methods – including ours – is the difficulty of discerning the 'featureless' signatures such as SBS3, SBS5, and SBS8 81. A further issue that merits attention is timing: a genomic analysis of cell lines 31 suggested that the activity of some mutational processes is variable in time. While a genome sequence reflects a record of mutagenic activity in the past, it may or may not reflect current mutagenic activity, which is presumably more relevant for drug sensitivity phenotypes. The occurrence of recently active signatures is difficult to identify from bulk DNA sequencing of cell cultures, as recent mutations may not rise to sufficient allele frequencies to be detected, which may conservatively bias the results of an association analysis such as ours. We note that a related issue could also affect the more established markers, i.e. driver mutations or CNAs in cancer genes: given the rapid accumulation of genetic changes in cultured cancer cell lines 82,83, and prevalent epistasis in cancer 84,85, it is plausible that recently occurring, unobserved mutations or CNAs affect the ability to identify drug sensitivity markers from analyses of cell line screening data. An interesting observation about mutational signatures associated with drug activity is that some likely result not from DNA repair deficiencies, but instead from exposure to mutagenic agents. Some of these signatures presented many associations in our analyses and were, overall, more commonly associated with drug resistance than with sensitivity (Supplementary Fig. 12 and Figs. 4 and 6a, b).
For example, this includes SBS25 and SBS11, reported to be associated with chemotherapy; signatures linked with exposure to chemicals causing DNA adducts (tobacco smoking, aristolochic acid); and additionally an SBS17a-like signature (where SBS17 was associated with gastric acid exposure, possibly via oxidative damage to the nucleotide pool). Even though cell lines are not exposed to these chemical agents during culture, and thus the signatures are presumably not 'active', drug associations are identified with such signatures (Fig. 4). One possible explanation may be that mutational signatures of different processes are sometimes sufficiently similar that they are not easily 'unmixed' from trinucleotide spectra alone. For instance, SBS17 (mostly A>C/T>G changes) might result from varied mechanisms that converge onto the same spectrum, some of which follow from chemotherapy exposure, while others may be endogenous 86. This highlights an example where current statistical methods may not reliably deconvolve the underlying biological mechanisms. This might be addressed by the use of additional mutational features, such as penta- or hepta-nucleotide contexts 87, small indels 9, copy-number changes 22, and strand-specific or regional mutation rates 10,88. Another explanation may be that prior exposure to a carcinogen selects tumor cells with an altered DNA replication/repair state, which persists after the carcinogen is withdrawn, thus generating resistance in cancer cells. Indeed, prolonged exposure to mutagens that are also cytotoxic – as is the case for many cancer therapeutics – is likely to select resistant cells, in some cases via altered DNA replication or repair mechanisms. For instance, therapy of tumors with temozolomide (associated with SBS11) is known to select for cells that are resistant due to MMR deficiency via loss-of-function of MSH6 89.
It is conceivable that the various epigenetic changes resulting from carcinogen exposure might also confer similar properties. In other words, even a temporary exposure to a mutagen may prime tumor cells for resisting later drug treatment.

Methods

Human cancer cell lines and primary tumor data

We obtained WES bam files (human reference genome version hg19) of human cancer cell lines (n = 1072; cancer cell lines from Genomics of Drug Sensitivity in Cancer (GDSC)) from the European Genome-phenome Archive (EGA) (ID number: EGAD00001001039), and WES bam files (human reference genome version hg38) of primary tumors (n = 6154) and their matched normal samples (n = 6154) from the TCGA repository at the NCI Genomic Data Commons (access via dbGaP accession phs000178). We downloaded samples from the following TCGA cohorts: BLCA, BRCA, COAD, GBM, HNSC, KICH, KIRC, KIRP, LIHC, LUAD, LUSC, OV, PAAD, READ, STAD, THCA, UCEC. We aligned the human cancer cell line bam files to the human reference genome (version hg38) using the bwa 90 software, and sorted and indexed them using the samtools software 91. We used the Strelka2 software (version 2.8.4) 92 to call single nucleotide variants (SNVs) and small insertions and deletions. We called SNVs and indels for cell lines, primary tumors, and normal samples. In samples where Strelka2 was unable to run, a re-alignment was performed using Picard tools (version 2.18.7) 93 to convert the bams to FASTQ, after which the alignment was performed by executing bwa sampe (version 0.7.16a) with default parameters. The resulting bam files were sorted and indexed using Picard tools. We used SNVs and indels marked as "PASS" in the Strelka2 output. We annotated SNVs and indels with minor allele frequencies (MAF) obtained from the gnomAD database 94 (for SNVs and indels that could be found in gnomAD).

Data for human cancer cell lines

We downloaded drug response data for 518 drugs from GDSC (Release 8.3; June 2019) 37.
We used the natural logarithm of the 50% growth inhibition value (IC50) as a measure of the activity of a compound against a given cell line. If activities for the same drug were available from both the GDSC1 and GDSC2 versions, we used GDSC1. We downloaded the information about drugs (the list of the drugs' putative targets and target pathways) from the GDSC website ( ). We manually curated the list to correct inconsistencies (Supplementary Data S12). We obtained the following genetic features of cancer cell lines from the GDSC repository: 30 cancer driver genes (Muts), regions of recurrent focal copy number alterations (CNAs), hypermethylated informative CpG islands (HypMet), and microsatellite instability status (MSI). We downloaded ANOVA_input.txt files for 18 cancer types and the pan-cancer analysis from . We downloaded cancer cell line gene expression data ("sanger1018_brainarray_ensemblgene_rma.txt") from . We selected expression data for 1564 genes corresponding to the L1000 assay 39 and to known drug targets 40. The drug response data (IC50) for 1502 drugs were downloaded from the PRISM 46 database (secondary-screen-dose-response-curve-parameters.csv). Drugs and cell lines from the PRISM and GDSC databases were matched via their names, obtaining in total 348 cell lines and 178 drugs that overlap between the two databases. The gene-level fitness effects for 16,827 genes in 786 cancer cell lines were downloaded as the integrated cancer dependency dataset from the Wellcome Sanger Institute (release 1) and the Broad Institute (19Q3) ("integrated_Sanger_Broad_essentiality_matrices_20200402.zip") from the Project SCORE database 48 and matched to the GDSC cell lines via cell line names, obtaining in total 517 overlapping cell lines ( ).
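The matching of the two screens by drug and cell-line names, and the use of ln IC50, can be sketched as follows; the sample entries in the usage example are hypothetical, and real name matching required curation beyond the simple case-folding shown here:

```python
from math import log

def match_screens(gdsc_ic50, prism_ic50):
    """Pair ln(IC50) values for the (cell line, drug) combinations
    present in both screens, matching names case-insensitively.
    Inputs: dicts keyed by (cell_line, drug) with IC50 values."""
    norm = lambda d: {(c.upper(), g.upper()): v for (c, g), v in d.items()}
    a, b = norm(gdsc_ic50), norm(prism_ic50)
    return {k: (log(a[k]), log(b[k])) for k in a.keys() & b.keys()}
```

Only the intersection of the two screens survives the match, mirroring how 348 cell lines and 178 drugs overlapped between GDSC and PRISM.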
Matching of cancer cell lines and primary tumors (ancestry matching procedure)

To ensure that genomic data are comparable between cell lines and tumors, we performed the following steps: (i) we used only SNVs detected in regions of the exome with good sequencing coverage (≥20 reads in 9 out of 10 randomly selected samples); (ii) to avoid gender bias during the analysis, variants on the X and Y chromosomes were not used; (iii) only uniquely mappable regions of the genome were used, as defined by the Umap k36 criterion; 95 (iv) we discarded regions with frequent somatic deletions, namely pan-cancer deletions of significant SCNA 96 and frequently deleted chromosome arms (deleted in >18% of tumor samples); 97 (v) we detected copy number changes of cell lines with CNVkit 98 and removed deleted regions (log2 score < −0.3). From the remaining regions of the exome, we selected common germline variants across the cell line exomes and the TCGA exomes (MAF > 5% in gnomAD). To perform 'ancestry matching' of cancer cell lines with TCGA normal samples, we performed Principal Component Analysis (PCA) on the matrix of common germline variants, followed by clustering according to principal components (PCs) (see below).

Clustering of cell lines and TCGA germline samples

We employed robust clustering (the "tclust" algorithm 99, which discards outlying samples) on the first 140 principal components derived from the common germline variants. Initially, we considered the first 150 principal components; however, some PCs were attributable to a batch effect, i.e., they separated cell lines from TCGA samples. We removed the top 10 such PCs as determined by the feature importance of a random forest classifier (the "randomForestSRC" R package) trained to distinguish between TCGA samples and cell lines. We used the remaining 140 PCs as input to the tclust algorithm. Outlying samples, as determined by tclust, were discarded.
We varied the number of clusters from 4 to 20, determining the optimal number of clusters (13) by using simulated cell line exomes (Fig. 1c; see below). The cancer cell lines were matched to their ‘ancestry-matched’ TCGA normal samples, i.e., those belonging to the same cluster as the cell line. We then used these ‘ancestors’ to augment the filtering of germline SNVs from cancer cell line exomes, in addition to filtering according to the common practice of retaining only variants absent or present at very low frequencies in population databases (see below).

Filtering of germline SNVs from cell lines

Here we considered SNVs from the regions of the exome as specified before (steps i–v), with the difference that we used SNVs with sequencing coverage of ≥8 reads in at least 90% of samples (9 out of 10 randomly selected samples). We next filtered germline variants from cancer cell lines following common practices 29 , 31 , 79 – here, removing population variants (found at MAF>0.001% in the gnomAD population database); additionally, we filtered out germline variants that appeared in >5% of samples in our TCGA data set or in our cell line data set (this removes germline variants that might be particular to this data set, as well as suspected sequencing artifacts). Next, from the remaining SNVs, for each cell line and TCGA germline sample, we calculated its trinucleotide mutation spectrum. The mutation spectrum contains 96 components, which are the counts of the six possible mutation types (C>T, C>A, C>G, A>T, A>G, and A>C, considered DNA strand-symmetrically) in the 16 possible combinations of 5′ and 3′ neighboring nucleotides for each mutation type. For each cell line, from the cell line’s 96-component trinucleotide spectrum, we subtracted the median 96-component trinucleotide frequencies of its TCGA ‘ancestry-matched’ samples (i.e., the TCGA normal samples belonging to the same cluster as the cell line in the PC analysis of common germline variants, see above).
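The spectrum-correction step (subtracting the median ancestry-matched spectrum, with negative counts clamped to zero) can be sketched as follows; the 4-component spectra are toy stand-ins for the 96 trinucleotide contexts (Python sketch; the original pipeline is in R):

```python
# Per-cell-line spectrum correction: subtract the median spectrum of the
# ancestry-matched TCGA normals, clamping negative counts to zero.
# 4-component toy spectra stand in for the 96 trinucleotide contexts.
from statistics import median

cell_line_spectrum = [30, 12, 5, 0]   # hypothetical raw counts
matched_normals = [                    # hypothetical ancestry-matched samples
    [10, 2, 7, 1],
    [12, 4, 9, 0],
    [11, 3, 8, 2],
]

# Component-wise median across the matched normals.
median_normal = [median(col) for col in zip(*matched_normals)]
# Subtract and clamp negatives to zero.
corrected = [max(c - m, 0) for c, m in zip(cell_line_spectrum, median_normal)]
print(corrected)
```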
If the subtraction resulted in a negative count for some of the contexts, we set it to zero.

Insertion and deletion types

In addition to the standard 96-component trinucleotide spectra, to extract mutational signatures from cell lines (see below) we considered additional features based on small insertions and deletions. We filtered the regions of the exome using steps (i–iv) as described above. As for SNVs, we discarded indels found at MAF>0.001% in the gnomAD population database and filtered out the ones that appeared in >5% of cell line samples. The remaining indels were classified into 4, 5, 8, 14, or 32 different indel types considering: the length of the insertion or deletion (considering lengths 1, 2, 3–4, 5+), microhomology at deletion sites (considering microhomology lengths of 1 or 2+), and microsatellites at indel loci (considering repeat sizes of 1 and 2–5+). We benchmarked the cell line mutational signatures (see below) that used the different indel type schemes. The most favorable scheme according to that benchmark was the simplest, distinguishing 4 indel types: deletions with microhomology (Del-MH), deletions at microsatellite loci (Del-MS), other deletions (Del-Other), and insertions at any locus (Insertion).

Simulated cell line exomes

We used TCGA exomes to simulate cell line exomes (or, more precisely, the variant calls that would originate from a cell line exome) in order to benchmark our ancestry matching procedure for efficacy in reconstructing the mutation spectrum of cancer cell line exomes. Additionally, we used the benchmark to optimize the number of clusters in the inference of subpopulations. To simulate variant calls from cell line exomes, we performed the variant calling for tumor samples in the same way as for the cell lines, i.e., without matched normal samples.
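The 4-way indel classification can be sketched as a small decision rule; the boolean annotations are assumed to come from an upstream indel annotator (a hypothetical helper, Python sketch):

```python
# Sketch of the 4-type indel scheme selected by the benchmark:
# Del-MH, Del-MS, Del-Other, and Insertion. The boolean flags are
# assumed to be produced by upstream annotation of each indel.
def classify_indel(is_insertion, has_microhomology=False, at_microsatellite=False):
    if is_insertion:
        return "Insertion"          # insertions at any locus
    if has_microhomology:
        return "Del-MH"             # deletion with microhomology
    if at_microsatellite:
        return "Del-MS"             # deletion at a microsatellite locus
    return "Del-Other"              # any other deletion

print(classify_indel(False, has_microhomology=True))
```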
In addition, for every sample we merged the SNV calls obtained in this way with the somatic SNV calls of tumors, ensuring that the true somatic mutations are part of a simulated cell line, to enable an accurate estimation of errors. For the tumor types that were represented both in the set of cancer cell lines and in the set of TCGA tumors, we randomly selected 450 tumor samples, taking approximately the same number of samples per tumor type as in the cell lines. In the subsequent analyses involving simulated cell line exomes, we removed their corresponding normal samples from the pool of germline samples. We performed the ancestry matching procedure involving simulated cell line exomes in the same way as described above (Fig. 1g) and evaluated the accuracy of mutation spectrum reconstruction by comparing the mutation spectrum of a tumor (the ground truth) to the corresponding simulated cell line mutation spectrum obtained by ancestry matching. As an accuracy measure, we used the Absolute Error (the sum of absolute values of the differences between the 96 components of the ground truth spectrum and the reconstructed spectrum). To estimate the optimal number of subpopulations we varied the number of clusters in the tclust algorithm from 2 to 20, with 13 being optimal according to the average Absolute Error of the 450 simulated cell lines (Fig. 1c). To investigate the variation between germline mutational spectra across human (sub)populations (as reported recently 34 ), we performed hierarchical clustering (based on cosine similarity; “stats” package in R) of the median mutation spectra of the TCGA subpopulations (i.e., the median mutation spectra of the TCGA normal samples within clusters) (Supplementary Fig. 1E). In addition, we performed PC analysis of the mutation spectra of TCGA normal samples, showing that the main principal component separates the main ethnicity groups (Supplementary Fig. 1D).
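The Absolute Error accuracy measure is a plain L1 distance between spectra; a Python sketch on toy 4-component spectra:

```python
# Absolute Error between the true somatic spectrum and the reconstructed
# one: the sum of absolute component-wise differences. Toy 4-component
# spectra stand in for the 96 trinucleotide components.
def absolute_error(truth, reconstructed):
    return sum(abs(t - r) for t, r in zip(truth, reconstructed))

truth = [19, 9, 0, 0]            # hypothetical ground-truth spectrum
reconstructed = [17, 10, 2, 0]   # hypothetical reconstruction
print(absolute_error(truth, reconstructed))  # 2 + 1 + 2 + 0 = 5
```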
We note that, in the previous steps, the clustering to infer subpopulations was performed on common variants (see above), whereas the trinucleotide spectra were determined after excluding the non-rare variants (MAF >0.001% in the gnomAD population database and variants that appear in >5% of samples in our TCGA data set), ruling out circularity. Next, we compared our ancestry matching procedure (involving 13 clusters) to the baseline method of determining somatic variants in a cell line genome (i.e., filtering of germline variants according to MAF, removing SNVs with MAF >0.001% and those that appeared in >5% of samples in our dataset) and to the bootstrap self-similarity method (Fig. 1d). The bootstrap self-similarity error was calculated as the error between the true and a randomly perturbed somatic mutation spectrum, averaged over 100 random runs. For the random perturbation, we used the “sampling” function of the “UPmultinomial” R package. We compared the true number of somatic mutations versus the number obtained with the ancestry-matching and the MAF filtering (removal of SNVs with MAF>0.001% and those that appeared in >5% of samples) to quantify the degree of overfiltering and underfiltering of these two filtering approaches (Supplementary Fig. 1h). If, after filtering, the simulated cell line exome has fewer mutations than the number of true somatic mutations, we consider it overfiltered (i.e., some of the true somatic mutations were removed). Otherwise, if it contains more mutations than the number of true somatic mutations, we consider it underfiltered (i.e., residual germline variants remained). Next, we compared the accuracy of mutational signature extraction (see below for the method) with the trinucleotide profiles of simulated cell lines obtained with MAF filtering and with the ‘ancestry matching’ approach.
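The random perturbation behind the bootstrap self-similarity baseline is multinomial resampling of a spectrum. A Python sketch (a stand-in for the "sampling" function of the UPmultinomial R package; the 4-component spectrum is a toy example):

```python
# Perturb a mutation spectrum by multinomial resampling: redraw the same
# total number of mutations with probabilities proportional to the
# original counts, then measure the L1 error to the original.
import random

def multinomial_resample(spectrum, rng):
    total = sum(spectrum)
    probs = [c / total for c in spectrum]
    draws = rng.choices(range(len(spectrum)), weights=probs, k=total)
    out = [0] * len(spectrum)
    for i in draws:
        out[i] += 1
    return out

rng = random.Random(0)
spectrum = [50, 30, 15, 5]  # hypothetical spectrum, 100 mutations total
perturbed = multinomial_resample(spectrum, rng)
error = sum(abs(a - b) for a, b in zip(spectrum, perturbed))
print(perturbed, error)
```

Averaging this error over many runs (100 in the paper) gives the self-similarity baseline.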
We compared the 20 known mutational signatures (>0.85 cosine similarity to PCAWG signatures) that were extracted from both sets of trinucleotide profiles to the mutational signatures extracted from the true somatic trinucleotide profiles. We compared the cosine similarity of the trinucleotide composition of the signatures and the cosine similarity of the exposure scores. The ‘ancestry matching’ signatures show a statistically significantly closer match to the true somatic signatures than the ‘MAF filtering’ signatures, in both trinucleotide composition and exposure scores (Supplementary Fig. 1g). P -values were calculated by the Wilcoxon rank-sum test (artifact signatures were excluded from the test).

Extraction of mutational signatures

We extracted cancer cell line mutational signatures from the 96-component trinucleotide mutation spectra of cancer cell lines obtained with the ancestry matching procedure. In total, we used mutation spectra of 930 cancer cell line exomes (Supplementary Data S13). Note that some cell lines were excluded because they were assigned to the outlier cluster of the tclust algorithm, based on their common germline variants (see above). To extract mutational signatures we used a custom R script based on non-negative matrix factorization (NMF), as described by Alexandrov et al. 33 .
We additionally implemented a number of signature extraction procedures proposed in the literature recently 9 , 38 , 42 and benchmarked the resulting signatures according to several criteria: (a) the agreement with an established set of SBS signatures (PCAWG signatures 9 ), measured as the number of PCAWG signatures that were recapitulated at a ≥0.85 cosine similarity cutoff; (b) the similarity of the cell line signature exposure profiles across cancer types to the exposure profiles of PCAWG signatures, measured as the cosine similarity between the average per-cancer-type exposure profile of a cell line signature and its matching PCAWG signature (for signatures recapitulated at ≥0.85 cosine similarity); (c) the accuracy of the signatures in predicting drug sensitivity profiles across cancer cell line panels (see below for the method). From the matrix containing the mutation spectra of samples, we generated 300 matrices by bootstrap resampling. One bootstrap sample is obtained by applying the sampling function of the “UPmultinomial” R package to each sample’s spectrum. Next, we applied the NMF algorithm to each of the bootstrap samples (“nmf” function of the “nmfgpu4R” R package to obtain different NMF runs; we used the multiplicative update rules algorithm 100 with 10,000 as the maximal number of iterations). For each bootstrap sample, we varied the number of signatures from 2 to 40. Computations were performed on an NVIDIA GeForce RTX 2080 Ti GPU. We implemented the following modifications to the above basic procedure (each tested independently and some in combinations; Supplementary Data S1) to obtain the candidate cell line signatures, which were then matched to the known tumor PCAWG signatures (see below): (i) As in the seminal work of Alexandrov et al. 33 , we clustered each batch of NMF solutions and used the cluster medoids as candidate cell line signatures.
We used the “clara” function in the “cluster” R package, with Euclidean distance, standardization, and the “pamLike” option; the number of samples to be drawn from the dataset was set to 10% of the number of samples. Each batch of NMF solutions obtained as described before (one batch corresponds to 300 × n solutions, where n is the number of signatures, varying from 2 to 40) was clustered into k clusters with k-medoids clustering, varying k from 2 to 30. (ii) We applied RTOL (relative tolerance) filtering as proposed by Degasperi et al. 38 , where (separately for each value of the number-of-signatures parameter) NMF runs that diverge by more than 0.1% from the best NMF run are removed, as measured by the root-mean-square deviation of the factorization. (iii) We implemented the ‘hierarchical extraction’ procedure proposed by Alexandrov et al. 9 , where NMF is repeated iteratively while removing (or down-weighting) the well-reconstructed samples (cosine similarity above a specified threshold) from the previous iteration, to uncover additional signatures. We allowed a maximum of 3 iterations and tested different cosine similarity thresholds ranging from 0.95 to 0.99. In the case of down-weighting, we multiplied the sample’s mutation spectrum by 0.05 instead of removing it. (iv) As in Ghandi et al. 42 , we performed joint signature inference of cell line exomes and tumor exomes. To the initial matrix containing the mutation spectra of cell lines, we added the mutation spectra of 6154 TCGA somatic exomes, which were preprocessed in the same way as the cell line exomes: the same regions of the exome were used (see above for filtering by coverage, mappability, etc.). From the candidate cancer cell line mutational signatures obtained by the above-described methodologies, we searched for the signatures that closely resemble the ones previously found in human cancers 9 (referred to as PCAWG signatures in the text).
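The core factorization step, NMF with Lee-Seung multiplicative update rules (the algorithm cited above), can be sketched in pure Python on a toy matrix; the real analysis factorized 96(+4)-component spectra of 930 cell lines on a GPU:

```python
# Pure-Python sketch of NMF via multiplicative updates: factorize a small
# non-negative matrix V ~ W @ H with k "signatures". Toy 4x4 data.
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=200, seed=0):
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    # Random positive initialization of the factors.
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        # W <- W * (V H^T) / (W H H^T)
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

V = [[5, 3, 0, 1], [4, 0, 0, 1], [1, 1, 0, 5], [1, 0, 0, 4]]
W, H = nmf(V, 2)
WH = matmul(W, H)
err = sum((V[i][j] - WH[i][j]) ** 2
          for i in range(len(V)) for j in range(len(V[0])))
print(round(err, 3))  # squared reconstruction error
```

In the paper this factorization is repeated over 300 bootstrap matrices and a range of k, and the resulting solutions are clustered to obtain candidate signatures (the columns of W) and exposures (the rows of H).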
We compared the individual signatures to the PCAWG signatures, searching for the closest matching cell line signature for each PCAWG signature. As the final set of tumor-like cell line signatures, we kept the closest matching cell line signature if its cosine similarity to the best matching PCAWG signature exceeded 0.85. For cell line signatures that used indel features in addition to the 96-component trinucleotide spectra, cosine similarity was calculated only on the 96 spectra, since the known PCAWG SBS mutational signatures do not have an indel component in their current implementation (v3.2). We considered an additional criterion for matching cell line and tumor signatures where, in addition to the mutation-spectrum cosine similarity, we computed a cosine similarity between the signatures’ average per-cancer-type exposure profiles (for cancer types available in both the cell line and PCAWG data). The spectrum-cosine and exposure-cosine similarities were combined into a single metric with a weight ‘ w ’ controlling the relative contribution of each: w × spectrum-cosine + (1 – w ) × exposure-cosine. We tested different weights: w = 0.5, 0.7, 0.9, and 0.99. For each PCAWG signature, from the set of cell line signatures that match it with spectrum-cosine ≥0.85, we selected the best matching cell line signature as the one with the highest combined metric. We considered using additional regression post-processing for assigning signature exposures to samples, as done by Petljak et al. 31 and Alexandrov et al. 9 . We used the “sigproSS” tool of the SigProfilerExtractor framework 101 to attribute exposures to the extracted cell line signatures (i.e., the signatures were used as input to sigproSS). Additionally, we used a custom script based on regularized regression from the “glmnet” R package, enforcing different degrees of sparseness.
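The combined matching metric above can be sketched directly; the vectors below are toy stand-ins (a 3-component "spectrum" and a 2-cancer-type "exposure profile"), with w = 0.9 as in the final setup:

```python
# Combined signature-matching score: a weighted sum of cosine similarity
# on the mutation spectrum and cosine similarity on the per-cancer-type
# exposure profile. Toy low-dimensional vectors stand in for the real ones.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def combined_score(spec_cl, spec_pcawg, expo_cl, expo_pcawg, w=0.9):
    # w * spectrum-cosine + (1 - w) * exposure-cosine
    return w * cosine(spec_cl, spec_pcawg) + (1 - w) * cosine(expo_cl, expo_pcawg)

# Identical spectra (cosine 1.0), dissimilar exposures (cosine 0.6).
s = combined_score([1, 0, 2], [1, 0, 2], [3, 1], [1, 3])
print(round(s, 3))  # 0.9 * 1.0 + 0.1 * 0.6 = 0.96
```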
We considered Ridge, Lasso, and Elastic Net regression to model a cell line’s mutation spectrum as a combination of cell line signatures, where the resulting regression coefficients are considered the exposures to signatures. We enforced non-negative coefficients and no interaction term, and used cross-validation to determine the optimal value of the lambda parameter. The different procedures for mutational signature extraction we implemented were evaluated according to the above-described criteria (a–c); the results are presented in Supplementary Data S1. As the final set of signatures, we chose the signatures obtained with hierarchical extraction (using a 0.97 cosine similarity threshold for sample removal), using the 4 indel features in addition to the 96 trinucleotide types. We considered both the cosine similarity in the trinucleotide spectrum and the cosine similarity in the exposures across tissues (in tumor genomes), using w = 0.9 as the weight on the trinucleotide-cosine and 0.1 on the exposure-cosine, for matching our cell line signatures with the known COSMIC tumor signatures. No post-processing of exposures to signatures was performed, since it did not yield improvements according to the evaluation criteria; thus, we used the raw NMF scores of the exposure matrix. A notable modification from the signature extraction method presented in Alexandrov et al. 33 is that we did not limit ourselves to a single value of the “number of signatures” NMF parameter (chosen based on measures of fit and consistency, as in Alexandrov et al. 33 ). Instead, the inference was run for many values of this parameter, and the final set of mutational signatures consists of solutions from different values of the ‘number of signatures’ parameter. This procedure yielded 52 signatures (Supplementary Fig. 2).
The cell line signatures are named according to the PCAWG signatures they resemble; e.g., the cell line signature name SBS4/45L denotes that this signature was the closest match for PCAWG SBS4 (i.e., SBS4 is the primary signature), while 45L denotes that the signature also resembled PCAWG signature SBS45 (at cosine similarity ≥0.85). The suffix “L” (for “like”) denotes 0.85 ≤ cosine similarity < 0.95 (a somewhat less close match), while the absence of the suffix “L” means cosine similarity ≥0.95. Names of signatures other than the primary signature (if present) are ordered by decreasing cosine similarity. We checked whether the trinucleotide composition of the exome ‘territory’ covered by the WES data we used could affect the cosine similarities to the known PCAWG signatures, since the latter were extracted from WGS data. We adjusted the trinucleotide spectra of our signatures to match the WGS spectra and re-calculated the cosine similarities to the PCAWG signatures. The cosine similarities are not substantially affected – our signatures map to PCAWG signatures largely irrespective of the adjustment for territories (Supplementary Fig. 17b). Note that the one signature that does change between the two normalizations (0.96 vs 0.87) is an SBS49-like signature, where SBS49 was previously suggested to be an artifact 9 . Furthermore, to check that the mutational signature extraction was not biased in terms of trinucleotide composition due to the removal of lowly covered regions and the common germline loci, we compared the trinucleotide compositions of the exome, of our examined territory (largely the exome with lowly covered regions removed), and of our examined territory after having removed common SNP loci. The trinucleotide composition was highly similar (cosine similarities >0.993 in all three possible pairwise comparisons; Supplementary Fig. 17a).
We next searched for cell line-specific signatures, i.e., signatures that commonly appear in cell line data and do not resemble any of the known tumor signatures. To this end, we employed k-medoids clustering (the “clara” function in the “cluster” R package, with Euclidean distance, standardization, the “pamLike” option, and 10% of the dataset as the number of samples to be drawn). Each batch of NMF solutions (from the signature extraction method selected as final by the evaluation) was clustered into k clusters, varying k from 2 to 40. We chose the clustering result (i.e., a set of signatures) where the agreement with PCAWG signatures was maximized in terms of the number of cluster medoids that resemble PCAWG signatures (at cosine similarity ≥0.85). From such a set of signatures, we selected the ones dissimilar from all PCAWG signatures (cosine similarity <0.8), yielding in total 5 cancer cell line-specific signatures (named SBS-CL). These SBS-CL signatures appear together with real signatures and are robust (i.e., they are cluster medoids); therefore, they are also likely bona fide mutational signatures that might originate, for instance, from cell line-specific mutational processes. In total, this yielded 57 cancer cell line signatures (52 corresponding to known tumor signatures, plus 5 additional cell line-specific signatures) (Supplementary Fig. 2). To investigate whether some of the extracted signatures are a result of incomplete separation of other signatures, we considered the coefficients of a Lasso regression (“glmnet” R package), where we modeled the cell line signatures extracted in this work as a linear combination of PCAWG signatures (Supplementary Fig. 6).

Predicting drug response

We built predictive models of drug response using the Random Forest (RF) algorithm as implemented in the “randomForestSRC” R package.
We compared the predictive performance of different predictors: mutations in cancer driver genes (Muts), recurrent copy number alterations (CNAs), DNA hypermethylation (HypMet), gene expression, previously reported cancer cell line mutational signatures 31 , 32 , 42 , and the mutational signatures extracted in this work. We used the Muts, CNAs, and HypMet data as reported by Iorio et al. 30 , namely, mutations in cancer genes associated with positive selection in tumors, focal recurrently aberrant copy number segments, and hypermethylated informative 5′C-phosphate-G-3′ sites in gene promoters. The dependent variable was the continuous value of the response to a drug (log IC50); we iterated this over all drugs. Another possible way to run Random Forest, not employed here, would be to binarize the log IC50 drug response and run the RF algorithm in classification mode rather than regression mode. However, binarizing the drug response implies some loss of information and, moreover, would necessitate an extra parameter in the analysis (the choice of the binarization threshold), so we opted to use continuous drug response values here. We built cancer type-specific models for each drug separately, considering only models where at least 15 cell lines with drug response were available to train the model. For model validation we used 10-fold cross-validation, which was repeated five times to obtain more stable results. We built random forests with 100 trees and a minimal number of samples in terminal nodes set to 2. We assessed the predictive performance of the predictors by the relative root-mean-square error (RRMSE), i.e., the root-mean-square error divided by the root-mean-square error of the default model. The default model predicts a constant value for all cell lines, equal to the average ln IC50 across the training set. An RRMSE < 1 therefore denotes accuracy better than that of the (uninformative) default model. We define such models as predictive.
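The RRMSE evaluation can be sketched as follows; the test values, predictions, and training-set mean below are hypothetical (Python sketch, not the actual R implementation):

```python
# RRMSE: the model's RMSE divided by the RMSE of the uninformative default
# model that always predicts the training-set mean ln IC50.
# RRMSE < 1 means the model beats the default. All numbers are toy values.
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

y_test = [2.0, 3.0, 4.0, 5.0]   # hypothetical held-out ln IC50 values
y_pred = [2.5, 3.0, 3.5, 5.0]   # hypothetical model predictions
train_mean = 3.0                # mean ln IC50 over the training set
default_pred = [train_mean] * len(y_test)

rrmse = rmse(y_test, y_pred) / rmse(y_test, default_pred)
print(round(rrmse, 3))  # < 1: this toy model beats the default
```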
Results are presented as the average RRMSE per cancer type across all models built for a cancer type (Fig. 2a). Note that not all drugs were tested exhaustively across all cancer cell lines; therefore, the number of models per cancer type (one model corresponds to one drug) may differ due to missing data (<15 cell lines with drug response data available), as may the number of models per predictor type (mutational signatures, gene expression, oncogenic mutations, copy number alterations, DNA hypermethylation) due to data availability. In the analysis of complementarity between different predictor types (Fig. 2d, e and Supplementary Fig. 7b), we thus report relative numbers of predictive models to facilitate a fair comparison. To assess the statistical significance of the differences in predictive performance (Fig. 2b, c), we follow the recommendations given by Demšar 102 . More specifically, to statistically compare the predictive performance of multiple predictors over multiple datasets, we use the corrected Friedman test and the post-hoc Nemenyi test 103 . Here, a dataset corresponds to a pair of one cancer type and one drug. Due to the requirements of this test, only the intersection of drugs modeled across all cancer types and predictors was considered (due to missing data, the number of drugs modeled per cancer type and/or predictor may differ; see above). For each drug-cancer-type pair, predictors are ranked according to their RRMSE for that drug-cancer-type pair, where rank 1 corresponds to the best (i.e., the lowest) RRMSE, 2 to the second-best, etc. Ranks are then averaged across all drug-cancer-type pairs to obtain the average rankings of the predictors. The Nemenyi test performs a pairwise comparison of the predictors’ performance based on the absolute difference of the average rankings of the predictors, to assess whether there are statistically significant differences among predictors.
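The rank-averaging step behind this comparison can be sketched on toy RRMSE values (three predictors over three hypothetical drug-cancer-type pairs; ties, which real data may contain and which would receive mid-ranks, are omitted for simplicity):

```python
# Average rankings for the Friedman/Nemenyi comparison: per dataset
# (drug-cancer-type pair), rank predictors by RRMSE (rank 1 = lowest),
# then average the ranks across datasets. Toy RRMSE values below.
rrmse = {
    "signatures": [0.80, 0.85, 0.90],
    "expression": [0.75, 0.95, 0.85],
    "mutations":  [0.90, 0.90, 0.95],
}
predictors = list(rrmse)
n_datasets = 3

avg_rank = {p: 0.0 for p in predictors}
for d in range(n_datasets):
    ordered = sorted(predictors, key=lambda p: rrmse[p][d])
    for rank, p in enumerate(ordered, start=1):
        avg_rank[p] += rank / n_datasets

print({p: round(r, 2) for p, r in avg_rank.items()})
```

The Nemenyi test then compares these average ranks against a critical difference.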
The test determines the critical difference (CD) for a given significance level α; if the difference between the average ranks of two predictors is greater than the CD, the null hypothesis (that the predictors have the same performance) is rejected, i.e., there is a statistically significant difference between the two predictors. The results of the Nemenyi post-hoc test are presented with an average-ranks diagram 102 . The average predictor rankings are depicted on an axis, in such a manner that the best-ranking algorithms are at the left-most side of the diagram. The algorithms that do not differ significantly (in performance) at a significance level of 0.05 are connected with a bold line; therefore, predictors that are not connected are statistically significantly different according to the test.

Associations with drug response

Randomization test for associations that replicate in independent datasets

We performed two-way association testing in which we searched for robust associations that replicate in two independent datasets (schematic in Fig. 3a). We considered three different types of two-way tests where we enforced that, for a given drug, an association between a particular feature (a mutational signature or a genetic feature) and the drug response from the GDSC database is replicated: (1) in the PRISM drug screening database (GDSC/PRISM test) for the same drug, or (2) with another drug from the GDSC database that shares the same molecular target (GDSC/GDSC (same target) test), or (3) in the Project SCORE CRISPR/Cas9 genetic screen, as an association with the protein-coding gene fitness score of one of the drug’s target proteins (GDSC/PSCORE test). We considered cancer type-specific associations. We required that, in both tests of a two-way test, an association is detected in the same cancer type.
We merged some similar cancer types with a small number of cell lines: esophagus carcinoma and stomach adenocarcinoma (denoted ESCA/STAD), glioblastoma and brain lower grade glioma (denoted GBM/LGG), and head and neck squamous cell carcinoma and lung squamous cell carcinoma (denoted HNSC/LUSC). We additionally considered two groups of cell lines obtained by dividing the colorectal cell lines according to microsatellite instability status (denoted COREAD_MSI and COREAD_MSS; other cancer types did not have enough cell lines with microsatellite (in)stability labels to warrant such a division). Prior to the association search, we removed 21 cell lines that exhibited either sensitivity or resistance nonspecifically towards a large number of drugs (15 cell lines reported by Abbas-Aghababazadeh et al. 104 and 6 outlier cell lines by median ln IC50), as well as an additional 29 cell lines that were reported to be misclassified 105 . In addition, for each cancer type, we removed outlier cell lines by the total number of mutations (remaining after the filtering with the ‘ancestry matching’ procedure; see above), here defined as having a number of mutations above the upper quartile + 3× the interquartile range, or below the lower quartile − 3× the interquartile range (calculated for each cancer type separately). We used binarized exposures to mutational signatures where, for each signature, values below 5% of the second-highest exposure score of that signature (the second-highest was used in order to avoid single high-score outliers observed for some signatures) were set to 0, and the rest to 1 (Supplementary Data S4).
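The per-cancer-type outlier rule (a Tukey-style fence at 3× the interquartile range) can be sketched as follows; the mutation counts below are toy values (Python sketch):

```python
# Flag cell lines whose mutation count lies above Q3 + 3*IQR or below
# Q1 - 3*IQR, computed within one cancer type. Toy mutation counts below.
from statistics import quantiles

counts = [100, 120, 130, 140, 150, 160, 170, 2000]  # hypothetical counts
q1, _, q3 = quantiles(counts, n=4)  # quartiles (default 'exclusive' method)
iqr = q3 - q1
outliers = [c for c in counts if c > q3 + 3 * iqr or c < q1 - 3 * iqr]
print(outliers)  # the hypermutated cell line is flagged
```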
We empirically tested several different thresholds ranging from 1% to 20% (of the second-highest exposure across cell lines), and measured (i) the sparsity of the binarized signatures and (ii) the distribution of binarized exposures across tissues for signatures with known etiology that should appear predominantly in certain tissues (the “UV” signatures 7a and 7b in skin and the “tobacco” signature 4 in lung). We chose 5% since it offered a good tradeoff on these two metrics: the sparsity is not too high, while the tissue distributions of signatures 7a, 7b, and 4 are reasonable (Supplementary Fig. 15B). We used normalized (relative) signature exposures, as described above. We considered only tests with at least 8 cancer cell lines and at least 2 non-zero values of a feature. All association tests were performed by modeling the drug response (or gene fitness score) to associate it with the status of a feature (i.e., a mutational signature or a genetic feature), searching separately for sensitivity and resistance associations. For each association, we calculated the association score as the minimum (in the case of sensitivity) or maximum (in the case of resistance) effect size (Cohen’s d ) across the two independent datasets; i.e., a positive Cohen’s d implies a sensitivity association and a negative one a resistance association. Here, the effect size is the Cohen’s d statistic: the difference in mean drug sensitivity (ln IC50) between the cell lines having the feature and those not having it, divided by the pooled standard deviation of the data. To obtain the association’s empirical p -value, we performed a randomization procedure in which we calculated the association score 100,000 times for randomly shuffled features.
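The effect size computation can be sketched as follows; the ln IC50 values are toy numbers, and the sign convention (positive d when feature-positive lines have lower ln IC50, i.e., sensitivity) is written out explicitly as an assumption (Python sketch):

```python
# Cohen's d with a pooled standard deviation: the difference in mean
# ln IC50 between feature-negative and feature-positive cell lines,
# oriented so that a positive d indicates a sensitivity association
# (assumed sign convention). Sample-variance (n-1) estimates are used.
import math

def cohens_d(with_feature, without_feature):
    n1, n2 = len(with_feature), len(without_feature)
    m1 = sum(with_feature) / n1
    m2 = sum(without_feature) / n2
    v1 = sum((x - m1) ** 2 for x in with_feature) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in without_feature) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

with_feature = [1.0, 1.5, 2.0]     # toy ln IC50, signature present (more sensitive)
without_feature = [3.0, 3.5, 4.0]  # toy ln IC50, signature absent
d = cohens_d(with_feature, without_feature)
print(round(d, 2))  # positive: a sensitivity association
```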
For the sensitivity test, the formula for the p-value is \(p=\frac{\#\{\text{random score}\ \ge\ \text{observed score}\}}{\text{num. of randomizations}}\), and for the resistance test it is \(p=\frac{\#\{\text{random score}\ \le\ \text{observed score}\}}{\text{num. of randomizations}}\). Due to the computational burden, we performed the randomization procedure only for associations that had an effect size >0.2 in the primary test. The empirical p -values were adjusted with the Tibshirani-Storey method 47 . Recent work suggested that MMR-failure signatures can be grouped into a few broad types 38 , 106 , 107 , in particular a group enriched with C>T changes and a group enriched with T>C (equivalently, A>G) changes. Based on this, we considered aggregated mismatch repair mutational signatures, where we merged the p-values and effect sizes of SBS6, SBS15, and SBS44L (denoted SBS-MMR1), and of SBS21 and SBS26/12L (denoted SBS-MMR2). Similarly, we considered the aggregate of the two APOBEC signatures SBS2 and SBS13 (denoted SBS-APOBEC). Pooled p-values of aggregated signatures were obtained by Fisher’s method, while the pooled effect size was obtained by averaging. Pooled p-values were adjusted in the same way as described above. We consider a two-way association statistically significant if FDR<15% and, additionally, the effect size exceeds a threshold in both tests: Cohen’s d > 0.5 for the GDSC/PRISM and GDSC/PSCORE tests (a threshold based on a known set of positive-control associations; Supplementary Fig. 16), while for the GDSC/GDSC (same target) test we required Cohen’s d > 1. In addition, we considered only associations coming from cancer types where the inflation factor lambda was below 1.3 (Supplementary Fig. 8). Note that the supplementary data also lists associations with lambda > 1.3.
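The empirical p-value computation for a sensitivity association reduces to counting how many randomization scores are at least as extreme as the observed one. A Python sketch (the handful of null scores below is a toy stand-in for the 100,000 randomizations actually used):

```python
# Empirical p-value for a sensitivity association: the fraction of
# randomization scores >= the observed score. Toy null scores below
# stand in for 100,000 shuffles of the feature labels.
def empirical_p_sensitivity(observed, random_scores):
    return sum(s >= observed for s in random_scores) / len(random_scores)

random_scores = [-1.0, -0.5, 0.0, 0.1, 0.3, 0.9, 1.2, 1.8, 2.0, 2.5]
p = empirical_p_sensitivity(1.5, random_scores)
print(p)  # 3 of the 10 null scores are >= 1.5
```

For a resistance association the comparison flips to `<=`, matching the second formula above.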
For some analyses, we considered an additional set of associations with an unadjusted p-value threshold of <0.005 and the same effect size threshold of Cohen’s d > 0.5. As a rule of thumb for interpretation, Cohen’s d = 0.2, 0.5, and 0.8 correspond to small, medium, and large effect sizes, respectively 108 . We used the “QCEWAS” R package to calculate the lambda score to estimate the inflation of p -values (calculated separately for sensitivity and resistance associations).

‘Golden’ and ‘Silver’ sets of high-priority associations

We collated a list of 3911 associations (the ‘silver set’; Supplementary Data S10). To make this list, we considered all associations tested in the three ‘two-way’ replication tests that pass the permissive criterion of significance (effect size d > 0.5 in both tests (or d > 1 for the GDSC/GDSC two-way test) and a nominal p < 0.005). From these, an association (between a feature and a drug within a certain cancer type) was listed in the silver set if it was confirmed in more than one ‘two-way’ replication test, or was seen in more than one cancer type. We required that at least one of the supporting associations has FDR<25%. Additionally, we collated a ‘golden set’, a high-priority list of associations involving mutational signatures and cancer functional events, which we consider suitable for follow-up work: here we required that an association involving the same drug is supported in at least three cancer types, or is replicated in all three ‘two-way’ tests with at least one association at FDR < 25% (995 associations; Supplementary Data S8).
Abbreviations of cancer types A list of abbreviations of cancer types used in this study: ALL/CLL, acute/chronic lymphoblastic leukemia; BLCA, bladder urothelial carcinoma; BONE, bone cancer other/not classified further; COREAD, colon adenocarcinoma and rectum adenocarcinoma; COREAD_MSI, microsatellite instable colon and rectum adenocarcinoma; COREAD_MSS, microsatellite stable colon and rectum adenocarcinoma; ESCA, esophageal carcinoma; ESCA_STAD, esophagus carcinoma and stomach adenocarcinoma; EWING, Ewing's sarcoma; GBM, glioblastoma multiforme; GBM_LGG, glioblastoma and brain lower grade glioma; HNSC, head and neck squamous cell carcinoma; HNSC_LUSC, head and neck squamous cell carcinoma and lung squamous cell carcinoma; KIRC, kidney renal clear cell carcinoma; LAML, acute myeloid leukemia; LGG, brain lower grade glioma; LIHC, liver hepatocellular carcinoma; LUAD, lung adenocarcinoma; LUSC, lung squamous cell carcinoma; LYMP, lymphoma; MESO, mesothelioma; MM, multiple myeloma; NB, neuroblastoma; OV, ovarian serous cystadenocarcinoma; PAAD, pancreatic adenocarcinoma; SARC, sarcoma other/not classified further; SCLC, small cell lung cancer; SKCM, skin cutaneous melanoma; STAD, stomach adenocarcinoma; THCA, thyroid carcinoma; UCEC, uterine corpus endometrial carcinoma. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data sources used for this study are listed below and described further in the Methods. The data resulting from our analyses are available in the Supplementary Material, or are otherwise available from the authors upon request. Source data for figures are provided with this paper. 
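For programmatic work with these labels, the abbreviation list above maps naturally onto a lookup table; a partial sketch (a few entries shown, the remainder follow the list above):

```python
CANCER_TYPES = {
    "ALL/CLL": "acute/chronic lymphoblastic leukemia",
    "BLCA": "bladder urothelial carcinoma",
    "COREAD": "colon adenocarcinoma and rectum adenocarcinoma",
    "COREAD_MSI": "microsatellite instable colon and rectum adenocarcinoma",
    "LUAD": "lung adenocarcinoma",
    "SKCM": "skin cutaneous melanoma",
    "UCEC": "uterine corpus endometrial carcinoma",
    # ... remaining entries as listed above
}

def expand(abbrev):
    """Return the full cancer-type name, or the abbreviation itself
    if it is not in the (partial) table."""
    return CANCER_TYPES.get(abbrev, abbrev)
```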
Data sources used: WES bam files for human cancer cell lines (EGA ID number EGAD00001001039 , restricted access that can be applied for by following instructions on EGA), WES bam files for tumors and their matched normals (dbGaP accession ID phs000178 [ ], restricted access that can be applied for by following instructions on dbGaP; bams downloaded from NCI Genomic Data Commons [ ]), drug response data for human cancer cell lines [ ] (Release 8.3), PRISM Repurposing dataset 19Q4 [ ], Project Score CRISPR genetic screening data [ ], and the integrated cancer dependency dataset from the Wellcome Sanger Institute (release 1) and the Broad Institute (19Q3) [ ]. Source data are provided with this paper. Code availability The computer code is available at the GitHub repository: ( ) 109 . As an approach to personalized medicine, a new Nature Communications study proposes that "mutational footprints" of DNA repair are a promising predictive genetic marker for determining which tumors will respond to certain therapies. Cancer therapy increasingly relies on a personalized approach, where genetic changes in an individual tumor can be used to determine the best therapeutic strategy. In many cases thus far, these genetic changes included a so-called driver mutation that would predict response to a drug. For instance, mutations in the BRAF gene in melanoma predict response to BRAF inhibitor drugs, and amplifications in the ERBB2 gene in breast cancer predict response to ERBB inhibitor drugs. However, these examples of successful drug markers are still quite rare. For many mutated driver genes, specific drugs to target them are not available. Moreover, tumors of different patients show high variability in response to drugs and such variability is often not linked to driver gene mutations. Researchers at IRB Barcelona, led by Dr.
Fran Supek, ICREA researcher and head of the Genome Data Science lab, have found that so-called mutational signatures can accurately predict the activity of various drugs applied to cancer cells originating from many types of tumors. These mutational signatures do not originate from driver genes; instead, they reflect a collection of mutations found across the entire genome of a tumor. The mutational signatures can reflect, for example, that the tumor has difficulties in copying or repairing DNA, which may make it more amenable to therapy. "We have performed statistical analysis using machine-learning methods, considering jointly cancer cell genomes, their response to various drugs, and their response to gene editing experiments. Surprisingly, our analysis revealed that the 'classical' genetic markers such as driver gene mutations or copy-number changes are often less powerful than the mutational signature genetic markers in predicting drug response," explains Dr. Supek. DNA repair deficiencies make cancer cells easier to target by many drugs This study found many statistical predictions linking an observed mutational signature with the response (or lack thereof) to a cancer drug. It was previously known that a certain type of deficiency in so-called BRCA genes (which can cause breast, ovarian and prostate cancers) predicts response to drugs targeting BRCA deficiency. This deficiency also leaves a mutational signature in the genome, consisting of certain types of deletions (removed DNA), which can signal that the tumor is treatable by drugs targeting BRCA deficiency. In the current study, IRB Barcelona researchers, headed by the Marie Curie postdoctoral fellow Dr. Jurica Levatić, now a postdoctoral researcher at the Jozef Stefan Institute in Slovenia, have shown that this is only one example of many: various types of DNA repair deficiency, such as defects in the "genomic spellchecker" (DNA mismatch repair), can predispose cancer cells to sensitivity to certain drugs.
Given that tumors have impaired DNA repair mechanisms, these predicted therapies would have a greater capacity to kill cancer cells and spare healthy ones. Prior exposure to mutagenic chemicals, including drugs, may confer cancer cells resistance to future therapies The statistical and machine-learning analyses in this work, jointly implemented by Marina Salvadores, a Ph.D. student at the Genome Data Science lab, can connect databases from prior experiments in which many drugs had been tested on cancer cells growing in vitro (in the lab). Moreover, this study also integrated gene editing experimental data, where CRISPR was used to switch off various drug target genes in the same types of cancer cells. This approach allowed the researchers to link drug target genes to drug treatments, thus adding confidence to their main finding that mutational signatures predict drug activity in cancer. Interestingly, the cancer cells that bear genomic "scars" (mutational signatures) of previous exposure to mutagenic chemicals tended to be resistant to diverse chemotherapeutic drugs. One possible explanation for this is based on the known mechanism by which, for example, brain cancer cells can switch off their DNA repair systems during treatment with the mutagenic drug TMZ, which could permanently convert them into hardy, hypermutating cells resistant to a range of future treatments. The study suggests that this sort of adaptation may be common in cancer. This has potential implications as tumors caused by mutagen exposure (e.g., lung exposure to tobacco or skin exposure to UV light) may be more difficult to treat, since the cells may harbor a long-term memory of dealing with DNA damage. The algorithms used to identify the mutational signatures and link them to drug vulnerabilities are open access.
Future work by the lab will focus on testing these prediction algorithms on patient data; a key challenge to overcome is the scarcity of public genomic data for patients that can be linked with randomized clinical trials. | 10.1038/s41467-022-30582-3
Nano | Researchers use electron microscope to reveal how semiconductor nanowires grow | Daniel Jacobsson et al. Interface dynamics and crystal phase switching in GaAs nanowires, Nature (2016). DOI: 10.1038/nature17148 Journal information: Nature | http://dx.doi.org/10.1038/nature17148 | https://phys.org/news/2016-03-electron-microscope-reveal-semiconductor-nanowires.html | Abstract Controlled formation of non-equilibrium crystal structures is one of the most important challenges in crystal growth. Catalytically grown nanowires are ideal systems for studying the fundamental physics of phase selection, and could lead to new electronic applications based on the engineering of crystal phases. Here we image gallium arsenide (GaAs) nanowires during growth as they switch between phases as a result of varying growth conditions.
We find clear differences between the growth dynamics of the phases, including differences in interface morphology, step flow and catalyst geometry. We explain these differences, and the phase selection, using a model that relates the catalyst volume, the contact angle at the trijunction (the point at which solid, liquid and vapour meet) and the nucleation site of each new layer of GaAs. This model allows us to predict the conditions under which each phase should be observed, and use these predictions to design GaAs heterostructures. These results could apply to phase selection in other nanowire systems. Main Many materials can grow in multiple (meta)stable crystal structures, and phase selection is one of the most fundamental problems in materials science. However, the selection process is difficult to access experimentally; for example, many metastable phases are obtained only by rapid quenching of a liquid into a polycrystalline multiphase solid. By contrast, nanowires provide an ideal system for studying phase selection. Typical zinc-blende (ZB)-structure III-V semiconductors form nanowires in the wurtzite (WZ) structure as well as the ZB 1 , 2 , 3 , 4 , 5 . Nanowires can easily be switched between these phases by varying the temperature, source-material flux or impurities 3 , 4 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 ; and their small diameter guarantees that they are single crystals, with phase switching occurring along the growth axis where it is easily observed. ZB and WZ semiconductors have different band structures 15 , which creates opportunities for designing modulated nanowire structures with new electronic properties. Crystal phase heterostructures are particularly interesting because they can access the electronic properties of heterostructure quantum dots for photonics and single-electron-transistor applications, but without the challenge of achieving compositional control 16 , 17 , 18 , 19 . 
To take full advantage of the possibilities offered by crystal structure control, a detailed understanding of the physics behind crystal phase selection is required. On the basis of post-growth observations, different models have been proposed for phase selection. These models emphasize the role of supersaturation, catalyst geometry and interfacial energies 20 , 21 , 22 , 23 , 24 , 25 . Experimental results are typically interpreted in terms of the dominant role of one of these factors 6 , 7 , 8 , 10 , 12 , 20 , 26 , 27 , 28 . Here, we directly observe the dynamic processes that take place during nanowire growth for each crystal phase, and during the switch between phases, using in situ transmission electron microscopy. We find surprising differences in the structure and dynamics during growth of ZB and WZ nanowires. The switching process itself, and the associated changes in geometry, provide clues that allow us to develop a new model identifying the underlying mechanism driving crystal phase selection. In this model, droplet geometry is the key parameter in determining structure, but in an indirect way, via its effect on the nanowire edge morphology. This understanding allows us to form crystal phase quantum dots with atomic layer precision. Imaging interface and catalyst geometry We observed the two GaAs nanowire crystal phases during growth in situ using a Hitachi H-9000 ultra-high vacuum transmission electron microscope (UHVTEM) 29 , 30 . Si substrates were first covered with pre-grown GaAs nanowires using standard metal–organic vapour phase epitaxy (MOVPE) and Au aerosol particles with diameters of 30 nm, 50 nm and 70 nm. Such samples were loaded into the TEM and heated resistively, using a pyrometer to calibrate the temperature at each heating current (see Methods). 
Pure trimethylgallium (TMGa) and arsine (AsH 3 ) were used as precursor gases, and were introduced close to the substrate using separate capillary tubes to a maximum total pressure during imaging of 2 × 10 −5 Torr. Details of how the growth parameters in situ compare to conventional MOVPE are provided in Methods. On heating to temperatures of about 550 °C, a liquid AuGa droplet formed at the nanowire tips and growth took place at the droplet/nanowire interface. Growth was recorded at 30 images per second. Dark-field imaging conditions, as used in Fig. 1 , allow the crystal structure to be distinguished, that is, the WZ phase and the two twin variants of the ZB phase ( Extended Data Fig. 1 ). Bright-field imaging conditions, as used in Fig. 2 , allow a more accurate determination of the dimensions of the droplet and the nanowire. Figure 1: Interface dynamics during WZ and ZB growth of GaAs. a , Images extracted from a dark-field movie recorded during WZ GaAs growth. A step flows across the top facet of a nanowire that has a diameter of 60 nm; the position of the step is indicated by the arrows. See also Supplementary Video 1 . Growth conditions: 550 °C, AsH 3 pressure of 1 × 10 −5 Torr, TMGa pressure of 3.5 × 10 −8 Torr. The narrow stripes show previously grown ZB segments: ZB can occur in two twinned orientations, one appearing bright and the other dark in this imaging condition. b , Images extracted from a dark-field movie recorded during ZB GaAs growth. The truncation slowly fills then jumps back to its maximum size (the simultaneous rapid step flow across the growth interface is not visible). See also Supplementary Video 2 . Growth conditions (not steady-state for this example): 550 °C, AsH 3 pressure increased from 10 −7 Torr to 1.4 × 10 −5 Torr a few seconds before the first image was recorded, TMGa pressure of 2.0 × 10 −8 Torr. ZB phases appear bright and WZ dark in this imaging condition. 
Note that the truncation shows strongly on one side of the nanowire; this was typical (see text), although some (5%) nanowires showed synchronized oscillation on both sides, as in Supplementary Video 5 . The relative time of each image in a and b is shown in seconds. c , The first ZB bilayer growing on WZ, with imaging conditions as in b . The AsH 3 pressure was reduced; the droplet is in the process of growing larger past the critical volume; the first layer of ZB appears followed immediately by a truncation that cuts into the previously grown WZ. d , The first WZ bilayer growing on a ZB segment. The AsH 3 pressure was increased; the droplet is in the process of growing smaller; the truncation fills in and the first layer of WZ appears via step flow. The truncation does not appear, and slow step flow occurs, even though ZB covers the top facet. Scale bars are 10 nm in all images. Figure 2: Changes in droplet volume during phase switching. a , Series of bright-field images obtained during growth of a GaAs nanowire at varying AsH 3 pressure, constant TMGa pressure (2 × 10 −8 Torr) and constant temperature (550 °C). Scale bar is 10 nm. b , Droplet aspect ratio h / d and angle ϕ as defined in a . The times of the images in a are indicated by coloured boxes. The droplet volume takes several minutes to respond to the change in pressure. The crystal phase is indicated by blue (WZ) and red (ZB) squares. Each red square marks the occurrence of a truncation of the top facet and nucleation of a ZB bilayer. Each blue square marks the identification of WZ growth via step flow. c , AsH 3 pressure variation over time. We find that, in situ , both ZB and WZ GaAs can be grown by varying the precursor pressures (the V/III ratio) while maintaining a constant temperature.
Within the parameter range accessible in situ , WZ GaAs forms at higher V/III ratios and ZB at lower ratios at steady-state conditions; transient conditions are discussed below. The two phases show marked differences in terms of their growth dynamics. For WZ GaAs, growth proceeds by step flow across the droplet/nanowire interface ( Fig. 1a and Supplementary Video 1 ). Steps flow slowly with each one starting as soon as the previous one has completed its flow ( Extended Data Fig. 2a ). By counting the number of step-flow events and correlating with the length of the nanowire ( Extended Data Fig. 2b ), as in ref. 31 , we find that each step flow represents the addition of one WZ GaAs(0001) bilayer, with a height of 0.3 nm. The growth rates are low under the conditions accessible in situ , typically one bilayer per minute (see Methods and Supplementary Video 1 ). Growth rates are proportional to AsH 3 pressure ( Extended Data Fig. 2c ), which suggests that growth under these circumstances is limited by the arrival and incorporation of As (see Methods). We can then understand the step-flow dynamics through the solubility of As in AuGa, which is generally accepted to be low 32 . Since the droplet contains no reservoir of the rate-limiting species (in our case, As), the arriving atoms are incorporated immediately into the nanowire, leading to slow and gradual step flow 33 . The growth of ZB GaAs looks quite different ( Fig. 1b , Supplementary Video 2 ). Growth similarly proceeds by addition of bilayers, but each bilayer flows across the growth interface too rapidly to observe. Furthermore, the droplet/nanowire interface shows an oscillating geometry at the trijunction (at which solid, liquid and vapour meet) that is similar to that seen in Si, ZB GaP, Ge and Al 2 O 3 (refs 30 , 31 , 34 , 35 ). The edge of the nanowire appears truncated ( Fig. 1b , first panel). The three-dimensional geometry of this ‘edge facet’ is shown schematically in Extended Data Fig. 1 . 
Material gradually adds to the edge facet to fill in the corner ( Fig. 1b , second and third panels). The interface jumps forwards as one step flows quickly, and the edge facet reappears ( Fig. 1b , fourth and fifth panels). Each oscillation in trijunction geometry is correlated with the nucleation and flow of a new bilayer, as in ref. 31 . The step moves quickly, even with no reservoir of As, because it is supplied by material from the truncated volume. A rough estimate of the change in the truncated volume is consistent with it being the source of the one bilayer of growth. In parallel with the observed changes in the interface dynamics, the droplet geometry also changes as we vary the V/III ratio to achieve growth of WZ and ZB GaAs. Figure 2 and Supplementary Video 3 show the effect on the droplet of changing the AsH 3 pressure between high values (to achieve WZ) and low values (to achieve ZB) at constant TMGa pressure (2 × 10 −8 Torr). Temperature was kept constant throughout the experiments (at 540–560 °C) to avoid introducing temperature dependencies that would obscure the observed trends (see Methods). Throughout the changes in AsH 3 pressure, the nanowire continued to grow. The most obvious feature is the change in the volume of the droplet. We quantify this in Fig. 2b via the droplet aspect ratio h / d (droplet height divided by nanowire diameter at the growth interface) or, equivalently, via the droplet angle ϕ (angle between the basal plane and the tangent to the droplet at its edge; see Fig. 2a ). On decreasing the AsH 3 pressure, the droplet increases in volume; increasing the AsH 3 pressure decreases the droplet volume. A quasi-steady-state volume is reached, depending on the V/III ratio. 
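The two measures of droplet geometry used here, the aspect ratio h/d and the angle ϕ, are interchangeable if the droplet is idealized as a spherical cap sitting on the circular top facet: the tangent at the edge then obeys tan(ϕ/2) = h/(d/2). A quick numerical check of this purely geometric relation, consistent with the switching values reported below (h/d ≈ 0.95 corresponding to ϕ ≈ 125°):

```python
import math

def phi_from_aspect(h_over_d):
    """Droplet angle phi (degrees) of a spherical cap of height h on a
    facet of diameter d; the edge tangent obeys tan(phi/2) = 2h/d."""
    return math.degrees(2.0 * math.atan(2.0 * h_over_d))

def aspect_from_phi(phi_deg):
    """Inverse relation: h/d = tan(phi/2)/2."""
    return math.tan(math.radians(phi_deg) / 2.0) / 2.0

print(phi_from_aspect(0.5))    # 90 deg: a hemispherical droplet
print(phi_from_aspect(0.95))   # about 124.5 deg, near the observed switch
```

The hemisphere case (h/d = 0.5 giving ϕ = 90°) matches the statement later in the text that a hemispherical droplet corresponds to ϕ = 90°.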
These volume changes must be driven by the addition or subtraction of Ga: we do not expect Au to move in and out of the droplet, because its diffusion on GaAs is assumed to be negligible at this temperature 36 , whereas As makes up only a small fraction of the volume, owing to its low solubility (discussed above). An Au–Ga–As alloy with over 40% Ga forms a liquid at this temperature, with no upper limit on the Ga content 32 ; consequently, droplets of a range of volumes are possible. Furthermore, because the volume changes occur more quickly than the rate at which Ga could be consumed by incorporation into the growing nanowire, the Ga must be supplied or removed by surface diffusion along the nanowire. In the simplest picture 30 , there is a surface reservoir of mobile Ga adatoms that equilibrate with the droplet over time whenever the chemical potentials of Ga in the droplet (and on the surface) are changed by altering the AsH 3 pressure. In this way, the V/III ratio controls the droplet size. (A more complete treatment would include diffusion and the effect of As flux on Ga diffusion, but this would not change the general picture 22 .) We have shown above that crystal structure and droplet volume both change as the V/III ratio varies. We now explore how they correlate with each other. During a growth experiment, it is possible to measure crystal structure and droplet volume by alternating between dark- and bright-field imaging conditions. The crystal structure identified during the experiment in Fig. 2a is shown as the red and blue data points in Fig. 2b . It is clear that the switch between WZ and ZB crystal structure occurs as the droplet passes a certain aspect ratio—under these conditions, this is h / d ≈ 0.95 and ϕ ≈ 125°. (Other experiments show a small hysteresis that is not visible in this data; see, for example, Supplementary Fig. 3 .) 
Because the droplet takes several minutes to respond to the pressure change, it is clear that WZ–ZB growth is correlated to h / d and ϕ rather than to the instantaneous AsH 3 pressure. This direct correlation between crystal switch and droplet dimensions (volume, aspect ratio and angle), governed ultimately by the V/III ratio, is a key result that provides the basis for the model we develop below. Understanding that the droplet geometry is the critical parameter, rather than the gas environment itself, provides useful guidance in growing crystal phase heterostructures. The length of each crystal phase segment depends on the time during which the droplet has the appropriate geometry. The relatively slow kinetics of the change in droplet volume mean that the V/III ratio must be designed with appropriate offsets in timing. An example is shown in Fig. 3 . Here, the V/III ratio was set initially at a value that formed WZ GaAs. The AsH 3 pressure was then decreased, for short pulses, to a V/III ratio that would be expected to form ZB GaAs. The result is a series of ZB inclusions in a WZ nanowire with lengths that are repeatable, but not directly proportional to the pulse duration. The shortest pulses did not form ZB GaAs at all. Longer pulses produced one or two bilayers of ZB stacking. The correlation between droplet volume and crystal phase in Fig. 3 confirms that the droplet must reach a critical volume for the structural change to occur; the reduction of AsH 3 pressure in itself may not trigger a structure change. Designing a crystal phase heterostructure thus requires consideration of the kinetics of the change in droplet volume. Figure 3: Growth of a WZ nanowire containing multiple narrow ZB segments. TMGa pressure (2 × 10 −8 Torr) and temperature (550 °C) were constant. a , Droplet aspect ratio h / d and angle ϕ versus time (grey line), and crystal phase versus time (squares). 
Each square indicates the addition of a bilayer of WZ (blue) or ZB (red), as described in Fig. 1 . ZB segments with thicknesses of zero, one or two bilayers form each time h / d increases. b , AsH 3 pressure variation over time. The pressure was held at 1 × 10 −5 Torr to grow WZ, but was pulsed downwards to less than 10 −8 Torr for seven intervals with durations 5 min, 5 min, 7 min, 7 min, 9 min, 9 min and 5 min, respectively. c , d , Images of the nanowire at the start ( d ) and end ( c ) of the experiment; scale bar is 10 nm. In c , the position of the growth front at each of the seven intervals is indicated by an arrow. The ZB segments grown in the 7- and 9-min intervals are visible as narrow stripes. Three segments have one ZB twin orientation (bright contrast in this imaging condition) and one has the other ZB twin orientation (dark contrast). A model for interface geometry Developing a framework to understand the relationship observed above between droplet volume, interface growth dynamics and crystal structure requires two additional key observations. The first observation is that when the droplet changes volume, it changes composition too. This could in principle affect phase selection, for example, by changing the surface energy 27 . It is therefore not immediately clear which factor determines crystal structure, the Au:Ga ratio or the geometry ( h / d and ϕ ). To establish this we measured h / d and ϕ at the switch for one particular nanowire, then allowed the nanowire to increase its diameter by conformal growth on the sidewalls, and again measured h / d and ϕ while inducing a switch ( Extended Data Fig. 3 ). Because the nanowire widens, but the amount of Au present does not change, relatively more Ga is needed to achieve the same h / d and ϕ . We did not see any strong effect of diameter on the switch between crystal phases; h / d and ϕ appear to be the controlling parameters.
The second observation concerns the relationship between crystal phase and the dynamics at the growth front. In Fig. 1c, d and Supplementary Videos 2 and 4 we show the crystal switch in more detail, specifically the growth of the first ZB layer on WZ GaAs and the first WZ layer on ZB GaAs. Starting from a WZ nanowire, we reduce the AsH 3 pressure ( Fig. 3c ); the droplet enlarges and the first ZB layer forms as the critical h / d and ϕ are reached. As the ZB step flows (too rapidly to see) across the growth interface, an edge facet appears ( Supplementary Video 4 ). This facet cuts into the WZ beneath. Thus, even though steady-state growth of WZ GaAs proceeds without an edge facet, it is possible to form an edge facet in WZ GaAs under appropriate conditions. Conversely, if we start from a ZB nanowire and increase the AsH 3 pressure, as shown in Fig. 1d , then the droplet shrinks and the first WZ layer grows. It grows by slow step flow and without an edge facet appearing ( Supplementary Video 2 ), even though ZB is exposed on the growth interface as it starts to grow. These morphology observations are particularly surprising. One might expect the presence of an edge facet to be controlled by the crystal structure at that facet; but, instead, Fig. 1c, d shows that it correlates with the crystal structure on the main (that is, ZB(111) or WZ(0001)) growth facet, and thus with h / d and ϕ . To determine cause and effect, we analyse the ways in which the droplet angle ϕ affects the morphology of the growth interface. Equilibrium crystal shapes generally do not have edge angles as sharp as 90°, so one would expect an edge facet in a GaAs crystal. A sharp edge only exists during nanowire growth because the droplet is present, providing a capillary force that can pull on the edge facet and shrink its size to zero. We calculate the circumstances under which this occurs. 
We assume an ideal, symmetric nanowire for which the droplet angle ϕ is the same all around the edge (see Fig. 4a ); the more realistic, asymmetric case is discussed below. The difference in free energy between this ideal nanowire with an edge facet and a nanowire with the same geometry but a sharp edge is given by equation (1) (following ref. 31 ), in which L is the total length of the edge, y is the facet length and θ is the facet angle (see Fig. 4a ), μ cat − μ 0 reflects the supersaturation of chemical potential μ in the catalyst, and c 2 includes various other second-order terms 31 . For a sufficiently small facet length y , this energy is dominated by the linear term c 1 , which reflects the capillary forces acting on the corner facet. Therefore, for c 1 < 0, it is always energetically favourable to have the edge facet (Δ E < 0), whereas for c 1 > 0, we expect the edge to be sharp everywhere and have no facet. By examining c 1 in more detail and including all the capillary terms 37 , we find equation (2), in which γ e is the liquid–solid interfacial energy at the edge facet, γ vs is the vapour–solid interfacial energy on the sidewall, γ ls is the liquid–solid interfacial energy at the main growth facet and γ vl is the vapour–liquid interfacial energy. Figure 4: Model relating droplet size to interface morphology. a , Schematic of a quasi-two-dimensional, ideal, symmetric nanowire illustrating the droplet angle ϕ , and the edge facet angle θ and length (in the growth direction) y . ‘N’ marks the interior point at which ZB nucleates; ‘T’ marks the trijunction. b , Edge facet length y versus droplet angle ϕ , calculated for two values of supersaturation (blue, low; green, high) for the symmetric nanowire. There is a range of ϕ in which y = 0; in this range c 1 > 0 and the edge facet does not exist. Supersaturation does not affect this range. c – e , Schematics of the nanowire and droplet for ϕ < 90° ( c ), ϕ = 90° ( d ) and ϕ > 90° ( e ).
The possibility of the droplet depinning from the point T is not included in the model. Connecting geometry to crystal phase We now consider the ways in which the presence or absence of an edge facet controls the crystal phase. Without attempting to develop a microscopic model, we can understand heuristically how this would occur by considering a previous analysis 20 . Several models have argued that the crystal structure should be determined by the location of the nucleation event on the main growth facet 6 , 7 , 8 , 10 , 13 , 20 , 21 , 22 , 23 , 26 , 38 . The argument is that the metastable WZ phase can only grow if it has a lower nucleation barrier than does the ZB phase. With sharp edges, nucleation is expected to occur at the trijunction (rather than in the middle of the facet), so the solid–vapour interface plays a critical part. In particular, the solid–vapour interface energy is thought to be lower for WZ nanowires, reducing the nucleation barrier for WZ relative to ZB in this geometry 20 .
However, when edge facets are present, nucleation on the main facet occurs away from the trijunction (presumably at position ‘N’ in Fig. 4a ; ref. 31 ) and the liquid–vapour interface plays no part. If we adopt this argument, then it is no surprise that the change in the trijunction geometry can result in easier nucleation of WZ than ZB in one case, but not the other. This argument is also qualitatively consistent with in situ X-ray diffraction studies that infer (indirectly) that crystal structure in GaAs nanowires is determined by the geometry of the liquid–vapour interface 39 , 40 . Here, we have not considered effects of interlayer interactions on the nucleation barrier, which might lead to formation of higher-order crystal phases, such as 4H, under certain conditions 41 , because we do not observe such phases in our experiments. At very small h / d , our analysis also suggests the possibility of an edge facet and, hence, ZB growth ( Fig. 4b ). Although we cannot access such conditions in our experiments, they could occur transiently at the beginning of nanowire growth, because the droplet has a much smaller h / d ratio when sitting on a flat surface 42 , and perhaps at the end of growth if material in the droplet is consumed. Indeed, the ZB phase has been observed at the bases and tips of WZ nanowires 20 , although under growth conditions that are different enough that our model might not be applicable. Recent experiments have demonstrated two transitions (from ZB to WZ and then back to ZB) as the V/III ratio is increased 9 , consistent with the model in Fig. 4 . However, because our experiments have only a limited range of V/III ratios, we cannot observe the second transition back to ZB and so cannot assess whether the transition is associated with interface structure in a way that is analogous to the switch at lower group V pressures. The discussion above is simplified in several respects.
No difference in interfacial or surface energies between ZB and WZ structures is included. The small, but real, differences could lead to hysteresis in the switching angle as the droplet grows and shrinks. However, data such as that in Fig. 2 , which displays no strong hysteresis, suggest that the droplet angle has a larger effect than do the differences between ZB and WZ interfacial energies. More importantly, our quasi-two-dimensional model treats the droplet angle ϕ as uniform all the way around the edge. For the true three-dimensional geometry, in which the droplet sits on a hexagonal prism whose side lengths may not be equal 43 , it is clear that ϕ will vary around the trijunction. Suppose that for a WZ nanowire we change the conditions to enlarge the droplet. At some point, ϕ will become large enough along one edge for that edge to become truncated, even though other edges remain sharp. As the droplet continues to grow, every other edge will progressively become truncated. Therefore, we expect any nanowire with unequal edge lengths to exhibit a mix of sharp and truncated edges over a range of droplet volumes. In the experiments, we observe the ZB phase once the first truncated corner appears ( Fig. 1 and Supplementary Videos 2 and 4 ). Because we typically stabilize the conditions as soon as we see the crystal switch, the majority of nanowires presented here show a mix of sharp and truncated edges. However, we also observe symmetrically oscillating nanowires ( Supplementary Video 5 ), presumably because the wire is more symmetric or because the droplet has grown large enough to cause all edges to be truncated. The observation mentioned above that the ZB phase grows if any edge is truncated, whereas WZ grows only when all edges are sharp, implies that nucleation of ZB at a truncated edge is actually easier than nucleation of either WZ or ZB at a sharp one. 
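The selection rule just described (ZB grows once any edge is truncated; WZ grows only when all edges are sharp) reduces to a simple predicate over the six trijunction edges. In the sketch below the per-edge truncation criterion is a hypothetical critical local droplet angle; in the model itself the real criterion is the sign of c 1 on that edge.

```python
# Sketch of the stated phase-selection rule for a hexagonal nanowire:
# ZB nucleates if any edge of the cross-section carries a truncation facet;
# WZ grows only when every edge is sharp.
PHI_CRIT = 145.0  # hypothetical critical local droplet angle, degrees

def edge_truncated(phi_local_deg):
    """Stand-in truncation criterion for one edge (assumption: a simple
    angle threshold replaces the sign-of-c1 test used in the model)."""
    return phi_local_deg > PHI_CRIT

def crystal_phase(phi_per_edge):
    """phi_per_edge: local droplet angle at each of the six trijunction edges.
    Unequal edge lengths make these angles unequal, so mixed sharp/truncated
    configurations (and hence ZB) appear over a range of droplet volumes."""
    return "ZB" if any(edge_truncated(p) for p in phi_per_edge) else "WZ"

print(crystal_phase([100, 105, 110, 100, 105, 110]))  # all sharp
print(crystal_phase([100, 105, 150, 100, 105, 110]))  # one truncated
```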
This somewhat unexpected result could provide guidance in refining model parameters in nucleation calculations. Conclusions Direct observation during growth has enabled us to probe the phenomena controlling crystal phase in nanowires. WZ and ZB crystal phases in GaAs appear markedly different during growth in terms of the morphology of the nanowire/droplet interface, the flow of steps and the droplet size. The step-flow kinetics can be understood as a consequence of As-limited growth, low As solubility in the droplet, and the role of the edge truncation as an alternative reservoir. Examining the switch between phases suggests a scenario in which the growth conditions (here, the V/III ratio) determine the volume of the droplet and, hence, its aspect ratio h / d and angle ϕ ; the value of ϕ determines whether an edge facet will be present; the presence or absence of the edge facet determines the nucleation site for a new layer; and the nucleation site determines which phase, WZ or ZB, is most likely to nucleate. Our interpretation differs markedly from previous models of phase selection. Because nanowire growth has been achieved using a wide range of parameters and growth techniques, it is possible that phase selection is controlled by different physics under different circumstances. However, the regime we analyse here, MOVPE under As-limited conditions, has advantages for atomic-level control and high-throughput manufacturing. This understanding of the causal sequence, in particular the changes in droplet volume with conditions and the controlling role of the droplet angle ϕ , has practical consequences. First, the large changes in droplet volume as a function of conditions may be relevant to aspects of nanowire growth other than crystal phase control 13 . 
For example, kinking can be caused by depinning of droplets from the nanowire tip 37 , 44 , and experiments such as those shown here can explore the range of conditions under which droplets attain sizes that are sufficiently large or sufficiently small to cause depinning. In terms of crystal phase control, because the Au:Ga ratio in the droplet seems not to be critical, our results might be applicable to self-catalysed (Au-free) nanowire growth (although any small difference in liquid surface energies could lead to a slightly different critical angle). A second consequence is that any means of controlling the energy balance between a truncated and sharp edge should affect the crystal phase: we used the V/III ratio here, but temperature and surfactants are other possible ways to tune the crystal phase. We anticipate that similar behaviour may occur in other III-V semiconductors that exhibit polytypism, although it is not guaranteed because the various interfacial-energy parameters are material-specific. Finally, understanding how crystal structure switching depends on the kinetics of group III motion into and out of the droplet helps us work towards precise control of individual crystal phase superlattices, to enable fabrication of new types of electronic devices that make full use of the possibilities for engineering band structure that are provided by crystal-phase-engineered nanowires. Methods The GaAs nanowires imaged in this study were grown on Si(111). The substrates were cut from a Si(111) wafer into strips 3 mm × 350 μm × 500 μm, small enough to fit directly into the TEM heating holder. The strips were, however, too small to be handled in the GaAs growth system, so they were stacked in arrays, parallel to each other with the polished surface facing upwards, and mounted on a larger Si wafer.
At Lund University, Au aerosol particles with diameters of 30 nm, 50 nm and 70 nm were deposited onto the arrays of strips using a size-selected aerosol source at a total density of about 1 particle per μm 2 . Then, GaAs nanowires of the order of 500 nm in length were grown on the arrays using standard metal–organic vapour phase epitaxy in an Epiquip system, operating at 100 mbar with AsH 3 and TMGa as precursor gases and H 2 as carrier gas. After growth, the arrays were glued to sample boxes using a small piece of SEM-type double-sided carbon tape, the sample boxes were placed in a plastic bag, which was vacuum sealed, and then the bag was sent through air to the UHVTEM at IBM. The individual strips were separated and each sample was degassed in UHV by resistive heating below 100 °C for 30 min, flowing a direct current through the Si strip. The heating current required for a temperature of around 300 °C was then determined in a separate UHV chamber using an infrared pyrometer. All of the strips had a similar temperature–current calibration, so it was possible to estimate the current required to heat the sample to 500 °C or 550 °C. The sample was transferred to the UHVTEM column to check that the nanowires and Au catalysts were still present after this process. Finally, TMGa was flowed to a chosen pressure of around 5 × 10 −8 Torr as measured using a mass spectrometer, AsH 3 was flowed to a chosen pressure of around 2 × 10 −5 Torr as measured on the column ion gauge, the nanowires were heated to 500–550 °C and GaAs was grown at the nanowire tips. The crystal phase was generally controlled using AsH 3 pressure, which was easier to measure and faster to change than TMGa. After experiments on one sample were completed, the full current–temperature calibration curve was obtained for that sample. The reason for this calibration procedure was to prevent any damage (for example, etching) of the wires by overheating before the growth experiment began. 
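With the precursor pressures quoted here, the nominal V/III ratio follows directly; the short calculation below simply restates that arithmetic.

```python
# Nominal V/III ratio from the quoted partial pressures (Torr).
p_tmga = 5e-8   # group III precursor (TMGa), mass-spectrometer reading
p_ash3 = 2e-5   # group V precursor (AsH3), column ion gauge reading

v_iii = p_ash3 / p_tmga
print(round(v_iii))  # 400
```

A ratio of ~400 is well inside the V/III > 100 regime that, as discussed below, nevertheless yielded group-V-limited growth in the TEM.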
Owing to drift of the temperature on continued heating, we estimate the temperature accuracy to be ±20 °C. All observations of crystal switching occurred between 500 °C and 570 °C. Approaching 500 °C, the temperature range over which switching occurred became narrower and ZB grew for all accessible pressures. Above 600 °C, the nanowires etched slowly at the Au/GaAs interface, presumably owing to the low group V pressure. This growth in situ within the TEM is somewhat different from standard MOVPE. Even though the conventional MOVPE precursor gases are used, there is no H 2 carrier gas during growth within the TEM, as is typically used during MOVPE growth of GaAs. In addition, the absolute pressures of the two precursor species are lower than typical precursor partial pressures used in MOVPE. The lower partial pressures can alone account for the low growth rates observed here compared to those observed in MOVPE. To compare the effects of the V/III ratio, we need to consider the possible differences between the two methods in more detail. The growth in this study was observed to always be group-V limited, as seen in Extended Data Fig. 2c . This is in contrast to standard MOVPE, where high group-V flows and group-III-limited regimes are typically used. Instead, one could argue that the in situ TEM conditions are more similar or relevant to chemical beam epitaxy (CBE), or possibly molecular beam epitaxy (MBE), than to MOVPE. The group-V-limited regime is also highly relevant to catalyst-free growth from Ga droplets. One exception for MOVPE is a recent study 9 exploring group-V-limited regimes in a standard MOVPE reactor to grow WZ and ZB GaAs. In this work, it was shown that at low enough group-V flow to yield group-V-limited growth, WZ grows at ‘high’ V/III ratio, whereas ZB grows when the V/III ratio is lowered. 
This result is in contrast to the more well-known behaviour in the group-III-limited regime, in which ZB forms at high V/III ratio; however, the result of ref. 9 is entirely in agreement with the results of this study. Where the present work and that presented in ref. 9 differ is in the absolute magnitude of the V/III ratio: here, V/III ratios of 100 or more yielded group-V-limited nanowire growth; in ref. 9 , the group-V-limited regime occurred at V/III ratios of less than 2. This comparison suggests that the effective As pressure at the growth front is substantially lower in the TEM, relative to the Ga pressure, than in typical MOVPE. To understand this, we note that AsH 3 pyrolysis in GaAs growth is generally considered to proceed heterogeneously on GaAs surfaces, without interaction with the carrier gas 45 . This pyrolysis starts with adsorption of AsH 3 onto the surface, followed by sequential dissociation of H atoms one by one, eventually leaving atomic As adsorbed on the surface 46 . When AsH 3 is combined with TMGa, however, the two species decompose together, simultaneously, via adduct formation on the surface; this decomposition pathway does not involve hydrogen and is more efficient than the decomposition of either species alone 45 . That the species decompose primarily on the surface is an important clue to the relatively inefficient supply of As in the UHVTEM. First, the nanowires are grown on Si substrates rather than on GaAs; although there is also ample GaAs surface on the pre-grown nanowire stubs, this surface is clearly different from the typical GaAs substrates used for MOVPE nanowire growth. Second, the surfaces are likely to be passivated with hydrogen when growth occurs in a H 2 atmosphere; the absence of H 2 here could affect the supply in a number of ways, changing, for example, the decomposition process and precursor surface diffusion. Finally, the decomposition process relies on the adsorption of gas-phase As species.
This adsorption process naturally depends on the partial pressure; because As has a substantially higher vapour pressure than Ga, the adsorption process of As will be reduced to a greater extent by the lower partial pressure. Other minor differences between growth in the TEM and growth in a reactor are expected, owing to the experimental set-up. When using needle valves rather than standard mass-flow controllers to control the precursor flows, the experimental parameters are less accurately controlled than they are in dedicated epitaxy growth systems. At the high V/III ratios used (V/III > 100), the TMGa partial pressure is much lower than that of AsH 3 . A gauge reading the total pressure close to the sample is used to monitor the AsH 3 flow and provide fast feedback on the AsH 3 pressure at the sample. To monitor TMGa pressure, a mass spectrometer is used, with a controlled, steady pressure of TMGa set at the start of the experiment and generally held constant. The mass spectrometer is continuously used during the experiments to monitor any drift in the TMGa pressure. | (Phys.org)—A team of researchers with members from Sweden, the U.K. and the U.S. has used a transmission electron microscope to discover the secrets behind how nanowires used to make semiconductors grow. In their paper published in the journal Nature, the team describes their microscopic study of gallium arsenide nanowires during their growth phase and what they learned about the process. Anna Fontcuberta i Morral with École Polytechnique Fédérale de Lausanne in Switzerland offers a News & Views Perspective piece on the work done by the team in the same journal issue outlining the process used and explaining what the results will mean for advances in electronics, photonics and quantum information research efforts. Scientists have discovered many useful properties of crystals leading to the development of many modern products, such as computers and photonic devices. 
Such devices depend on an ability to grow crystals in ways that suit particular needs. But, as Fontcuberta i Morral notes, the lack of a complete understanding of what occurs during the initial stages of crystal growth is holding back the development of a wider range of products. In this new effort, the researchers sought to learn more about polytypism—where a compound has the ability to exist as various crystal forms with differences only in their bilayer structure—by taking a very close look at the initial stages of gallium arsenide nanowire formation during the vapor–liquid–solid method. They report that new bilayers formed at the triple-phase line, resulting in a flat layer at the top, but when the liquid-metal droplet used as a catalyst grew to a certain size, an edge appeared which altered the growth of the crystal—bilayers suddenly formed faster and the edge began to oscillate. The researchers suggest that droplet size directly impacted the contact angle and the morphology of the liquid-solid interface. They noted also that angles close to 90° typically resulted in nucleation of bilayers, whereas smaller angles typically led to suppression of nucleation of bilayers, allowing the formation of zinc-blende structures. Fontcuberta i Morral suggests the findings by the team provide a new pathway towards crystal-phase design, allowing engineers to select the crystal phase they desire for particular applications. | 10.1038/nature17148 |
Medicine | Study: Cells of three advanced cancers die with drug-like compounds that reverse chemo failure | Amila K. Nanayakkara et al, Targeted inhibitors of P-glycoprotein increase chemotherapeutic-induced mortality of multidrug resistant tumor cells, Scientific Reports (2018). DOI: 10.1038/s41598-018-19325-x Journal information: Scientific Reports , Biochemistry | http://dx.doi.org/10.1038/s41598-018-19325-x | https://medicalxpress.com/news/2018-01-cells-advanced-cancers-die-drug-like.html | Abstract Overexpression of ATP-binding cassette (ABC) transporters is often linked to multidrug resistance (MDR) in cancer chemotherapies. P-glycoprotein (P-gp) is one of the best studied drug transporters associated with MDR. There are currently no approved drugs available for clinical use in cancer chemotherapies to reverse MDR by inhibiting P-glycoprotein. Using computational studies, we previously identified several compounds that inhibit P-gp by targeting its nucleotide binding domain and avoiding its drug binding domains. Several of these compounds showed successful MDR reversal when tested on a drug resistant prostate cancer cell line. Using conventional two-dimensional cell culture of MDR ovarian and prostate cancer cells and three dimensional prostate cancer microtumor spheroids, we demonstrated here that co-administration with chemotherapeutics significantly decreased cell viability and survival as well as cell motility. The P-gp inhibitors were not observed to be toxic on their own. The inhibitors increased cellular retention of chemotherapeutics and reporter compounds known to be transport substrates of P-gp. We also showed that these compounds are not transport substrates of P-gp and that two of the three inhibit P-gp, but not the closely related ABC transporter, ABCG2/BCRP. The results presented suggest that these P-gp inhibitors may be promising leads for future drug development. 
Introduction Despite advances in chemotherapies against cancer, multidrug resistance (MDR) remains a major obstacle to positive therapeutic outcomes in adult 1 , 2 , 3 as well as pediatric cancers 4 . The most common mechanism of MDR is overexpression of drug efflux transporters of the ATP binding cassette (ABC) family. These pumps reduce the intracellular accumulation of many anticancer drugs to sub-therapeutic levels, thus decreasing or abolishing chemotherapy efficacy. P-glycoprotein (P-gp/ABCB1) is a glycosylated 170-kDa transmembrane protein that is encoded by the MDR1 gene 5 and is the best studied drug efflux pump of the family of ABC transporters 6 . P-gp is composed of two hydrophobic domains which include 12 transmembrane α-helices that make up the drug binding domains (DBD) and are involved in transporting toxins and xenobiotics out of the cell. Two nucleotide binding domains in the cytoplasmic region are responsible for coupling ATP hydrolysis to the transport processes 7 , 8 . P-gp is expressed in a variety of normal tissues, such as the intestine, brain, liver, placenta, kidney, and others 9 and is protective against xenobiotic substances and toxic compounds. It was noted close to 40 years ago that the expression of P-gp is correlated with MDR in many different types of cancers 10 , as well as the lack of response to chemotherapies and poor prognoses in breast 11 and ovarian 12 cancers. Overexpression of P-gp in cancers results in reduced accumulation of chemotherapeutics and leads to resistance against many of the currently available anti-cancer drugs such as taxanes (paclitaxel), vinca alkaloids (vinblastine), and anthracyclines (daunorubicin) 13 . The ability of P-gp to transport such diverse chemical classes is at least partly due to multiple transport pathways through the protein which have been recently visualized using molecular dynamics simulations 14 . 
Studies show that overexpression of P-gp in cancers can be either intrinsic or acquired upon drug treatment, depending on the tissue of origin, for examples see 15 , 16 , 17 , 18 , 19 . Clinical trials using MDR-inhibitors have had only limited success 20 , 21 , 22 , but the potential of the approach can be appreciated from a trial that used cyclosporine to inhibit P-gp in patients with poor-risk acute myeloid leukemia. Inclusion of the inhibitor with therapy resulted in significant gains in relapse-free and overall survival 23 . The difficulties in clinical trials as discussed in 24 , 25 were mainly due to inhibitor toxicities, drug-interactions, and clinical trial design problems. Many of the initial inhibitors were P-gp transport substrates 21 , 22 , requiring relatively high systemic concentrations for efficacy; others lacked specificity for P-gp and led to drug interactions, for review see 26 . None of these complications, however, diminish the impact or significance that employing effective P-gp inhibitors in cancer chemotherapies would have on patient outcomes. In earlier work we applied computational searches and detailed three dimensional models of P-gp 27 to identify small molecules that have the potential to overcome the problems of earlier generation P-gp inhibitors by specifically interacting with the nucleotide binding domains of the pump, while not binding significantly to the drug binding domains 28 . Three compounds were identified (compounds 29, 34 and 45) that caused reversal of paclitaxel resistance in a prostate cancer cell line that over-expresses P-gp 29 , 30 . Biochemical and biophysical analyses 28 indicated that compounds 34 and 45 affected nucleotide binding and all three compounds inhibited transport substrate activated ATP hydrolysis by purified P-gp. 
These results suggested that the inhibitors interacted with the nucleotide binding domains and not the drug binding domains and had the potential of not being transport substrates for P-gp. In the present study we extended our investigation of the reversal of multidrug resistance by these compounds to cancers of different origins using both 2-dimensional cell culture and spheroid – microtumor assays. We demonstrated that co-administration of these agents with chemotherapeutics resulted in significantly increased microtumor penetration of the fluorescent P-glycoprotein transport substrate, calcein AM, as well as increased accumulation of calcein AM or daunorubicin in two-dimensional cell culture studies. The studies show that the inhibitors directly blocked the pumping action of P-glycoprotein, but were not pump substrates themselves. Two of the three compounds are P-gp specific, while the third also inhibited to a lesser degree a second ABC transporter, the breast cancer resistance protein (BCRP, ABCG2). Cell mortality in both 2D and spheroid cultures was markedly increased when chemotherapeutics were used in combination with any of these P-glycoprotein inhibitors. Cell migration was also strongly inhibited. Protein expression analyses showed that the compounds did not downregulate P-gp expression under the conditions used for re-sensitizing the MDR cancer cells. These properties of the P-gp inhibitors studied here make them attractive leads for further development. Results Overexpression of P-glycoprotein leads to multidrug resistance in the ovarian cancer cell line, A2780ADR Follit et al . 29 showed that the addition of compounds 29, 34, or 45 previously identified in 28 to a prostate cancer cell line that over-expresses P-gp 30 caused reversal of the MDR phenotype. 
In the present study, we expanded our work to a drug resistant ovarian cancer cell line to assess whether the observed effects were cancer cell type specific or whether they might be more generally applicable. Fig. S1 (supplemental information) shows the characterization of the paired ovarian cancer cell lines, A2780 31 , and the highly drug resistant line derived from it, A2780ADR 32 . Western blot analyses using a P-gp-specific primary antibody showed that while the A2780ADR cells expressed significant amounts of P-gp (Fig. S1A , left lane), no P-gp was detectable in the parental A2780 cells (supplemental Fig. S1A , right lane). Original Western blots are shown in Fig. S2 as required by the journal. Also consistent with earlier work 32 , A2780ADR cells showed much higher resistance than the parental A2780 cell line to the vinca alkaloid, vinblastine, when tested using a resazurin cell viability assay 33 , 34 (Fig. S1B ). High levels of resistance to paclitaxel by A2780ADR cells were also observed when exposing the cells to increasing concentrations of the taxane, paclitaxel (Fig. S1C ). Inclusion of 60 µM novobiocin, a relatively specific inhibitor of BCRP 35 , or 250 µM probenecid, an inhibitor of the multidrug resistance associated protein 1 (ABCC1, MRP-1) 36 , together with vinblastine had no effect on the sensitivity of A2780ADR cells to the chemotherapeutic (Fig. S1B ). These results strongly suggest that the A2780ADR ovarian cancer cell line was phenotypically multidrug resistant and that this MDR phenotype was correlated to overexpression of P-glycoprotein and not BCRP or MRP-1. Inhibitors of P-glycoprotein reverse MDR phenotype of the ovarian cancer cell-line, A2780ADR Figure 1 shows the relative viability of A2780ADR cells as reported by the resazurin assay when incubated with increasing concentrations of paclitaxel (Fig. 1A ) or vinblastine (Fig. 1B ) with or without addition of P-gp inhibitors 29, 34, 45, or verapamil. 
The structures of the compounds 29, 34 and 45 are shown in Fig. 1C . It can be seen from Fig. 1A,B that the sensitivity of the MDR cell line to chemotherapeutics was increased by several orders of magnitude in the presence of the P-gp inhibitors. Inclusion of the BCRP inhibitor, novobiocin, or the MRP-1-inhibitor, probenecid, had no discernable effect on these cells (Table 1 ), suggesting that neither BCRP nor MRP-1 contributed to the MDR phenotype of the cells. Table 1 also compares the calculated IC50 values for the chemotherapeutics paclitaxel or vinblastine in the presence or absence of P-gp inhibitors for the two ovarian cancer cell lines. No sensitization of the non-MDR parental cell line A2780 was observed. These results suggest that the ovarian cell line A2780ADR is multidrug resistant because of overexpression of P-gp and that its MDR phenotype can be reversed by inhibitors of P-gp ATPase activity, compounds 29, 34 and 45 28 , 29 , as well as the P-gp transport substrate and competitive transport inhibitor, verapamil. Figure 1 Reversal of paclitaxel or vinblastine resistances by novel inhibitors of P-glycoprotein using metabolic viability assays. A2780ADR cells were treated in the presence of compounds 29, 34, 45 or verapamil with the indicated concentrations of chemotherapeutic. Panel A: circles, paclitaxel alone; squares, paclitaxel plus 25 µM compound 29; triangles, paclitaxel plus 25 µM compound 34; inverted triangles, paclitaxel plus 25 µM compound 45; stars, paclitaxel plus 25 µM P-gp substrate verapamil. Panel B: circles, vinblastine alone; squares, vinblastine plus 10 µM compound 29; triangles, vinblastine plus 10 µM compound 34; inverted triangles, vinblastine plus 10 µM compound 45; stars, vinblastine plus 10 µM P-gp substrate verapamil. Data represents the mean ± SD of 12 replicates from two independent experiments. Panel C: Chemical structures of novel P-gp inhibitors 29, 34 and 45. PTX, paclitaxel; VIN, vinblastine; VER, verapamil. 
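The IC50 values summarized in Table 1 come from dose-response curves such as those in Fig. 1. As an illustration of the underlying arithmetic only (the authors' actual fitting procedure is not specified here), a minimal sketch that log-interpolates the 50% viability crossing of a synthetic resazurin-style curve:

```python
import math

def ic50(concs, viability):
    """Estimate IC50 by log-linear interpolation at the 50% viability crossing.

    concs: increasing drug concentrations (units carry through to the result);
    viability: fractional viability (1.0 = untreated control) at each dose.
    """
    points = list(zip(concs, viability))
    for (c_lo, v_lo), (c_hi, v_hi) in zip(points, points[1:]):
        if v_lo >= 0.5 >= v_hi:
            f = (v_lo - 0.5) / (v_lo - v_hi)  # fractional position of crossing
            return 10 ** (math.log10(c_lo) + f * (math.log10(c_hi) - math.log10(c_lo)))
    raise ValueError("viability never crosses 50%")

# Synthetic dose-response (concentrations in nM; invented data, not Table 1's)
concs = [1, 10, 100, 1000, 10000]
viab = [0.98, 0.90, 0.70, 0.30, 0.10]
print(round(ic50(concs, viab)))  # ~316 nM for this synthetic curve
```

A shift of such a curve by orders of magnitude toward lower concentrations when an inhibitor is added is what the fold-change sensitization described above reflects.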
Table 1 In silico identified P-gp inhibitors reverse MDR phenotype of ovarian cancer cell line. Novel P-gp inhibitors increased apoptosis in MDR prostate cancer cells when co-treated with paclitaxel The results of the resazurin viability assays with the A2780ADR cells (Fig. 1 and S1 ) showed unexpectedly high residual cell viabilities of between 25% with vinblastine and up to 60% when paclitaxel was used. While the overall results were consistent with increased cytotoxicity of chemotherapeutics in the presence of P-gp inhibitors, these results did not directly demonstrate that co-administration with inhibitor led to increased cell mortality. To test for increased cell mortality, the adherent MDR prostate cancer cells (DU145TXR 30 ) were used, since the semi-adherent properties of A2780ADR cells made cell imaging much less reliable. In experiments designed to assess the induction of apoptosis, DU145TXR cells were treated either with 1 μM paclitaxel alone, 10 μM P-gp inhibitors alone, or combinations of paclitaxel with inhibitors for 48 hours. Analysis of these assays took advantage of the facts that acridine orange is taken up both by viable and non-viable cells, intercalates with double stranded DNA, and emits green fluorescence, while ethidium bromide is only taken up by non-viable cells and emits a strong red fluorescence after intercalating with DNA 37 , 38 . As shown in Figure 2A , cells treated with vehicle, paclitaxel or P-gp inhibitors alone showed green fluorescence with highly organized nuclear morphologies, suggesting no induced apoptosis. In Fig. 2B , the blue arrow points out one such morphologically non-apoptotic nucleus. Upon combination treatment with chemotherapeutic and P-gp inhibitors, the number of cells with shrunken, rounded, and darker condensed cell morphologies increased (Fig. 2A , bright field panels). The number of cells that demonstrated obvious chromatin fragmentation also increased (Fig.
2B , white arrows). The number of dead cells, as indicated by ethidium bromide fluorescence, also increased after co-treatments with chemotherapeutic and P-gp inhibitors (Fig. 2B , yellow arrows). These results indicated that paclitaxel induced apoptosis in DU145TXR cancer cells when P-gp activity was blocked by the P-gp targeted inhibitors. Figure 2 Reversal of paclitaxel resistance by novel P-gp inhibitors induces apoptosis in MDR prostate cancer cells. Panel A: DU145TXR cells were treated either with 1 μM paclitaxel alone, 10 μM of novel P-gp inhibitors alone, or combinations of paclitaxel with P-gp inhibitors for 48 hours, followed by staining with acridine orange and ethidium bromide. Top panels show the bright field images while lower panels show the merged images obtained using GFP and Texas Red fluorescence filters. Both were recorded with 10X objectives. Panel B. Enlarged images of the merged images obtained in panel A with the treatments shown. The blue arrow represents a nucleus that is not affected by paclitaxel. Cells in early stages of apoptosis with visible nuclear fragmentation are indicated with white arrows , and apoptotic cells, stained red with ethidium bromide are indicated by yellow arrows . PTX, paclitaxel; BF, bright field photomicrograph; G + R, green plus red fluorescence composite images. Reversal of MDR by added P-gp inhibitors causes inhibition of cell proliferation upon exposure to previously sub-lethal concentrations of chemotherapeutics To experimentally test the hypothesis that cells which had residual metabolic activity after co-treatment with chemotherapeutic and P-gp inhibitors were either dying or lost proliferation ability, colony formation experiments 39 were performed.
Figure 3A shows the results of qualitative colony formation experiments, where A2780ADR cells were treated for 48 hours with either vehicle, 0.1 µM vinblastine, 1 µM paclitaxel or 10 µM P-gp inhibitor alone, or combinations of chemotherapeutic and P-gp inhibitor. Time of exposure to chemotherapeutic and/or inhibitor was the same as in Fig. 1 . The cells were then washed with media that did not contain chemotherapeutic or inhibitors and were allowed to recover for 96 hours, after which the presence of cell colonies was assessed by crystal violet staining. In the presence of inhibitor in combination with paclitaxel or vinblastine, no cell colonies were observed (Fig. 3A ). In contrast, exposure to paclitaxel, vinblastine, or the inhibitors alone, resulted in very dense, viable cell colonies which were qualitatively equivalent to the DMSO control (Fig. 3A ). Figure 3 Reversal of chemotherapy resistances by novel inhibitors of P-glycoprotein. Panel A: Qualitative colony formation analyses using multidrug resistant ovarian cancer cells. A2780ADR cells were treated with chemotherapeutics and/or inhibitors as indicated in the figure for 48 hrs, washed and subsequently cultured for an additional 96 hours. Remaining cell colonies were stained with crystal violet. Left column from top to bottom: vehicle (DMSO); vinblastine (VIN) alone; compound 29 alone; vinblastine and compound 29; compound 34 alone; vinblastine and compound 34; compound 45 alone; vinblastine and compound 45; verapamil (VER) alone; vinblastine and verapamil. Right column from top to bottom: vehicle (DMSO); paclitaxel (PTX) alone; compound 29 alone; paclitaxel and compound 29; compound 34 alone; paclitaxel and compound 34; compound 45 alone; paclitaxel and compound 45; verapamil (VER) alone; paclitaxel and verapamil. The concentrations used were 0.1 µM vinblastine, 1 µM paclitaxel, 10 µM of inhibitors 29, 34, 45 or verapamil.
Panel B: Quantitative colony formation analyses using multidrug resistant prostate cancer cells. The experiments were performed as above, except that DU145TXR cells were seeded and grown to lower densities than in (A) and exposure to chemotherapeutic and inhibitors was for 24 hours at lower inhibitor concentrations. Top: Images of a representative experiment showing stained colonies after 5 days of recovery. Treatments were performed with 5 µM of P-gp inhibitors 29, 34 or 45 alone, 0.5 µM paclitaxel (PTX) alone, or in combination. Bottom: Quantitative analysis of colonies formed in (B). Each histogram represents the average ± S.D. (n = 6, three replicates from two individual experiments; ****P < 0.0001).

To assess cell viability and colony formation more quantitatively, and to test a different multidrug resistant cancer cell line, similar experiments were performed using the MDR prostate cancer cell line, DU145TXR 30. The conditions for these experiments were chosen to represent the lowest exposure time and inhibitor concentration that resulted in significant differences in the number of colonies formed. Cells were treated for 24 hours with 0.5 µM paclitaxel alone, 5 µM inhibitor alone, or combinations of inhibitor and chemotherapeutic. Afterwards, the media containing inhibitor and/or chemotherapeutic were removed and the cells were allowed to recover for 120 hours in the absence of chemotherapeutic or P-gp inhibitor. The cells were fixed and stained as described above. Figure 3B, top, shows images of the crystal violet stained colonies visible to the unaided eye. Figure 3B, bottom, shows the statistical analyses of two independent experiments. The number of colonies formed in the presence of paclitaxel and P-gp inhibitors was found to be significantly lower than when cells were grown with inhibitor or chemotherapeutic alone.
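The quantitative colony analysis above, replicate counts summarized as average ± S.D. and compared against the vehicle control, can be sketched as follows; the counts below are illustrative placeholders, not the paper's data:

```python
from statistics import mean, stdev

def summarize_colonies(counts):
    """Return (mean, sample S.D.) of replicate colony counts."""
    return mean(counts), stdev(counts)

def percent_of_control(treated_mean, control_mean):
    """Colonies formed under treatment as a percent of the vehicle control."""
    return 100.0 * treated_mean / control_mean

# Illustrative replicate counts (n = 6: three replicates x two experiments);
# placeholders, NOT the paper's data.
vehicle = [112, 108, 119, 105, 110, 115]
ptx_plus_29 = [4, 2, 3, 5, 1, 3]

v_mean, v_sd = summarize_colonies(vehicle)
t_mean, t_sd = summarize_colonies(ptx_plus_29)
print(f"vehicle:  {v_mean:.1f} ± {v_sd:.1f} colonies")
print(f"PTX + 29: {t_mean:.1f} ± {t_sd:.1f} colonies "
      f"({percent_of_control(t_mean, v_mean):.1f}% of control)")
```

Sample (n − 1) standard deviation is used, matching the ± S.D. convention of the figure legends.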
These results support the hypothesis that the residual metabolic activities reported by the resazurin viability assays in Figs 1 and S1 were due to the residual metabolic activities of cells that were dying, but not yet dead.

P-glycoprotein inhibitors prevent multidrug resistant cancer cells from migrating when exposed to chemotherapeutics that interrupt microtubule dynamics

To assess whether the P-gp inhibitors affect cancer cell migration in the presence of chemotherapeutics, wound healing assays 40,41,42 were performed with the MDR prostate cancer cell line 30. Figure 4 shows the results of these wound healing assays under conditions of limited cell proliferation in the absence or presence of 0.1 µM vinblastine and 5 µM inhibitor 29, 34, or 45. Controls with the P-gp inhibitors alone (no chemotherapeutic present) are also shown. Figure 4A shows micrographs typical of the scratch zones immediately after the injury (zero time) and after 14 hours of incubation in low serum media. Figure 4B presents the averages of the relative wound closures normalized to wound closure in the presence of vehicle only (DMSO). Addition of vinblastine or the P-gp inhibitors by themselves resulted in reduction of the area of the scratch wound similar to vehicle controls, indicating that the MDR cancer cells were able to migrate into the wound site and close the scratch gap under these non-proliferative conditions. When any of the three P-gp inhibitors was used in combination with vinblastine, significant inhibition of wound healing was observed, suggesting that cancer cell migration was strongly inhibited. Closure of the scratch area was limited to between 41% (vinblastine with 29 or 45) and 32% (vinblastine with 34) of that of the vehicle-only controls. There was no significant difference when 2.5 or 5 µM of inhibitor was used in the presence of chemotherapeutic.
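The wound-closure quantification described above can be sketched as below; the scratch areas are illustrative placeholders chosen to reproduce the ~41% relative closure reported for vinblastine with compound 29 or 45, not measured values:

```python
def wound_closure(area_t0, area_t14):
    """Fraction of the scratch area closed between 0 and 14 hours."""
    return (area_t0 - area_t14) / area_t0

def relative_closure(treated, vehicle):
    """Closure under treatment as a percent of the vehicle-only closure."""
    return 100.0 * treated / vehicle

# Illustrative scratch areas (pixels from 4X micrographs); placeholders,
# NOT measured values.
veh = wound_closure(50_000, 5_000)      # vehicle closes 90% of the gap
vin_29 = wound_closure(50_000, 31_500)  # VIN + 29 leaves most of the gap open

print(f"relative closure, VIN + 29: {relative_closure(vin_29, veh):.0f}%")
```

Normalizing each treatment's closure to the vehicle control, as in Fig. 4B, separates the drug effect from well-to-well differences in initial scratch size.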
Figure 4 P-gp inhibitors prevent the migration of MDR cancer cells in the presence of chemotherapeutics that target microtubule dynamics. Panel A: Wound healing assays. Confluent monolayers of the MDR prostate cancer cell line, DU145TXR, were manually scratched and subsequently cultured for 14 hours under conditions that inhibited cell proliferation. Representative 4X bright field micrographs of the scratch zones were recorded. Closure of the scratches was then evaluated in the presence of the chemotherapeutic vinblastine (VIN) alone, P-gp inhibitors 29, 34 or 45 alone, as well as in the indicated combinations. The edge of the wound is marked by a black line. Panel B: Percentage wound closure. The average percentage of wound closure under different treatment conditions was compared to vehicle-treated controls. Data are expressed as average ± S.D. of duplicate experiments (n = 12; ****P < 0.0001). DMSO, carrier vehicle only; VIN, 0.1 µM vinblastine; 29, 34, or 45 indicates added P-gp inhibitor at 2.5 or 5 µM.

The intracellular retention of transport substrates of P-glycoprotein is enhanced in the presence of P-gp inhibitors in MDR cancer cells over-expressing P-gp

The results presented in Figs 1–4 suggested that P-glycoprotein inhibition leads to enhanced therapeutic efficacies. To determine whether the P-gp inhibitors caused increased cellular retention of P-gp transport substrates, we assessed the accumulation of a known P-gp substrate, calcein AM. Calcein AM is an uncharged, acetoxymethyl derivative of the anionic fluorescent dye calcein and is known to be a substrate for P-gp 43. Cellular esterases convert calcein AM to calcein, which is not transported by P-glycoprotein, making it a useful fluorescent probe for P-gp transport activity.
Figure 5A shows that inclusion of P-gp inhibitors 29, 34, or 45 in media containing calcein AM with the P-gp overexpressing ovarian cancer cell line, A2780ADR, resulted in significant increases in intracellular calcein as detected by its green fluorescence. Compound 19 had been identified in earlier studies as an inhibitor of P-gp ATPase activity in cell-free biochemical assays 28, but was shown to be ineffective in reversing multidrug resistance in cell-based assays 29. This is likely due to a negative charge of the molecule at neutral pH, making it unable to enter intact cells. Inclusion of compound 19 in the experiments described here served as a negative control. Addition of the P-gp transport substrate verapamil also led to significant increases in intracellular fluorescence, suggesting that uninhibited P-glycoprotein activity in these cells was responsible for the low intracellular calcein accumulation observed in the absence of added P-gp inhibitors or substrates. Addition of the BCRP inhibitor, novobiocin, or the MRP-1 inhibitor, probenecid, did not lead to increased intracellular fluorescence, suggesting that BCRP and MRP-1 were not responsible for removing calcein AM from these cells. Time courses for the changes in calcein fluorescence intensities in the presence of the different inhibitors are shown in Fig. 5B. The data indicate that both increased rates of accumulation as well as increased overall intracellular concentrations of calcein were achieved when P-gp specific inhibitors were present. It is worth noting that a number of cells in the uninhibited controls (Figs 5A and B) also showed a few isolated puncta of fluorescence. This is hypothesized to be a consequence of subpopulations of cells in the MDR cultures that are not over-expressing P-gp. Given the plasticity of cancer cell genetics, this observation may be expected.
Inclusion of chemotherapeutics would be expected to decrease the number of these calcein positive cells in the uninhibited control experiments. The differential accumulation of the chemotherapeutic daunorubicin in the MDR ovarian cancer cells in the presence and absence of various multidrug resistance pump inhibitors was assessed next. Similar in design to the calcein AM accumulation experiments described, these experiments took advantage of the intrinsic red fluorescence of daunorubicin. Figure 5C shows significant increases in intracellular daunorubicin fluorescence when the cells were treated with daunorubicin in combination with 10 µM inhibitor 29, 34, or 45. Inclusion of BCRP- or MRP-1 specific inhibitors (60 µM and 250 µM, respectively) did not result in observable increases in intracellular daunorubicin fluorescence, suggesting that these effects were dependent on the specific inhibition of P-glycoprotein. Figure 5D presents the quantification of daunorubicin accumulation as assayed in Fig. 5C. The accumulation of daunorubicin in the presence of the experimental P-gp inhibitors was comparable to that observed for the non-MDR parental cell line, A2780 (Fig. 5D).

Figure 5 P-gp inhibitors restore calcein-AM or daunorubicin accumulation in MDR ovarian cancer cells. Panel A: Calcein-AM accumulation. Calcein AM accumulation upon P-gp inhibition was analyzed as described in methods using A2780ADR cells. All experiments had identical components except for the additions indicated. Additions were: DMSO, carrier vehicle at 0.5% final volume; compounds 19, 29, 34 or 45 at 10 µM; VER, 10 μM verapamil; NOV, 60 μM novobiocin; PRO, 250 μM probenecid. Scale bars are 200 µm. Panel B: Time course of calcein accumulation. Fluorescence measurements were made on the entire wells during the imaging experiments described in panel A. The increase in relative fluorescence resulting from accumulated calcein is plotted versus the time of incubation.
The indicated additions were as described in panel A. Panel C: Daunorubicin accumulation. Intracellular daunorubicin accumulation in A2780ADR cells was observed similarly to the accumulation of calcein (see panel A). After a 2 hour incubation with 10 µM daunorubicin, fluorescence images of the cells were obtained using a Texas Red filter. Additions were as indicated in panel A. Panel D: Quantification of intracellular daunorubicin accumulation. A2780ADR cells were incubated as described for panel C. After the 2 hour incubation, cells were washed twice with cold RPMI to remove extracellular daunorubicin, and then lysed. The fluorescence of each well was measured at excitation 488/20 nm and emission 575/20 nm. Percent accumulation of daunorubicin in A2780ADR cells was normalized to the parental A2780 cells. Additions were as indicated in panel A. Each histogram represents the average ± S.D. from two independent experiments (n = 6; ****P < 0.0001).

Increased accumulation and deep penetration of calcein in 3D tumor spheroids treated with P-gp inhibitors

To assess the ability of the targeted P-gp inhibitors to facilitate the penetration of P-gp substrates into cells in tumor-like structures, microtumor spheroid cultures of an MDR prostate cancer cell line 30 were produced. Incubation of the spheroids with calcein AM in the presence of P-gp inhibitor 29, 34, or 45 resulted in considerably increased calcein fluorescence relative to vehicle alone (Fig. 6A, top row). The relative calcein fluorescence was visualized as 3D surface plots using the pixel intensities of the corresponding images (Fig. 6A, lower panel). Calcein accumulation was higher in the interior regions of the microtumors in the presence of P-gp inhibitors than in the DMSO control. These experiments suggest that calcein penetrated deeper into the interior of the microtumor in the presence of P-gp inhibitors.
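The panel D quantification above, lysate fluorescence of the MDR line expressed as a percent of the drug-sensitive parental A2780 cells, can be sketched as follows; the fluorescence values and the blank-subtraction step are illustrative assumptions, not the paper's measurements:

```python
from statistics import mean

def percent_accumulation(treated_rfu, parental_rfu, blank_rfu=0.0):
    """Mean lysate fluorescence of treated MDR cells as a percent of the
    drug-sensitive parental line, after subtracting a plate blank."""
    return 100.0 * (mean(treated_rfu) - blank_rfu) / (mean(parental_rfu) - blank_rfu)

# Illustrative relative fluorescence units (ex 488/20 nm, em 575/20 nm);
# placeholders, NOT the paper's measurements.
a2780_parental = [950, 1010, 980]   # drug-sensitive reference line
a2780adr_dmso  = [210, 195, 205]    # MDR line, vehicle only
a2780adr_cmp29 = [890, 930, 905]    # MDR line + 10 µM compound 29

print(f"DMSO: {percent_accumulation(a2780adr_dmso, a2780_parental):.0f}%")
print(f"+29:  {percent_accumulation(a2780adr_cmp29, a2780_parental):.0f}%")
```

Normalizing to the parental line, as in Fig. 5D, lets accumulation in the MDR cells be read directly as "percent of the drug-sensitive reference".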
The time dependence of calcein AM uptake and calcein accumulation in the presence or absence of compound 29 is shown in Fig. 6B.

Figure 6 P-gp inhibitors increase the accumulation and penetration of calcein-AM in MDR prostate cancer spheroids. Panel A: Calcein accumulation in 3D-spheroids. Upper panel – After 100 minutes of incubation with 2.5 µM of the P-gp substrate calcein-AM, fluorescence images of the spheroids were recorded. DMSO, 0.5% final volume; compounds 29, 34 or 45 at 15 µM. Scale bar indicates 1000 µm. Lower panel – 3D surface plots representing the pixel intensities of the corresponding images from the experiments above. Panel B: Time course of calcein accumulation. Fluorescence images of the spheroids treated with vehicle only or P-gp inhibitor 29 were obtained at 20-minute intervals using a GFP filter as described in panel A. Increases in calcein accumulation in the presence of compound 34 or 45 were observed to be similar to those shown with compound 29 (data not shown).

Increased cytotoxicity of daunorubicin in the presence of P-glycoprotein inhibitors in multidrug resistant prostate cancer microtumors

The experiments described in Figs 1–3, S1, S3 and S4 showed that the cytotoxicity of chemotherapeutics like paclitaxel or vinblastine to MDR cancer cells in traditional 2D cell culture was increased in the presence of the P-gp inhibitors 29, 34, or 45. It was therefore of interest to test whether combining P-glycoprotein inhibitors with a chemotherapeutic would likewise increase cytotoxicity in microtumors. To investigate this hypothesis, DU145TXR spheroids were treated for 48 hours with either vehicle, P-gp inhibitor 29 or daunorubicin alone, as well as with the combination of daunorubicin and two different concentrations of compound 29. After treatment, reagents were removed and the spheroids were incubated with fresh complete media for an additional 4 days.
The growth of the spheroids under these conditions is shown in Fig. 7A. The change in size of the microtumors was quantified as the surface area of the spheroids as observed in the bright field images. Growth of the tumors in the presence of 15 or 25 µM of 29 alone was similar to that in the presence of DMSO alone, demonstrating the low toxicity of 29. In the presence of 1 µM daunorubicin, the tumor size did not change significantly over the course of the experiment, while combination with 15 or 25 µM of 29 led to a ~60% reduction in the size of the microtumors. Figure 7B shows a representative set of photomicrographs that correspond to the end-points of these experiments. The left panel shows bright field images of the microtumors after the treatments indicated in the figure. The middle panel shows the red fluorescence of the same microtumors upon addition of ethidium bromide, which stains dead cells 37. Merged images are shown in the right panel. The results indicate that the efficacy of cancer cell killing by the daunorubicin chemotherapy treatment of these microtumors was increased in the presence of the P-glycoprotein inhibitor 29.

Figure 7 Inhibition of P-gp leads to increased daunorubicin-induced cell death in MDR spheroid microtumors. Panel A: Time course of changes in tumor area. MDR DU145TXR spheroids were prepared and treated with P-gp inhibitory compound 29 with or without daunorubicin as described in methods. Areas of the spheroids were calculated on each day of the experiment and fold change plotted versus time. Values were normalized to the size of the tumor before addition of chemotherapeutic. Each point represents average ± S.D. (n = 4). Panel B: Photomicrographs of spheroids at the end of the experiment. At the end of the experiment, dead cells were illuminated by ethidium bromide staining, and bright field and fluorescence (Texas Red channel) micrographs of typical spheroids were recorded. Treatments were as indicated in the panel labels.
For each spheroid shown, bright field (left), Texas Red fluorescence (middle) and merged (right) images are shown. DNR, daunorubicin.

P-gp inhibitors 29, 34 and 45 do not affect P-gp protein expression levels

Previous studies have shown that reversal of multidrug resistances in cancers can sometimes be due to lowered expression of the protein and not to direct inhibition of P-gp transport by an experimental compound 44,45. To test whether the inhibitors used in this study affected P-gp expression, Western blot analyses of the P-gp overexpressing prostate cancer cell line were performed after incubation for 48 hours with inhibitors 29, 34, or 45. These conditions led to at least 17-fold sensitization of the DU145TXR cells to paclitaxel 29. The results of the Western blot analyses are shown in Figs S5 and S6. No decreases in P-gp protein expression were observed.

P-gp inhibitors 29, 34 and 45 are not transport substrates of P-gp

The original premise of our search for P-gp inhibitors was that compounds that are not transport substrates of the pump would make better lead compounds for future development for clinical use 28,29. To test whether compounds 29, 34 and 45 are transport substrates, accumulation assays were performed in which DU145TXR cells were incubated with 5 µM of 29, 34 or 45 in the presence or absence of the known P-gp inhibitor tariquidar 46,47. A P-gp transport substrate, daunorubicin, was used as a positive control. After incubation for 2.5 hours, cells were washed with ice-cold buffer and counted. After cell lysis, the cell contents were analyzed by LC-MS/MS. The results of these analyses are shown in Fig. 8. The studies indicate that cellular accumulation of compounds 29, 34 and 45 was no different in the presence or absence of tariquidar, while cellular accumulation of the P-gp substrate, daunorubicin, was significantly increased in the presence of tariquidar.
These results support the hypothesis that compounds 29, 34 and 45 are not P-gp transport substrates.

Figure 8 Compounds 29, 34 and 45 are inhibitors of P-gp and not transport substrates. Quantitative LC-MS/MS analysis of intracellular accumulation of 29 (panel A), 34 (panel B), 45 (panel C), or daunorubicin (panel D) in DU145TXR. Each histogram represents the average ± S.D. (n = 3, two independent experiments; **P < 0.01; NS, not significant). DNR, daunorubicin; TQR, tariquidar.

Inhibitors 34 and 45 are P-gp specific, while compound 29 also affects the breast cancer resistance protein

In order to assess whether the inhibitors were specific for P-gp or would also inhibit other ABC transporters, we created a BCRP-overexpressing breast cancer cell line, MCF-7 M100, which we derived from MCF-7 cells 48 by exposing the cells to increasing, sub-lethal concentrations of the BCRP pump substrate and chemotherapeutic, mitoxantrone 49. Fig. S7 shows the results of Western blot analyses of cell lysates of the MCF-7 and MCF-7 M100 cell lines, indicating that the MCF-7 M100 derivative line overexpresses the BCRP protein. Figure 9 shows the results of experiments that suggest that compounds 34 and 45 inhibit only P-gp, while compound 29 inhibits both P-gp and BCRP. In these experiments, the MCF-7 M100 cells were exposed to mitoxantrone (a BCRP substrate), in the presence or absence of verapamil (a P-gp substrate), novobiocin or Ko143 (BCRP inhibitors), and the P-gp inhibitors 29, 34 or 45 as indicated in the figure. Cell viability was assessed using MTT assays 50,51. The results indicate that cellular viability of the MCF-7 M100 cell line was reduced when mitoxantrone was co-administered with the BCRP inhibitors, novobiocin and Ko143, but no effect was observed when the P-gp substrate, verapamil, was added. The addition of mitoxantrone or the targeted inhibitors individually did not affect cell viability.
When co-administered with mitoxantrone, compounds 34 or 45 did not significantly affect the viability of MCF-7 M100 cells. Compound 29, however, in combination with mitoxantrone, caused a statistically significant reduction in cell viability when compared to the viability of the cells in the presence of 29 alone. These results suggest that at the concentrations used, compounds 34 and 45 are P-gp specific, while compound 29 inhibits both P-gp and BCRP.

Figure 9 Inhibitors 34 and 45 are specific for P-gp while compound 29 also inhibits BCRP. MCF-7 M100 cells were treated with 50 nM of the chemotherapeutic mitoxantrone and either compound 29, 34, or 45 at 5 or 10 µM, verapamil at 10 µM, novobiocin at 200 µM or Ko143 at 1 µM, as indicated in the figure. Each histogram presents the average ± S.D. of the determinations (n = 8, replicates from two individual experiments; ****P < 0.0001; *P < 0.1). MNT, mitoxantrone; VER, verapamil; NOV, novobiocin.

Discussion

While chemotherapy resistances may have a variety of causes, multidrug resistance is often caused by the overexpression of P-gp or closely related members of the ABC-transporter family 52,53. Unfortunately, to date none of the P-gp inhibitors previously identified and pursued as potential co-therapeutics for treatment of MDR diseases has been successful in clinical trials 21,22. The most recent trials employing the P-gp inhibitor tariquidar suffered from toxicities, and several Phase III trials have been abandoned (see ref. 47). A Phase III trial with another P-gp inhibitor, zosuquidar 54, was recently completed, but did not show improved outcomes in older acute myeloid leukemia patients. The authors of the study speculated that resistances unrelated to P-gp were the cause of the negative outcomes. Despite these setbacks, the potential of P-gp inhibition to improve cancer patient outcomes was demonstrated in an earlier randomized Phase III trial with poor-risk, acute myeloid leukemia patients 23.
These data indicated that positive effects from inhibitors were most prominent in patients with increased P-gp expression. Despite limited success in the clinic, P-gp still seems an important and relevant target for drug discovery and development. More recent advances in the knowledge of the structure and mechanism of P-gp, as described in refs 14,27, as well as of the related drug pumps 55,56, will likely help the discovery of specific inhibitors of specific pumps in a more rational way. Many of the inhibitors previously pursued are thought to bind to the drug binding domains and inhibit chemotherapeutic export by functioning as P-gp transport substrates and competing for transport cycles 22. This characteristic likely led to the need for relatively high systemic concentrations, causing off-target toxicities. In previous work 28, our group hypothesized that molecules that strongly interacted with the ATP binding domains of P-gp and interfered with the power harvesting of the pump, but that did not bind well to the drug binding domains, would make better leads for inhibiting P-gp. Ultra-high-throughput computational techniques led to the discovery of a number of small, drug-like compounds that were predicted to inhibit P-gp action by specifically interacting with ATP binding and/or hydrolysis 27,28. Three of these compounds, 29, 34 and 45, reversed multidrug resistance in MDR prostate cancer cells in culture 29 (see Fig. 1 for the chemical structures). Here we described results that strongly suggest that the P-gp inhibitors 29, 34 and 45 reverse multidrug resistance and increase cell mortality in the presence of chemotherapeutics in different cancer cells, in both 2- and 3-dimensional cultures. We showed that these effects are caused by P-gp specific inhibition leading to increased accumulation of P-gp substrates. The decreased P-gp transport activity was shown not to be the result of downregulation of P-gp expression.
We also showed that the inhibitors are not transport substrates of P-gp and that compounds 34 and 45 are P-gp specific, whereas compound 29 also affected the activity of the breast cancer resistance protein. To establish that the effects of the in silico identified inhibitors were not cancer-type specific, we extended our studies here to an ovarian cancer cell line, A2780, and its MDR subline, A2780ADR 32,57. Supplemental Fig. 1 and Table 1 show that the ADR subline exhibited a chemotherapy resistance phenotype to both vinblastine and paclitaxel and that P-gp was significantly overexpressed. We also demonstrated that inclusion of the P-gp inhibitors 29, 34 or 45 with either paclitaxel or vinblastine reversed the multidrug resistance phenotype exhibited by the A2780ADR cells (Fig. 1). The enzymatic reduction of resazurin (Alamar Blue) 58 or of tetrazolium salts like MTT 50 is frequently used in medium- to high-throughput assays to evaluate the toxicity of experimental compounds. Both assays respond to mitochondrial metabolism, which is correlated to cell viability, but may not be directly correlated to cell proliferation 59,60. Figure 1 shows that even at high concentrations of the chemotherapeutic paclitaxel, significant mitochondrial metabolic activity was observed. As seen in supplemental Fig. 1, the parental A2780 strain showed about 20% residual “metabolic viability” at 10 nM paclitaxel, while more than 50% viability was observed for the A2780ADR line at paclitaxel concentrations 4 orders of magnitude higher. In contrast, vinblastine resulted in residual viability of less than 10% for the parental and ~20% for the MDR line. Similarly, when inhibitors were co-administered with vinblastine, reduction of viability to about 20% was observed. Paclitaxel under these conditions led to about 50% reduction of viability. Both paclitaxel and vinblastine affect microtubule formation and stability; however, their mechanisms of action differ.
While taxanes stabilize microtubules, vinca alkaloids inhibit microtubule polymerization 61,62,63. These differences in action may cause different onsets of cell death and may result in the observed prolonged mitochondrial activities. A main goal of using chemotherapeutics is to induce apoptosis in cancer cells. Paclitaxel stabilizes microtubules, leading to G2/M cell cycle arrest and apoptotic cell death 64. Induction of apoptosis is relatively easy to demonstrate 65. We showed in Fig. 2 that co-treatment with paclitaxel and P-gp inhibitors significantly induced apoptosis-related characteristics in these MDR cancer cells and increased the number of observed dead cells, indicating that inhibition of P-gp resulted in paclitaxel-induced apoptosis. This hypothesis was further supported by the results of colony formation assays using both MDR prostate and ovarian cancer cell lines (Fig. 3). These results taken together strongly suggest that the P-gp inhibitors studied here potentiate the effect of chemotherapeutics on multidrug resistant cancer cells. While the inhibition of cancer cell proliferation is extremely important, arguably of equal concern is the potential migration of cells to other sites in the body even when proliferation is inhibited. In order to test the effects of P-glycoprotein inhibition on MDR cancer cell migration, a series of wound healing assays was performed. As described in refs 40,41,42, when a nearly confluent two-dimensional cell culture is physically damaged by scratching through the cell layer, migration of cells into the gap can be differentiated from filling of the gap by proliferative mechanisms if the culturing medium lacks essential factors required for proliferation. Media with low serum concentrations allow only minimal cell proliferation. Under these conditions, the “healing” of the wound can occur only by the migration of existing cells into the scratch zone.
In experiments like these, filling of the gap with cells (“healing” or wound closure) likely occurs by detachment and movement of preexisting cells into the gap 40,41,42. Our results (Fig. 4) strongly indicated that the presence of the P-gp inhibitors 29, 34, or 45 with vinblastine inhibited wound closure by 30 to 50%, suggesting that an increased accumulation of vinblastine in cells treated with P-gp inhibitors inhibited cell migration. Vinblastine is known to interact with tubulin, resulting in perturbations of microtubule polymerization 66, which is a necessary requirement for cellular mobility. Increased intracellular accumulation of the P-gp substrates, calcein AM and daunorubicin, upon inhibition of P-gp activity by our experimental compounds was then shown in Fig. 5. Calcein AM is frequently used as a probe to test P-gp and MRP-1 function in cultured cell lines, as well as in primary cancer cells from patients for treatment outcome predictions 67,68,69. Calcein AM is a transport substrate for P-gp, while the hydrolysis product, calcein, is not transported by the pump. High P-gp activities can therefore be correlated with the lack of intrinsic fluorescence of calcein, while inhibition of P-gp or lack of expression can be correlated with increased intracellular fluorescence. We demonstrated that calcein AM accumulation in the MDR ovarian cancer cell line significantly increased in the presence of the P-gp inhibitors. Very similar increases in accumulation of daunorubicin in the presence of P-gp specific inhibitors 29, 34 and 45 were also demonstrated. These results are consistent with 29, 34 and 45 blocking P-gp catalyzed export of calcein AM or daunorubicin, causing cellular accumulation of the substrates. Control experiments suggested that these effects were due to inhibition of P-glycoprotein and not due to the activities of BCRP or MRP-1.
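The calcein readout logic just described, low fluorescence when P-gp is active and high fluorescence when it is inhibited, reduces to a simple fold-change comparison against the uninhibited control; the threshold and fluorescence values below are illustrative assumptions, not values from the paper:

```python
def fold_change(treated_rfu, control_rfu):
    """Calcein fluorescence under inhibitor relative to the uninhibited control."""
    return treated_rfu / control_rfu

def pgp_inhibited(treated_rfu, control_rfu, threshold=2.0):
    """Call P-gp transport 'inhibited' when calcein retention exceeds an
    (assumed) fold-change threshold over the uninhibited control."""
    return fold_change(treated_rfu, control_rfu) >= threshold

# Illustrative well fluorescence (RFU); placeholders, NOT measured data.
control = 150.0          # active P-gp exports calcein AM before hydrolysis
with_inhibitor = 1200.0  # blocked P-gp: calcein accumulates and fluoresces

print(pgp_inhibited(with_inhibitor, control))  # prints True
```

Any real threshold would have to be calibrated against the vehicle and parental-line controls described above.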
One of the potential factors that negatively affected success in clinical trials of P-gp inhibitors was the lack of tumor penetration of some of the inhibitors 70,71. To evaluate whether the experimental P-gp inhibitors facilitated P-gp inhibition in what might be a more clinically relevant 3-dimensional culture, microtumor spheroids were grown using the multidrug resistant DU145TXR prostate cancer cell line. Cellular accumulation of P-gp substrates as well as cytotoxicity of chemotherapeutics were then assessed. In attempts to create spheroids from the MDR ovarian cancer cell line, A2780ADR, we observed a lack of tight association of the cells that was likely due to low expression of claudin 4, which is required for spheroid formation 72. The MDR prostate cancer cell line, DU145TXR, however, showed relatively tight cell-to-cell association in the microtumors produced. Results of experiments using these microtumors treated with P-gp inhibitors showed stronger accumulation of calcein, including in the internal parts of the spheroid (Fig. 6). In time course experiments, nearly complete tumor penetration of the fluorescent dye was observed after about 100 minutes of incubation, suggesting that the P-gp inhibitors affected cells inside the tumor. Increased cytotoxicity of daunorubicin in the presence of inhibitor 29 indicated that the inclusion of the P-gp inhibitor resulted in significantly increased cell mortality and destruction of these microtumors (Fig. 7). It had been previously shown by others that chemotherapeutics or P-glycoprotein inhibitors affected P-gp protein expression levels in cancer cells and that decreased expression of P-gp can be a cause of re-sensitization of MDR cells. Cisplatin, for example, has been observed to increase P-gp expression 73,74, while agents like curcumin or certain antipsychotics appear to decrease P-gp expression 44,45.
We evaluated the effects of the targeted P-gp inhibitors on P-gp expression using Western blot analyses and found that exposure of the cancer cells to the P-gp inhibitors 29, 34 and 45 did not result in decreases in P-glycoprotein expression. This suggests that the effects presented here were the result of direct inhibition of the transport activities of the pump by compounds 29, 34 and 45, and were not related to changes in protein expression. The original premise and hypothesis for our studies was that P-gp inhibitors that are not transport substrates themselves will make better drug leads for future development 28,29. We presented biochemical data in ref. 28 that supported the hypothesis that inhibitors 29, 34 and 45 were not transport substrates. Here we presented strong evidence that these inhibitors are not transported by P-gp, since the presence of the strong P-gp inhibitor, tariquidar, did not significantly affect accumulation of the compounds (Fig. 8). Additional experiments were performed to assess the specificity of inhibition by compounds 29, 34 and 45 on a second ABC transporter, BCRP, using a BCRP-overexpressing MDR breast cancer cell line (Fig. 9). These experiments indicated that the P-gp inhibitors 34 and 45 did not significantly resensitize the BCRP-overexpressing cell line to mitoxantrone, which is a BCRP substrate and chemotherapeutic. This suggests that the P-gp inhibitors 34 and 45 discriminated between the two closely related ABC transporters, P-glycoprotein and BCRP. Compound 29, on the other hand, did resensitize the BCRP-overexpressing cell line to mitoxantrone, indicating that it did not discriminate between the two transporters. It will be of interest in the future to extend these investigations of the discrimination of the P-gp inhibitors to other ABC transporters and, more generally, to other ATP-utilizing enzymes. P-glycoprotein has been a target for drug discovery for almost 40 years (for reviews, see refs 20,21,22).
Other investigators have also described P-gp inhibitors that interact with the nucleotide binding domains of P-gp 75 , 76 , although translation into the clinic has not yet been reported. Others recently reported in silico drug discovery studies 77 . However, these studies targeted the drug binding domains in attempts to disrupt drug transport by P-gp. This approach may lead to P-gp inhibitors that are transported by P-gp, a characteristic that we have tried to avoid.

Conclusions

We have shown here that the targeted inhibitors that were initially discovered through computational high throughput drug docking studies specifically inhibited P-glycoprotein function and increased the accumulation of P-gp transport substrates in 2- and 3-dimensional cell cultures of chemotherapy resistant ovarian and prostate cancer cell lines. We showed that the inhibitors were not themselves P-gp transport substrates, but increased accumulation of chemotherapeutics and caused reduction of cell viability, reduced colony formation, reduced cell migration, and increased cell death in both 2- and 3-dimensional cell culture studies. The reversal of the multidrug resistance phenotypes observed here was shown not to be due to downregulation of P-gp expression. P-gp inhibitors 34 and 45 did not appear to affect BCRP transport activities, while compound 29 affected both P-gp and BCRP. The experimental compounds 29, 34 and 45 therefore appear to be promising candidates for further development into co-therapeutics to treat cancers that are multidrug resistant due to P-gp overexpression.

Materials and Methods

Cell lines and cell culture

The chemotherapy-sensitive DU145 human prostate cancer cells 78 as well as the multidrug resistant sub-line, DU145TXR 30 , were generous gifts from Dr. Evan Keller (University of Michigan, Ann Arbor, MI). The MDR DU145TXR line was maintained under positive selection pressure by supplementing complete medium with 10 nM paclitaxel (Acros Organics, NJ).
The above-mentioned cell lines as well as the chemotherapy-sensitive A2780 ovarian cancer cells (93112519, Sigma) and the multidrug resistant A2780ADR (93112520, Sigma) 32 , 57 were maintained in complete media consisting of RPMI-1640 with L-glutamine, 10% fetal bovine serum (FBS; BioWest, Logan, UT), 100 U/mL penicillin and 100 μg/mL streptomycin in a humidified incubator at 37 °C and 5% CO 2 . The drug-resistant line A2780ADR was maintained under positive selection pressure by supplementing complete medium with 100 nM doxorubicin (Fisher Scientific, NJ). Cell culture materials were purchased from Corning Inc. (Corning, NY) unless otherwise stated. A BCRP over-expressing breast cancer cell line (MCF-7 M100) was established by us according to a previously described method 49 . The drug-sensitive MCF-7 (ATCC) 48 breast cancer cell line was exposed to increasing concentrations of the chemotherapeutic mitoxantrone over 60 passages. The mitoxantrone-resistant MCF-7 M100 cell line was maintained under positive selection pressure by supplementing complete medium with 100 nM mitoxantrone (Santa Cruz Biotechnology, CA).

Western blot analyses

Whole cell lysates were prepared using approximately five million cells from each cell line. Cells were lysed in 500 μL of SDS buffer (125 mM Tris HCl pH 6.8, 20% v/v glycerol, 4% w/v SDS and 2% v/v β-mercaptoethanol) containing 5 μL of protease inhibitor cocktail (P8340, Sigma). The lysates were filtered through a spin column (QIAprep®) by centrifugation at 5000 rpm for 5 minutes and used for Western blot analysis. The lysate proteins were resolved by denaturing SDS-PAGE 79 for 100 minutes at 110 V and subsequently transferred to a PVDF membrane (Bio-Rad, CA) using a Mini Transblot cell (Bio-Rad) for 70 minutes at 110 V. The transfer buffer contained 192 mM glycine, 25 mM Tris, and 10% methanol. The membrane was blocked overnight at 4 °C with 4% powdered skimmed milk in TBS-T (12 mM Tris–HCl pH 7.5, 0.5 M NaCl, 0.05% Tween 20).
Washed membranes were incubated with the P-gp mouse monoclonal antibody C219 (Enzo Life Sciences, NY), the BCRP-specific monoclonal antibody B1 (Santa Cruz Biotechnology, CA), or the β-actin monoclonal antibody C4 (Santa Cruz Biotechnology, CA), diluted to between 1:500 and 1:2000 in TBS-T with 4% powdered skimmed milk, for 2 hours at room temperature. Washed membranes were subsequently incubated for 1 hour at room temperature with horseradish peroxidase-conjugated goat anti-mouse secondary antibody sc-2005 (Santa Cruz Biotechnology, CA) diluted to 1:10000 in TBS-T containing 4% milk powder. Membranes were washed in TBS-T, and P-gp, BCRP, or β-actin were visualized using enhanced chemiluminescence detection (ECL kit, Thermo Scientific, IL). To evaluate P-gp protein expression levels of cells after inhibitor treatment, DU145TXR cells were treated with 5 μM of P-gp inhibitors 29, 34 or 45 for 48 hours, after which cell lysates were prepared and Western blot analyses were performed as described above.

Resazurin cell viability assay

The resazurin assay is a well-established cell viability assay 80 which relies on the reduction of the blue, water-soluble resazurin to the highly fluorescent resorufin 80 in the reducing environment of the cell. The fluorescence of resorufin is directly proportional to the number of viable cells and can be measured by excitation at 530 nm and emission at 590 nm 80 . The assay was performed as follows: cells were trypsinized from monolayers and seeded at 4000 cells in 150 μL of complete medium per well in a 96-well plate. After 24 hours, cells were treated for 48 hours with chemotherapeutics and/or P-gp inhibitory compounds dissolved in DMSO, or with DMSO controls, diluted in complete medium. The chemotherapeutics, paclitaxel and vinblastine, were purchased from Acros Organics, NJ, and MP Biomedicals, France, respectively.
After 42 hours of treatment, resazurin assays were performed as described in 33 using 440 μM resazurin (Acros Organics, NJ) solution prepared in PBS (137 mM NaCl, 2.7 mM KCl, 10 mM Na 2 HPO 4 , 1.8 mM KH 2 PO 4 , pH 7.4). After 6 hours of incubation with resazurin, the resulting fluorescence was measured by excitation at 530 nm and emission at 590 nm using a Bio-Tek Synergy 2 multi-mode plate reader (Bio-Tek, Winooski, VT). DMSO was used as the vehicle, 250 μM probenecid and 60 μM novobiocin (both from Alfa Aesar, MA) were used as negative controls, and verapamil (MP Biomedicals, France) was used as a positive control for P-gp inhibition. Percent viability was calculated using DMSO-treated cells as representing 100% viability. Background fluorescence was determined using resazurin and complete medium without cells. $$\%\ \text{Viability} = 100 \times \frac{\text{Fluorescence of experimental cells} - \text{Background fluorescence}}{\text{Fluorescence of DMSO-treated cells} - \text{Background fluorescence}}$$ (1) The results were plotted as the mean with standard deviation (SD) of twelve replicates per concentration from at least two independent experiments.
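As a hypothetical illustration of the Eq. (1) normalization (the fluorescence readings below are invented for the example, not data from the paper), the calculation can be sketched as:

```python
# Sketch of the Eq. (1) percent-viability calculation. The readings are
# arbitrary example values, not measurements from the study.

def percent_viability(f_experimental, f_vehicle, f_background):
    """Background-subtracted fluorescence, normalized to DMSO-treated
    (vehicle) cells, which define 100% viability."""
    return 100.0 * (f_experimental - f_background) / (f_vehicle - f_background)

# Example resazurin fluorescence readings (arbitrary units)
print(round(percent_viability(f_experimental=5200.0,
                              f_vehicle=9800.0,
                              f_background=400.0), 1))  # → 51.1
```

Subtracting the no-cell background from both numerator and denominator removes the contribution of unreduced resazurin and medium autofluorescence before normalizing.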
The graphical representations and IC 50 values were determined by four-parameter variable-slope non-linear regression using the equation Y = Bottom + (Top - Bottom)/(1 + 10^((logIC50 - X) * HillSlope)) (GraphPad Prism™, La Jolla, California, USA, Version 6.05). The reported "fold sensitization" was calculated as follows: $$\text{Fold sensitization} = \frac{\text{IC}_{50}\ \text{of A2780ADR cells treated with chemotherapeutic only}}{\text{IC}_{50}\ \text{of A2780ADR or A2780 cells treated with chemotherapeutic and P-gp inhibitory compound}}$$ (2)

MTT cell viability assay for the BCRP over-expressing MCF-7 M100 breast cancer cell line

MCF-7 M100 cells were trypsinized from monolayers and seeded at 2500 cells in 150 μL of complete medium per well in a 96-well plate. After 24 hours, cells were treated for 96 hours with mitoxantrone (50 nM, Santa Cruz Biotechnology, CA) and/or P-gp inhibitory compounds dissolved in DMSO, or with DMSO controls, diluted in complete medium.
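The variable-slope dose-response model and the Eq. (2) ratio above can be sketched as follows; the parameter values and IC50s are illustrative, not fitted values from the paper:

```python
# Minimal sketch of the GraphPad variable-slope model and Eq. (2).
# All numeric values below are made up for illustration.

def four_param_logistic(x, bottom, top, log_ic50, hill_slope):
    """Y = Bottom + (Top - Bottom)/(1 + 10^((logIC50 - X) * HillSlope))."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ic50 - x) * hill_slope))

def fold_sensitization(ic50_chemo_only, ic50_with_inhibitor):
    """Eq. (2): IC50 of chemotherapeutic alone divided by the IC50 of the
    chemotherapeutic combined with a P-gp inhibitory compound."""
    return ic50_chemo_only / ic50_with_inhibitor

# At X = logIC50 the response sits halfway between Bottom and Top:
print(four_param_logistic(-7.0, bottom=0.0, top=100.0,
                          log_ic50=-7.0, hill_slope=1.0))  # → 50.0

# Illustrative 100-fold resensitization (e.g. 500 nM alone vs. 5 nM combined):
print(fold_sensitization(500.0, 5.0))  # → 100.0
```

In practice the four parameters would be fitted to the replicate viability data (e.g. by non-linear least squares) and the log-scale IC50 converted back to a molar concentration before taking the Eq. (2) ratio.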
After 96 hours of treatment, MTT assays were performed as described in 81 using 5 mg/mL MTT (Acros Organics, NJ) solution prepared in PBS (137 mM NaCl, 2.7 mM KCl, 10 mM Na 2 HPO 4 , 1.8 mM KH 2 PO 4 , pH 7.4). After 4 hours of incubation with MTT, the media was removed and the formazan crystals were dissolved in 100 µL of DMSO. The absorbance at 570 nm was then measured using a BioTek Cytation 5 imaging multi-mode reader (Bio-Tek, Winooski, VT). Data were obtained from two independent experiments. DMSO was used as the vehicle, 200 μM novobiocin (Alfa Aesar, MA) and 1 μM Ko143 (Sigma) were used as positive controls, and verapamil (MP Biomedicals, France) was used as a negative control for BCRP inhibition. Percent viability was calculated using DMSO-treated cells as representing 100% viability.

Colony formation assay

Colony formation assays were performed similarly to those described in 39 with slight modifications. A2780ADR cells were seeded at 4000 cells per well in 96-well plates for 24 hours and incubated for 48 hours with the chemotherapeutics vinblastine (0.1 µM) or paclitaxel (1 µM) alone, with 10 µM of the inhibitors (compounds 29, 34, 45 or verapamil) alone, or with combinations of chemotherapeutics and inhibitors at the concentrations given above. After 48 hours, the media were replaced with drug-free media and cells were allowed to grow for an additional 96 hours. To visualize cells that had grown during that period, media were removed from the wells of the 96-well plates and cells were fixed with a 3:1 (v/v) mixture of methanol and acetic acid. After 5 minutes, the fixation solution was removed and the cells were stained with 0.5% w/v crystal violet (Alfa Aesar, MA) in 25% methanol for 30 minutes. Finally, the crystal violet was removed and the plates were washed with running water to remove excess stain. Cells that had continued to grow over the 96-hour incubation time were visible as blue dots in nearly confluent cell colonies.
No growth was observed where P-gp inhibitors and chemotherapeutic were co-administered during the initial 48-hour incubation. In a more quantitative colony formation assay similar to 39 , DU145TXR cells were seeded in 24-well plates at 200 cells per well. After 24 hours, cells were treated for 48 hours with 500 nM paclitaxel or 5 μM inhibitors alone, or with combinations of inhibitors and chemotherapeutic. The media was then removed and cells were allowed to form colonies for 5 days in drug-free complete media. Cells were fixed and stained as described above. Colonies visible to the naked eye were counted and recorded by persons blinded to all experimental conditions. The experiment was repeated twice.

Scratch assay

Scratch assays were performed as outlined previously 41 with minor modifications. Cells were trypsinized from monolayers and diluted in complete culture medium to a density of 25,000 cells in 300 μL of cell suspension per well in 48-well plates and cultured until confluent. The monolayers of cells were scratched using a 200 μL pipette tip. Media was removed and the cells were washed with PBS to remove any floating cells. Low-serum (1%) media was then added to the wells together with 0.1 μM vinblastine, with or without 2.5 or 5 µM P-gp inhibitory compounds, or with 0.5% DMSO as a drug-carrier vehicle control. Immediately after the scratching and media additions, the wounds were imaged using a BioTek Cytation 5 imaging multi-mode reader (Bio-Tek, Winooski, VT). After 14 hours, the remaining wounds were imaged again and the areas of the wounds before and after treatment were quantified using ImageJ software 82 . The percentage of wound closure in each test was calculated relative to vehicle-treated experiments. Each individual experiment was performed in triplicate and 2 images were obtained for each well. The whole experiment was repeated at least once, and n = 12 was used for the statistical analysis.
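The wound-closure quantification described above (areas measured in ImageJ, closure expressed relative to the vehicle control) can be sketched as follows; the pixel areas are invented example values, not measurements from the study:

```python
# Hypothetical sketch of scratch-assay quantification from ImageJ area
# measurements. All areas below are made-up example values.

def wound_closure_pct(area_t0, area_t14):
    """Percent of the initial scratch area closed after the incubation."""
    return 100.0 * (area_t0 - area_t14) / area_t0

def closure_relative_to_vehicle(treated_pct, vehicle_pct):
    """Closure of a treated well expressed relative to the DMSO vehicle."""
    return 100.0 * treated_pct / vehicle_pct

veh = wound_closure_pct(80000.0, 20000.0)  # vehicle closes 75% of the wound
trt = wound_closure_pct(80000.0, 68000.0)  # treated well closes only 15%
print(veh, trt, closure_relative_to_vehicle(trt, veh))  # → 75.0 15.0 20.0
```

A low relative closure (here 20% of the vehicle value) corresponds to the migration-inhibited wells seen with the chemotherapeutic/inhibitor combinations.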
Calcein AM assay

To assess inhibition of P-gp-catalyzed transport of the P-gp pump substrate calcein AM, A2780ADR cells were seeded at 40,000 cells per well in 96-well plates and allowed to grow in complete medium for 48 hours. Medium was removed and cells were treated with or without 10 μM P-gp inhibitory compounds and 2.5 μM calcein AM (Life Technologies, OR) diluted in phenol red-free RPMI 1640 media. The cells were imaged over 45 minutes at 15-minute intervals using both GFP fluorescence and bright field filters. Fluorescence was measured by excitation at 485 nm with a 20 nm gate and emission at 535 nm with a 20 nm gate using a BioTek Cytation 5 imaging multi-mode reader (Bio-Tek, Winooski, VT). DMSO was used as the vehicle; 10 μM experimental compound 19 (which does not penetrate intact cells) 29 , 250 μM probenecid and 60 μM novobiocin were used as negative controls; and verapamil was used as a positive control for competitive inhibition of P-gp transport. Results were plotted as the mean with standard deviation (SD) of three replicates per concentration and are representative of at least two independent experiments.

Daunorubicin accumulation experiments

A2780ADR cells were seeded in 96-well plates at 150,000 cells per well in complete media and allowed to grow overnight. Medium was then removed and cells were treated with or without 15 μM P-gp inhibitory compounds in the presence of 10 μM daunorubicin (MP Biomedicals, France) diluted in complete medium. After 2.5 hours of incubation, media were removed and cells were washed once with PBS containing 5% DMSO and once with 2% DMSO, then imaged using a Texas Red fluorescence filter and a BioTek Cytation 5 imaging multi-mode reader. To quantify the accumulation of daunorubicin in cells, assays were carried out as above, but cells were lysed in 100 μL of PBS containing 0.5% SDS and 0.5% Triton X100 immediately after the washing step.
The fluorescence of daunorubicin was measured using excitation at 488 nm with a 20 nm gate and emission at 575 nm with a 20 nm gate using the BioTek Cytation 5 imaging multi-mode reader.

Calcein AM uptake in spheroids

Spheroids of the multidrug resistant prostate cancer cell line, DU145TXR, were produced as described 83 with the following modifications. Cells were trypsinized from monolayers and diluted in complete culture medium to a density of 15,000 cells in a 200 μL cell suspension per well in 96-well plates. Prior to the experiment, all wells used for the assay had been coated with 2.5% low-melting agarose in RPMI. After seeding, the 96-well plates were centrifuged at 600 rpm for 20 minutes. Centrifugation was repeated after 24 hours to obtain more tightly packed spheroids. Spheroids that had formed six days after seeding were used for experiments. Spheroids were pretreated with 15 µM of P-gp inhibitors 29, 34 or 45 for 3 hours and then incubated with 2.5 µM calcein AM. Fluorescence images were obtained every 20 minutes for a total of 100 minutes using a GFP filter. The resulting TIFF image files were analyzed using ImageJ software, and the Interactive 3D Surface Plot plugin was used to obtain 3D graphs based on the pixel intensities of the images. Each experiment was carried out in triplicate and the whole experiment was duplicated.

Spheroid growth and spheroid area reduction assay

Spheroid cultures were prepared as described above except that growth was initiated with only 2000 cells per spheroid. Prepared plates were incubated at 37 °C for 72 hours in a humidified incubator with 5% CO 2 for spheroid formation. The spheroids were treated with daunorubicin at the concentrations indicated, with or without compound 29, diluted in 50 μL of complete medium. Half of the medium was replaced after every 48 hours of incubation.
The spheroids were imaged every 24 hours using a BioTek Cytation 5 imaging multi-mode reader and the areas of the spheroids were determined using the BioTek Gen5 software. The fold change of tumor spheroid area was determined each day by comparing the area of each spheroid to that of day one. Dead cells in the spheroid culture on day six were visualized and imaged after staining with 10 μL of a 0.01% ethidium bromide (Fisher Scientific, NJ) solution diluted in PBS.

Fluorescence microscopic analysis of cell apoptosis

Double staining with acridine orange/ethidium bromide (AO/EB) is a reliable method to detect apoptosis and was carried out as described in 37 with slight modifications. Briefly, 16,000 cells were seeded in 48-well plates in 300 μL of complete media and incubated for 24 hours. After 24 hours, cells were treated for 48 hours with 1 μM paclitaxel and 10 μM P-gp inhibitory compounds in DMSO, or with DMSO controls. Then the AO/EB dual-stain solution (100 μg/mL each) was added to each well and images were acquired using a BioTek Cytation 5 imaging multi-mode reader with GFP (for green fluorescence from acridine orange), Texas Red (for red fluorescence from ethidium bromide) and bright field filters.

Cellular Accumulation Assays for Experimental P-gp Inhibitors

DU145TXR cells were seeded in 6-well plates at ~350,000 cells per well. After 48 hours, the media was replaced with fresh media and cells were treated with 5 μM of compounds (29, 34, or 45) and daunorubicin, with or without 500 nM tariquidar (MedKoo Biosciences, Chapel Hill, NC, U.S.A.). Experiments were performed in triplicate. After 2.5 hours of incubation with compounds, cells were washed with Hank's Balanced Salt Solution (HBSS, Corning Inc. NY), harvested using trypsin, and counted. Each sample was then washed with 2 mL of ice-cold HBSS and diluted in cold HBSS at a final concentration of 1 million cells/mL.
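The day-by-day spheroid area normalization described above (each day's area divided by the day-one area) can be sketched as follows, with invented example areas rather than measured ones:

```python
# Hypothetical sketch of the spheroid-area fold-change normalization.
# Areas (e.g. px^2 from Gen5 measurements) are made-up example values.

def spheroid_fold_change(areas_by_day):
    """Fold change of spheroid area on each day relative to day one."""
    day_one = areas_by_day[0]
    return [a / day_one for a in areas_by_day]

# An untreated spheroid that grows over three days:
print(spheroid_fold_change([10000.0, 12000.0, 15000.0]))  # → [1.0, 1.2, 1.5]
# A treated spheroid whose area shrinks:
print(spheroid_fold_change([10000.0, 9000.0, 7500.0]))    # → [1.0, 0.9, 0.75]
```

Normalizing each spheroid to its own day-one area removes well-to-well differences in starting size before treatment groups are compared.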
All samples were frozen with liquid nitrogen and stored at −80 °C until analysis. LC-MS/MS analyses were performed essentially as described in 84 . Aliquots (250 µl) of treated or untreated cell lysate were placed into Eppendorf tubes. Blank lysates were spiked with varying concentrations of each compound to create a standard curve. Each sample was mixed with 0.5 ml of a solution containing 0.15% formic acid and 120 ng/ml n-benzylbenzamide internal standard in methanol, vortexed for 15 sec, incubated for 10 min at room temperature and then centrifuged twice at 16,100 × g. The supernatant was then analyzed by LC-MS/MS using a Sciex 4000QTRAP mass spectrometer coupled to a Shimadzu Prominence LC. Chromatography conditions were as follows. Buffer A consisted of water + 0.1% formic acid; Buffer B consisted of methanol + 0.1% formic acid for compounds 29, 34, and 45, and acetonitrile + 0.1% formic acid for daunorubicin. The column flow rate was 1.5 ml/min using an Agilent C18 XDB column (5 micron packing, 50 × 4.6 mm). The gradient conditions for compounds 29, 34, and 45 were 0–1.0 min 3% B, 1.0–2.0 min gradient to 95% B, 2.0–3.5 min 95% B, 3.5–3.6 min gradient to 3% B, 3.6–4.5 min 3% B. Gradient conditions for daunorubicin were 0–2.0 min 5% B, 2.0–3.5 min gradient to 60% B, 3.5–5.0 min 60% B, 5.0–5.1 min gradient to 5% B, 5.1–7.5 min 5% B. Compounds were detected in MRM mode after optimization of machine parameters by detection of the following parent/daughter ions: 459.1/278.1 for 29, 477.1/285.1 for 34, 424.1/149.0 for 45, and 528.1/321.0 for daunorubicin. N-benzylbenzamide (212.1/91.1) was used as the internal standard. A value 3-fold above the signal obtained from blank lysate was designated as the limit of detection (LOD). The limit of quantitation (LOQ) was defined as the lowest concentration of standard at which back-calculation yielded a concentration within 20% of theoretical and above the LOD. The LOQ for all analytes was between 0.1–0.5 ng/ml.
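The LOD and LOQ criteria stated above (LOD as 3x the blank-lysate signal; LOQ as the lowest standard back-calculating within 20% of theoretical and above the LOD) can be expressed as a short check; the signal values here are invented examples, not instrument data:

```python
# Sketch of the LOD/LOQ acceptance criteria from the LC-MS/MS section.
# All numeric values are hypothetical examples.

def limit_of_detection(blank_signal):
    """LOD defined as 3-fold above the signal obtained from blank lysate."""
    return 3.0 * blank_signal

def passes_loq(theoretical, back_calculated, signal, lod_signal):
    """A standard qualifies for the LOQ if its back-calculated concentration
    is within 20% of theoretical and its signal exceeds the LOD."""
    within_20pct = abs(back_calculated - theoretical) <= 0.20 * theoretical
    return within_20pct and signal > lod_signal

lod = limit_of_detection(blank_signal=150.0)
print(lod)  # → 450.0
# 0.55 ng/ml back-calculated for a 0.5 ng/ml standard: within 20%, above LOD
print(passes_loq(0.5, 0.55, signal=900.0, lod_signal=lod))  # → True
# 0.14 ng/ml back-calculated for a 0.1 ng/ml standard: 40% off, fails
print(passes_loq(0.1, 0.14, signal=500.0, lod_signal=lod))  # → False
```

The reported LOQ would then be the lowest standard concentration for which `passes_loq` holds.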
The protein pellet remaining after addition of organic solvent was resuspended in 25 µl of 0.1 M NaOH, boiled for 5 min, and 5 µl was mixed with 200 µl of 1:50 B:A reagent (ThermoFisher BCA kit) in order to determine the protein concentration. A BSA standard curve was prepared in H 2 O and mixed in the same ratio. The samples were incubated for 30 min at 37 °C and read at 562 nm. Compound concentrations in the lysates were then normalized to the protein content of each sample.

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding authors on reasonable request.
While this is far from being a developed drug that will be available on the market anytime soon, this success in the lab gives us hope for developing new drugs to fight cancer." The current battle to defeat cancer is thwarted by chemotherapy failure in advanced cancers. Cancer cells initially treated with chemotherapy drugs ultimately evolve to resist the drugs. That renders chemotherapy ineffective, allowing cancers to grow and spread. Key to cancer cell resistance are often certain proteins typically found in all cells—cancerous or otherwise—that are outfitted with beneficial mechanisms that pump away toxins to ensure a cell's continued survival. Nature has set it up that these pumps are prevalent throughout the body, with some areas naturally having more of the pumps than others. "The cancer cell itself can use all these built-in defenses to protect it from the kinds of things we're using to try to kill it with," Wise said. The most common of these beneficial defense mechanisms is a pump protein, P-glycoprotein or P-gp, as it's called. Another is one seen in breast and many other cancers, called breast cancer resistance protein, BCRP. In the case of cancer cells on the first round of treatment, these pumps are typically not produced in high levels in the cells, which allows chemotherapy to enter most of the cells in the tumor. This often gives what looks like a good result. Unfortunately, in the cancer cells that don't die, the chemotherapeutic often changes the cell, which then adapts to protect itself by aggressively multiplying the production of its defensive pumps. Upon subsequent rounds of chemo, the P-gp and BCRP pumping mechanisms have proliferated. They effectively resist the chemotherapy, which now is much less successful, or not successful at all. "If enough of the pumps are present, the cancer isn't treatable anymore," said Wise, associate professor in the SMU Department of Biological Sciences.
Researchers in the field have searched unsuccessfully for compounds to inhibit the pumps that could be used in the clinic as well. The molecules that Wise and Vogel discovered stopped the pumps. "They effectively bring the cancer cells back to a sensitivity as if they'd never seen chemotherapy before," said Vogel. "And our data indicated the molecules aren't cancer specific. They can be used to treat all kinds of cancers because they inhibit not just the P-gp pump, but also the breast cancer protein pump." To test the compounds, the researchers used amounts of chemotherapeutic that would not kill these multi-drug resistant cancers if the pumps were not blocked.

Part A of Figure 4 shows that cells treated with a combination of chemo and inhibitor were thwarted in the process of metastasis, a deadly challenge in the treatment of cancer. The researchers grew two-dimensional advanced cancer cells on cell plates, then used a pipette to scratch across each layer of cells to create a gap. Some of the cells were not treated (DMSO), some were treated with chemo (VIN), some with inhibitor (29, 34 and 45), and some with both (29+VIN, 34+VIN and 45+VIN). After 14 hours, those untreated and those treated with only chemo or only inhibitor had proceeded toward closing the gap, indicating the chemo couldn't penetrate the cells, so those cancers could successfully metastasize. Those treated with a combination of chemo and inhibitor were thwarted from migrating, indicating that the combination of the two drugs killed the cells and stopped migration and therefore metastasis. Credit: SMU

"We wanted to make sure when using these really aggressive cancers that if we do knock out the pump, that the chemotherapy goes in there and causes the cell to die, so it doesn't just stop it temporarily," Wise said. "We spent a fair amount of time proving that point. It turns out that when a cell dies it goes through very predictable morphological changes.
The DNA gets chopped up into small pieces, and we can see that, and so the nucleus becomes fragmented, and we can see that. Under the microscope, with proper staining, you can actually see that these highly drug-resistant prostate cancer cells, for example, are dead."

Getting at the heart of the problem

Unique to the experiment is that the molecules were also tested on three-dimensional micro-tumors. That is a departure from the usual cell-culture experiments, which are a two-dimensional film. In two-dimensional experiments, every cell is exposed to the chemotherapeutic because the film is just one layer of cells thick. That method ignores one of the key challenges to reversing tumors—how to get drugs into the middle of a tumor, not just on its surface. "We show that with the help of our inhibitor compounds, we actually make the tumor penetrable to chemotherapeutic," Vogel said. "We can kill the cells in the middle of the tumor."

A pathway to personalized medical treatments

Chemotherapy's harmful side effects on non-cancerous organs are well known. The discovery of molecules that target a specific pump may mitigate that problem. A patient's tumor can be sampled to see which pump is causing the drug resistance. Then the molecule that knocks out that specific pump can be added to the chemotherapy. "That means you don't open the door wide to toxins in the central nervous system," Wise said. "That has some real implications for the future and for personalized medicine. In most of the previous clinical trials, inhibitors have opened the brain up to toxins. From what we can tell so far, our inhibitors do not increase the toxicity of chemotherapeutics in normal cells."

An audacious discovery

P-gp is present in one form or another in everything that lives. "It's in your dog, it's in your cat, it's in yeast cells, it's in bacteria, it's everywhere," Wise said. "Why is it everywhere? Because it's a really wonderful solution to the problem of getting toxins out of a cell.
P-gp is a tremendously sophisticated evolutionary solution to that problem. And as with most things in biology that work well, everybody gets it, because if you don't have it, you didn't survive." Biologists say that P-gp can pump out 95 of 100 chemotherapeutics, indicating it can grab almost any drug and throw it out of a cell.

Multi-drug resistant cancer cells absorb red dye, indicating they've died after treatment with the chemotherapeutic daunorubicin and a P-gp inhibitor. Credit: SMU

"So there's a certain audacity to say that we can use a computer and target one part of this protein—the motor—and totally avoid the part of the protein that has evolved to pump almost anything that looks like a drug out of the cell," Wise said. "That's an audacious claim and the findings surprised us." In their computational and wet-lab experiments, Wise and Vogel searched for molecules that inhibit ATP hydrolysis—the chemical energy reaction that powers the P-gp pump. "We targeted the motor of the pump instead of the pump part of the pump because almost all the clinical trial failures in other studies were actually compounds that targeted the pump part of the pump—and they would just slow down the pumping of the chemotherapeutic," Vogel said. "The time was ripe to do these structural models. We hypothesized that we could completely avoid the pumping mechanism and just target the motor."

Computational method highly predictive

The wet-lab experiments confirmed the accuracy of the computational findings, Vogel said. "The predictiveness of the computational methods was really high," she said. "It completely exceeded my expectations. We had selected certain molecules that were predicted in those computational experiments to interact with the pump in certain ways and not in others, and we could show in our wet-lab experiments that the predictions were spot on."
Fascinated by the novel approach, the National Institute of General Medical Sciences funded much of the research. Wise and Vogel tapped the high-performance computing power of SMU's ManeFrame, one of the most powerful academic supercomputers in the nation. Wise sorted through 15 million commercially available drug-like compounds made publicly available in digital form by the ZINC pharmacology database at the University of California, San Francisco. Then, again using ManeFrame, Wise ran the compounds through a computer-generated model of P-gp. The virtual model, designed and built by Wise, is the first computational microscope of its kind to simulate the actual behavior of P-gp in the human body, including interactions with drug-like compounds while taking on different shapes. He reported the dynamic functioning of the model in 2015 in the journal Biochemistry.

Process of elimination finds needle in the haystack

Out of 15 million drug-like compounds that were virtually screened, the researchers found 180,000 that were predicted in the computer to interact strongly with the ATP-harvesting power plant part of the pump motor. From those, Wise and Vogel eliminated the ones that interact well with the pump part. Roughly 0.15 percent survived—several hundred. "So that tells you how promiscuous that binding site is for compounds," Wise said. From there, they bought and tested in the lab a number of the remaining molecules. "It was a process of elimination," Vogel said. "Of the first 38 we tested, we found four. And because of the computational approach we took, it made failure relatively cheap. This is proof of principle that at least in those cases the compounds behave exactly in the lab as predicted in the computer. Which thrills the heck out of me—I never, ever would have thought that." The Vogel and Wise research labs are part of the Center for Drug Discovery, Design and Delivery in SMU's Dedman College.
The center's mission is a novel multi-disciplinary focus for scientific research targeting medically important problems in human health. | 10.1038/s41598-018-19325-x |
Medicine | Immunotherapy for HPV+ head and neck cancer: Awakening the force within | Functional HPV-specific PD-1+ stem-like CD8 T cells in head and neck cancer, Nature (2021). DOI: 10.1038/s41586-021-03862-z , www.nature.com/articles/s41586-021-03862-z Journal information: Nature | http://dx.doi.org/10.1038/s41586-021-03862-z | https://medicalxpress.com/news/2021-09-immunotherapy-hpv-neck-cancer-awakening.html | Abstract T cells are important in tumour immunity but a better understanding is needed of the differentiation of antigen-specific T cells in human cancer 1 , 2 . Here we studied CD8 T cells in patients with human papillomavirus (HPV)-positive head and neck cancer and identified several epitopes derived from HPV E2, E5 and E6 proteins that allowed us to analyse virus-specific CD8 T cells using major histocompatibility complex (MHC) class I tetramers. HPV-specific CD8 T cells expressed PD-1 and were detectable in the tumour at levels that ranged from 0.1% to 10% of tumour-infiltrating CD8 T lymphocytes (TILs) for a given epitope. Single-cell RNA-sequencing analyses of tetramer-sorted HPV-specific PD-1 + CD8 TILs revealed three transcriptionally distinct subsets. One subset expressed TCF7 and other genes associated with PD-1 + stem-like CD8 T cells that are critical for maintaining T cell responses in conditions of antigen persistence. The second subset expressed more effector molecules, representing a transitory cell population, and the third subset was characterized by a terminally differentiated gene signature. T cell receptor clonotypes were shared between the three subsets and pseudotime analysis suggested a hypothetical differentiation trajectory from stem-like to transitory to terminally differentiated cells. More notably, HPV-specific PD-1 + TCF-1 + stem-like TILs proliferated and differentiated into more effector-like cells after in vitro stimulation with the cognate HPV peptide, whereas the more terminally differentiated cells did not proliferate. 
The presence of functional HPV-specific PD-1 + TCF-1 + CD45RO + stem-like CD8 T cells with proliferative capacity shows that the cellular machinery to respond to PD-1 blockade exists in HPV-positive head and neck cancer, supporting the further investigation of PD-1 targeted therapies in this malignancy. Furthermore, HPV therapeutic vaccination efforts have focused on E6 and E7 proteins; our results suggest that E2 and E5 should also be considered for inclusion as vaccine antigens to elicit tumour-reactive CD8 T cell responses of maximal breadth. Main Chronic viral infections and cancer result in compromised CD8 T cell function characterized by the expression of several inhibitory receptors including PD-1 1 , 2 , 3 , 4 , 5 . Blockade of the PD-1 inhibitory pathway enhances T cell function and is an effective treatment for several cancers 1 , 2 , 5 . Preclinical studies have characterized a unique PD-1 + TCF-1 + stem-like subset of CD8 T cells that maintains the CD8 T cell response during chronic infection and provides the proliferative burst after PD-1 directed immunotherapy 6 , 7 , 8 , 9 , 10 , 11 . Similar PD-1 + TCF-1 + CD8 T cells have been found in human tumours 12 , 13 , 14 . However, a better definition of their antigen specificity and their lineage relationship with other CD8 T cell subsets in the tumour is needed for understanding T cell differentiation pathways in human cancer and to optimize immunotherapeutic approaches. Here we address these questions by analysing HPV-specific CD8 T cells in head and neck squamous cell carcinoma (HNSCC). Characterization of CD8 TILs in HNSCC To investigate the CD8 T cell response to HNSCC, we characterized TILs from six primary tumours and eight metastatic lymph nodes of twelve patients from our treatment-naive HPV-positive HNSCC cohort. CD8 T cells comprised around 30% of TILs by flow cytometry (Extended Data Fig. 1a ). 
The majority of CD8 TILs expressed high levels of PD-1 and all of the PD-1 + cells were CD45RA-negative (Fig. 1a ). These PD-1 + CD45RA − CD8 T cells included distinct populations that expressed either TCF-1 or TIM-3 (Fig. 1b ), markers that are commonly used to distinguish stem-like (TCF-1 + TIM-3 − ) and terminally differentiated (TIM-3 + TCF-1 − ) PD-1 + CD8 T cells 6 , 7 , 8 , 14 . Although ranging widely between patients, on average 45% of PD-1 + cells exhibited a terminally differentiated phenotype. Stem-like PD-1 + TCF-1 + CD8 T cells were present in all patients and expressed significantly lower levels of granzyme B, Ki-67 and CD39 compared to TIM-3 + cells (Fig. 1b , Extended Data Fig. 1b, c ). Both subsets expressed TOX (Extended Data Fig. 1d ), a key transcription factor for CD8 T cell differentiation during chronic viral infections and cancer 15 . We next examined the anatomical location of these PD-1 + CD8 T cell subsets in primary tumours and metastatic lymph nodes using multiplex immunohistochemistry. CD8 + and PD-1 + cells were diffusely present in the cytokeratin-positive tumour parenchyma as well as in the stroma within and around the tumour bed (Fig. 1c , Extended Data Fig. 1e, f ). Notably, PD-1 + TCF-1 + cells were predominantly found in the stromal areas and were rarely present in the tumour parenchyma (Fig. 1d, e ), whereas the more differentiated cells were diffusely dispersed in the tumour microenvironment (TME) (Extended Data Fig. 1g, h ), suggesting that the stem-like cells reside in distinct niches within the TME and stay away from the tumour itself. Fig. 1: Intratumoral PD-1 + CD8 T cells are made up of distinct subsets with PD-1 + TCF-1 + stem-like cells residing in lymphoid-like stromal areas. a , Frequency of PD-1 + cells among CD8 + TILs. Flow plot is gated on CD8 T cells. b , Frequency of TCF-1 + TIM-3 − and TIM-3 + TCF-1 − subsets in PD-1 + CD8 TILs, with mean and s.d. Two-sided Wilcoxon matched-pairs rank test. 
Flow plot is gated on PD-1 + CD8 T cells. c , Representative multiplex immunohistochemistry staining of one of five primary HPV-positive HNSCC tumours with orange dashed lines highlighting parenchyma (cytokeratin + ) and stromal borders. Scale bars, 100 μm. d , e , Frequency ( d ) and density ( e ) of PD-1 + TCF-1 + CD8 TILs in the tumour parenchyma and stroma, with mean and s.d. n = 7 (five primary tumours (pink) and two metastatic lymph nodes (grey)). Two-sided paired t -tests. Full size image Identification of HPV-specific CD8 TILs HPV-positive HNSCC offers the opportunity to identify and characterize tumour-reactive CD8 T cells using a defined set of virus-derived tumour-associated antigens. We focused on the known HPV oncogenes E5, E6 and E7, and on E2, which is involved in the episomal maintenance of the HPV genome. To identify HPV-derived CD8 T cell epitopes, we cultured peripheral blood mononuclear cells (PBMCs) from patients with HNSCC with a pool of 251 predicted HPV peptides for 2 weeks and then tested the reactivity of T cells in the expanded PBMCs using interferon-γ (IFNγ) enzyme-linked immunosorbent spot (ELISpot) assays, intracellular cytokine staining (ICCS) and tetramer staining (Fig. 2a, b , Extended Data Fig. 2a–c ). Overall, 43 different peptides were identified by ELISpot (22 HPV E2 peptides, 10 HPV E5 peptides, 9 HPV E6 peptides and 2 HPV E7 peptides), with several epitopes being recognized by T cells from multiple patients (Extended Data Fig. 2a ). Using patient-specific HLA-typing information, peptide–HLA pairs were predicted in silico and confirmed by in vitro HLA-binding assays, which resulted in the identification of nine CD8 T cell epitopes and the production of seven different MHC class I tetramers, with epitopes derived from HPV E2, E5 and E6 (Extended Data Fig. 2d ). Some of the E2, E5 and E6 epitopes that we identified in these patients have also been described in previous studies 16 , 17 . Fig. 
2: Identification of HPV-specific CD8 T cells in PBMCs and tumours of patients with HPV-positive HNSCC. a , Schematic of experimental strategy to map HPV-specific CD8 T cell epitopes. b , Representative example of expanded PBMCs of an HLA-A*01:01 + individual stimulated with HPV E2 329–337 peptide and analysed by IFNγ ELISpot (left) or ICCS (middle), or stained with the respective tetramer (right). Flow plots are gated on CD8 T cells. c , Representative flow plots of ex vivo tetramer-stained CD8 TILs from primary tumour, metastatic lymph nodes (LNs) and PBMCs of two patients. Flow plots are gated on CD8 T cells. d , Direct ex vivo frequency of tetramer-positive CD8 T cells in primary tumours, metastatic lymph nodes and PBMCs. Colours indicate different tetramers. Matched samples are connected by lines. Full size image HPV-specific CD8 T cells were readily detectable ex vivo by tetramer staining of TILs isolated from primary tumours and metastatic lymph nodes, and all of these cells expressed high levels of PD-1 (Fig. 2c , Extended Data Fig. 3 ). The frequency of HPV-specific CD8 T cells varied between patients, ranging from 0.1% to 10% of CD8 TILs, but CD8 T cells to a given epitope were present at similar frequencies in primary tumour and metastatic lymph nodes of the same individual (Fig. 2c, d ). In contrast to readily detectable HPV-specific CD8 T cells in the TME, the frequencies of tetramer-positive CD8 T cells in patient-matched peripheral blood were very low (less than 0.02% of CD8 T cells). These results suggest that most HPV-specific CD8 T cells are resident in the TME and do not circulate in the blood, which is consistent with previous reports in various cancers that have shown extremely low frequencies of tumour-specific CD8 T cells in peripheral blood 16 , 17 , 18 , 19 . The uniformly high expression of PD-1 among HPV-specific CD8 TILs suggests that the respective antigens (E2, E5 and E6) are expressed and being presented within the TME. 
Together, these data show that HPV-specific CD8 T cells can account for a substantial proportion of CD8 TILs in HPV-positive HNSCC, and that although cells of the same antigen specificity can be expanded from PBMCs, their frequency in the blood under steady state conditions is much lower. Three subsets of HPV-specific CD8 TILs We sorted HPV tetramer-positive CD8 TILs from seven primary tumours and six metastatic lymph nodes to high purity (Extended Data Fig. 4a, b ) and examined their transcriptional signature by single-cell RNA sequencing (scRNA-seq). HPV-specific CD8 TILs comprised three transcriptionally distinct clusters that differed from each other in the expression of several hundred genes (Fig. 3a , Extended Data Fig. 5a–c ). All clusters expressed TOX —a transcription factor that is critical for establishing an ‘exhausted’ epigenetic state in T cells 15 —and the inhibitory molecules PDCD1 , CTLA4 and TIGIT (Fig. 3b ). Cluster 1 cells were characterized by high levels of expression of TCF7 , a transcription factor essential for the generation of stem-like CD8 T cells during chronic viral infection 6 , 7 . Cluster 1 cells also expressed LEF1 , another transcription factor associated with stem-like T cells; IL7R , which is needed for T cell survival; CCR7 , the primary receptor involved in homing to lymphoid tissue; and XCL1 and GPR183 (EBI2), two molecules involved in migration towards and the recruitment of dendritic cells. Of note, effector molecules such as granzymes and perforin were minimally expressed in cluster 1. The HPV-specific CD8 T cells in cluster 2 were characterized by high expression of PRDM1 and several other transcription factors commonly associated with acute T cell receptor (TCR) engagement such as NR4A1 (NUR77), FOS and JUN , and expressed the highest levels of IFNG . 
Expression of effector molecules such as GZMA , GZMB , PRF1 and GNLY was found in both clusters 2 and 3, and these clusters also expressed inhibitory receptors such as HAVCR2 (TIM-3) and ENTPD1 (CD39). Given the similarity of these HPV-specific CD8 T cell clusters to the stem-like and terminally differentiated CD8 T cells identified in preclinical mouse models, we performed gene set enrichment analyses using gene sets derived from the mouse model of chronic lymphocytic choriomeningitis virus (LCMV) infection 6 . HPV cluster 1 was highly enriched for genes expressed by LCMV-specific PD-1 + TCF-1 + stem-like CD8 T cells (Extended Data Fig. 5e ), whereas clusters 2 and 3 exhibited the highest enrichment score for the terminally differentiated gene signature (Extended Data Fig. 5e ). Notably, cluster 2 showed high levels of enrichment for both signatures, suggesting that these cells might represent an intermediate population between the stem-like and terminally differentiated states. Thus, we will refer to the three HPV clusters as stem (cluster 1), transitory (cluster 2) and terminally differentiated (cluster 3). Fig. 3: HPV-specific CD8 TILs are made up of three transcriptionally distinct clusters. a , UMAP analyses of HPV tetramer-specific CD8 T cells (around 20,000 cells) from 13 samples were combined and 3 distinct clusters were identified: cluster 1 (stem-like), cluster 2 (transitory; trans.) and cluster 3 (terminally differentiated or exhausted; TD). b , UMAP plots showing the expression of selected genes. c , Relative distribution of clusters for each patient and tetramer. Full size image The above analysis included HPV-specific CD8 T cells from different patients, from primary and metastatic sites, as well as different epitope specificities. A separate analysis of HPV-specific CD8 TILs showed a comparable distribution of these cells among the identified clusters independent of sample origin and epitope specificity (Fig. 3c , Extended Data Fig. 4c–e ). 
Furthermore, separate clustering of HPV-specific CD8 TILs from primary tumours and metastatic lymph nodes yielded comparable clusters with minimal differences in gene expression between the corresponding clusters (Extended Data Fig. 5d ), suggesting highly similar differentiation states of HPV-specific CD8 TILs in primary and metastatic sites. Flow cytometric analyses of HPV-specific PD-1 + CD8 TILs further confirmed the presence of TCF-1 + stem-like and more terminally differentiated cells in the TME, with TCF-1 + cells expressing high levels of the co-stimulatory molecule CD28 and low levels of granzyme B and TIM-3 (Extended Data Fig. 5f ). It was of interest to determine how total PD-1 + CD8 TILs compared to the HPV-specific CD8 T cells. scRNA-seq of total PD-1 + CD8 TILs showed that three of the clusters (1–3) were highly similar to the clusters seen in HPV-specific CD8 T cells, but that the total PD-1 + CD8 T cells in the tumour had an additional fourth cluster with a distinct transcriptional signature (Extended Data Fig. 6a–e ). The origin of these cells is not clear but they could be bystander CD8 T cells that are specific for other viruses 20 , 21 . HPV-specific CD8 T cell subsets share TCRs The TCR repertoire of HPV-specific CD8 TILs showed dominance of TCR clonotypes for each tetramer (Extended Data Fig. 7b, c ). Paired scRNA-seq data from the primary and metastatic site for individual patients showed a correlation between the frequency of a given clonotype in the two sites (Extended Data Fig. 7c, d ). Most importantly, HPV-specific CD8 T cells showed a notable degree of TCR clonal overlap between the three clusters (Fig. 4 , Extended Data Fig. 7e, f ). In fact, the most dominant as well as less dominant clonotypes were similarly distributed among the stem-like, transitory and terminally differentiated HPV-specific CD8 T cells (Extended Data Fig. 7g ). This was seen for all of the HPV-specific tetramer-sorted CD8 T cells that we analysed. Fig. 
4: The three HPV-specific transcriptionally distinct clusters share TCR clonotypes. Dominant TCR clonotypes specific for a given tetramer are present in all three HPV-specific CD8 T cell subsets (stem-like, transitory and terminally differentiated). Full size image Lineage relationship of HPV-specific CD8 TIL subsets Given the observed TCR overlap between the different clusters, we investigated a potential lineage relationship between the distinct HPV-specific CD8 T cell states present in the tumour using pseudotime analyses. These analyses showed that CD8 T cells that start from a stem-like state (cluster 1) would transition through cluster 2 with a more effector-like signature before reaching a terminally differentiated state (cluster 3), while progressively losing the stem-like signature (Extended Data Fig. 8a–c ). This analysis also showed that the relative proportion of a given TCR clonotype was evenly distributed across the stem-like, transitory and terminally differentiated states of the pseudotime progression, suggesting that there is no preference for cells to remain in certain transcriptional states based on their TCR clonotype (Extended Data Fig. 8d ). To provide direct experimental evidence of the proposed lineage relationship model, we examined the potential of HPV-specific PD-1 + stem-like and the more terminally differentiated CD8 T cells to proliferate and differentiate after stimulation with the cognate HPV peptide in vitro. We took TILs directly from the tumour, labelled them with CellTrace Violet (CTV) and then sorted CTV-labelled CD8 TILs by flow cytometry, on the basis of the expression of the surface markers PD-1, TIM-3 and CD39, into stem-like (PD-1 + TIM-3 − CD39 − ) and terminally differentiated (PD-1 + TIM-3 + CD39 + ) CD8 T cells (Fig. 5a , Extended Data Fig. 9a ). Of note, stem-like cells obtained through this gating strategy expressed high levels of TCF-1 (Extended Data Fig. 9a ). 
The two sorted PD-1 + subsets were then cultured for five days in the presence of patient-matched autologous PBMCs pulsed with the tetramer-specific HPV-derived peptide. In the absence of the HPV peptide, there was minimal to no proliferation of either the stem-like or the terminally differentiated CD8 T cells isolated from the tumour (Fig. 5b ). However, stimulation of the stem-like cells with HPV peptide resulted in extensive proliferation of the HPV-specific tetramer-positive CD8 T cells, with these cells undergoing up to seven divisions. In contrast, only minimal to no proliferation was observed when the terminally differentiated HPV-specific CD8 T cells were stimulated with the same peptide-pulsed PBMCs (Fig. 5b ). The superior proliferative capacity of HPV-specific stem-like CD8 T cells was observed for all seven samples as measured by the replicative index—a measure of the number of divisions the cell undergoes after antigen stimulation (Fig. 5c ). These results clearly show that the HPV-specific PD-1 + stem-like CD8 T cells present in the tumour have proliferative potential, whereas the more differentiated HPV-specific CD8 T cells have lost the ability to divide after antigen stimulation. Fig. 5: HPV-specific PD-1 + TCF-1 + stem-like CD8 T cells in the tumour have proliferative capacity and can differentiate into effector-like cells. a , Experimental design to isolate stem-like and terminally differentiated subsets from the tumour and test the proliferative and differentiation capacity of HPV-specific CD8 T cells. b , Representative plots show HPV-tetramer staining and CTV dilution of stem-like and terminally differentiated cells after five days of culture with autologous PBMCs in the presence or absence of HPV-specific peptide. c , Replication index of stem-like ( n = 7) and terminally differentiated ( n = 6) cells. Histograms show representative CTV dilution of tetramer-specific CD8 T cells. 
d , Representative plots showing CTV dilution and expression of selected markers after five days of stimulation with HPV-specific peptide (S) or unstimulated with no peptide (U). Summary plots show the percentage of cells expressing the indicated markers. Mean and s.d.; two-sided unpaired Mann–Whitney U test. Full size image We next investigated whether the antigen-induced proliferation of HPV-specific stem-like CD8 T cells was associated with differentiation into more effector-like and terminally differentiated cells. Stem-like CD8 T cells cultured in the absence of antigenic stimulation expressed low levels of TIM-3, granzyme B, and CD39, thus maintaining their in vivo phenotype (Fig. 5d ). However, stimulation of these HPV-specific stem-like CD8 T cells with the cognate HPV peptide resulted in marked upregulation of TIM-3, granzyme B and CD39 by the tetramer-positive proliferating CD8 T cells (Fig. 5d ). CD25 expression also increased on these dividing HPV-specific CD8 T cells (Extended Data Fig. 9b ). In marked contrast, the terminally differentiated HPV-specific CD8 T cells did not proliferate after antigen stimulation, retained high levels of TIM-3, granzyme B and CD39 (Fig. 5d ) and only showed minimal upregulation of CD25 (Extended Data Fig. 9b ). Both the stem-like and the terminally differentiated HPV-specific CD8 T cells expressed CD45RO, and expression of this molecule did not change after peptide stimulation (Extended Data Fig. 9b ). It is worth noting that the HPV-specific PD-1 + stem-like CD8 T cells expressed the CD45RO isoform and are distinct from the human stem-cell-like memory CD8 T (T SCM ) cells that were defined on the basis of CD45RA expression 22 . In addition to the acquisition of markers associated with a more differentiated state, the HPV-specific stem-like cells also decreased the expression of surface markers that are associated with the stem-like state. 
HPV-specific PD-1 + stem-like CD8 T cells that were cultured without antigenic stimulation expressed IL-7R, a cytokine receptor providing critical signals for cell survival, but downregulated their expression of IL-7R after antigen-induced proliferation (Fig. 5d ). Stem-like CD8 T cells cultured in the absence of antigenic stimulation maintained expression of CD28—an important co-stimulatory molecule required for the proliferative burst upon PD-1 blockade 23 —whereas terminally differentiated cells did not express significant levels of CD28 (Extended Data Fig. 9b ). Notably, antigen stimulation of stem-like CD8 T cells resulted in the retention of CD28 expression by many of the dividing cells (Extended Data Fig. 9b ). Together, these studies clearly demonstrate that HPV-specific PD-1 + TCF-1 + stem-like CD8 TILs—in contrast to more differentiated cells—have the ability to proliferate and differentiate into more effector-like CD8 T cells after antigenic stimulation, and could act as resource cells to maintain HPV-specific CD8 T cell responses in patients with HNSCC. Implications Our findings have implications in three main areas: first, for PD-1-based therapy of patients with HPV-positive HNSCC; second, for HPV therapeutic vaccination strategies; and third, for viral-mediated cancers in general. First, we show that the cellular machinery for responding to PD-1 directed immunotherapy is present in patients with HPV-positive HNSCC. Functional HPV-specific PD-1 + TCF-1 + CD8 T cells that can proliferate and differentiate into effector-like cells after antigen stimulation were readily detectable in the tumours of these patients. Definitive studies in preclinical mouse models have shown that this subset of CD8 T cells provides the proliferative burst of effector-like T cells after PD-1 blockade and there is also evidence suggesting that the presence of these PD-1 + TCF-1 + CD8 T cells in human tumours correlates with responsiveness to PD-1 therapy 6 , 7 , 8 , 12 . 
Patients with HPV-positive HNSCC who had undergone conventional treatments and then relapsed have shown low response rates to PD-1 therapy 24 . Our results now provide a strong rationale for further investigation of PD-1 targeted neoadjuvant and adjuvant therapies in HPV-positive head and neck cancer. Of note, our studies were done in treatment-naive patients, whereas the PD-1 clinical trials that showed low responsiveness were done in patients with HNSCC who had received extensive previous treatments (radiation, chemotherapy and surgery) that may have reduced the number of HPV-specific T cells. Future PD-1 therapy trials should take this into consideration. Second, there have been considerable efforts in developing therapeutic HPV vaccines for patients with cancer but most of these studies have focused on HPV E6 and E7 as the target antigens for the vaccine 25 . Our studies now show that E2 and E5 are major targets of the intratumoral CD8 T cell response in patients with HPV-positive HNSCC. Thus, therapeutic vaccination strategies for patients with HPV-positive HNSCC should, in addition to E6 and E7, also consider including E2 and E5 as vaccine antigens to elicit tumour-reactive CD8 T cell responses of maximal breadth. This vaccine-induced response could be further enhanced by concurrent PD-1 blockade, which has previously been shown to synergize with therapeutic vaccination in a preclinical chronic infection model 26 . Finally, it is important to emphasize that viral-mediated cancers are a major public health problem worldwide 27 . Our study characterizing the CD8 T cell response in patients with HPV-positive HNSCC, along with another study that examined HPV-specific B cell responses in the same patients 28 , can serve as benchmarks for the analysis of immune responses in other viral-mediated cancers. Methods Data reporting No statistical methods were used to predetermine sample size. 
The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Patients, isolation of TILs and HLA-typing Untreated patients with stage I HPV-positive HNSCC were enrolled at Emory University Hospital between March 2017 and April 2019 in accordance with an approved Emory University Institutional Review Board protocol (WINSHIP4008-17), with all patients providing informed consent. Samples from primary tumour, metastatic lymph nodes and peripheral blood were obtained at the time of surgery. TILs and PBMCs were isolated as described previously 28 and cryopreserved. DNA from whole blood or PBMCs was isolated using the Qiagen DNeasy Blood & Tissue kit and HLA class I loci were genotyped using a sequence-base typing method. Multiplex immunohistochemistry Seven-colour multiplex immunohistochemistry was performed with the OPAL Polaris system (Akoya Biosciences). Four-to-five-micrometre sections of formalin-fixed paraffin-embedded tumours from the patients with HNSCC were deparaffinized, hydrated and stained manually with anti-CD3 (clone F7.2.38, Dako), CD8 (clone C8/144B, eBiosciences), CD20 (clone L26, Invitrogen), TCF-1 (clone C63D9, Cell Signaling Technologies (CST)), PD-1 (clone D4W2J, CST) and cytokeratin (clone AE1/3, Dako) antibodies. Heat induced epitope retrieval (HIER) in EDTA (pH 9) or citrate (pH 6) buffer was performed before blocking non-specific binding and staining the tissues with the primary antibodies. The sections were sequentially stained with each primary, HRP-conjugated secondary antibody, tyramide signal amplification, and OPAL fluorophore according to the manufacturer’s instructions. OPAL 480, 520, 570, 620, 690 and 780 dyes were used. The sections were counterstained with spectral DAPI (Akoya Biosciences). The stained slides were imaged and scanned using the Vectra Polaris multispectral imaging system. 
Image analysis Random tumour areas (that included tumour parenchyma and stroma) of high-resolution whole-slide scanned images were first annotated in PhenoChart 1.0.12 (Akoya Biosciences) and then analysed with the inForm2.4.8 software (Akoya Biosciences). Tumour parenchyma and stroma areas were identified using DAPI and cytokeratin as a marker for the tumour parenchyma. Adaptive cell segmentation was accomplished on the basis of nuclear DAPI and membranous CD8. At least 30 cells from the phenotypes of the immune cells of interest were manually selected and used to train the software for automated phenotyping. Five to six representative regions per tumour sample were analysed for a total area of 3.2 to 3.8 mm 2 per tumour. The data were processed in R studio using phenoptr (v.0.2.5) and phenoptrReports (v.0.2.6) (Akoya Biosciences). Flow cytometry Cryopreserved PBMCs and TILs were thawed, counted and stained in Dulbecco’s phosphate buffered saline (DPBS) + 2% FBS with LIVE/DEAD Fixable Yellow or Aqua Dead Cell Stain Kit (1:200, Life Technologies) and surface-stain antibodies at 0.5 tests per 1 × 10 6 cells in brilliant stain buffer (BD Bioscience, 563794; for antibodies see Supplementary Table 1 ). For tetramer stain, tetramers were generated from monomers according to standard protocol 29 and stored at -20 °C in 50% glycerol. Cells were incubated for 10 min at room temperature with the tetramers at 1:100 in DPBS, followed by addition of extracellular staining antibodies for 25 min. Cells were fixed and permeabilized with a Fixation/Permeabilization Kit (Invitrogen, 00-8333-56) or BD Cytofix/Perm Kit (BD, 554714) for ICCS, followed by intracellular staining (for antibodies see Supplementary Table 1 ). All data were acquired the same day on an LSR II cytometer, FACS Canto II or FACSymphony A5 with FACSDiva v.8.0.1 (BD Biosciences) and analysed using FlowJo software (v.10.6.1, Tree Star; for gating strategy see Extended Data Fig. 10a, b ). 
HPV-specific T cell expansion PBMCs and TILs from 17 patients with HPV-positive HNSCC were used for in vitro T cell expansion. Potential CD8 T cell epitopes derived from HPV proteins E2, E5, E6 and E7 and presented by a reference set of 27 human leukocyte antigens (HLA-A, B and C) covering 97% of the population were predicted using the Immune Epitope Database (IEDB) 30 . A total of 251 predicted 9–10-amino-acid-long peptides with a percentile rank of less than or equal to 1 (Supplementary Table 2 ) were synthesized as crude material (A&A Labs) and ultimately resuspended in DMSO. A peptide pool was prepared containing all 251 peptides from proteins E2 (125 peptides), E5 (55 peptides), E6 (50 peptides) and E7 (21 peptides). PBMCs were cultured in complete CTS OpTmizer medium (CTS OpTmizer T Cell Expansion SFM with CTS supplement A1048501, substituted with l -glutamine, penicillin–streptomycin and 2% human serum, Sigma-Aldrich, H3667) in the presence of the HPV-peptide pool (1 μg ml −1 per peptide), rIL-2 (Peprotech, 50 IU ml −1 ), rIL-7 (Peprotech, 25 ng ml −1 ) and rIL-15 (Peprotech, 25 ng ml −1 ). HPV-peptides were only added on the first day of culture, whereas cytokines were supplemented whenever cells were split during the two-week expansion period. At day 13 of cell culture, expanded cells were washed and rested overnight in cytokine-free medium. Identification of epitopes by ELISpot and ICCS Expanded and rested cells were plated in an IFNγ ELISpot (1 × 10 5 cells per well, BD Elispot Human IFNγ ELISPOT, 551873, IP Sterile White plates 0.45 µm hydrophobic high protein binding, Merck Millipore, S2EM004M99) and stimulated overnight with 32 different megapools containing up to 16 of the predicted peptides, with each peptide being represented in 2 megapools (2 µg ml −1 per peptide). If the two megapools were positive, the potential positive peptide (1 µg ml −1 ) was used individually to stimulate rested expanded cells in an IFNγ ELISpot. 
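The megapool layout described above (32 megapools of up to 16 peptides, with every peptide present in exactly 2 pools) is consistent with a standard matrix-pooling design: place the 251 peptides on a 16 × 16 grid, pool by row and by column, and identify a reactive peptide as the intersection of its two positive pools. The study does not publish its actual pool assignments, so the sketch below is a generic illustration with placeholder peptide IDs:

```python
# Generic sketch of the matrix-pooling idea behind the 32 megapools
# (16 row pools + 16 column pools; each peptide appears in exactly 2 pools).
# Pool assignments and peptide IDs here are placeholders, not the study's.
def build_pools(peptides, n_rows=16, n_cols=16):
    pools = {f"R{r}": set() for r in range(n_rows)}
    pools.update({f"C{c}": set() for c in range(n_cols)})
    for i, pep in enumerate(peptides):
        pools[f"R{i // n_cols}"].add(pep)  # row pool
        pools[f"C{i % n_cols}"].add(pep)   # column pool
    return pools

def deconvolve(pools, positive_pool_names):
    """Candidate peptides lie in the intersection of the positive pools."""
    hits = [pools[name] for name in positive_pool_names]
    return set.intersection(*hits) if hits else set()

peptides = [f"pep{i:03d}" for i in range(251)]
pools = build_pools(peptides)
# 32 pools, each holding at most 16 peptides
assert len(pools) == 32
assert all(len(members) <= 16 for members in pools.values())
print(deconvolve(pools, ["R2", "C5"]))  # -> {'pep037'}
```

If both pools containing a peptide score positive in the ELISpot, that single peptide is then retested individually, exactly as the screening workflow above describes.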
Peptides scoring positive were further confirmed to be recognized by CD8 T cells using ICCS. Cryopreserved expanded cells were thawed, rested overnight and simulated with the candidate peptide (10 µg ml −1 ) in the presence of brefeldin A and monensin (BD Golgi-Plug, 555029, and BD Golgi-Stop, 554724, 1:1,000). After 6 h, cells were washed with PBS + 2% FBS and stained for flow cytometric analysis as detailed above. For each recognized peptide sequence, the binding affinity to the responder’s HLA class I alleles was predicted in silico ( ). The HLA class I allele with the highest binding affinity for each epitope was retained as a HLA–peptide pair. Generation of HPV-specific tetramers Peptides of identified HLA–peptide pairs were synthesized at greater than 90% purity (A&A Labs) and the binding affinity to the respective HLA class I alleles was determined by in vitro binding assays 31 . Ten HLA–monomers were generated by the NIH Tetramer Core facility, one by Immunaware (HLA-A*29:02 E5 65–73 (peptide sequence IFVYIPLFL)) and one in our laboratory (HLA-A*02:01-E5 46–54 (peptide sequence VLLLWITAA)). Monomers were then tetramerized in house. Tetramer staining was tested on expanded PBMCs and frequencies of tetramer-positive CD8 T cells were similar to frequencies of IFNγ-positive CD8 T cells previously quantified by ICCS. We successfully produced the following HLA–tetramers: HLA-A*01:01–HPV E2 151–159 (peptide sequence QVDYYGLYY), HLA-A*01:01–HPV E2 329–337 (peptide sequence KSAIVTLTY), HLA-A*02:01–HPV E5 46–54 (peptide sequence VLLLWITAA), HLA-B*35:01–HPV E5 55–63 (peptide sequence SAFRCFIVY), HLA-A*29:02–HPV E5 65–73 (peptide sequence IFVYIPLFL), HLA-B*35:01–HPV E6 52–61 (peptide sequence FAFRDLCIVY) and HLA-B*27:05–HPV E6 83–90 (peptide sequence FAFRDLCIVY). 
Sorting of HPV-positive CD8 T cells from lymphocytes infiltrating tumour and metastatic lymph nodes Cryopreserved TILs from primary tumours or metastatic lymph nodes were thawed, rested for 3 h in RPMI + 10% FBS (R10, supplemented with 1% penicillin, streptomycin and l -glutamine) at 37 °C, 5% CO 2 and stained with tetramer in two different colours, followed by surface marker staining as described above. Live CD3 + CD19 − CD14 − CD16 − CD4 − CD8 + double-tetramer-positive cells (for gating strategy see Extended Data Fig. 10c ) were sorted on an ARIA II (BD Bioscience) into PBS with 2% FBS. For patients for whom scRNA-seq was performed, we confirmed by PCR that epitopes were not mutated and that the DNA encoding the corresponding epitopes was detectable in the tumour tissue (data not shown). Single-cell and TCR analysis Alignment, filtering, barcode counting and unique molecular identifier counting were performed using Cell Ranger v.3.1. Data were further analysed using Seurat v.3.1.4 32 . All single-cell analysis was performed using R v.3.6.2. In brief, cells with a percentage of mitochondrial genes below 0.07% were included. Cells with more than 6,000 or fewer than 1,000 detected genes were considered as outliers and excluded from the downstream analyses. Samples from different patients were merged using the Seurat function FindIntegrationAnchors, which identifies and calculates anchors between pairs of datasets to reduce the sample batch effect. Principal component analysis was performed, and the top 6–8 most statistically significant principal components were used for uniform manifold approximation and projection (UMAP) analysis. Components enriched for cell-cycle genes were excluded from UMAP clustering. Marker genes that were differentially expressed within each cluster were identified by the Seurat function FindAllMarkers with average log-transformed fold change cut-offs of 0.5.
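The cell-level quality filtering described above reduces to simple thresholding on two per-cell statistics. A stdlib-only sketch, with invented barcodes and values; the thresholds are the ones stated in the text, with the mitochondrial cut-off taken at face value:

```python
# Keep a cell only if its mitochondrial-gene percentage is under the cut-off
# and its number of detected genes is neither outlier-low nor outlier-high,
# mirroring the per-cell QC filters described in the text.
MITO_MAX = 0.07                    # percentage cut-off as stated in the text
GENES_MIN, GENES_MAX = 1000, 6000  # detected-gene window

def passes_qc(cell):
    return (cell["pct_mito"] < MITO_MAX
            and GENES_MIN <= cell["n_genes"] <= GENES_MAX)

cells = [
    {"barcode": "AAACCTG", "pct_mito": 0.03, "n_genes": 2500},  # kept
    {"barcode": "GGTTCAC", "pct_mito": 0.30, "n_genes": 2500},  # high mito
    {"barcode": "CCAGTAG", "pct_mito": 0.02, "n_genes": 800},   # too few genes
    {"barcode": "TTAAGCA", "pct_mito": 0.01, "n_genes": 7000},  # too many genes
]
kept = [c["barcode"] for c in cells if passes_qc(c)]
```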
Scaled expression data of the top 20 marker genes were used to create the heat maps. Gene set scoring was performed using the VISION R package v.2.1.0, following the scoring algorithm as previously described 33 . In brief, the expression of signature genes is weighted on the basis of predicted dropout probability calculated from nearest neighbours, and the normalized expression summed for all genes in the gene set. TCR analysis was performed using Cell Ranger. Unique clonotypes were defined by CDR3 alpha and beta sequences. Cells for which we failed to recover both alpha and beta chains were treated as unique clones, even if the recovered chain overlapped with another clonotype that had found both alpha and beta chains. Pseudotime 34 was calculated using the Monocle3 v.0.2.3.0 package. In brief, we enforced a model in which cells started in the root node in the stem-like population. This assumption was based on extensive data in mice and humans indicating that this cell is the precursor of other T cell populations. We then calculated a trajectory that first passed through the transitionary population and ended in a leaf node of the terminally differentiated population. These assumptions were based on the transitionary population of cells expressing intermediate levels of many genes between the stem and terminally differentiated cells. In vitro proliferation assay Cryopreserved TILs from primary tumours or metastatic lymph nodes were thawed, rested for 3 h in RPMI + 10% FBS (R10, supplemented with 1% penicillin, streptomycin and l -glutamine) at 37 °C, 5% CO 2 and subsequently labelled with CTV, followed by staining of cell-surface markers (as described above). 
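The clonotype rule described above (a paired CDR3 alpha/beta sequence defines a clonotype; cells missing either chain stay as unique clones) can be sketched as follows, with invented CDR3 fragments standing in for real sequences:

```python
# Group cells into clonotypes by their paired CDR3 alpha/beta sequences;
# a cell missing either chain is kept as its own unique clone rather than
# merged into a clonotype that shares its recovered chain.
from collections import Counter

def clonotype_counts(cells):
    counts = Counter()
    for i, (alpha, beta) in enumerate(cells):
        if alpha is not None and beta is not None:
            counts[(alpha, beta)] += 1
        else:
            # unique singleton clone, keyed by cell index
            counts[("cell_%d" % i, alpha or beta)] += 1
    return counts

cells = [
    ("CAVRDTG", "CASSLGQ"),   # paired chains
    ("CAVRDTG", "CASSLGQ"),   # same clonotype -> expanded clone of size 2
    (None,      "CASSLGQ"),   # beta only: unique clone despite matching beta
]
counts = clonotype_counts(cells)
```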
PD-1 + CD45RA − CD8 T cells (live CD3 + CD19 − CD14 − CD16 − CD4 − CD8 + ) were sorted on the basis of the expression of TIM-3 and CD39 on an ARIA II (BD Bioscience) into PBS with 2% FBS, with stem-like cells being TIM-3 − CD39 − and terminally differentiated cells being TIM-3 + CD39 + (for gating strategy see Extended Data Fig. 9a ). Sorted cell populations were cultured alone or with patient-matched, irradiated (20 Gy) PBMCs pulsed with 10 μg ml −1 peptide for 5 days in R10 with 20–50 U ml −1 rIL-2 at 37 °C, 5% CO 2 , followed by staining for flow cytometric analysis. Statistical analysis Data are presented as mean ± s.d. Paired two-tailed t -tests and Wilcoxon matched-pairs rank tests were used when appropriate and as indicated, with * P < 0.05, ** P < 0.01, *** P < 0.001 and **** P < 0.0001. All statistical analyses were performed using GraphPad Prism v.8.3. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The following protein sequences were used for predicting and generating HPV peptides: E2 (Uniprot P03120), E5 (Uniprot P06927), E6 (Uniprot P03126), and E7 (Uniprot P03129). scRNA-seq data are available in the NCBI Gene Expression Omnibus (GEO) database under the accession number GSE180268 . Other relevant data are available from the corresponding authors upon reasonable request. Code availability Custom code for scRNA-seq is available from the corresponding authors upon reasonable request. | A new study from scientists at Emory Vaccine Center and Winship Cancer Institute of Emory University reports that the immune cells that are the major targets of immune checkpoint inhibitors are present in tumors from head and neck cancer patients. The study focuses on head and neck tumors that are positive for human papillomavirus (HPV), which is becoming one of most common types of head and neck cancers treated in the Western world. 
The results are scheduled for publication in Nature. They suggest that checkpoint inhibitors, which have transformed the treatment of several types of cancer, could be uniquely effective against this type of head and neck cancer. The results also indicate that the experimental approach of therapeutic vaccination for HPV+ cancer could be broadened to include more elements of the virus, to potentially trigger a broader and stronger immune response. Researchers from Rafi Ahmed's lab at Emory Vaccine Center collaborated with the co-directors of the Winship Head and Neck Cancers working group, oncologists Nabil Saba, MD and Mihir Patel, MD, to obtain samples from patients with head and neck tumors early in the course of treatment. "About five years ago, we began to have an influx of patients that sought out our center for surgical treatment," Patel says. "We often heard some variation of a similar story: I was sick with cold-like symptoms, and once that resolved I noticed swelling in a lymph node on the side of my neck. Stories like this made us think about how the immune system might play a unique role, different than typical smoking-related head and neck cancers." The team wanted to learn more about the different kinds of CD8 or "killer" T cells present within the cancers; CD8 T cells are specialized immune cells capable of detecting and killing virus-infected or tumor cells, if they are not constrained by regulatory signals. The inhibitory receptor PD-1 is highly expressed on exhausted CD8 T cells in chronic viral infections and cancer, and stem-like PD-1+ CD8 T cells are crucial for maintaining tumor-specific CD8 T cell responses. The majority of currently available checkpoint inhibitors, such as pembrolizumab and nivolumab, block the PD-1 signaling pathway.
"Our results show that a subset of HPV-specific CD8 T cells in the tumor exhibits a striking resemblance to the stem-like CD8 T cells our lab has previously defined in mouse models as proliferating in response to PD-1 blockade," says Andreas Wieland, Ph.D., co-lead author of the paper and an instructor in Ahmed's lab. "It is reasonable to assume that these cells would similarly provide a proliferative burst in response to PD-1 blockade in these patients. However, this remains to be formally tested." HPV-positive tumors do have a relatively good response to conventional forms of treatment such as radiation and chemotherapy, Wieland adds. The group of patients studied at Winship was treatment-naïve when tumor samples were obtained; how radiation and chemotherapy affect the number and phenotype of T cells in the tumor needs additional investigation. "These findings greatly enhance our understanding of CD8 T cell responses in the tumor micro-environment in HPV-related oropharynx cancers, and likely other virally mediated tumors," Saba says. "It confirms the existence of the different lineages necessary for an effective T cell specific anti-tumor response. Taking advantage of the local immune-response by avoiding its possible early elimination by traditional therapeutic modalities may pave the way to an improved clinical outcome for patients. It may have implications for how best to incorporate immunotherapy in the treatment of other virally mediated tumors." "We now have an inclination that incorporating immune therapy with PD-1 blockade prior to surgery or radiation may benefit patients," Patel says. "We are actively in the process of developing 'window of opportunity' studies to understand this." Looking at both primary tumors and metastatic lymph nodes, the researchers were able to detect both tumor-specific stem-like CD8 T cells, which can proliferate in response to HPV peptides, and more terminally differentiated cells that do not proliferate.
In contrast to significant numbers of tumor-specific CD8 T cells in the tumors, tumor-specific cells appeared at a very low abundance in patients' blood, suggesting that they preferentially reside in tumors. The team also found that the different CD8 T cell subsets in the tumor microenvironment differ in their localization, with stem-like cells residing in distinct niches within the stroma and away from the tumor cells themselves. Concentrating on HPV-positive tumors in this study facilitated the study of tumor-specific T cells with defined specificities across several patients, as the virus provides a defined set of tumor-associated antigens, whereas in other types of cancer the antigens caused by mutations will vary from individual to individual. Co-first authors of the paper are Haydn Kissick, Ph.D., assistant professor of urology and microbiology and immunology, and Christiane Eberhardt, MD, a former postdoctoral fellow in Ahmed's lab who is now at the University of Geneva. Patel is also associate professor of otolaryngology at Emory University School of Medicine. Saba is professor and Vice Chair in the Department of Hematology and Medical Oncology. | 10.1038/s41586-021-03862-z
Biology | Build an ark? Biologists discuss conservation prioritization | Mazel, Florent and Matthew W. Pennell, Marc Cadotte, Sandra Diaz, Giulio Valentino Dalla Riva, Richard Grenyer, Fabien Leprieur, Arne O. Mooers, David Mouillot, Caroline M. Tucker and William D. Pearse. "Prioritizing phylogenetic diversity does not reliably conserve functional diversity," Nature Communications. 23 July 2018. DOI: 10.1038/s41467-018-05126-3 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-018-05126-3 | https://phys.org/news/2018-07-ark-biologists-discuss-prioritization.html | Abstract In the face of the biodiversity crisis, it is argued that we should prioritize species in order to capture high functional diversity (FD). Because species traits often reflect shared evolutionary history, many researchers have assumed that maximizing phylogenetic diversity (PD) should indirectly capture FD, a hypothesis that we name the “phylogenetic gambit”. Here, we empirically test this gambit using data on ecologically relevant traits from >15,000 vertebrate species. Specifically, we estimate a measure of surrogacy of PD for FD. We find that maximizing PD results in an average gain of 18% of FD relative to random choice. However, this average gain obscures the fact that in over one-third of the comparisons, maximum PD sets contain less FD than randomly chosen sets of species. These results suggest that, while maximizing PD protection can help to protect FD, it represents a risky conservation strategy. Introduction We are in the midst of a period of heightened biological extinction, with rates several orders of magnitude higher than background rates estimated from the fossil record 1 , 2 , 3 . In addition to having potentially widespread consequences for the functioning of ecosystems and the provisioning of valuable ecosystem services, this situation poses an immense moral challenge 4 , 5 , 6 , 7 , 8 . 
To the extent that resources for conservation actions remain limited, agonizing choices as to which species most warrant attention become necessary 9 , 10 . To keep humanity’s options open, and our common legacy as rich as possible, it is widely argued that we should seek to maximize the biological diversity of form and function in conservation strategies 6 , 7 , 8 , 9 , 10 , 11 , 12 . The biological diversity of form and function can be measured as functional diversity (FD) (see Methods). However, in practice, it is challenging to prioritize species on the basis of FD: we have imperfect knowledge about which, and how many, traits and functions are important in a given context, how these traits and functions vary among species and across space, and how the importance of traits may change in the future 13 . Many researchers have therefore advocated for a hypothesis that we name the ‘phylogenetic gambit’; that is, if species traits reflect their shared evolutionary history, then the pattern of that evolutionary history—their phylogeny—should serve as a useful stand-in for unmeasured and unmeasurable traits 9 , 14 , 15 . The phylogenetic gambit implies that maximizing phylogenetic diversity (PD), i.e., the breadth of evolutionary history, will ensure that a wide variety of forms and functions are present within a species set 14 , 15 , 16 , 17 . Following this logic, phylogenetic diversity has formed the basis of global conservation schemes, notably the EDGE of existence program 18 , has been used by restoration biologists 19 , and has been widely embraced by researchers across the biodiversity sciences 20 , 21 , 22 , 23 . Despite this enthusiasm, the critical question of whether maximizing PD will actually capture more FD than prioritization schemes that ignore phylogeny has, to our knowledge, never been empirically tested 16 .
Some studies have discussed 24 , 25 and documented the relationship between FD and PD, both at regional 26 and global scales 20 , 22 , and many of these studies have shown that maximizing PD does not maximize FD. However, such studies do not test the fundamental phylogenetic gambit at the heart of all PD-based conservation strategies: that maximizing PD captures more FD than randomly choosing species. No one would dispute that the best way to maximize FD is to prioritize FD, but phylogenetic diversity has emerged as a prioritization tool because we rarely have sufficient trait data to calculate FD. Here we test whether PD-based conservation passes a much less stringent, but ultimately more fundamental, test: is conserving on the basis of PD better than conserving at random? Worryingly, a recent theoretical study has demonstrated that PD could be a poor surrogate for FD and, in some scenarios, prioritizing species on the basis of PD could actually capture less FD than if species were simply selected at random 16 . This recent work points to the need for empirical tests of the phylogenetic gambit, i.e., whether—within a given species pool—sets of species selected to maximize PD actually contain more FD than sets of species selected without regard to evolutionary relatedness. We clarify what our goals are in testing the utility of PD to capture FD. First, we take as given that maximizing PD is not the overarching goal per se of PD-maximization schemes, but rather that a PD maximization strategy is valued for its ability to capture more FD compared to a strategy that ignores phylogeny. Second, it is important to note that we are selecting species sets to maximize PD or FD within a region. While this is a simplification, as conservation actions often aim to select sets of areas (e.g., in reserve design), the only global phylogenetically informed conservation initiative is species-centered 18 (EDGE).
Critically, the question we raise has been shown to be distinct from asking whether traits have phylogenetic signal (whether closely related species tend to share similar sets of traits), since PD can be a poor surrogate for FD even if traits exhibit phylogenetic signal 16 . Here, we test the phylogenetic gambit using data on ecologically relevant traits from >15,000 vertebrate species. We find that maximizing PD results in an average gain of 18% of FD relative to random choice. However, this average gain obscures the fact that in over one-third of the comparisons, maximum PD sets contain less FD than randomly chosen sets of species. These results suggest that, while maximizing PD protection can help to protect FD, it represents a risky conservation strategy. Results Approach We evaluate the PD–FD relationship for different species pools (taxonomic families and geographical assemblages, i.e., sets of species co-occurring at a given scale) using a large global dataset including trait, phylogenetic, and geographic range data for 4616 species of mammals, 9993 species of birds, and 1536 species of tropical fish. Specifically, we measure FD as functional richness (see Methods) and compute, for any given species pool, an estimate of surrogacy 27 , 28 ( S PD–FD , Fig. 1 ). S PD–FD represents the amount of FD sampled by the set of species chosen to maximize PD, relative to the FD sampled by the optimal set of species selected to maximize FD directly, with both components controlled for the expected FD from a random species set of the same size. S PD–FD will be positive if the averaged PD-maximized set contains more FD than the averaged random set, and negative if not. S PD–FD will equal 100% if the PD-maximization strategy is optimal (i.e., it maximizes FD).
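Given FD-accumulation curves for the three strategies of Fig. 1, S PD–FD is the area between the maxPD and random curves divided by the area between the optimal (maxFD) and random curves. A toy sketch of this ratio; the curve values below are invented, whereas the real curves are averaged over many draws per decile of species richness:

```python
# S_PD-FD = A / (A + B): A is the area between the maxPD and random
# FD-accumulation curves, A + B the area between the optimal (maxFD) and
# random curves (Fig. 1). Curves give FD at increasing fractions of the
# species pool; the result is a fraction (x100 for the percentages in the text).
def area_between(upper, lower):
    return sum(u - l for u, l in zip(upper, lower))

def surrogacy(fd_random, fd_maxpd, fd_maxfd):
    a = area_between(fd_maxpd, fd_random)         # "A" in Fig. 1
    a_plus_b = area_between(fd_maxfd, fd_random)  # "A + B" in Fig. 1
    return a / a_plus_b

fd_random = [0.20, 0.40, 0.60, 1.00]  # mean FD of random sets per decile
fd_maxpd  = [0.30, 0.55, 0.80, 1.00]  # mean FD of maxPD sets
fd_maxfd  = [0.50, 0.80, 0.95, 1.00]  # FD of the optimal (maxFD) sets
s = surrogacy(fd_random, fd_maxpd, fd_maxfd)      # positive: PD beats random
```

A maxPD curve running below the random curve yields a negative S PD–FD, the failure case discussed in the Results.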
We integrate S PD–FD for each species pool across all deciles of species richness, but because there are many sets of species that can maximize PD or that can be chosen at random, we computed S PD–FD based on the averaged FD over 1000 PD-maximized sets and 1000 random sets 16 . Fig. 1 A conceptual approach for evaluating whether PD is a good surrogate for FD. To evaluate if PD is a good surrogate of FD, we measure to what extent a species prioritization strategy that maximizes PD captures FD relative to an optimal and a random strategy. To do so, we compare FD accumulation curves (i.e., FD computed for an increasing proportion of the species pool considered) across these three different sampling strategies: the random sampling (i.e., rarefaction curve, averaged over 1000 sets), the maxPD sampling, and the maxFD (optimal) sampling (i.e., sets that maximize FD, see legend). Then, we measure the surrogacy of PD for FD ( S PD–FD ) as the area between the random and the maxPD curve (“ A ”, see legend) divided by the area between the random and the maxFD curve (“ A + B ”, see legend). If S PD–FD is positive, PD is a good surrogate for FD (the maximum value being 1 where PD is an optimal surrogate), whereas when S PD–FD is negative, preserving species based on PD is worse than preserving them at random. Mean surrogacy of PD for FD We find that selecting the most phylogenetically diverse sets of species within a given taxonomic family or within a given geographical location (large grid-cells across the globe) captures, on average, 18% more FD than sets of randomly chosen species (i.e., S PD–FD = 18%, SD ± 6.5% across pools, see Figs. 2 and 3 and Supplementary Figures 1 and 2 ). Although the surrogacy is generally positive, there was substantial variation across species pools.
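A maxPD set can be assembled greedily, adding at each step the species that contributes the most new branch length; the paper does not detail its selection procedure, so this is a generic sketch rather than the authors' code, on a toy four-species tree with invented branch lengths:

```python
# Greedy maxPD selection: each species is represented by the set of branches
# on its root-to-tip path; PD of a species set is the summed length of the
# union of those branches; at every step add the species that gains most PD.
branches = {"root_ab": 1.0, "root_cd": 1.0,          # internal branches
            "a": 0.2, "b": 0.2, "c": 3.0, "d": 0.2}  # tip branches
paths = {"a": {"root_ab", "a"}, "b": {"root_ab", "b"},
         "c": {"root_cd", "c"}, "d": {"root_cd", "d"}}

def pd(species):
    covered = set().union(*[paths[s] for s in species]) if species else set()
    return sum(branches[b] for b in covered)

def greedy_max_pd(k):
    chosen = []
    for _ in range(k):
        best = max(sorted(s for s in paths if s not in chosen),
                   key=lambda s: pd(chosen + [s]))
        chosen.append(best)
    return chosen

picked = greedy_max_pd(2)  # "c" first (long tip branch), then a tie-break
```

Note how the deeply divergent species "c" is always picked first regardless of whether its traits are functionally distinct, which is exactly the gap between PD and FD that the surrogacy test probes.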
For example, the surrogacy of PD varies widely from a minimum of −85% to a maximum of 92%, meaning that selecting the most phylogenetically diverse sets of taxa can capture either 85% less (or 92% more) FD than sets of randomly chosen taxa (Figs. 2 and 3 and Supplementary Figures 1 and 2 ). However, in 88% of the species pools, choosing sets of species according to PD captured more FD than would be expected at random (i.e., surrogacy values > 0 in 88% of the cases, see Figs. 2 and 3 ). This suggests that, on average, maximizing PD is a sound strategy to capture FD. Fig. 2 PD is a good surrogate for FD across geographical areas. The figure presents the distribution and correlates of S PD–FD for mammals ( a – c ), birds ( d – f ), and tropical fishes ( g – i ) separately across space. For each of the three groups, the S PD–FD frequency distribution is presented in top panels ( b , e , h ) along with its mean (vertical line). The color scheme is common to all panels, with blue indicating positive S PD–FD (maximizing PD captures more FD than random) and red indicating negative S PD–FD . S PD–FD geographical distribution is presented in middle panels ( a , d , g ). Relationships between S PD–FD and species pool richness are presented in panels c , f , i . In each grid cell, S PD–FD values are based on the mean over 1000 repetitions of random and maxPD set draws (there is only one maxFD set). The maps in this figure were generated using the R packages rasterVis and latticeExtra. The animal silhouette images in this figure were created by the corresponding author in Adobe Acrobat. Fig. 3 PD is a good surrogate for FD across clades. The figure presents the distribution and correlates of S PD–FD for mammals ( a – c ), birds ( d – f ) and fishes ( g – i ) across families. For each of the three groups, the S PD–FD frequency distribution is presented ( b , e ) along with its mean (vertical line). The color scheme is common to all panels.
S PD–FD phylogenetic distribution is presented in panels a , d and g . Relationships between S PD–FD and species pool richness are presented in panels c , f , and i . For each taxonomic family, S PD–FD values are based on the mean over 1000 repetitions of random and maxPD set draws (there is only one maxFD set). The animal silhouette images in this figure were created by the corresponding author in Adobe Acrobat. Reliability of the surrogacy of PD for FD Even though maximizing PD preserves more FD than averaged random selection, this averaged analysis does not capture the reliability of its performance. The PD maximization and the random selection strategies exhibit variation: simply by chance, random selection of species can capture very high (or, conversely, very low) FD, and the same may be true (to a previously unstudied degree) for PD. The extent of this variation is important: if it is less than the average difference, PD maximization is a reliable strategy as it will always yield more FD, but if it is not, then PD maximization could be unreliable for individual conservation interventions. To contrast these two situations, we measured the fraction of times that, within each species pool, the PD-maximization strategy yielded more FD than random selection (see Methods). PD-based selection was the best choice in 64% of cases (SD across species pools = 9%, see Supplementary Table 1 and Supplementary Figures 3 and 4 ), making it the better strategy but not a perfectly reliable one. Thus, while the PD-maximization strategy has a consistent positive effect (i.e., the average PD-maximization strategy yields more FD than the average random strategy), its effect is weak (i.e., the PD-maximization strategy still yields less FD than the random strategy in 36% of the trials within a species pool).
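The reliability measure described above amounts to comparing maxPD draws against random draws of the same size and reporting the fraction of wins. A toy sketch with invented FD values (the paper's exact pairing of draws may differ):

```python
# Fraction of pairwise trials in which a maxPD set captures more FD than
# a random set of the same size; 1.0 would mean maxPD always wins, while
# 0.5 would mean it is no more reliable than a coin flip.
def reliability(fd_maxpd_draws, fd_random_draws):
    wins = sum(p > r for p in fd_maxpd_draws for r in fd_random_draws)
    return wins / (len(fd_maxpd_draws) * len(fd_random_draws))

fd_maxpd_draws = [0.60, 0.70, 0.80]    # FD of alternative maxPD sets
fd_random_draws = [0.40, 0.65, 0.90]   # FD of random sets
r = reliability(fd_maxpd_draws, fd_random_draws)  # 5 wins out of 9 pairs
```

A positive mean surrogacy with r well below 1 reproduces the paper's point: PD maximization helps on average yet still loses to random selection in a sizeable fraction of individual trials.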
Surrogacy of PD appears to weaken as the species pool richness increases (on average, Spearman Rho between absolute surrogacy and species richness = −0.15), most clearly seen in the tropics and in species-rich families such as the Muridae (rats, mice, and allies) and Columbidae (pigeons and allies) (Figs. 2 and 3 ). This is likely because our measure of FD (see Methods) rapidly saturates as the number of selected species increases and species from these large pools harbor high functional redundancy, such that a random prioritization scheme performs relatively well, or at least no worse than other strategies (Supplementary Figure 5 ). In contrast, FD can be greatly increased by prioritization of species using PD from species-poor assemblages or clades. This is particularly the case in spatial assemblages containing multiple taxonomic orders, which are both phylogenetically and ecologically divergent from one another. Interestingly, the PD–FD relationship was not consistent across taxonomic scale: we found that, in contrast to patterns at the family level, for certain mammalian and avian orders (which are older than the families described above), using PD to select species is much worse for capturing FD than choosing species at random (see, for example, the Afrosoricida, Chiroptera, and Charadriiformes in Supplementary Figure 6 ). We then explored whether we can explain this variability within and between datasets, and in particular, why for some assemblages/clades a PD-prioritization strategy fails to capture more FD than random choice. It is often implicitly assumed that phylogenetic signal (i.e., the degree to which closely related species tend to harbor similar sets of traits) can be used to evaluate the effectiveness of PD as a surrogate for FD 5 , 15 , 16 , 17 .
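The saturation effect noted above follows from measuring functional richness as a convex-hull volume in trait space (the paper's later discussion of "the volume of the convex hull"): a species whose traits fall inside the existing hull adds nothing. A 2-D sketch using Andrew's monotone-chain hull, with two invented traits; the paper works in higher dimensions:

```python
# Functional richness as convex-hull volume (2-D area here): adding a
# functionally redundant species, whose traits lie inside the current hull,
# leaves FD unchanged, which is why FD saturates in species-rich pools.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_area(points):
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    hull = half_hull(pts) + half_hull(pts[::-1])  # lower + upper hull
    # shoelace formula for the area of the resulting polygon
    return 0.5 * abs(sum(hull[i][0] * hull[i - 1][1] -
                         hull[i - 1][0] * hull[i][1] for i in range(len(hull))))

traits = [(0, 0), (0, 2), (2, 0), (2, 2)]  # four functionally distinct species
fd_before = hull_area(traits)
fd_after = hull_area(traits + [(1, 1)])    # redundant species inside the hull
```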
Surprisingly perhaps, the value of PD as a surrogate for FD was only weakly correlated with the phylogenetic signal of the underlying traits (Supplementary Figures 7 and 8 ; on average, Spearman Rho = 0.17). Similarly, tree imbalance, which is known to affect surrogacy in simulations 16 , did not explain surrogacy in these empirical data (Supplementary Figures 7 and 8 ). For mammals, regions where PD did worse than random were located in the Sahara, southwestern Patagonia, southern Africa including parts of Madagascar, and New Guinea (Fig. 2 ). These latter two in particular are of concern since they are global conservation priorities on the basis of species endemism and habitat loss. We suggest two historical reasons for such idiosyncratic poor performance of PD. First, there is a tendency for a large carnivore species, either a top predator (e.g., cheetahs in the Sahara or foxes in Patagonia) or a large scavenger (e.g., the hyena in South Africa), to co-occur with a close relative with distinct traits in these areas (e.g., a desert cat with the cheetah or the aardwolf with the hyena, see Supplementary Figure 9 ). Only one of these closely related species will tend to be selected under prioritization schemes that maximize PD, thus reducing the volume of the convex hull on average when the functionally distinct one is not selected (the large predator or scavenger). This also seems to drive the low surrogacy of PD in Charadriiformes (especially Larus and Sterna ; see Supplementary Figure 10 ). Second, lineages in which traits evolve very slowly will contribute little to FD, even over long periods of time (branch lengths) that contribute greatly to PD. For example, in New Guinea many co-occurring bats with similar traits diverged long ago, such that they are always selected in the PD maximizing set, but do not add much to the convex hull, resulting in a poor surrogacy of PD for FD.
Such strong ecological niche conservatism is common in mammals 29 , e.g., in the Geomyidae: two basal branches of the Geomyidae tree harbor very similar traits (species descending from these branches are actually grouped in the same genus Thomomys ) while being distantly related in the phylogenies we used (Supplementary Figure 9 ). As such, they will be selected in all PD maximizing sets, but will not contribute greatly to FD. Discussion Maximizing PD in conservation decisions is now commonplace in the academic world 20 , 21 , 22 , 23 , 30 , 31 , 32 and is also starting to be used in real-world conservation prioritizations, for example with the EDGE of existence program 18 . To the best of our knowledge, there are no clear direct ecosystem function or health benefits that phylogenetic branch lengths provide. Rather, high PD is perceived as valuable because it is assumed to be a good proxy for high diversity of traits or “features” 14 (referred to in this paper as high functional diversity, FD), a hypothesis that we name the “phylogenetic gambit”. High FD might be valuable for a number of reasons, for example ecosystem functioning, ecosystem services, future “options values” 14 , 15 or “evolutionary potential” 15 , 33 . The utility of PD for conservation stems from the fact that calculating PD is relatively fast and cheap, often making it an easier way to prioritize species or areas than FD. Indeed, we have imperfect knowledge about which, and how many, traits and functions are important in a given context, how these traits and functions vary among species and across space, and how the importance of traits may change in the future 13 . Yet, even if convenient, maximizing PD can only be an effective and realistic conservation strategy to conserve FD if the phylogenetic gambit holds and maximizing PD yields more FD than a strategy that ignores phylogeny.
If maximizing PD yields less FD than a random strategy (i.e., the gambit fails), then researchers and conservationists should reconsider whether maximizing PD is a useful conservation strategy. A large body of literature has shown that maximizing PD does not maximize FD empirically 20 , 21 , 22 , 23 or even in simple theoretical cases 16 , but such work does not test the phylogenetic gambit of whether PD prioritization captures more FD than random selection 16 . Here we have shown that the phylogenetic gambit generally holds: PD is an effective conservation metric to capture FD. Yet we also reveal some limitations of this strategy: PD is good “on average”, but there is still some risk associated with taking it. We found that prioritizing the most phylogenetically diverse set of taxa in a region or clade will result in an average gain of 18% functional diversity relative to applying the same conservation effort without considering phylogeny, but this gain will decrease as species richness increases. In contrast to what has previously been implicitly assumed 15 , 16 , we find weak empirical evidence that the presence of phylogenetic signal in traits predicts whether PD-based conservation will prioritize FD. Our result suggests that PD is a reasonable conservation prioritization strategy, especially in species-poor clades or regions, or in the absence of meaningful data on functional traits. However, we note three important caveats to the use of this strategy. First, 18% extra FD may not always be a useful conservation target. It is currently unknown whether this added 18% of FD is actually of sufficient conservation value. Second, in cases of either recent trait divergence or, alternatively, very strong trait conservatism, a PD prioritization scheme can capture less FD than a random scheme.
Evolutionary biologists commonly focus on “unusual” clades with rapid divergences (e.g., cichlids); we show here that divergence does not have to be that spectacular (e.g., African carnivores) to alter the PD–FD relationship. Third, we found that while this strategy, on average, captures FD well, it is also somewhat unreliable, and 36% of the time will not capture more FD than random choice. This means that while the PD gambit can be a bet worth taking, it is still a bet with associated risk, not a sure thing. Our objective in this paper is to test the phylogenetic gambit using empirical datasets. This means that we do not aim to provide a coherent prioritization strategy 34 , or ready-to-use conservation guidelines. Indeed, we simplistically and implicitly assume that species will either be saved or go extinct, and we have not linked our various scenarios to any particular policy position or conservation objective other than maximizing FD within a phylogenetic clade or region 28 , 30 . In reality, conservation decisions reflect the interplay of social, economic, political, and scientific priorities, and do not necessarily result in the saving of target species (and therefore of their associated FD or PD). While our study is thus not directly applicable, the test we are conducting is actually critical to validate (or invalidate) the use of PD in conservation as a whole. While it is not clear whether our results would generalize to other taxa (although we hope that others will extend our work and test the phylogenetic gambit in other systems), we do feel it is important to consider the uncertainty that has been introduced into our analysis as a result of uncertainty associated with the spatial scale of our analysis, our phylogenetic data, and our choice of trait and measurement of FD. 
The scale of conservation activities can vary, from the global scale of the hotspots approach to local protected areas within a single country, but, unfortunately, the connection between these scales remains unclear. For example, if the motivation for protecting FD is to maintain community-driven ecosystem functions and services 5 , 6 , 35 , the value of a regional or global focus may be questionable 36 ; studies are increasingly focusing on local scales 6 . Ecologists are refining and improving our understanding of how local assemblages assemble within a regional context 37 , and while the concept of the "regional pool" of species is increasingly being viewed as a simplification, it is unlikely that regional- and local-scale patterns are totally disconnected. We emphasize that our results are relatively robust to variation in spatial scale (see Supplementary Fig. 3 ), but we acknowledge that future studies should test the phylogenetic gambit at more local scales as well. The set of species that maximizes PD obviously depends on the phylogenetic hypothesis used. No hypothesis is perfect or without uncertainty, and these phylogenetic uncertainties could in turn impact the composition of the set of species that maximize PD and hence the surrogacy values we compute. In this study, we explicitly took into account these uncertainties by using 100 different trees 38 , 39 . The explicit propagation of this phylogenetic uncertainty through to our results may underlie some of the uncertainty (risk) of our result for birds and mammals, and we suggest that future studies explicitly take into account phylogenetic uncertainty when testing the phylogenetic gambit. The motivation for our test of the surrogacy value of PD for FD is that ecologically relevant trait data are in short supply, especially for rare and data-deficient species.
Indeed, if it were not for this relative paucity of data, we could simply prioritize species based on their unique contribution to FD directly. Although there have been massive and well-funded efforts to collect and curate trait data from across the Tree of Life 40 , 41 , 42 , we are still far from having comprehensive coverage. Furthermore, despite recent progress 43 , it is still not fully understood which traits are most relevant for responses to environmental change or contribute most to certain ecosystem functions and services, and how these vary among systems. Our analysis suffers from a similar data limitation. We chose these traits because they are frequently collected in ecological studies, not because we know they are ecologically important. Our assumption is that their phylogenetic distribution is typical of those traits that are most desirable for the purpose of conservation and that our primary results are therefore widely applicable. While we did test the robustness of our results to the amount of trait information retained to compute FD (Supplementary Figure 10 ), it is true that, overall, we used a rather limited set of traits. We acknowledge that it is possible that many other potentially valuable traits are not captured by our measure of FD. One of the ideas behind the use of PD is that phylogeny might account for these unmeasured and unmeasurable traits 9 , 14 , 15 ; however, as this hypothesis is not testable (we do not have these traits), it seems risky to assume it is true. Our objective here is to test the phylogenetic gambit given the limited set of traits that we have: we consider that carrying out our imperfect test is more informative than not carrying out any test at all. In conclusion, we found that maximizing PD results in an average gain of 18% of FD relative to random choice. However, this average gain hides the fact that in over 1/3 of the comparisons, maximum PD sets contain less FD than randomly chosen sets of species.
These results suggest that, while maximizing PD can help capture FD, it represents a risky strategy. If maximizing PD is a risky strategy, should we then abandon the use of PD in conservation? We believe that before such a dramatic decision, our test should be repeated across space, traits and taxa, in order to narrow the uncertainties of our results. This is why we now urge others to expand our simple phylogenetic gambit test to other clades and other traits in order to test the generality of our findings. We hope that our study will stimulate the production of numerous tests to finally rigorously assess the usefulness of PD in conservation. Methods Type of data We use two classes of data to address the question of whether choosing sets of species according to PD captures the underlying trait diversity (as measured with FD) well. First, we used taxonomic groups (clades) of species as our unit of analysis ("species pool" hereafter) and, second, we investigated broad assemblages found across the globe. The former is more explicitly evolutionary, ensuring that our results are not driven by well-established relationships across large taxonomic groups (e.g., monotremes are distinct from placental mammals), and the latter is likely more relevant to actual conservation practice. We use distribution data to delineate geographical assemblage species pools and taxonomy to delineate clade-based species pools (namely families and orders). Distribution data For mammals, we used the distribution maps provided by the Mammal Red List Assessment ( ) for 4616 species. For birds, full (breeding and wintering) and breeding range distribution maps were extracted from BirdLife ( ) for 9993 species. The best resolution at which these maps should be used is still under discussion in the literature, so we decided to use the 40,000 km² resolution (200 × 200 km grid cells at the equator) that is commonly used at global scale 44 , 45 . The total number of grid cells was 3646.
Domestic and aquatic mammals were excluded from the analysis. In order to make sure our results were not driven by the important trait difference between volant and nonvolant mammals, we repeated our analyses excluding bats. For birds, we repeated our analysis using the full ranges. Finally, we evaluated the robustness of our results to the spatial resolution considered by repeating our analysis at a resolution of 100 × 100 km (number of cells was 13,330) for birds and mammals; we present these results in the supplementary materials, as they are qualitatively identical to those conducted at 200 × 200 km (Supplementary Figure 1 ). For fishes, we used a database of 1536 species, for which we had distribution, phylogenetic, and functional data. Distribution data were extracted from a global-scale distribution database 46 . Species composition was then extracted from grid cells of 5° × 5°, corresponding to approximately 555 × 555 km at the equator 47 . This grain size was chosen because it represents a good compromise between the desired resolution and the geographical density of information. Fish distribution data are available upon request to DM and FL. Maps were handled and plotted in R using the packages rasterVis 48 and latticeExtra 49 . Phylogenetic data In order to prioritize species to maximize PD, phylogenies of each species pool are needed. We used the first 100 published calibrated ultrametric trees of Jetz et al. 39 for birds and of Faurby and Svenning 38 for mammals. By repeating our analyses across a posterior distribution of phylogenetic hypotheses, we control and account for phylogenetic uncertainty.
For tropical reef fishes, we built a phylogeny for 18 families (i.e., Labridae, Scaridae, Pomacentridae, Chaetodontidae, Acanthuridae, Haemulidae, Balistidae, Carangidae, Serranidae, Lutjanidae, Sparidae, Caesionidae, Holocentridae, Mullidae, Muraenidae, Tetraodontidae, Lethrinidae, and Siganidae) by pruning a dated molecular phylogenetic tree for 7822 extant fish species 47 . These families were selected as the most representative tropical reef fish families, that is, they are abundant and speciose on tropical reefs. We grafted missing species onto the pruned phylogenetic tree (circa 50% of the 1536 studied species) based on published phylogenies for these families, supplemented by taxonomic information from fish identification guides and FishBase 47 , 50 . The corresponding tree is available on figshare ( ). We recorded, for each of these trees, a measure of imbalance (as measured by beta 51 ) and "tippiness" (as measured by gamma 52 ). For both mammals and birds, we chose to group species into families and orders. We used these groupings when calculating the purely phylogenetic, clade-based analyses, but not within the spatial, assemblage-based analyses. For the taxonomic analysis of mammal families, we removed two families (Dipodidae and Echimyidae) because of their very poor phylogenetic resolution (i.e., polytomies involving a large number of species). Trait data For birds and mammals, four traits (diet, (log-transformed) body mass, activity cycle, and foraging height) were extracted from EltonTraits 1.0 42 . These traits are generally assumed to appropriately represent Eltonian niche dimensions within an assemblage or clade of mammals or birds 53 , 54 . For fishes, we used a previously published database 12 . We used 6 categorical traits: size, mobility, period of activity, schooling, vertical position in the water column, and diet (for a full description of the dataset, see Mouillot et al. 12 ).
These traits have already been used to investigate community assembly rules 55 and to seek vulnerable fish functions 11 . Fish trait data are available upon request to DM and FL. For each clade and assemblage, we used the raw trait values (only body mass was log-transformed and rescaled by the clade/assemblage range of body masses) to compute distances between species using the Gower distance, and used PCoA to summarize the trait space in a few dimensions. We retained the number of PCoA axes necessary to represent 70% of the total initial variability (using an 80% threshold did not qualitatively change our conclusions, see Supplementary Figure 10 ). We also recorded phylogenetic signal for each PCoA axis using Blomberg's K 56 . General approach Our aim was to evaluate, across a wide range of clades and regions, the ability of a PD-informed prioritization scheme to capture FD in comparison with two other prioritization schemes: selecting species to directly maximize FD ("maxFD" hereafter) and selecting species randomly (Fig. 1 ). Our premise was that we often do not know or have not measured the traits that are most relevant for ecosystem function and services, such that maximizing FD is not generally feasible. By focusing on a subset of traits and assuming that they are representative of ecologically relevant traits, we were able to get an estimate of how well PD does compared to the best we could possibly do. We used performance relative to choosing on the basis of FD as an upper limit to the performance of PD as a surrogate for FD and used random species selection as a lower benchmark. Random prioritization scheme For each pool (i.e., each clade and each geographical assemblage) and each number of selected species (10, 20, 30, 40, 50, 60, 70, 80, 90, and 100% of the total pool), 1000 random sets of species were produced, from which the average FD was recorded.
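The trait-space construction just described (Gower distances over mixed traits, then PCoA axes retained up to 70% of the variance) can be sketched as follows. The paper's pipeline was implemented in R; this is an illustrative Python re-implementation under my own simplifying assumptions (numeric traits range-scaled, categorical traits scored as 0/1 mismatches), not the authors' code.

```python
import numpy as np

def gower_distance(X_num, X_cat):
    """Gower distance for mixed traits: range-scaled absolute differences
    for numeric columns, 0/1 mismatches for categorical columns, averaged."""
    n = X_num.shape[0]
    d = np.zeros((n, n))
    for j in range(X_num.shape[1]):
        col = X_num[:, j].astype(float)
        d += np.abs(col[:, None] - col[None, :]) / (col.max() - col.min())
    for j in range(X_cat.shape[1]):
        col = X_cat[:, j]
        d += (col[:, None] != col[None, :]).astype(float)
    return d / (X_num.shape[1] + X_cat.shape[1])

def pcoa(D, var_threshold=0.70):
    """Classical PCoA: double-centre -D^2/2, eigendecompose, keep positive
    eigenvalues, and retain the axes reaching `var_threshold` of the variance."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    evals, evecs = np.linalg.eigh(B)          # returned in ascending order
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    keep = evals > 1e-10                      # drop null/negative axes
    evals, evecs = evals[keep], evecs[:, keep]
    cum = np.cumsum(evals) / evals.sum()
    k = int(np.searchsorted(cum, var_threshold)) + 1
    return evecs[:, :k] * np.sqrt(evals[:k])  # species coordinates
```

On a distance matrix that is exactly Euclidean, retaining all positive axes reproduces the pairwise distances; on Gower matrices the dropped negative eigenvalues make the low-dimensional representation approximate, which is why a variance threshold is needed at all.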
Prioritization scheme maximizing PD (maxPD) While there are many overlapping metrics for measuring the evolutionary history encompassed by a set of species 15 , 57 , the most common is the sum of all branch lengths (often in units of time) connecting a set of species to a common root 14 , called phylogenetic diversity (PD). This is the metric whose maximization has most commonly been proposed as a conservation prioritization metric 14 , 33 , 58 , and as a measure of phylogenetic "richness" it most naturally maps onto our chosen FD metric 57 . We used the greedy algorithm proposed by Bordewich et al. 59 to find our maxPD set of species S . For a given tree there are likely multiple, and possibly very many, sets of species with the same PD as S . As a consequence, we produced, for each pool, each number of selected species, and each alternative phylogenetic tree, 10 maxPD sets of species. We then averaged the FD of these sets across our 100 phylogenetic trees, so that each value is an average of 1000 sets (10 sets for each of the 100 trees). Prioritization scheme maximizing FD (maxFD) Functional diversity was estimated using a functional richness index (FRic) 60 , 61 , 62 . The FRic index relies on a multidimensional Euclidean space, where the axes are traits (or factorial axes from a principal coordinates analysis (PCoA) computed using these traits) along which species are placed according to their trait values. This index measures the volume of trait space occupied by a given species assemblage by calculating the convex hull volume 62 , defined by the species at the vertices of the functional space, that encompasses the entire trait space filled by all species in this assemblage. In a single dimension, this simply equals the range of values 62 . This metric, broadly used in ecology, is set monotonic with species richness (the addition of a new species can never decrease its value), a property generally assumed desirable in conservation 63 .
FD measures the total amount of variation in trait values, making it conceptually comparable to PD 57 . We used the FRic index instead of the FD index based on a functional dendrogram since recent studies showed that the FD index may lead to biased assessments of functional diversity and inaccurate ecological conclusions 64 . The most straightforward way to obtain the maximal FD for n species is to compute FD for all possible combinations of n species and simply record the greatest value (the brute force approach). However, this is not feasible in practice as the number of combinations of selected species is too high (e.g., 10⁷¹ possible sets for all mammal assemblages). To rapidly and efficiently find a set of species that maximizes FD, we developed a novel (at least in ecology) greedy algorithm. In brief, our approach iteratively (starting with two species) selects the species that is furthest from the centroid of the already selected set. To avoid selecting two species that are far from the centroid but close to each other, we penalized the distance to the centroid by the distance to the closest neighbor in the already selected set. Here we present in detail the greedy algorithm we used to find the set of species that maximize FD:
Step 1. Select the two species with the highest trait distance.
Step 2. Compute the centroid of the selected species.
Step 3. Compute distances between species not in the set and this "set centroid".
Step 4. Penalize these distances by adding the following factor f (Eq. 1 ) $$f = K \times \mathrm{e}^{L \times \mathrm{minD}}$$ (1) with K and L being penalizing factors and minD the distance between a given candidate species and the nearest species already in the selected set.
Step 5. Select the species that maximizes the penalized distance.
Step 6. Go back to step 2 with this new set of species until the desired number of species is reached.
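The steps above can be sketched as a small Python function (the paper's implementation is in R; here the penalizing parameters K and L of Eq. 1 are set to 1 purely for illustration, and Euclidean distance in PCoA space is assumed):

```python
import numpy as np

def greedy_max_fd(coords, n_select, K=1.0, L=1.0):
    """Greedy approximation of the maxFD set in trait (PCoA) space,
    following steps 1-6 of the algorithm described above."""
    n = coords.shape[0]
    # Step 1: start from the two species with the largest trait distance.
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(D), D.shape)
    selected = [int(i), int(j)]
    while len(selected) < n_select:
        centroid = coords[selected].mean(axis=0)           # step 2
        best, best_score = None, -np.inf
        for s in range(n):
            if s in selected:
                continue
            d_cent = np.linalg.norm(coords[s] - centroid)  # step 3
            min_d = D[s, selected].min()
            score = d_cent + K * np.exp(L * min_d)         # step 4 (Eq. 1)
            if score > best_score:
                best, best_score = s, score
        selected.append(best)                              # steps 5-6
    return selected
```

On a toy configuration of four corner species plus one central species, the penalty keeps the central (functionally redundant) species out of the selected set, as intended.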
To avoid arbitrarily setting the penalizing parameters, we tested 1000 pairs of parameters drawn from a truncated normal distribution (mean = 1, SD = 0.5) and retained the parameter pair that yielded the maximal FD. In tests on subsets of the data for which finding the true maxFD was feasible, we found our approach to adequately approximate the true maxFD and to produce a very good approximation of the true degree of PD's surrogacy for FD (Supplementary Figure 11 ). Surrogacy estimates We use a common approach 27 , 28 to quantify the extent to which a given surrogate (here, the maxPD choice) reaches a certain objective (here, maximizing FD). Species from a given pool (i.e., for each dataset, clade and assemblage, independently) were prioritized and selected according to (1) the objective, i.e., maximizing FD, producing the "optimal curve" (maxFD curve in Fig. 1 ); (2) the surrogate, i.e., maximizing PD, producing the "surrogate curve" (maxPD curve in Figs. 1 and 3 ); and (3) random selection, producing the "random curve" (Fig. 1 ). To compute a "surrogacy" estimate of PD ( S PD–FD ), we compare the position of the surrogate curve (2) to the random curve (3) relative to the optimal curve (1) (Fig. 1 and Eq. 2 ) across the proportion p of the species pool retained (given as an interval 0–1): $$S_{\mathrm{PD-FD}} = \int\nolimits_0^1 \frac{\mathrm{FD}_{\mathrm{maxPD}} - \mathrm{FD}_{\mathrm{random}}}{\mathrm{FD}_{\mathrm{maxFD}} - \mathrm{FD}_{\mathrm{random}}}\,\mathrm{d}p$$ (2) This surrogacy metric is 100% when the surrogate perfectly meets the objective (i.e., the maxFD and maxPD curves are identical and the maxPD set is the maxFD set), 0% when the surrogate is no better than randomly chosen sets of species (i.e., the random and maxPD curves are identical), and negative if the surrogate choice is worse than random (i.e., the maxPD curve is below the random curve). Correlates of S PD–FD were evaluated using Spearman correlations.
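Numerically, Eq. 2 reduces to averaging the rescaled gain over the sampled proportions of species retained (the deciles). A minimal sketch, assuming the three FD curves have been evaluated at the same proportions:

```python
import numpy as np

def surrogacy(fd_maxpd, fd_random, fd_maxfd):
    """Discrete version of Eq. 2: mean, over the sampled proportions of
    species retained, of the surrogate's gain over random selection,
    rescaled by the optimal gain. Returns 1.0 when maxPD matches maxFD,
    0.0 when maxPD is no better than random, negative when it is worse."""
    fd_maxpd, fd_random, fd_maxfd = (np.asarray(x, dtype=float)
                                     for x in (fd_maxpd, fd_random, fd_maxfd))
    ratio = (fd_maxpd - fd_random) / (fd_maxfd - fd_random)
    return float(ratio.mean())
```

For example, a maxPD curve lying exactly on the maxFD curve gives a surrogacy of 1.0 (100%), and one lying below the random curve gives a negative value, matching the interpretation in the text.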
Apart from focusing on average tendencies, we quantified the variability of the FD yielded by the PD-maximized selection strategy and the random selection strategy within each species pool. To do so, we computed, for each species pool and for each % of selected species independently, the number of cases where FD random > FD maxPD across the 1000 random × 1000 maxPD set combinations (i.e., 10⁶ comparisons). We then averaged these numbers across percentages of selected species and report statistics across datasets (Supplementary Table 1 , Supplementary Figs. 3 and 4 ). Code availability R functions developed in this paper are available at Data availability Mammal and bird datasets are publicly available (see methods). The Fish phylogeny is available on figshare ( ). The distribution and trait fish datasets are available upon request to D.M. and F.L. Change history 04 February 2019 The original version of this Article contained a plotting error in Figure 3g. The Serranidae and Siganidae families were misplaced in the plotted phylogeny. This error has now been corrected in the PDF and HTML versions of the Article. For comparison, the original, incorrect version of Figure 3g is presented in the associated Author Correction. | Conservation biologists recognize a sobering reality. "We're losing species left, right and center," says Utah State University scientist Will Pearse. "We call it the 'Noah's Ark Problem,' and we have to pick species to save. We can't save them all." The biblical mariner seemed capable of building a vessel to accommodate mating pairs of all the world's creatures. The metaphor, today, however, would portray the harried Noah bailing water and valiantly trying to prioritize saving animals most beneficial for the future, as his boat rapidly sank.
Pearse, with colleagues Florent Mazel, Arne Mooers and Caroline Tucker of Simon Fraser University and the University of British Columbia, Marc Cadotte of the University of Toronto, Sandra Diaz of Argentina's National University of Cordoba, Giulio Valentino Dalla Riva of the University of British Columbia, Richard Grenyer of the University of Oxford, Fabien Leprieur of the University of Montpellier and David Mouillot of James Cook University, explore phylogenetic diversity as a metric of conservation prioritization in the July 23, 2018, issue of Nature Communications. "Our paper tests a fundamental component of conservation biology we refer to as the 'phylogenetic gambit,'" says Pearse, assistant professor in USU's Department of Biology and the USU Ecology Center. "That is, conservation biologists often use species' evolutionary history – their phylogeny – to identify groups of species to save." This idea is based on the assumption that preserving phylogenetic diversity among species preserves more functional diversity than selecting species to preserve by chance. Functional diversity is important, Pearse says, because it drives ecosystem health and productivity. "Yet measuring the effectiveness of functional diversity is difficult," he says. "So using phylogenetic diversity as a surrogate for functional diversity has made conservation biology much easier and more effective." In global datasets of mammals, birds and tropical fishes, the team demonstrates that, for the most part, the phylogenetic gambit holds. Preserving phylogenetic diversity preserves 18 percent more functional diversity than would be expected if species to save were selected at random. "Worryingly, though, we found in some parts of the world, and in some groups of species, preserving phylogenetic diversity did worse or just the same as random chance," Pearse says. 
"Luckily, we identified the areas and reasons this was happening, which still makes this selection technique valid and valuable for conservation biologists." The team's efforts, organized through an international working group initiated by Tucker and Mooers, were funded by sDIV, the Synthesis Center for Biodiversity Sciences based in Leipzig, Germany. | 10.1038/s41467-018-05126-3 |
Biology | A protein that extends life of yeast cells | Nitish Mittal et al. The Gcn4 transcription factor reduces protein synthesis capacity and extends yeast lifespan, Nature Communications (2017). DOI: 10.1038/s41467-017-00539-y Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-017-00539-y | https://phys.org/news/2017-09-protein-life-yeast-cells.html | Abstract In Saccharomyces cerevisiae , deletion of large ribosomal subunit protein-encoding genes increases the replicative lifespan in a Gcn4-dependent manner. However, how Gcn4, a key transcriptional activator of amino acid biosynthesis genes, increases lifespan, is unknown. Here we show that Gcn4 acts as a repressor of protein synthesis. By analyzing the messenger RNA and protein abundance, ribosome occupancy and protein synthesis rate in various yeast strains, we demonstrate that Gcn4 is sufficient to reduce protein synthesis and increase yeast lifespan. Chromatin immunoprecipitation reveals Gcn4 binding not only at genes that are activated, but also at genes, some encoding ribosomal proteins, that are repressed upon Gcn4 overexpression. The promoters of repressed genes contain Rap1 binding motifs. Our data suggest that Gcn4 is a central regulator of protein synthesis under multiple perturbations, including ribosomal protein gene deletions, calorie restriction, and rapamycin treatment, and provide an explanation for its role in longevity and stress response. Introduction The discovery that individual genes can significantly and reproducibly modulate the lifespan of eukaryotic organisms 1 opened the process of aging to investigation by geneticists and molecular biologists. The ease of its genetic manipulation has made the yeast Saccharomyces cerevisiae an important experimental model for aging studies. The number of divisions that a mother yeast cell undergoes before it enters senescence is used as a measure of lifespan, also called replicative lifespan 2 . 
Genetic studies have linked nutrient sensing pathways to aging (extensively reviewed in refs. 3 , 4 ) and consistently, caloric restriction improved the functionality and increased the lifespan of many model organisms (reviewed in ref. 5 ). Although the molecular mechanisms underlying changes in lifespan are still debated 4 , modulation of protein synthesis by the target of rapamycin (TOR) serine/threonine kinase seems to play an important role 6 , 7 . Reducing TOR activity increased lifespan in yeast, worms, flies, and mammals 8 . Furthermore, deletion of translation-related genes such as SCH9 , TIF1 , and TIF2 , increased the yeast replicative lifespan 9 , 10 , 11 , and inhibition of translation in the worm Caenorhabditis elegans promoted longevity 12 , 13 , 14 . These observations indicate that reducing cellular translation is a conserved mechanism of lifespan extension 15 . Translation of messenger RNA (mRNAs) into proteins is carried out by the ribosome, a molecular machine that in S. cerevisiae is composed of four ribosomal RNAs and 78 ribosomal proteins (RPs). The yeast genome contains 137 RP-encoding genes, 59 of which are paralogs 16 . Screening studies have found that deletion of RPL31A , RPL6B and other RPs increased replicative lifespan 9 , 10 , 17 , and a systematic survey of 107 RP gene deletion strains comprehensively showed that the specific reduction of the 60S ribosome subunit significantly extends replicative lifespan 18 . The fact that RPs are downstream targets of TOR signaling reinforces the link between nutrient sensing pathways, protein synthesis, and aging 18 , 19 , 20 . Lifespan extension in large ribosomal subunit protein (RPL) deletion strains depends on the upregulation of Gcn4 18 . This protein is the key transcriptional activator of amino acid biosynthesis genes in yeast, being translationally upregulated in various stress conditions 21 , 22 , 23 , 24 , 25 , as well as upon deletion of RPL genes 18 , 26 , 27 . 
Other modulators of aging such as the tRNA transporter Los1 and the mitochondrial AAA protease Afg3 are also thought to exert their lifespan-increasing effects through Gcn4 11 , 28 . The GCN4 mRNA provides one of the best-characterized models of translational control 29 . When sufficient nutrients are available, most ribosomes are sequestered at four upstream open reading frames (uORFs) present in the 5′ UTR, resulting in low Gcn4 abundance. Upon amino acid starvation, the Gcn2 kinase phosphorylates the translation initiation factor eIF2α, leading to reduced levels of GTP-bound eIF2 and depletion of the ternary translation initiation complex. This allows scanning 40S ribosomes past uORF4, resulting in increased initiation at the main ORF and higher production of Gcn4. However, in spite of Gcn4's central role in yeast longevity, it is difficult to mechanistically link its activity as a transcriptional activator of amino acid biosynthesis genes to lifespan extension. Characterizing the gene expression of RP deletion (RPKO) strains with mRNA sequencing (mRNA-seq), ribosome profiling and proteomics, we found that reduced protein synthesis, impaired ribosome assembly and general uORF skipping leading to Gcn4 expression are hallmarks of RPKO strains with increased replicative lifespan. Consistently, we show that Gcn4 is necessary for the translation repression observed not only in long-lived RPKO strains, but also under glucose starvation and rapamycin treatment conditions, and that overexpression of Gcn4 is sufficient to promote longevity and reduce protein biosynthesis. Our results thus suggest that the reduction in protein synthesis capacity contributes to the Gcn4-mediated lifespan extension.
Results Long and short-lived RPKO strains differ in gene expression To understand the molecular mechanisms behind the increased lifespan of RPKO strains, we compared gene expression of the wild-type strain with that of two RPKO strains with increased ( Δrpl7a and Δrpl9a ) and three with decreased ( Δrpl6a , Δrpl15b , and Δrps27b ) lifespan 18 . For each strain we determined transcript levels by mRNA-seq, ribosome occupancy by ribosome footprint sequencing (Ribo-seq), and protein levels by shotgun proteomics (Supplementary Data 1 ). Interestingly, all but one RPKO strain (Δ rpl15b ) showed an increased expression of the paralog of the deleted RP (Supplementary Fig. 1a, b ), suggesting that yeast cells can compensate for the lack of individual RP genes. Beyond the expected upregulation of amino acid biosynthesis genes in the large subunit RPKO (RPLKO) strains 26 , 27 , we found that RPLKO strains with increased replicative lifespan showed the strongest upregulation of these pathways, both at mRNA and protein levels (Fig. 1a ; Supplementary Data 2 for the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways that are significantly altered in these strains). Unexpectedly, genes from the biosynthetic pathways of phenylalanine, tyrosine, tryptophan, lysine, and arginine and for 2-oxocarboxylic acid metabolism were also upregulated at the level of translation efficiency (Fig. 1a, b ), whereas in the alanine, aspartate, glutamate, histidine, and tryptophan metabolism pathways, protein-level changes could be explained by their mRNA-level changes (Fig. 1a, c ). Strikingly, the mRNA abundance of genes encoding ribosomal components was strongly reduced in long-lived RPKO strains. This was not compensated by increased translation efficiency on these mRNAs because the protein levels were also reduced (Fig. 1a ). Components of the small and large ribosomal subunits were equally affected (Supplementary Fig. 
1c ), indicating that the repression was not specific to the subunit to which the deleted gene belongs. These data show that deletion of individual RP genes leads to complex changes in gene expression, specifically impacting the abundance and translation efficiency of mRNAs. Fig. 1 RPKO strains exhibit changes at multiple levels of gene expression. a Heatmap depicts mean fold-changes of components of the indicated KEGG pathways at mRNA (measured by mRNA-seq), translation efficiency (calculated by dividing the normalized read count in Ribo-seq by the normalized read count in mRNA-seq), and protein (measured by proteomics) levels. The orange box highlights ribosome-related genes, whereas purple boxes highlight gene categories involved in amino acid biosynthesis and metabolism. b , c Specific examples of KEGG pathways whose components are regulated b translationally (2-oxocarboxylic acid metabolism) or c transcriptionally (tryptophan metabolism). Red dots represent genes from the indicated KEGG pathways and the contour plots refer to all genes. Impaired ribosome assembly in long-lived RPKO strains To evaluate the global impact of RP repression on translation, we generated polysome profiles for all studied strains. The 60S ribosomal subunit was less abundant than the 40S subunit in the long-lived Δ rpl7a and Δ rpl9a strains in comparison to the wild-type strain (Fig. 2a ; Supplementary Fig. 2a–c ), whereas the short-lived strains did not show this pattern (Fig. 2b ; Supplementary Fig. 2a, d–f ). The long-lived Δ rpl7a and Δ rpl9a strains yielded half-mer peaks after each monosome/polysome (Fig. 2a ; Supplementary Fig. 2b, c ), diagnostic for the presence of 48S initiation complexes on actively translated mRNAs 30 . These results indicate that long-lived RPKO strains exhibit delayed/impaired ribosome assembly. Fig. 2 Long-lived RPKO strains show defective ribosome assembly and reduced translation.
a and b Polysome profiles of a long-lived and b short-lived RPKO strains in relation to the wild-type strain show that the former have a lower 60S-to-40S ratio, and are characterized by the presence of half-mers. c Flow cytometry readout for nascent protein synthesis by Click-iT HPG in the different strains and conditions. d Quantification of global translation in RPKOs with respect to the wild-type strain. Error bars represent s.d. across three different biological replicates. ** p < 0.01, *** p < 0.001. P -values were calculated using two-tailed Student's t -test. An expected consequence of ribosome assembly defects is a reduced translational output. To directly quantify global translation, we measured the incorporation of the methionine analog L-homopropargylglycine (HPG) into newly synthesized proteins with a fluorimetric assay. These data confirmed that translation was significantly reduced in the Δ rpl7a and Δ rpl9a strains and increased in Δ rpl6a in comparison to wild type (Fig. 2c, d ), thereby providing evidence for an association between translation repression and longevity in RPKO strains. Generalized uORF skipping in long-lived RPKO strains To determine whether the defective ribosome assembly in long-lived strains is accompanied by a global increase in uORF skipping, we computed the relative ribosome occupancy of 5′ UTRs and corresponding ORFs (or coding sequences, CDS) for the 2067 uORF-containing S. cerevisiae genes 31 . Only the long-lived strains with defective ribosome assembly had a lower 5′UTR-to-CDS ratio than the wild type, indicating lower occupancy at uORFs (Fig. 3 ). We obtained similar results when considering only Ribo-seq reads mapping to uORFs instead of to the entire 5′ UTR (Supplementary Fig. 3 ).
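The 5′UTR-to-CDS comparison can be illustrated with a small sketch. The paper does not spell out its exact read-count normalization, so the per-nucleotide densities and pseudocount below are assumptions made purely for illustration:

```python
import math

def utr_to_cds_ratio(utr_reads, cds_reads, utr_len, cds_len, pseudo=1.0):
    """Length-normalized 5'UTR-to-CDS ribosome-footprint ratio for one gene.
    Hypothetical normalization: reads per nucleotide, with a pseudocount
    so genes with zero 5'UTR reads remain comparable."""
    utr_density = (utr_reads + pseudo) / utr_len
    cds_density = (cds_reads + pseudo) / cds_len
    return utr_density / cds_density

def ratio_shift(ko_ratio, wt_ratio):
    """log2 change of the ratio in a deletion strain versus wild type;
    negative values mean relatively less 5'UTR occupancy, consistent
    with increased uORF skipping."""
    return math.log2(ko_ratio / wt_ratio)
```

In this toy scheme, a long-lived strain whose footprints shift from the 5′ UTR into the CDS yields a negative `ratio_shift`, mirroring the leftward shift of the cumulative distributions in Fig. 3f.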
Although one may expect that generalized uORF skipping in long-lived RPKO strains leads to an increased translation efficiency of the downstream ORFs, we found little change in the translation efficiency of most genes containing uORFs. Furthermore, the very few genes with a significant change in translation efficiency were either up- or down-regulated (Supplementary Fig. 4 ). Our data thus indicate that uORF skipping is a general feature of long-lived RPKO strains, and that uORF skipping rarely has a strong influence on the translation efficiency of the corresponding CDS.

Fig. 3 Generalized uORF skipping in long-lived RPKO strains. a – e 5′UTR-to-CDS Ribo-seq reads ratio for a Δ rps27b , b Δ rpl6a , c Δ rpl15b , d Δ rpl7a , and e Δ rpl9a strains compared to the wild-type strain. Each dot corresponds to a gene containing at least one uORF. The GCN4 gene is highlighted in orange . f Cumulative distribution functions for the different strains studied indicate that only long-lived strains show a significant decrease in the 5′UTR-to-CDS ratio compared to the wild-type strain. The comparison of the distributions of ratios between the different RPKO and the wild type was performed with the Mann–Whitney U test and the P -values for the two-tailed test are indicated. See Supplementary Fig. 3 for the similar analysis of uORF-to-CDS ratios.

GCN4 translation is upregulated in long-lived RPKO strains

The GCN4 transcript showed by far the largest change in the 5′UTR-to-CDS ratio in long-lived strains compared to wild type (Fig. 3d, e ). The ribosome occupancy of the GCN4 locus in the wild-type strain revealed all four well-described inhibitory uORFs in the GCN4 5′ UTR 29 , as well as the non-canonical uORF observed more recently 32 (Fig. 4a ). However, it was only in the long-lived Δ rpl7a and Δ rpl9a strains that the non-canonical uORF and uORF1 were less covered by reads, while the ribosome density strongly increased at the start of the GCN4 CDS.
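The distribution comparison underlying Fig. 3f (RPKO vs. wild-type ratio distributions) uses the Mann–Whitney U test. A minimal pure-Python sketch of the U statistic is shown below (midranks for ties); in practice one would use a library routine that also returns the two-tailed p-value.

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for sample xs relative to ys (midranks for ties)."""
    combined = sorted((v, i) for i, v in enumerate(xs + ys))
    ranks = {}
    i = 0
    while i < len(combined):
        # find the run of tied values and assign each its midrank
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[combined[k][1]] = midrank
        i = j
    rank_sum_x = sum(ranks[idx] for idx in range(len(xs)))
    return rank_sum_x - len(xs) * (len(xs) + 1) / 2.0

# toy 5'UTR-to-CDS ratios: a shifted-down knockout distribution gives U = 0
u_ko = mann_whitney_u([0.2, 0.3, 0.25], [0.5, 0.6, 0.55])
```

U ranges from 0 to n1*n2; values near either extreme indicate a strong shift between the two distributions.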
The mRNA-seq data showed that the increased ribosome occupancy of the CDS is not due to higher mRNA abundance, and targeted proteomics confirmed that it leads to increased protein levels (Fig. 4b ). Gcn4 was also increased in the short-lived Δ rpl6a strain, albeit less than in the long-lived strains (Fig. 4b ). Consistent with an increased Gcn4 level, the mRNA-seq data showed that the expression of its transcriptional targets is also most significantly increased in the long-lived Δ rpl7a and Δ rpl9a strains compared to wild type, whereas expression of non-targets is not changed (Fig. 4c ). These analyses confirm that Gcn4 is translationally upregulated and its known targets are transcriptionally upregulated in long-lived RPKO strains. Fig. 4 GCN4 is translationally upregulated in long-lived RPKO strains. a Density of ribosome protected fragment (RPF) reads along the GCN4 locus in the different strains studied. Long-lived strains show decreased density specifically at non-canonical ( dark gray ) and subsequent, first canonical uORF ( light gray ) as well as increased density in the main ORF, particularly at the start. Note the different scales upstream and within the coding regions. b Quantification of mRNA, ribosome-protected fragments and protein fold-changes for Gcn4 in the different RPKO strains with respect to the wild-type strain. Error bars indicate the s.e.m. c Boxplots show the mRNA fold-changes for Gcn4 targets ( gray , from ) and non-targets ( white ) in the RPKO strains compared to wild-type strain. P -values were calculated using the two-sided Mann–Whitney U test. 
Boxes extend from the 25th to 75th percentiles (interquartile range (IQR)), horizontal lines represent the median, whiskers indicate the lowest and highest datum within 1.5*IQR from the lower and upper quartiles, respectively.

Gcn4 overexpression increases replicative lifespan

Although deletion of GCN4 in RPKO strains partially restores lifespan to wild-type levels 18 , whether Gcn4 overexpression is sufficient to increase yeast replicative lifespan is not known. To test this, we replaced the endogenous promoter and 5′UTR sequence of the GCN4 gene with the constitutively active ADH1 promoter. The resulting P ADH1 -GCN4 strain had a significantly longer lifespan in comparison to the wild type (mean number of divisions = 28.48 vs. 19.6, based on 30 and 28 cells, respectively, p < 0.001, Wilcoxon rank-sum test; Fig. 5a ). This 45% increase in lifespan is comparable to those observed in RPKO strains, which were ~26% for Δ rpl9a and ~39% for Δ rpl7a 18 . Thus, the overexpression of GCN4 is sufficient to promote longevity in yeast.

Fig. 5 Gcn4 overexpression increases replicative lifespan and dampens global translation independent of eIF2α phosphorylation. a The replicative lifespan assay shows that the GCN4 overexpression strain exhibits a ~45% increase in the mean number of generations compared to the wild-type strain. Mean lifespan values are shown in parentheses. b – d Boxplots illustrating the distribution of mRNA fold-change of b ribosomal proteins, c translation initiation factors, and d translation elongation factors in RPKOs and GCN4 overexpression strains relative to the wild-type strain. P -values were calculated using the two-sided Mann–Whitney U test to compare mRNA fold-changes of genes belonging to a given category (RPs, Initiation factors or Elongation factors) and that of all other genes.
Boxes extend from the 25th to 75th percentiles, horizontal lines represent the median, whiskers indicate the lowest and highest datum within 1.5*IQR from the lower and upper quartiles, respectively. e Quantification of global translation by Click-iT HPG shows that the GCN4 overexpression strain has significantly reduced global protein synthesis. Error bars represent s.d. across three different biological replicates. *** p < 0.001. P -values were calculated using two-tailed Student’s t -test. f Quantification of eIF2α phosphorylation through western blot. Two different replicates are shown along with their quantification. Rapamycin- vs. vehicle-treated WT was used as a positive control for the antibody.

Gcn4 dampens global translation

To elucidate the molecular mechanism underlying the extended lifespan conferred by Gcn4 overexpression, we measured the global gene expression changes in the P ADH1 -GCN4 strain by mRNA-seq. As expected, multiple pathways involved in amino acid biosynthesis were significantly upregulated in this strain relative to wild type (Supplementary Fig. 5a ). Surprisingly, the abundance of genes encoding RPs (Fig. 5b ), translation initiation factors (IFs, Fig. 5c ) and elongation factors (EFs, Fig. 5d ) was reduced (Supplementary Data 3 ) to the level observed in the long-lived RPKO strains. The general repression of genes encoding components of the translation machinery could lead to a global decrease in translation. Quantifying nascent protein synthesis with the HPG fluorometric assay, we found that the global translation rate was indeed decreased in the P ADH1 -GCN4 strain in comparison to wild type (Fig. 5e ).
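The replicate-level HPG comparisons above rely on a two-tailed Student's t-test across three biological replicates. A minimal sketch of the equal-variance t statistic is below; the fluorescence values are toy arbitrary-unit numbers, not the study's measurements.

```python
from math import sqrt
from statistics import mean, stdev

def two_sample_t(a, b):
    """Equal-variance two-sample Student's t statistic (df = len(a) + len(b) - 2)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var * (1 / na + 1 / nb))

# toy normalized HPG signals: wild type vs. an overexpression strain (3 replicates each)
wt = [1.0, 1.1, 0.9]
oe = [0.5, 0.6, 0.55]
t_stat = two_sample_t(wt, oe)  # positive t: translation reduced in the second sample
```

With three replicates per group (df = 4), a |t| above ~4.6 corresponds to p < 0.01 in a two-tailed test; the p-value itself would come from the t-distribution CDF in a statistics library.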
The translational repression was not specific to the manner of Gcn4 overexpression, as it was also evident in a strain overexpressing GCN4 from a galactose-inducible plasmid ( P Gal1-10 -GST-GCN4 ), and in another strain, carrying a genomically integrated, copper-inducible promoter upstream of the GCN4 gene ( P Cup1 -GCN4 ; Supplementary Fig. 5b, c ). To test whether Gcn4 overexpression triggers a stress response, which in turn would lead to general translational repression, we measured the level of phosphorylated eIF2α, a well-established stress marker 33 . We did not find an increase, but rather a decrease in the level of eIF2α phosphorylation in the long-lived RPKO and P ADH1 -GCN4 strains (Fig. 5f ; Supplementary Fig. 6 ). These data demonstrate that the global translational repression in Δ rpl7a , Δ rpl9a , and P ADH1 -GCN4 does not depend on the canonical eIF2α pathway.

Distinct regulatory motifs in Gcn4-activated/repressed genes

To determine whether Gcn4 directly controls the transcription of genes encoding components of the translation machinery, we performed chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) in the P Gal1-10 -GST-GCN4 strain, in which GCN4 was tagged with glutathione S-transferase (GST). We found a strong enrichment of reads in the genomic regions upstream of start codons in the Gcn4-ChIP sample compared to the input chromatin (Fig. 6a ; Supplementary Fig. 7a ). We identified 327 ChIP peaks, of which 151 could be unambiguously assigned to downstream genes (Supplementary Data 4 ). Although only 25.8% of these ChIP-inferred targets were previously reported as Gcn4-responsive genes (Supplementary Fig. 7b ), all of the ChIP targets, including the 74.2% that were not identified before, contained high-scoring Gcn4-binding sequences (Supplementary Fig. 7c ).

Fig. 6 Gcn4-ChIP targets that are activated/repressed upon Gcn4 overexpression have distinct configurations of regulatory elements.
a Profile of ChIP-seq reads in the 2 kb region centered on the start of the ORF shows an enrichment of Gcn4-ChIP signal over the input signal in the upstream region of the ORF. b Example profiles of ChIP-seq reads for three different genes: one related to amino acid biosynthesis ( ARG4 ), and two ribosomal proteins ( RPL14B and RPS24A ). c Boxplot illustrating mRNA fold-change for Gcn4-ChIP targets in the P Gal1-10 -GST-GCN4 strain relative to the respective wild-type strain. The box extends from the 25th to 75th percentiles, the horizontal line represents the median, whiskers indicate the lowest and highest datum within 1.5*IQR from the lower and upper quartiles, respectively. d Distances between Gcn4-binding sites and gene starts are significantly higher for repressed compared to activated targets. Boxes extend from the 25th to 75th percentiles, the horizontal line represents the median, whiskers indicate the lowest and highest datum within 1.5*IQR from the lower and upper quartiles, respectively. ** p < 0.01. P -value was calculated using the two-tailed Mann–Whitney U test. e and f Sequence logo for the ChIP peaks associated with e upregulated and f downregulated genes in the P Gal1-10 -GST-GCN4 strain. The number of peaks where the motif was found out of all the peaks considered is indicated.

To evaluate the transcriptional response of Gcn4-ChIP targets, we analyzed gene expression in the P Gal1-10 -GST-GCN4 strain, which was used for the ChIP analysis, and in the corresponding wt- URA3 control. We observed consistent changes in gene expression upon GCN4 overexpression in P Gal1-10 -GST-GCN4 and P ADH1 -GCN4 cells (Pearson’s correlation R = 0.65, p < 0.001). Out of the 149 unambiguous Gcn4-ChIP targets whose expression could be detected, 131 were upregulated and 18 were downregulated, suggesting that Gcn4 acts as a transcriptional activator as well as a repressor (Fig. 6c ).
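The two computations in this paragraph — the cross-strain consistency check (Pearson's R) and the up/down classification of ChIP targets — can be sketched as follows. The gene names and log2 fold-changes are invented for illustration.

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length fold-change vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

def classify_targets(log2_fc):
    """Split ChIP targets into up- and downregulated by the sign of the log2 fold-change."""
    up = sorted(g for g, v in log2_fc.items() if v > 0)
    down = sorted(g for g, v in log2_fc.items() if v < 0)
    return up, down

# hypothetical log2 fold-changes of the same genes in two overexpression strains
fc_strain1 = [2.0, 1.5, -1.0, 0.5]
fc_strain2 = [1.8, 1.2, -0.8, 0.4]
r = pearson_r(fc_strain1, fc_strain2)  # consistent response gives r near 1

up, down = classify_targets({"ARG4": 2.0, "HIS4": 1.5, "RPL14B": -1.0})
```

In the actual analysis the significance of R and of the per-gene fold-changes would come from the differential-expression framework, not from the sign alone.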
The upregulated targets were mainly amino acid biosynthesis genes (Supplementary Fig. 7d ) and had the Gcn4-ChIP peaks at ~250 nucleotides upstream of the translation start (Fig. 6d ). The downregulated genes did not share a specific molecular pathway and had the Gcn4-ChIP peaks farther upstream. Whereas RP genes were generally repressed upon Gcn4 overexpression (Fig. 5b ), only a few contained Gcn4-ChIP peaks in their promoters (Fig. 6b ), indicating that most RP genes are indirectly repressed by Gcn4. Specifically, two of the directly repressed, unambiguous Gcn4-ChIP targets were RPs. In addition, in contrast to most of the Gcn4 targets, translation-related factors had reduced expression, regardless of the method used for Gcn4 overexpression (Supplementary Fig. 7e ). To confirm that Gcn4 interacts directly with the promoters of the downregulated targets, we searched for overrepresented sequence motifs in the Gcn4-ChIP peaks with the MEME software 34 . Surprisingly, although 88% of the upregulated Gcn4 targets exhibited the canonical Gcn4-binding motifs 35 , 36 , 37 , all but one of the downregulated targets had a shorter form of the motif (Fig. 6e, f ). In addition, the majority of the downregulated targets (12 out of 18, ~67%) contained Rap1 binding motifs 38 (Fig. 6f ), and half of these (6 out of the 12 promoter regions) have been previously reported to be regulated by Rap1 25 , 39 , 40 , 41 , 42 . The frequency of the Rap1 motif at upregulated targets of Gcn4 was much lower (~17%). Also unexpectedly, the Gcn4-binding sites are located downstream of the predicted Rap1 binding sites in upregulated Gcn4 targets, whereas in downregulated targets the configuration is inverted (Supplementary Fig. 8 ). Mutation of the Gcn4 DNA-binding domain was reported to impair the upregulation of genes in amino acid biosynthetic pathway but not the repression of RP genes 43 . 
However, we found that a strain that overexpressed the genomically integrated S242L Gcn4 mutant showed both reduced upregulation of amino acid biosynthetic pathways and de-repressed RP gene expression (Supplementary Fig. 9 ). Collectively, our data demonstrate that more than 10% of genes with Gcn4-ChIP peaks are repressed and that the DNA-binding domain of Gcn4 is necessary for both its activating and repressing effects on gene expression.

Gcn4 generally represses translation

The above findings strongly suggest that the induction of Gcn4 contributes to the decreased protein synthesis capacity in long-lived RPKO strains. To test this, we deleted GCN4 and measured the translation rate in the wild type and single RPKO strains. We found that deletion of GCN4 leads to increased translation in all strains, the largest changes occurring in the long-lived RPKO strains (Fig. 7a ). These results indicate that Gcn4 is required for the reduced translational output of long-lived RPKO strains. To determine whether Gcn4 generally represses translation beyond the RPKO strains, we measured the rate of protein synthesis in wild-type strains treated with rapamycin or subjected to glucose starvation. These conditions have been shown to induce Gcn4 expression and reduce translation 44 , 45 . The protein synthesis assay showed that indeed, rapamycin treatment and glucose starvation reduce translation (Fig. 7b ). Importantly, deletion of GCN4 significantly mitigated this effect. Altogether, these results indicate that translation repression is a general function of Gcn4.

Fig. 7 Gcn4 strongly represses translation in the long-lived RPKO strains and stresses. Quantification of global translation by Click-iT HPG for a different single and double KO strains, b glucose-starved (CR) and rapamycin-treated (RAPA) yeast cells. GCN4 deletion restores translation to the level of the wild-type strain in RPKO strains and also leads to increased translation in stressed cells.
The significance of the two-tailed t -test between any given deletion strain and the wild-type strain is depicted above the respective bar. Mean values of the relative translation change between the GCN4 deletion strain and the respective parental strain are shown in parentheses. Error bars represent s.d. across three different biological replicates except glucose starvation where n = 2. The p -value for the two-tailed t -test is indicated by ‘*’: * p < 0.05, ** p < 0.01, *** p < 0.001. c Model for Gcn4 effect on translation and aging. Green lines indicate the findings in this study, black continuous lines denote previously established links, and the dashed line indicates a connection that remains to be studied.

Discussion

Recent studies have demonstrated that yeast strains with individual RP gene deletions differ widely in replicative lifespan, and that longevity is partially dependent on Gcn4 expression 11 , 18 . However, the mechanism by which Gcn4, a transcriptional activator of amino acid biosynthesis genes, influences lifespan is unknown. Here we uncovered a general function of Gcn4 in repressing translation. This finding has important implications for the coupling between stress, protein synthesis, and longevity. For our study we chose the wild-type strain, as well as strains with increased (Δ rpl7a and Δ rpl9a ) or decreased lifespan (Δ rpl6a , Δ rpl15b , and Δ rps27b ) 18 . Although amino acid biosynthesis genes were upregulated in all RPLKO strains 26 , 27 , the upregulation was most pronounced in the long-lived strains, in line with the strongest induction of Gcn4 in these strains. Long-lived RPLKO strains also showed downregulation of translation-related genes (Fig. 1a ), and consistently impaired protein synthesis (Fig. 2 ). Polysome profiling revealed half-mers (Fig. 2a ) diagnostic for the presence of the 48S initiation complex on actively translated mRNAs as a result of delayed monosome formation 30 , 46 .
As fast assembly of 80S ribosomes at GCN4 uORFs prevents Gcn4 protein production, the slow assembly of 80S explains the pronounced Gcn4 upregulation in long-lived RPKO strains 26 , 27 (Figs. 3 and 4 ). Consistent with this interpretation, profiling of genome-wide ribosome occupancy revealed generalized uORF skipping in the strains displaying half-mers (Fig. 3 ). Incidentally, although one might expect that increased uORF skipping generally leads to increased translation of the downstream ORF, our data does not support this hypothesis. The translation efficiency of most genes with uORFs remained unchanged and for the few genes that showed significant changes, we observed both up- and down-regulation (Supplementary Fig. 4 ). This is in line with observations that uORFs are not always inhibitory 31 . Strikingly, the overexpression of Gcn4 was sufficient to repress protein synthesis and also to downregulate the expression of translation-related genes (Fig. 5 ). Furthermore, Gcn4 overexpression did not lead to increased eIF2α phosphorylation (Fig. 5f , Supplementary Fig. 6 ), showing that repression of translation-related genes that follows Gcn4 overexpression can be decoupled from the program triggered by cellular stress. The Gcn4-dependent repression of RPs expression has been described before, under conditions of amino acid starvation 22 . As most RP genes do not have a Gcn4-binding motif in their promoters, it was concluded that this response, although Gcn4-dependent, is indirect, perhaps through squelching of transcription factors that are required for RP gene expression 22 , 47 . More recently, it was proposed that the RP downregulation upon amino acid starvation is the result of Gcn4 displacing Esa1 from the RP-activating Esa1-Rap1 complex 43 . Here we found that the DNA binding activity of Gcn4 is necessary for RP gene repression (Supplementary Fig. 9 ), which remains consistent with the squelching hypothesis. 
Nevertheless, we also found that Rap1 and Gcn4-binding motifs co-occur in the promoters of genes that are repressed upon Gcn4 overexpression (Fig. 6f ), indicating that both of these proteins bind to the promoters of repressed genes in a sequence-specific manner. Although the frequency of Rap1 binding motifs in the promoters of upregulated Gcn4 targets is much lower, there are some upregulated Gcn4 targets that are also co-targeted by Rap1. What accounts for the differences in expression changes among genes whose promoters contain both Gcn4 and Rap1 binding motifs remains to be further analyzed. Here we observed that the Rap1 binding sites are preferentially located upstream of the Gcn4-binding sites in the upregulated targets, whereas they are located downstream in the downregulated targets (Supplementary Fig. 8 ). In addition, the Gcn4 motif that we inferred from downregulated targets is shorter than the motif we inferred from the upregulated targets. Since reduced translation is linked to increased lifespan 13 , 15 , 48 , our findings provide a compelling explanation for the effect of Gcn4 on lifespan. Consistently, the knockout of GCN4 almost entirely restored global translation to wild-type levels in the long-lived RPKO strains (Fig. 7a ). Notably, the magnitude of Gcn4 induction (Fig. 4b ), the corresponding repression of RP genes (Fig. 5b ), and the reduction in the translation capacity (Fig. 2d ), all correlated with the increase in lifespan. However, Gcn4 also increases amino acid biosynthesis, and so far, these two activities could not be decoupled. Uncovering the precise mechanism by which Gcn4 represses the expression of translation factors is likely necessary to be able to assess the relative contribution of these two activities in increasing the replicative lifespan. In physiological contexts such as environmental stress 49 , expression of Gcn4 follows eIF2α phosphorylation, which also leads to reduced protein synthesis 33 .
Why would Gcn4 and phosphorylated eIF2α act simultaneously to globally repress translation? As ribosome biogenesis consumes a large fraction of a cell’s energy 20 , transcriptional regulation of translation-related genes by Gcn4 is an energy-efficient mechanism to globally inhibit protein synthesis (Fig. 7c ), which may operate synergistically with the repression of the translation process through the phosphorylation of eIF2α. An interesting open question is how certain mRNAs encoding amino acid biosynthesis genes are translationally upregulated in long-lived RPKO strains despite globally impaired protein synthesis capacity (Fig. 1b ). It may be that the corresponding transcripts are selectively regulated by mRNA-binding factors 50 . Alternatively, the deletion of the RP gene could confer a specialized function to cellular ribosomes lacking this protein, namely increased affinity to specific mRNAs and hence enhanced translation 51 . Gcn4 is a highly conserved protein and its mammalian homolog is known as activating transcription factor-4 (ATF4). Similarly to GCN4 , translation of ATF4 upon stress or amino acid starvation is regulated through uORFs 52 , 53 . Also similarly to Gcn4, Atf4 binds directly to the promoters of translation-related genes in mouse embryonic fibroblasts 54 . Unlike Gcn4, however, Atf4 has context-dependent effects on cellular and organism lifespan. Upon ER stress, ATF4 induction leads to increased protein synthesis and cell death 54 . In contrast, in cellular models of Parkinson’s disease, ATF4 protects against neuronal cell death 55 and its increased expression has also been observed in slow-aging mice 56 , 57 . Our study provides a compelling molecular basis for the effect of Gcn4 and perhaps its mammalian counterpart ATF4 on longevity, relying on the transcriptional repression of protein synthesis genes.

Methods

Yeast strains and growth

The yeast strains used in this study are listed in Supplementary Table 1 .
All yeast strains are in the BY4741 genetic background. All single RPKOs and wild-type BY4741 strains were obtained from GE Dharmacon as a part of the haploid yeast ORF deletion collection. The P GAL1/10 -GST-GCN4 strain was also purchased from GE Dharmacon. Other strains were generated by standard genetic techniques 58 . The chromosomal promoter exchanges for the P ADH1 -GCN4 and P CUP1 -GCN4 strains were performed according to refs. 59 , 60 . Yeast was grown on YPD (1% yeast extract, 2% peptone, and 2% glucose) medium at 30 °C, 200 r.p.m. unless otherwise stated. The cells were collected in mid-log phase at OD 600 0.4–0.8.

mRNA sequencing

Libraries for mRNA sequencing were prepared for three biological replicates. A yeast cell pellet was resuspended in 1 ml of lysis buffer from the Dynabeads mRNA DIRECT Kit (61011, Life Technologies) and lysed at 4 °C with 1 volume of acid-washed glass beads in a FastPrep instrument (Thermo Scientific) using 2 cycles with the following settings: 45 s at 6.5 speed with 3 min pause on ice between cycles. Further, poly(A) + RNA was isolated directly from cell lysate using the Dynabeads mRNA DIRECT Kit according to the manufacturer’s protocol. Libraries for mRNA sequencing were prepared using the “directional mRNA-seq sample preparation” protocol from Illumina, with minor modifications. In brief, after isolation, 50 ng of mRNA was chemically fragmented by incubating the mRNA solution with twice the volume of alkaline hydrolysis buffer (50 mM sodium carbonate [NaHCO 3 /Na 2 CO 3 ] pH 9.2, 1 mM EDTA) at 95 °C for 5 min to obtain fragments of ~200–300 bases. The fragmented mRNA was immediately purified with the RNeasy MinElute Cleanup Kit (74204, Qiagen) to stop the reaction and to remove small RNA fragments (<100 bases). Further, purified, fragmented mRNA was treated with thermosensitive alkaline phosphatase FastAP (EF0651, Fermentas) at 37 °C for 30 min and then at 75 °C for 5 min to inactivate FastAP.
The fragmented mRNA was further incubated with ATP and T4 polynucleotide kinase (EK0032, Fermentas) at 37 °C for 1 h and subsequently purified. Ligation of the RNA 3′ adapter (RA3, part # 15013207, Illumina) was done using T4 RNA Ligase 2, truncated K227Q (M0351L, New England Biolabs Inc) according to the Illumina protocol. The ligation step was followed by RNA purification as described above to remove unligated 3′ adapters. The RNA 5′ adapter (RA5, part #15013205, Illumina) was ligated using T4 RNA ligase (EL0021, Fermentas) according to the Illumina protocol, and the RNA was then purified to remove unligated 5′ adapters. Complementary DNA (cDNA) was synthesized using the RNA RT Primer (RTP, part #15013981, Illumina) and SuperScript III (18080044, Invitrogen) as per the Illumina protocol. Libraries were amplified for 14 cycles of PCR using forward (RNA PCR Primer (RP1), part #15005505 Illumina), and reverse (Illumina PCR Primer, Index) PCR primers. Reverse PCR primers with different indexes were used to prepare libraries from different samples, thereby enabling multiplexed sequencing. Libraries were sequenced for 51 cycles on an Illumina HiSeq 2000 instrument.

Polysome profiling and Ribo-seq

Polysome profiling and sequencing of ribosome-protected mRNA fragments were performed for three biological replicates (except for Δ rpl6a , for which only two replicates were obtained) according to the protocol described in ref. 61 . In brief, yeast cells were treated for 1 min with 100 μg per ml of cycloheximide (CHX) to stabilize the translating ribosomes on mRNA. Cells were harvested by vacuum filtration, flash frozen in liquid nitrogen, and later lysed under cryogenic conditions in lysis buffer (20 mM Tris HCl, pH 7.4, 150 mM NaCl, 5 mM MgCl 2 , 1 mM DTT, 1% Triton X-100, and 100 μg/ml cycloheximide) using a freezer mill (Spex). The lysate was centrifuged at 3000× g for 3 min at 4 °C and then at 10,000× g for 5 min at 4 °C to clarify it.
For ribosome profiling, a fraction of lysate equivalent to A 260 = 10 was treated with 6 μl of RNase I (100 U per μl, Ambion) for 45 min at room temperature (RT) with gentle agitation, and RNase I was then inactivated by addition of 10 µl of SuperaseIn (20 U per µl). Notably, for polysome profiling, the lysate was not treated with RNase I but taken directly for subsequent steps. A 7–47% linear sucrose gradient was prepared in 50 mM Tris-HCl (pH = 7.5), 50 mM NH4Cl, 12 mM MgCl2, 0.5 mM DTT, and 100 µg per ml CHX using a Gradient Master instrument (Biocomp) according to the manufacturer’s instruction. The samples were loaded on the precooled linear gradient and centrifuged at 35,000 rpm for 3 h at 4 °C in a TH-641 rotor (Thermo Scientific). For ribosome profiling only, different fractions of the gradient were collected in 1% SDS solution using a Density Gradient Fractionation System (Brandel) with a pump speed of 0.75 ml per min and a collection time of 32 s per tube, then flash frozen. The appropriate fractions that contain monosomes were processed for footprint library preparation according to ref. 62 . In brief, RNA was isolated from the collected monosome fractions with the phenol chloroform method. RNA fragments of appropriate size (28–30 nt) were selected by running samples on a 15% polyacrylamide denaturing TBE-Urea gel and visualized by SYBR Gold (Life Technologies). Size-selected RNA was dephosphorylated by T4 polynucleotide kinase (PNK, New England Biolabs) treatment for 1 h at 37 °C. PNK was heat inactivated and RNA was purified using the phenol chloroform method and overnight precipitation of RNA in ethanol. A preadenylated 3′ linker was ligated to the dephosphorylated RNA by using T4 RNA ligase 2, truncated (New England Biolabs). The ligation reaction was carried out for 4 h at 22 °C. The ligation reaction was run on a 15% polyacrylamide denaturing TBE-Urea gel to separate and purify the ligated from the unligated product and from unused 3′ linker.
Gel-purified, ligated RNA was reverse transcribed by Superscript III (Invitrogen) for 30 min at 48 °C in a total reaction volume of 20 µl. After reverse transcription, the RNA was hydrolyzed by adding 2.2 µl of 1 N NaOH solution and incubating for 20 min at 98 °C. First-strand cDNA was further gel purified by electrophoresis on a 15% polyacrylamide denaturing TBE-Urea gel and circularized by incubating with CircLigase II ssDNA Ligase (Epicentre) for 60 min at 60 °C, followed by inactivation of CircLigase by heating at 80 °C for 10 min. Thereafter, circular cDNA was PCR-amplified and then the amplified products were gel purified on an 8% native polyacrylamide gel. The prepared library was sequenced on an Illumina platform.

Quantitative proteomics

Quantitative proteomics was performed for three biological replicates according to the protocol described below in a step-by-step manner.

Sample preparation

For each sample, 10^8 yeast cells were resuspended in 100 µl lysis buffer (2% sodium deoxycholate, 100 mM ammonium bicarbonate), sonicated for 2 × 10 s using a vial tweeter and spun down. A small aliquot of the supernatant was taken to determine the protein concentration using a BCA assay (Thermo Fisher Scientific). Aliquots containing 50 µg proteins were taken from each sample, respectively, reduced with 5 mM TCEP for 15 min at 95 °C and alkylated with 10 mM iodoacetamide for 30 min in the dark at 25 °C. After quenching the reaction with 12 mM N-acetyl-cysteine the samples were diluted with 100 mM ammonium bicarbonate buffer to a final DOC concentration of 1%. Proteins were digested by incubation with sequencing-grade modified trypsin (1/50, w/w; Promega, Madison, WI, USA) overnight at 37 °C. Then, the samples were acidified with 2 M HCl to a final concentration of 50 mM, incubated for 15 min at 37 °C and the precipitated detergent removed by centrifugation at 10,000× g for 15 min.
Subsequently, peptides were desalted on C18 reversed-phase spin columns according to the manufacturer’s instructions (Microspin, Harvard Apparatus) and dried under vacuum. The dried peptide samples were subsequently labeled with isobaric tags (TMT 6-plex, Thermo Fisher Scientific) according to the manufacturer’s instructions. The pooled sample was again desalted on C18 reversed-phase spin columns according to the manufacturer’s instructions (Macrospin, Harvard Apparatus) and dried under vacuum. In total, three pooled TMT samples, each containing one biological replicate of all six conditions, were generated.

Off-Gel electrophoresis

The TMT-labeled samples were resolubilized to a final concentration of 1 mg/ml in Off-Gel electrophoresis buffer containing 6.25% glycerol and 1.25% IPG buffer (GE Healthcare). The peptides were separated on a 12 cm pH 3–10 IPG strip (GE Healthcare) with a 3100 OFFGEL fractionator (Agilent) as previously described 63 using a protocol of 1 h rehydration at maximum 500 V, 50 μA and 200 mW. Peptides were separated at maximum 8000 V, 100 μA and 300 mW until 20 kVh were reached. Subsequently, neighboring fractions were pooled (1&2, 3&4 …11&12) and the six peptide fractions thus generated were desalted using C18 reversed-phase columns according to the manufacturer’s instructions (Microspin, Harvard Apparatus), dried under vacuum and subjected to LC-MS/MS analysis.

Mass spectrometric analysis

The setup of the μRPLC-MS system was as described previously 64 . Chromatographic separation of peptides was carried out using an EASY nano-LC 1000 system (Thermo Fisher Scientific), equipped with a heated RP-HPLC column (75 μm × 50 cm) packed in-house with 1.9 μm C18 resin (Reprosil-AQ Pur, Dr. Maisch).
Aliquots of 1 μg total peptides were analyzed per LC-MS/MS run using a linear gradient ranging from 95% solvent A (0.15% formic acid, 2% acetonitrile) and 5% solvent B (98% acetonitrile, 2% water, 0.15% formic acid) to 30% solvent B over 180 min at a flow rate of 200 nl/min. Mass spectrometry analysis was performed on a dual pressure LTQ-Elite Orbitrap mass spectrometer equipped with a nanoelectrospray ion source (both Thermo Fisher Scientific). Each MS1 scan was followed by high-collision-dissociation (HCD, both acquired in the Orbitrap) of the 10 most abundant precursor ions with dynamic exclusion for 60 s. Total cycle time was ~2 s. For MS1, 10^6 ions were accumulated in the Orbitrap cell over a maximum time of 300 ms and scanned at a resolution of 60,000 FWHM (at 400 m/z). MS2 scans were acquired at a target setting of 50,000 ions, accumulation time of 100 ms and a resolution of 15,000 FWHM (at 400 m/z). Singly charged ions and ions with unassigned charge state were excluded from triggering MS2 events. The normalized collision energy was set to 35%, and one microscan was acquired for each spectrum. Database searching and protein quantification The acquired raw-files were converted to the mascot generic file (mgf) format using the msconvert tool (part of ProteoWizard, version 3.0.4624 (2013-6-3)). Using the MASCOT algorithm (Matrix Science, Version 2.4.0), the mgf files were searched against a decoy database containing normal and reverse sequences of the predicted SwissProt entries of Saccharomyces cerevisiae ( , release date 20/10/2014) and commonly observed contaminants (in total 13,386 protein sequences) generated using the SequenceReverser tool from the MaxQuant software (Version 1.0.13.13). The precursor ion tolerance was set to 10 ppm and fragment ion tolerance was set to 0.01 Da.
The search criteria were set as follows: full tryptic specificity was required (cleavage after lysine or arginine residues unless followed by proline), 2 missed cleavages were allowed, carbamidomethylation (C), TMT6plex (K and peptide N-terminus) were set as fixed modifications and oxidation (M) as a variable modification. Next, the database search results were imported to the Scaffold Q+ software (version 4.3.3, Proteome Software Inc., Portland, OR, USA) and the protein false identification rate was set to 1% based on the number of decoy hits. Specifically, peptide identifications were accepted if they could be established at >93.0% probability to achieve an FDR <1.0% by the scaffold local FDR algorithm. Protein identifications were accepted if they could be established at >5.0% probability to achieve an FDR <1.0% and contained at least 1 identified peptide. Protein probabilities were assigned by the Protein Prophet program 65 . Proteins that contained similar peptides and could not be differentiated based on MS/MS analysis alone were grouped to satisfy the principles of parsimony. Proteins sharing significant peptide evidence were grouped into clusters. Acquired reporter ion intensities in the experiments were employed for automated quantification and statistical analysis using a modified version of our in-house developed SafeQuant R script 66 . In brief, reporter ion intensities were corrected for isotopic impurities according to the manufacturer’s instructions. Intensities for each peptide and protein identification were summed, globally normalized across all acquisition runs and employed for ratio calculation and statistical analysis. Additionally, ratio distortion was controlled using spiked-in protein calibrants as recently described 67 . The correlation between biological replicates was 0.872–0.911 (median 0.886), indicating that our estimates of protein abundance have good reproducibility.
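The summation, global-normalization and ratio-calculation step described above can be sketched roughly as follows. This is a minimal illustration only, not the actual SafeQuant implementation: the function name and the sum-based channel scaling are assumptions.

```python
import numpy as np

def quantify(reporter_intensities):
    """Illustrative sketch of TMT reporter-ion quantification.

    reporter_intensities: dict mapping protein -> list of per-channel
    summed reporter ion intensities (one entry per TMT channel).
    Returns log2 ratios of each channel against the first (reference) channel.
    """
    proteins = sorted(reporter_intensities)
    mat = np.array([reporter_intensities[p] for p in proteins], dtype=float)
    # Global normalization (assumed here as total-intensity scaling): scale
    # each channel so its summed intensity matches the mean total across channels.
    totals = mat.sum(axis=0)
    mat = mat * (totals.mean() / totals)
    # Ratio calculation against the first channel as reference.
    log2_ratio = np.log2(mat / mat[:, [0]])
    return dict(zip(proteins, log2_ratio))
```

Per-protein ratios from such a matrix would then feed the statistical testing step; the actual pipeline additionally corrects for isotopic impurities and calibrant-based ratio distortion, which this sketch omits.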
Translation assay To quantify nascent protein synthesis as a measure of global translation, we followed the protocol of non-radioactive metabolic labelling assay kit “Click-iT HPG Alexa Fluor 488 Protein Synthesis Assay Kit” (Thermo Fisher Scientific). The method is based on the incorporation of L-HPG, an amino acid analog of methionine containing an alkyne moiety, and Alexa Fluor 488 azide. The signal intensity of incorporated HPG-Alexa Fluor 488 was measured by flow cytometry. Mean fluorescence intensities were computed from 10,000–50,000 cells of each strain and then normalized by the mean fluorescence intensity of the wild-type strain. We integrated the MET15 gene in all the strains used for the translation assay to allow the growth of cells in medium lacking methionine, as BY4741 strains are auxotrophic for this amino acid. Replicative lifespan assay Yeast replicative lifespan assays were performed as described previously 68 . In brief, yeast was grown on YPD agar plates at 30 °C and virgin daughter cells were isolated. Thereafter, virgin daughters were allowed to grow and divide while daughter cells were microdissected using a conventional manual microdissector (MSM, Singer Instruments) until mother cells stopped dividing. The differences in mean replicative lifespan among strains were compared with the Wilcoxon Rank-sum test. Western blot Yeast cells were lysed in 300 µl RIPA buffer containing protease inhibitor and phosphatase inhibitor as described in mRNA sequencing protocol above. 15–25 µg total protein was resolved on 10% SDS PAGE. For probing expression of p-eIF2α and Pgk1 with the respective antibodies used at 1:1000 dilution (Cell Signaling #3597 and Thermo Fisher Scientific #459250), we followed the protocol from Cell Signaling for transfer, blocking, incubation, washing, and developing the membrane. 
As positive control, we included samples from the wild-type strain treated for 30 min with rapamycin (200 ng per ml) and with the equivalent volume of solvent (ethanol) that we used to dissolve the rapamycin. ChIP-seq The ChIP protocol was adapted from ref. 69 . 500 ml of yeast cells grown to mid-log phase were crosslinked in fixing buffer (50 mM HEPES pH 7.5, 1 mM EDTA pH 8.0, 0.5 mM EGTA pH 8.0, 100 mM NaCl, and 1% formaldehyde) for 10 min with continuous rocking at RT, and then quenched with 125 mM glycine for 5 min. Cells were washed three times with cold PBS and collected. Nuclei were isolated and lysed to obtain crosslinked chromatin. Simultaneously, the antibody was coupled with protein G magnetic beads (10004D, Thermo Fisher Scientific) by incubating 100 μl of protein G beads with 10 μg of anti-GST antibody (27-4577-01, GE Healthcare Life Sciences) for a minimum of 1 h at RT with continuous rotation. A probe sonicator was then used in cold conditions to reduce heating, for 6 cycles of 30 s pulse-on at an amplitude value of 60 and 1 min 15 s pulse-off, to obtain chromatin fragments of 100–500 bp, followed by centrifugation at 20,000× g for 10 min at 4 °C to remove nuclear debris. Further, 3% of the chromatin from each sample was kept as input control and an equal amount (~0.75–1 mg) of chromatin was incubated with magnetic beads-coupled antibody at 4 °C overnight, with continuous rotation. Immuno-complexes were washed with 1 ml of wash buffers as described in the original protocol. Samples of washed immuno-complexes along with the input were further treated with RNase and then with proteinase K followed by overnight reverse crosslinking at 65 °C with continuous shaking at 1400 rpm, in a thermoblock with heating lid. DNA was purified using Ampure (Beckman Coulter) beads as detailed in ref. 69 . Libraries of ChIPed and input DNA were prepared according to the instruction manual of NEBNext ChIP-Seq Library Prep Reagent Set from Illumina.
In brief, end repair of input and ChIPed DNA was done by incubating with T4 DNA polymerase, Klenow fragment and T4 PNK enzyme at 20 °C for 30 min. The reaction was purified using Ampure beads according to the instruction manual. An A nucleotide overhang at the 3′ end was produced by treating the end-repaired DNA with dATP and Klenow Fragment (3′ → 5′ exo – ) at 37 °C for 20 min followed by DNA purification. Double-stranded DNA adapters were ligated to the dA overhang DNA by T4 DNA ligase reaction at 37 °C for 30 min, and the DNA was purified and size-selected as described in the instruction manual. Size-selected DNA was PCR-amplified for 16 cycles using NEBNext High-Fidelity 2X PCR Master Mix with Illumina universal forward primer and indexed reverse primer, which enabled multiplexing of samples for sequencing. Amplified DNA was finally purified and sequenced on an Illumina Hiseq2500 instrument. uORF analyses Yeast mRNAs with annotated uORFs with ATG, GTG, or TTG initiation codons were retrieved from ref. 31 . Ribo-seq 5′ UTR and CDS library-normalized counts were aggregated across the multiple replicates for the same strain to maximize the number of genes amenable for the follow-up analyses. Then, for all the genes containing more than 50 reads in both the 5′UTR and CDS for any given RPKO and wild-type strain, the ratio of 5′ UTR-to-CDS counts was calculated. We have also estimated the uORF-to-CDS ratio, counting only reads mapped within a predicted uORF (i.e., an open reading frame within the 5′ UTR that starts with ATG, GTG, or TTG as initiation codon), rather than in the entire 5′ UTR. mRNA- and Ribo-seq analysis mRNA-seq reads were first subjected to 3′ adapter trimming (5′-TGGAATTCTCGGGTGCCAAGG-3′) and quality control (reads shorter than 20 nucleotides or of low quality, i.e., for which over 10% of the nucleotides had a PHRED quality score <20, were discarded) using the FASTX-toolkit ( ).
Segemehl 70 (version 0.1.7–411) was used to map reads to the yeast transcriptome, allowing a minimum mapping accuracy of 90%. CDS annotations were taken from the yeast database ( ) and 5′/3′UTR annotations from ref. 71 . For Ribo-seq, the procedure was similar to the one used above with only two alterations: (1) the sequence of the 3′ adapter that was used and trimmed was 5′-CTGTAGGCACCATCAAT-3′; (2) only reads mapped to coding regions were counted toward differential expression analysis. The Ribo-seq reads had the expected length (28–32 nt) and for each read length, the relative location of the P site with respect to the read start was inferred as the value for which the correct position of the start codon and the 3-nt periodicity was most apparent (the number of reads at the first frame being larger than at both other frames). Only read lengths showing the expected 3-nt periodicity along the CDS were considered for further analyses. Note that beyond the enrichment at translation start, no strong bias along the open reading frame was observed (Supplementary Fig. 10 ). For both types of data, transcript counts were calculated based on uniquely mapped reads, for all considered read lengths, and used for differential expression with DESeq2 72 . The fold-change in translation efficiency was calculated by dividing the Ribo-seq fold-change by the mRNA-seq fold-change. Up/down-regulation was considered significant when the mRNA or RPF abundance changed more than 2-fold between strains and when the corresponding False Discovery Rate was lower than 0.01. Three biological replicates were obtained for each strain and each sample type (with the exception of Δ rpl6a , which only has two biological replicates for the Ribo-seq data) and were used for the estimation of differential expression. Pearson correlations between replicates of 0.966–0.999 (median of 0.981) for RNA-seq and 0.992–0.9997 (median of 0.998) for Ribo-seq indicate that our data have very good reproducibility.
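The translation-efficiency calculation and the significance criteria described above can be sketched as follows. This is an illustrative re-implementation, not the authors' code: the function name is hypothetical, and the per-gene fold-changes and FDRs would come from DESeq2.

```python
def translation_efficiency_change(ribo_fc, mrna_fc, ribo_fdr, mrna_fdr,
                                  fc_cutoff=2.0, fdr_cutoff=0.01):
    """Fold-change in translation efficiency for one gene, plus a flag for
    whether the gene passes the cutoffs stated in the text: more than a
    `fc_cutoff`-fold change in mRNA or RPF abundance at FDR < `fdr_cutoff`.
    """
    # TE fold-change = Ribo-seq fold-change divided by mRNA-seq fold-change.
    te_fc = ribo_fc / mrna_fc

    def changed(fc, fdr):
        # "More than 2-fold" in either direction, with a significant FDR.
        return (fc > fc_cutoff or fc < 1.0 / fc_cutoff) and fdr < fdr_cutoff

    return te_fc, changed(mrna_fc, mrna_fdr) or changed(ribo_fc, ribo_fdr)
```

For example, a gene with a 4-fold RPF increase at FDR 0.001 but an unchanged mRNA level would be flagged as regulated with a TE fold-change of 4.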
ChIP-seq analysis ChIP-seq reads were first subjected to 3′ adapter trimming (5′-TGGAATTCTCGGGTGCCAAGG-3′) and quality control using the FASTX-toolkit ( ). Reads were then mapped to the yeast genome (sacCer3) using Segemehl 70 (version 0.1.7–411), allowing a minimum mapping accuracy of 90%. ChIP peaks were found with MACS 73 (version 1.4.2) using the mapped reads from input and Gcn4-ChIP samples as follows: macs14 -t gcn4_chip.sam -c input.sam -f SAM -g 2e7 -w -n gcn4.chip.output We used bedtools to find peaks whose summits overlapped with 1 kb regions upstream of the annotated CDS starts of yeast and thereby annotated the Gcn4 targets. Peaks that could be unambiguously associated with only one gene were selected for further analyses, and the respective genes were reported as Gcn4 targets. Finally, we used MEME 74 to find enriched sequence motifs embedded in the ChIP peaks ([−50,50] nucleotides around peak summits) associated with genes that were found either up or downregulated upon Gcn4 overexpression using the following command: meme chip_peaks.fa -dna -maxsize 60000 -mod zoops -nmotifs 3 -minw 6 -maxw 20 -revcomp Motif scoring in promoter regions We used Patser (version 3b; ) parameterized with the position-dependent frequency matrix of Gcn4 75 and Rap1 38 to predict binding sites for these two transcription factors in promoter regions (i.e., regions located 1 kb upstream of start codons). As the score of a given promoter we took the highest score of a predicted binding site in the entire promoter region. Gene-set analyses For each gene set present in KEGG ( ) containing at least 10 genes, we computed the mean fold-change across all the genes between a given RPKO strain and the wild-type strain. The list of RP genes was retrieved from the KEGG database (sce03010), whereas the initiation and elongation factors were retrieved from the GO repository (GO:0006413 and GO:0006414, respectively) in the yeast database ( ).
Gcn4 literature targets were retrieved from the yeast database ( ). KEGG enrichment analyses for up and downregulated genes were performed with GeneCodis ( ) using the default parameters and all yeast genes as the reference list. Data availability All sequencing data have been deposited to Gene Expression Omnibus (GEO) under accession number GSE85591. The MS proteomics data have been deposited to ProteomeXchange with the identifier PXD004760. | To understand and control aging is the aspiration of many scientists. Researchers at the Biozentrum of the University of Basel have now discovered that the protein Gcn4 decreases protein synthesis and extends the life of yeast cells. Understanding how individual genes affect lifespan opens new ways to control the aging process and the occurrence of aging-related diseases. The results of this study have recently been published in Nature Communications. For about one hundred years it has been known that nutrient restriction and moderate stress can significantly prolong life. The researchers led by Prof. Mihaela Zavolan and Prof. Anne Spang, both at the Biozentrum of the University of Basel, have discovered how the transcription factor Gcn4, a protein that regulates the expression of many genes, extends the life of baker's yeast Saccharomyces cerevisiae. In various stress situations, the cells stimulate Gcn4 production which leads to reduced biosynthesis of new proteins and increased yeast lifespan. Transcription factor represses protein synthesis It has long been known that protein synthesis – also known as translation – plays an important role in aging. Inhibition of protein synthesis, caused for example by reduced nutrient intake, can have a positive effect on the life expectancy of diverse organisms such as yeast, flies, worms or fish. Reducing the ribosomes, the protein factories of the cell, can also considerably extend the lifespan of yeast cells. 
What these cellular stresses have in common is that they activate the production of Gcn4. However, how this protein promotes longevity has remained unclear. In their study, the team working with Zavolan exposed yeast cells to different stress conditions, measured their lifespan, protein synthesis rates and Gcn4 expression. "We observed that the level of the Gcn4 protein was positively correlated with the longevity of yeast cells," says Mihaela Zavolan, Professor of Computational and Systems Biology. "However, we wanted to understand why. We have now shown for the first time that it is the transcriptional suppression of genes that are important for cellular protein synthesis by Gcn4 that seems to account for its lifespan extension effect. As the translation machinery is limiting, the energy-intensive production of new proteins is overall dampened." From the yeast cell's point of view, this is an advantage: this enables them to live about 40 percent longer than usual. The transcription factor Gcn4 is conserved in over 50 different organisms, including mammals, and it likely plays a significant role in the aging of these organisms as well. Zavolan's group will now investigate whether the mammalian homolog similarly slows aging and extends lifespan by regulating protein synthesis genes in response to nutrients and stress. | 10.1038/s41467-017-00539-y |
Chemistry | Chemists discover new reactivity of strained molecules | Roman Kleinmans et al, Intermolecular [2π+2σ]-photocycloaddition enabled by triplet energy transfer, Nature (2022). DOI: 10.1038/s41586-022-04636-x Journal information: Nature | http://dx.doi.org/10.1038/s41586-022-04636-x | https://phys.org/news/2022-03-chemists-reactivity-strained-molecules.html | Abstract For more than one century, photochemical [2+2]-cycloadditions have been used by synthetic chemists to make cyclobutanes, four-membered carbon-based rings. In this reaction, typically two olefin subunits (two π -electrons per olefin) cyclize to form two new C–C σ -bonds. Although the development of photochemical [2+2]-cycloadditions has made enormous progress within the last century, research has been focused on such [2 π +2 π ]-systems, in which two π -bonds are converted into two new σ -bonds 1 , 2 . Here we report an intermolecular [2+2]-photocycloaddition that uses bicyclo[1.1.0]butanes as 2 σ -electron reactants 3 , 4 , 5 , 6 , 7 . This strain-release-driven [2 π +2 σ ]-photocycloaddition reaction was realized by visible-light-mediated triplet energy transfer catalysis 8 , 9 . A simple, modular and diastereoselective synthesis of bicyclo[2.1.1]hexanes from heterocyclic olefin coupling partners, namely coumarins, flavones and indoles, is disclosed. Given the increasing importance of bicyclo[2.1.1]hexanes as bioisosteres—groups that convey similar biological properties to those they replace—in pharmaceutical research and considering their limited access 10 , 11 , there remains a need for new synthetic methodologies. Applying this strategy enabled us to extend the intermolecular [2+2]-photocycloadditions to σ -bonds and provides previously inaccessible structural motifs. Main [2+2]-Photocycloadditions belong to the most fundamental organic transformations and are an established tool for the straightforward synthesis of strained cyclobutane rings 1 , 2 . 
The formation of two new C–C σ -bonds and up to four stereogenic centres in a single step with perfect atom economy renders these processes highly attractive 12 . The first reported photochemical [2+2]-cycloaddition—a solid-state photodimerization of thymoquinone mediated by exposure to sunlight—was published by Liebermann as early as 1877 (ref. 13 ). Additional selected milestones in the history of photochemical [2+2]-cycloadditions including intramolecular and diastereoselective variants are depicted in Fig. 1a (refs. 14 , 15 , 16 , 17 ). In particular, the development of cross-selective intermolecular protocols has gained tremendous attention 2 , 18 . In contrast to intramolecular or dimerization approaches, these transformations allow modular variation of the olefin components and provide rapid access to molecular complexity. The general concept of cross-selective photochemical [2+2]-cycloadditions consists of selective activation of olefin A , which is then able to react with olefin B (Fig. 1b ). From a mechanistic perspective, the most relevant strategy in this regard is the excitation of one substrate into its triplet state (T 1 ). This mainly stems from the comparably long lifetime of such triplet excited states, leading to an increased probability for productive intermolecular interaction 1 . The triplet state of a substrate can be reached either by direct excitation and subsequent intersystem crossing from the excited singlet state (S 1 ) or by indirect population, mediated by triplet–triplet energy transfer (ENT) from a suitable photosensitizer 8 . A major drawback of direct excitation is that most organic substrates require harsh, ionizing ultraviolet (UV) light irradiation, which often results in undesired and competitive side reactions. Therefore, the synthetic community has recently focused on the use of milder visible light to enable ENT processes to overcome these selectivity and activation issues 19 , 20 , 21 , 22 . Fig. 
1: Background and motivation of the present work. a , Selected milestones in the history of photochemical [2+2]-cycloadditions. b , Mechanistic outline of stepwise [2+2]-photocycloadditions by triplet excited state olefins and general selectivity issues. c , Intramolecular [2 π +2 σ ]-valence isomerization of strained tricyclic systems, seminal reports by Prinzbach in the 1960s. d , Synthesis of bicyclo[2.1.1]hexanes enabled by ENT-mediated intermolecular [2 π +2 σ ]-photocycloaddition undertaken in this work. ISC, intersystem crossing. Full size image Inspired by seminal reports by Prinzbach and coworkers from the 1960s about intramolecular valence [2 π +2 σ ]-photoisomerization of tricyclic systems (Fig. 1c ) 23 , 24 , 25 , we questioned whether a strain-release approach 26 , 27 could enable an intermolecular [2 π +2 σ ]-photocycloaddition in which an excited π -bond in its triplet state reacts with a σ -bond to form two new C–C σ -bonds (Fig. 1d ). We envisioned that bicyclo[1.1.0]butanes (BCBs) 4 , 5 , 6 would be ideally suited for our proposed [2 π +2 σ ]-photocycloaddition as they would enable the construction of bicyclo[2.1.1]hexane (BCH) products by simple reaction with an olefin component 28 , 29 . Three-dimensional architectures with conformationally restricted C( sp 3 )-rich skeletons such as BCHs are becoming increasingly relevant in bioisosterism 30 , 31 , and are thus extremely valuable for medicinal chemistry. However, the lack of practical synthetic access to these frameworks, especially for highly substituted examples 10 , 11 , inherently limits their application in drug discovery. Consequently, the development of new methodologies towards their simple and modular synthesis is of highest interest to the synthetic community. 
With this in mind, we initiated our investigations by evaluating a suitable olefin coupling partner (for unsuccessful olefin coupling partners see Supplementary Table 2 ) in combination with dibenzyl amide-substituted BCB 2a in the presence of iridium-based triplet sensitizer [Ir(dF(CF 3 )ppy) 2 (dtbbpy)]PF 6 ( Ir-F , triplet excited state energy E T = 61.8 kcal mol –1 ) 8 under visible light irradiation with blue light-emitting diodes (LEDs) in MeCN. Fortunately, when using coumarin ( 1a , E T = 62.1 kcal mol –1 ) 32 as a coupling partner the desired [2 π +2 σ ]-cycloaddition product 3a was formed. Intrigued by this finding, we began with the optimization studies ( Supplementary Table 1 ). The use of simple thioxanthone triplet sensitizer ( TXT , E T = 65.5 kcal mol –1 ) 33 not only improved the yield but also allowed us to perform the reaction metal-free. In control experiments no product formation was observed in the absence of photocatalyst or light. Note that during the optimization and substrate scope analysis only formation of the cis -diastereomer was observed. Having established the optimized reaction conditions, we proceeded with the investigation of the substrate scope (Fig. 2 ), starting with an array of diversely functionalized BCBs. An attached Weinreb amide, which provides a potential handle for further downstream modifications, and a morpholine amide were successful substrates in our [2 π +2 σ ]-protocol and the respective products 3b and 3c were produced in high yields. Moreover, different alkyl esters (ethyl, iso -butyl and benzyl) are suitable as shown by products 3d–3f . Aryl ( 3g ) and alkyl ketones ( 3h ) are compatible as well and gave the products in good yields. BCB-boronic acid pinacol ester (Bpin) was converted to the desired product 3i in a synthetically useful yield, offering many opportunities for further diversifications 34 . 
A BCB without an attached electron-withdrawing group was reactive as well, and the corresponding product 3j was obtained in moderate yield as a 1:1 mixture of diastereomers. Even a 1,3-disubstituted BCB was compatible, leading to highly decorated BCH 3k with two opposing quaternary carbon centres at the bridgehead positions. Unfortunately, sulfones currently display a limitation of this methodology. As Weinreb amide BCB 2b provided the highest yield, it was selected as standard substrate for further studies. A reaction condition-based sensitivity assessment was performed to ensure high levels of reproducibility 35 . Whereas most parameters (temperature ( T ), concentration ( c ), water and oxygen) only had a negligible effect, high light intensity ( I ) was crucial for an optimal reaction outcome. Fig. 2: Substrate scope and sensitivity assessment. Standard reaction conditions: olefin 1 (5.0 equiv.), BCB 2 (0.20 mmol, 1.0 equiv.), thioxanthone (2 mol%) in MeCN (0.05 M) at room temperature (rt) under irradiation with blue LEDs ( λ max = 405 nm) for 20 h. Isolated yields are indicated. 1 H NMR yields were determined with CH 2 Br 2 as internal standard and are reported in parentheses. See Supplementary Information for full experimental details. a Switch of limiting reagent: olefin 1 (0.20 mmol, 1.0 equiv.) and BCB 2 (5.0 equiv.). b diastereomeric ratio (d.r.) was determined by 1 H NMR analysis of the crude reaction mixture. c Reaction performed on 0.10 mmol scale. d Extended reaction time, see Supplementary Information . X-ray, crystal structure (see Supplementary Information ). Full size image Next, we turned our attention to exploring the coumarin substrate scope. Coumarins with methyl substitution on the aryl ring, and even a sterically demanding substrate, were successfully tested and the products 3l–3n were obtained in good yields. Extension of the π -system to a benzocoumarin moiety was also tolerated as shown by product 3o . 
Moreover, various halogenated (Br, Cl and F) coumarins in varying substitution patterns were well tolerated ( 3p–3t ). In addition, functional groups such as electron-donating alkoxy groups ( 3u , 3v ), mesylated ( 3w ) and acetylated ( 3x ) alcohols, ketones ( 3y ), esters ( 3z , 3aa , 3ac ), tethered olefins ( 3aa ) and nitriles ( 3ab ) are compatible with this protocol. Notably, substitution on the double bond leads to the construction of all-carbon quaternary centres ( 3y–3ac , 3al ). Furthermore, a 2-quinolone derivative, a nitrogen analogue of coumarin, reacted in good yields to furnish the respective product 3ac . The potential of this strategy was further illustrated by use of different flavones ( 3ad–3af ) and indoles ( 3ag–3al ) as heterocyclic olefin coupling partners (Fig. 3a ) 21 , 36 . Those products show a particularly high structural and functional group density as highlighted by product 3al bearing three contiguous fully substituted carbon centres. Fig. 3: Extended substrate scope and product diversification. a , Flavone and indole scope. b , Aza Paternò–Büchi variant of our [2 π +2 σ ]-photocycloaddition methodology. c , Product diversification of boronic acid ester 6 . d , Ring-opening experiments with piperidine and methanol as nucleophiles and reduction with LiAlH 4 . a Standard reaction conditions (see Supplementary Information for full experimental details). b Extended reaction time, see Supplementary Information . c Reaction performed in MeCN/CH 2 Cl 2 (1/1, 0.05 M). d d.r. was determined by 1 H NMR analysis of the crude reaction mixture. e d.r. was determined by 1 H NMR analysis of the isolated product. DMAP, 4-(dimethylamino)pyridine; NBS, N -bromosuccinimide; THF, tetrahydrofuran. 
Full size image We wondered whether an aza Paternò–Büchi-type reaction was feasible with our [2 π +2 σ ]-approach that would deliver rapid access to 2-aza bicyclo[2.1.1]hexane scaffolds 37 , which are considered as pyrrolidine or proline bioisosteres depending on the substitution pattern 38 . Recently, the Schindler group reported an intermolecular aza Paternò–Büchi reaction with 2-isoxazoline-3-carboxylates enabled by ENT 39 . Fortunately, when applying substrate 4 under our standard reaction conditions the desired 2-aza bicyclo[2.1.1]hexane product 5 was obtained in excellent yield (Fig. 3b ). As previously mentioned, the Bpin-containing products offer straightforward opportunities for product diversification. Thus, as representative examples, product 6 ( see Supplementary information ) was oxidized to the corresponding alcohol 7a and was applied in C( sp 3 )–C( sp 2 ) ( 7b ) and C( sp 3 )–C( sp 3 ) ( 7c ) couplings (Fig. 3c ). This variable C–C bond formation at the bridgehead position, forging quaternary carbon centres, showcases the synthetic utility of this strategy, as the corresponding BCBs would be difficult to access. Moreover, we were able to open the phenolic lactone moiety of product 3b (Fig. 3d ) both by aminolysis with piperidine ( 8a ) and transesterification with methanol ( 8b ). The products were obtained in high yields and exclusively as the cis -products. These ring-opening experiments highlight the cis -selectivity of our developed methodology, as [2+2]-photocycloadditions with analogous acyclic olefins would usually lead to mixtures of diastereomers 1 . Finally, reduction of 3b with excess LiAlH 4 gave hemiacetal 8c on intramolecular cyclization of the phenol with the intermediary aldehyde. Next, we moved forward to gain more detailed insights into the mechanism of our developed [2 π +2 σ ]-photocycloaddition reaction. UV/Vis spectroscopy of the reaction components (Fig. 
4a ) revealed that TXT is the only light absorbing species around a wavelength of λ = 405 nm, eliminating the possibility of direct excitation of coumarin ( 1a ) or BCB 2b under standard reaction conditions. Furthermore, Stern–Volmer quenching studies clearly demonstrated that coumarin ( 1a ) quenches the excited photocatalyst (here Ir-F ), whereas BCB 2b showed no detectable quenching (Fig. 4b ). In addition, a direct excitation experiment with UV light ( λ max = 365 nm) of the standard reaction gave 14% yield after 8 days of reaction time (Fig. 4c ). Considering that only coumarin ( 1a ) absorbs light in this wavelength range (Fig. 4a ), these experiments further hint towards coumarin ( 1a ) being the excited species. To rule out thermal background reactivity, the reaction mixture was heated to 100 °C in the absence of the photocatalyst 28 , 29 . However, no product formation was observed (Fig. 4c ). Additionally, the quantum yield was determined to be Φ = 0.12 (Fig. 4d ), and a series of photocatalysts with different triplet energies was tested (Fig. 4e ). As a general trend, it can be recognized that the yield correlates with increasing triplet energy, whereas no trend was identified across the redox potentials. On the basis of these findings and supported by density functional theory (DFT) calculations, the proposed stepwise mechanism is outlined in Fig. 4f . Fig. 4: Mechanistic studies. a , UV/Vis spectroscopy shows that thioxanthone exclusively absorbs light around λ = 405 nm. a.u., arbitrary units; Vis, visible. b , Stern–Volmer analysis reveals that coumarin quenches photoexcited Ir-F , whereas BCB 2b shows no quenching. c , Irradiation with 365 nm LEDs for 8 days afforded product 3b in 14% yield, whereas heating of the reaction mixture in the absence of photocatalyst delivered no product. No product formation was observed on irradiation with 405 nm blue LEDs. d , Quantum yield of the standard reaction was determined to be Φ = 0.12. 
e , Comparison of various triplet sensitizers reveals that increasing triplet energy correlates with higher yield. Yields were determined by GC-FID analysis. E T and potentials ( E 1/2 ) are literature values ( Supplementary Information ). n.d., not detected; PC , photocatalyst. f , DFT calculations (reaction of 1a with 2b ) and the proposed mechanism. Computational analysis of the proposed mechanism was performed using DFT calculations at the M06-2X/def2-TZVP level. See Supplementary Information for further details. Δ G ‡ , Gibbs free energy of activation. Full size image Visible light excited TXT (T 1 ) sensitizes coumarin ( 1a ) by ENT and the resulting triplet state coumarin approaches BCB 2b to form an exciplex. This exciplex gives rise to the first C–C bond forming event, which determines the regioselectivity. DFT calculations disclose that the transition state in which the α-carbonyl-site of coumarin ( 1a ) engages BCB 2b in the 3-position is thermodynamically most favourable (ΔΔ G ‡ = 2.8 kcal mol –1 ), leading to the observed regioisomer. Intersystem crossing of the resultant triplet 1,5-diradical allows subsequent cis -selective radical–radical recombination as the diastereoselectivity determining step to finally deliver the cycloaddition product ( Supplementary Section 3.9 ). Data availability Crystallographic data are available free of charge under Cambridge Crystallographic Data Centre (CCDC) reference numbers 2145108 ( 3j ), 2150704 ( 3k ), 2120368 ( 3p ), 2145107 ( 3x ), 2120712 ( 3y ), 2120369 ( 3af ), 2120370 ( 3ak ) and 2120371 ( 3al ). All other data are available in the main text or Supplementary Information . | In synthetic organic chemistry, so-called cycloadditions are a particularly important class of reactions. With this type of reaction, ring-shaped molecules can be constructed simply and efficiently by joining ("adding") two compounds that each contain double bonds. A team led by Prof. Dr. 
Frank Glorius from the University of Münster has now succeeded in performing an unconventional cycloaddition in which a carbon-carbon double bond reacts with a carbon-carbon single bond. In double bonds, atoms are connected by two pairs of electrons; in single bonds, only one pair of electrons is involved. The key to success was the use of particularly "strained" single bonds. To enable mild reaction conditions, the chemists used a photosensitizer, a catalyst that drives the reaction using light energy. The study has now been published in the journal Nature. "In addition to its conceptual and mechanistic importance, this method also has a synthetic benefit," explains lead author Roman Kleinmans. "This is because we can use it to build polycyclic, three-dimensional carbon scaffolds that have been difficult or impossible to access. Such three-dimensional architectures are fascinating and play an increasingly important role in medicinal chemistry." The chemistry in detail For more than a century, so-called [2+2] photocycloadditions have been studied and developed. Research has focused specifically on [2π+2π]-systems in which, for example, two double bonds react to form two new single bonds giving a four-membered ring product. The team from Münster has now achieved a breakthrough by realizing this type of reaction using a single bond ("2π+2σ"). The team used a class of compounds with "strained" single bonds: so-called bicyclo[1.1.0]butanes (BCBs). These carbon compounds have a butterfly-like shape, with two triangles joined along the central single bond that look like wings. The internal wing angles (60 degrees each) deviate greatly from the ideal unstrained angles (109 degrees each). The opening of the central single bond releases strain energy, thermodynamically favoring the reaction with the carbon-carbon double bond. Using this strategy, the researchers also succeeded in incorporating a nitrogen atom into the carbon skeleton of the product.
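The regiochemical preference behind these scaffolds was quantified in the paper's DFT analysis as a ΔΔG‡ of 2.8 kcal mol⁻¹ between competing transition states (see above). Transition-state theory converts such a barrier difference into an approximate product ratio; the following is a minimal sketch assuming simple Boltzmann statistics at room temperature and no corrections beyond the free-energy difference itself:

```python
import math

# Free-energy difference between competing transition states (kcal/mol),
# the value quoted in the DFT analysis above.
ddg = 2.8
R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.15     # room temperature in K

# Transition-state theory: relative rate of the two competing pathways
ratio = math.exp(ddg / (R * T))
selectivity = ratio / (1 + ratio)  # fraction of the favoured regioisomer

print(f"major:minor ~ {ratio:.0f}:1 ({selectivity:.1%} favoured isomer)")
```

A 2.8 kcal mol⁻¹ gap at 298 K works out to a ratio on the order of 100:1, i.e. essentially a single observed regioisomer, consistent with the selectivity reported in the study.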
The visible-light-driven "triplet-triplet energy transfer catalysis" allows the reaction to be carried out as mildly as possible, so that irradiation of the reaction with harsh UV light, which is commonly used in chemistry of this type, was not needed. In this mechanism, the catalyst absorbs energy from the irradiated light and transfers it to a suitable substrate. The catalyst returns to the ground state (it is regenerated), and the corresponding molecule is left in an energetically excited state (triplet state). The excited molecule is then able to react with the single bond. "We used a simple organic photosensitizer for this, namely thioxanthone, dispensing with rare and expensive transition metal-based catalysts," emphasizes Frank Glorius. To understand the molecular mechanism of the reaction in more detail, the chemists carried out calculations using "density functional theory." | 10.1038/s41586-022-04636-x |
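The Stern–Volmer quenching studies mentioned in the paper body above rest on a simple linear relation, I₀/I = 1 + K_SV·[Q]: emission of the excited photocatalyst drops as quencher concentration rises, and the slope gives the quenching constant. A sketch of how K_SV is extracted by a least-squares fit; the concentrations, intensities and the K_SV value below are synthetic placeholders, not data from the study:

```python
# Stern-Volmer analysis: I0/I = 1 + Ksv * [Q].
# The data below are synthetic, generated with Ksv = 120 1/M for illustration.
concentrations = [0.00, 0.01, 0.02, 0.03, 0.04]  # quencher [Q] in mol/L
i0 = 1000.0                                      # emission without quencher
intensities = [i0 / (1 + 120.0 * q) for q in concentrations]

# Least-squares slope of (I0/I - 1) versus [Q] through the origin gives Ksv.
x = concentrations
y = [i0 / i - 1 for i in intensities]
ksv = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

print(f"fitted Ksv = {ksv:.1f} 1/M")  # recovers the value used to build the data
```

In the actual experiment, a non-zero slope for coumarin and a flat line for BCB 2b is what identified coumarin as the species quenching the excited photocatalyst.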
Nano | Modelling nonequilibrium nanoscale junctions with steady-state density functional theory | Zhuoling Jiang et al, Prominent nonequilibrium effects beyond the standard first-principles approach in nanoscale electronic devices, Nanoscale Horizons (2021). DOI: 10.1039/d1nh00293g | http://dx.doi.org/10.1039/d1nh00293g | https://phys.org/news/2021-10-nonequilibrium-nanoscale-junctions-steady-state-density.html | Abstract The standard density functional theory (DFT) based first-principles approach has been widely used for modeling nanoscale electronic devices. A recent experiment, however, reported surprising transport properties of thiol-terminated silane junctions that cannot be understood using the standard DFT approach, presenting a severe challenge for the current computational understanding of electron transport at the nanoscale. Using the recently proposed steady-state DFT (SS-DFT) for nonequilibrium quantum systems, we found that in silane junctions, underlying the puzzling experimental observations is a novel type of intriguing nonequilibrium effect that is beyond the framework of the standard DFT approach. Our calculations show that the standard DFT approach is a good approximation of SS-DFT when silane junctions are near equilibrium, but the aforementioned nonequilibrium effects could drive the thiol-terminated silanes far away from equilibrium even at low biases of around 0.2 V. Further analysis suggests that these nonequilibrium effects could generally exist in nanoscale devices in which there are conducting channels mainly residing at the source contact and close to the bias window. These findings significantly broaden our fundamental understanding of electron transport at the nanoscale. 
This article is part of the themed collection: Nanoscale Horizons 2022 Lunar New Year Collection New concepts Understanding bias-induced nonequilibrium effects on the transport properties of nanoscale electronic devices is one of the biggest challenges in computational nanoscience. In this work, with the recently proposed steady-state density functional theory (SS-DFT) for nonequilibrium quantum systems, we predict a novel type of nonequilibrium effect (named ‘nonequilibrium pulling’ in the paper) that could exist in nanoscale devices even at low biases well within the expected linear-response regime. The effects lead to surprising transport phenomena that are beyond the framework of conventional DFT-based methods, which result in the puzzling transport properties of silane junctions reported in a recent experiment. Further analysis points out that the ‘nonequilibrium pulling’ originates from the ‘two-dimensional’ nature of the theory: in SS-DFT, the transport state is a functional of two densities, the total electron density and the current-carrying electron density. When conducting channels exist near the bias window that mainly reside in the source contact, the dimension of the current-carrying electron density can be significant, causing ‘nonequilibrium pulling’ and the failure of the conventional ‘one-dimensional’ DFT method. Similar nonequilibrium effects could also exist in other bias-driven processes under scanning tunnelling microscopes and electrochemical systems. Introduction Nanoscale electronic devices that often contain a molecular scale center have become one of the most promising candidates for the next generation of electronic devices. In the past two decades, numerous nanoscale devices with various functions, such as transistors, 1,2 switches, 3–5 diodes, 6–8 spintronic devices, 9–11 and many others 12–14 have been proposed theoretically and/or experimentally. 
The density functional theory (DFT) based first-principles approach that combines the DFT and nonequilibrium Green's function (NEGF) techniques 15–17 has been widely used in qualitatively understanding experiments by linking the measured transport properties of a device to the tunnelling of electrons through orbitals of the molecular scale device center. A recent experiment, 18 however, reported surprising transport phenomena through silane junctions that cannot be understood using the standard DFT-based method. Therein, the low-bias conductance of various silane molecules with different linker groups (amine or thiol) bridging different metal electrodes (Au or Ag) was measured. It was found that with the amine linker, the Au electrode generates a much higher conductance than that of Ag, while with the thiol linker, the trend reverses and the Ag electrode is significantly more conducting than the Au electrode. In contrast, DFT-based transport calculations predict that Au electrodes are always more conducting than Ag regardless of linkers. 18 This contradiction between theory and experiment presents the community of computational nanoscience with a severe challenge. To address this challenge, we theoretically study the transport properties of silane junctions using both the standard DFT-based approach and the steady-state DFT (SS-DFT) we recently proposed. 19 Several widely used implementations of the DFT-based approach, i.e. , TranSIESTA, 16 SMEAGOL, 17 and ATK, 20 which only differ from each other in the detail of the numerical procedures, were used in this study. Unlike the DFT-based method, SS-DFT considers nonequilibrium effects in full by employing Hershfield's nonequilibrium quantum statistics.
21 In SS-DFT, the nonequilibrium quantum device is mapped to an effective equilibrium system using Hershfield's statistics and then the desired nonequilibrium steady state of the device can be obtained by minimizing the effective energy of the mapped equilibrium, Ẽ [ ρ t , ρ n ], which is a functional of both the total electron density, ρ t , and the current-carrying electron density, ρ n . The effective energy is calculated by subtracting a bias-dependent term from the energy of the steady state, E SS [ ρ t , ρ n ], as shown in eqn (1) below: Ẽ [ ρ t , ρ n ] = E SS [ ρ t , ρ n ] − E n [ ρ n ] (1) Hereinafter the bias-dependent term in eqn (1) is referred to as nonequilibrium energy E n [ ρ n ] that measures how far the system is away from equilibrium. Note that E SS [ ρ t , ρ n ] is the same as the conventional DFT energy except for the two-density dependence (see the Methods section). The minus sign in eqn (1) is the reason why minimizing Ẽ drives the system out of equilibrium. It has been proven that as the bias voltage, V b , approaches zero, E SS [ ρ t , ρ n ] goes to the conventional DFT energy functional E DFT [ ρ t ], and SS-DFT reduces to DFT. 20 In the next section, we present our calculations from all these DFT and SS-DFT computational packages for the transport properties of silane junctions. We show that the competition between two energies, E SS and E n , yields a novel type of nonequilibrium effect in the thiol-terminated silanes that is beyond the framework of the standard DFT-based method, which results in the experimentally observed dramatic trend reversal of conductance. Results and discussions Herein, the transport properties of a series of methylamine- and methylthiol-terminated permethyloligosilanes (denoted Si n – NH 2 and Si n –SH, where n is the number of Si atoms in the silane chains) bridging Au and Ag electrodes were studied.
The junctions with different metal electrodes (M) and linker groups (L) are then denoted Si n –L–M (M = Au, Ag; L = NH 2 , S), in which dative interactions were formed for L = NH 2 while covalent bonds were formed for L = S. Specifically, Si 4 –NH 2 –M and Si n – S–M ( n = 2–4, 6–9) have been measured in experiments. 19 We assume that the electrodes are semi-infinite, and without losing generality, the source (drain) is on the left (right) of the molecule. The amine-terminated silane junctions Si 4 –NH 2 –M The corresponding optimised atomic structures of NH 2 –Si 4 –M (M = Au, Ag) are shown in Fig. 1a . I – V curves from both the DFT-based method (with different packages) and the SS-DFT method are calculated and presented in Fig. 1b . In the figure, DFT(T) and DFT(S) refer to the TranSIESTA and SMEAGOL packages, respectively. As shown in the figure, two DFT packages produced essentially the same I – V curves as those of SS-DFT, all of which predict that the Au contact is significantly more conducting than the Ag contact, consistent with experiments and previous DFT calculations. 18 Note that since ATK uses its own built-in basis set, pseudopotentials and convergence scheme that dramatically speed up the calculations, the results from ATK cannot be quantitatively compared with others. Nevertheless, ATK also agrees that for NH 2 –Si 4 , the Au contact provides a much higher conductivity at low biases than that of Ag (see Fig. S1a, ESI † ). To further compare the DFT and SS-DFT methods, we plotted the transmission spectra from TranSIESTA and SS-DFT at a bias voltage of 0 V in Fig. 1c and 0.4 V in Fig. 1d . At zero bias, the two transmission spectra are essentially the same, which is the consequence of SS-DFT reducing to DFT at zero bias. When V b = 0.4 V, SS-DFT still agrees remarkably well with DFT, indicating that in Si 4 –NH 2 –M, the nonequilibrium effects induced by E n [ ρ n ] at low biases are trivial. 
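The I–V curves and transmission spectra compared above are linked by the Landauer picture: at low temperature the current is essentially the area of the transmission function T(E) inside the bias window. A toy zero-temperature sketch, using a single Lorentzian channel with invented peak parameters (not fitted to the silane junctions), makes the connection concrete:

```python
# Toy Landauer picture: at zero temperature the current is proportional to
# the integral of the transmission T(E) over the bias window [-Vb/2, +Vb/2].
# The Lorentzian peak parameters below are illustrative only.

def transmission(e, e0, gamma=0.05, height=0.6):
    """Single Lorentzian channel centred at e0 (eV) with broadening gamma."""
    return height * gamma**2 / ((e - e0)**2 + gamma**2)

def current(vb, e0, n=2000):
    """Midpoint-rule integral of T(E) over the bias window (units of 2e/h * eV)."""
    lo, hi = -vb / 2, vb / 2
    de = (hi - lo) / n
    return sum(transmission(lo + (i + 0.5) * de, e0) * de for i in range(n))

# A peak sitting well below the window contributes only through its tail...
i_static = current(0.4, e0=-0.45)
# ...while a peak shifted closer to the window contributes far more.
i_pulled = current(0.4, e0=-0.25)
print(i_pulled > i_static)
```

This is why the position of a transmission peak relative to the bias window, and any bias-induced shift of that peak, controls the computed current so strongly.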
We will analyse the nonequilibrium effects in more detail later in this paper. Fig. 1 Transport properties of the amine-terminated silane junctions Si 4 –NH 2 –M. (a) The optimised atomic structures of Si 4 –NH 2 –M (M = Au or Ag). (b) The I – V curves of Si 4 –NH 2 –M calculated from SS-DFT and two DFT-based packages, TranSIESTA (DFT(T)) and SMEAGOL (DFT(S)). The three packages agree with each other remarkably well. (c) and (d) Transmission spectra obtained from SS-DFT and TranSIESTA are shown in (c) for V b = 0 V and in (d) for V b = 0.4 V. E 0 F in (c) and (d) denotes the Fermi energy of the junction at zero bias. It is not surprising that the Au contact provides better conductivity than the Ag contact, which is a natural consequence of the ground-state properties of Au and Ag elements. Although Au and Ag are in the same group of the periodic table and have similar broad s band characteristics, it is known that the density of states (DOS) of Au is pronouncedly higher than that of Ag at E F due to the relativistic effect induced upshift of the d bands. 22,23 In Fig. S2 (ESI † ), we plotted the ground-state local DOS of the Au and Ag atoms that bind with the silane molecules. The higher DOS of Au at E F (than that of Ag) due to the up-shifted d bands can be clearly seen, which results in the higher conductance of the Au contact when the system is not far from equilibrium. The thiol-terminated silane junctions Si n –S–M We consider the Si n –S–M (M = Au, Ag) junctions for n = 3, 6 and 7. Their optimised atomic structures are shown in Fig. 2a–c . The corresponding I – V curves for both the Au and Ag contacts obtained from the DFT-based method, via the TranSIESTA and SMEAGOL packages, were plotted in Fig. 2d–f , where we see that both computational packages still predict that the Au contact is more conducting than the Ag contact for all cases, with SMEAGOL generating a more significant difference between Au and Ag than TranSIESTA.
ATK gives the same prediction as shown in the I – V curves for n = 7 in Fig. S1b (ESI † ). The calculations based on the standard DFT method contradict the experiment, which shows that the Ag contacts provide a much higher conductivity than those of the Au contacts. Another puzzling feature of the I – V curves in Fig. 2 is that, unlike the case of the NH 2 linker for which the TranSIESTA and SMEAGOL packages agree with each other amazingly well ( Fig. 1b ), the two packages yield quantitatively quite different I – V curves despite being built on the same DFT-based transport method and employing the same set of parameters in the calculations (bases, pseudopotentials, etc.; see the Methods section). The difference between the electric currents from the two packages can be as much as 30% at 0.4 V. On the other hand, the zero-bias conductance calculated from both packages for different cases (Table S1, ESI † ) is quite similar, with at most around a 2% difference. Fig. 2 I – V curves of the thiol-terminated silane junctions Si n –S–M calculated from the DFT-based method. (a–c) The optimised atomic structures of Si n –S–M (M = Au or Ag) are shown in (a) for n = 3 (Si 3 ), (b) for n = 6 (Si 6 ) and (c) for n = 7 (Si 7 ). (d–f), I – V curves calculated from two DFT-based packages, TranSIESTA (DFT(T)) and SMEAGOL (DFT(S)), are shown in (d) for n = 3, (e) for n = 6 and (f) for n = 7. DFT(T or S)-Au/Ag denotes TranSIESTA or SMEAGOL results for Si n –S–Au/Ag. For all cases, the Au contact produces a higher conductivity than that of the Ag contact. The significant quantitative difference in the I – V curves (together with essentially the same equilibrium conductance) produced by the two DFT-based packages is an indication of significant bias-induced nonequilibrium effects in the system. At equilibrium ( V b = 0 V), conventional DFT is applicable, and the properties of the system can be uniquely determined by the total electron density ρ t .
TranSIESTA and SMEAGOL optimise ρ t by the same DFT-based self-consistent process, and therefore shall obtain similar ρ t , which in turn guarantees similar equilibrium properties. Under finite V b , it has been proven that the properties of the system depend on two densities, ρ t and ρ n , and ρ t alone cannot uniquely determine the system. 19 Since the DFT method only optimises ρ t , the un-optimised ρ n from different packages may differ substantially depending on their detailed numerical procedures upon convergence of ρ t , resulting in different I – V curves. To capture the nonequilibrium effects in full, we performed SS-DFT calculations. The I – V curves for Si n –S–M (M = Au, Ag) junctions are shown in Fig. 3a–c for n = 3, 6 and 7, respectively. As a reference, the results from TranSIESTA are also included. For all cases, when the bias is low enough (≤0.1 V), there is essentially no difference between the SS-DFT and DFT-based methods and SS-DFT also predicts that Au provides slightly better conductivity than that of Ag. A significant deviation occurs after 0.1 V , where the electric currents from SS-DFT for the Ag contact start to increase more rapidly than those for Au. For all cases of n , the current of the Ag contact surpasses that of the Au contact at 0.2 V, and when the bias further increases, the difference between the currents of Ag and Au also widens. To better compare with the experiment 18 that reported the trend reversal of low-bias conductance (measured at 0.2 V), we plotted the differential conductance at 0.2 V calculated from SS-DFT in Fig. 3d . For comparison, the differential conductance of the junctions Si 4 –NH 2 –M (M = Au, Ag) is shown in the inset. We see that according to SS-DFT, for the NH 2 linker, the conductance of the Au contact is approximately 2.3 times higher than that of the Ag contact, while for the S linker, the conductance of Ag is about 1.2, 1.4 and 1.5 times higher than that of Au for n = 3, 6 and 7, respectively. 
Thus, the experimentally observed trend reversal of low-bias conductance is captured in SS-DFT. The differential conductance at 0.2 V from TranSIESTA and SMEAGOL is shown in Fig. S3 (ESI † ), where both packages predict that the Au contact provides a higher conductance than that of the Ag contact regardless of the linker. Fig. 3 I – V curves of the thiol-terminated silane junctions Si n –S–M calculated from the SS-DFT method. The I – V curves of Si n –S–M calculated from the SS-DFT method are shown in (a) for n = 3 (Si 3 ), (b) for n = 6 (Si 6 ) and (c) for n = 7 (Si 7 ). SSDFT-Au/Ag denotes SS-DFT results for Si n –S–Au/Ag. As a reference, I – V curves from TranSIESTA (DFT(T)) are also shown (dotted lines in the figures). Note that at small biases (≤0.1 V), there are no essential differences between the two methods. Significant differences occur at biases >0.1 V. (d) Differential conductance of Si n –S–M (M = Au or Ag) at 0.2 V calculated from SS-DFT for n = 3, 6 and 7. The two solid lines in the figure are obtained by linear fitting. Inset: differential conductance of Si n –NH 2 –M (M = Au or Ag) at 0.2 V from SS-DFT. For the NH 2 linker, the Au contact is more conducting than the Ag contact. The trend reverses for the S linker. Prominent effects of the ‘nonequilibrium pulling’ We use the Si 7 – S–M junction to elucidate the origin of the bias-induced nonequilibrium effects, as similar junctions were theoretically studied before. 18 In Fig. 4 , we plot the zero-bias transmission spectra of Si 7 –S–M calculated from both SS-DFT and TranSIESTA. As expected, at zero bias, the two packages yield almost exactly the same transmission function. At E F , the Au contact exhibits a higher conductance than that of the Ag contact, unsurprisingly, which agrees with the previous DFT study. 18 Shown in Fig. 4 are two major peaks near the E F of transmission for both the Ag (A and B) and Au (C and D) cases. Comparing the tunnelling eigenstates at those peaks ( Fig. 
4 ) with frontier orbitals of SH–Si 7 (Fig. S4, ESI † ), we see that peaks A and C originated from tunnelling through the HOMO−1 orbitals of SH–Si 7 that are mainly localised at the source contact, while peaks B and D come from the HOMO that resides in the centre of the molecule. Peak A is much (nearly 2 times) higher than C, which is caused by the different bonding nature between Ag–S and Au–S. Ag has a lower electronegativity than that of Au, resulting in a greater charge transfer to the S atom from Ag (0.25| e |) than that from Au (0.16| e |) according to Voronoi population analysis. 24 The nearly 0.1| e | difference in the two cases originates from the electron transfer from the Ag 4d to the S 3p that is located about 1 eV below E F (see the shaded area in the projected DOS of S in Fig. S5, ESI † ), which enhances the coupling between the Ag contact and the S linker around the same energy and in turn leads to the high transmission peak A (see Fig. S5, ESI † ). In Fig. 4 , the tunnelling eigenstates at E F are also shown, which clearly suggest that the transmission at E F is determined by the tail of the broader HOMO−1 peak rather than the HOMO peak for both the Au and Ag cases. Fig. 4 Zero-bias transmission spectra of Si 7 –S–M. (a) Transmission spectra of Si 7 –S–M (M = Au, Ag) from SS-DFT and TranSIESTA (DFT(T)). For both the Au and Ag cases, the results from the two packages are essentially the same. Inset: enlarged view of the transmission spectra near the Fermi energy. Note that the Au contact is more conducting than the Ag contact. (b), Tunneling eigenchannels of major peaks in the transmission spectra, A and B for the Ag contact, and C and D for the Au contact. Tunneling eigenchannels at the Fermi energy were also plotted. The response of the main peaks in the transmission spectra to the bias voltages determines the I – V curves. In Fig. 5 , we plotted the transmission spectra of the M–S–Si 7 junction at different biases from both SS-DFT and TranSIESTA. 
The bias window where current-carrying electrons are located is shown in the figure. When a bias is applied, an additional electrostatic potential is established in the device region that leads to a potential drop from the source to the drain. Note that E 0 F is the Fermi level of the system at zero bias. This additional electrostatic potential, however, has little effect on the energy of the HOMO of the molecule since the orbital resides in the middle and is symmetrically distributed across the molecule. Consequently, for both SS-DFT and DFT, the HOMO peaks (for both the Au and Ag contacts) remain essentially stationary when the bias increases (see Fig. 5a and b ). In contrast, the potential of the source heavily influences the energy of the HOMO−1 orbital, which is localised at the source contact, leading to the upshift of the HOMO−1 peaks when the bias increases as shown in Fig. 5 . The HOMO−1 peak of Si 7 –S–Au in the transmission spectra is much lower than that of Si 7 –S–Ag and submerges into the much higher HOMO peak as shown in the figure. We therefore focus our discussion on the HOMO−1 peak of Si 7 –S–Ag. At zero bias, the separation between the HOMO−1 and HOMO peak of Ag–S–Si 7 is 0.45 eV (the same for both SS-DFT and TranSIESTA). When the bias increases to 0.2 V, the separation becomes 0.38 eV for TranSIESTA and 0.26 eV for SS-DFT. At 0.4 V, the separation is 0.33 eV and 0.15 eV for TranSIESTA and SS-DFT, respectively. The HOMO−1 peak from SS-DFT moves significantly faster towards the right than that from TranSIESTA, suggesting that besides the bias-induced electrostatic potential, there should exist an additional ‘force’ pulling the peak towards the bias window. Fig. 5 Transmission spectra of Si 7 –S–M under different biases. (a and b) Transmission spectra of Si 7 –S–M (M = Au, Ag) at different bias voltages calculated from TranSIESTA (DFT(T)) in (a) and SS-DFT in (b), with the separation between the HOMO−1 and HOMO peaks for the Ag contact shown.
Two dotted lines for the non-zero bias cases denote the bias window. SS-DFT predicts a bias-induced ‘nonequilibrium pulling’ that pulls the HOMO−1 peak localised at the source contact towards the bias window. Insets: enlarged view of transmission in the bias window for finite biases and from −0.1 to 0.1 eV for zero bias. (c) Iso-surface of the current-carrying electron density at different biases for the Ag contact calculated from SS-DFT. Note that for zero bias, the current-carrying electron density is zero. At finite biases, the current-carrying density mainly accumulates at the source contact. The additional ‘pulling force’ originates from the nonequilibrium effects in the process of minimizing Ẽ in eqn (1) . The bias-dependent term E n [ ρ n ] in eqn (1) can be calculated from the current-carrying electron density (see the Methods section) and increases with N n , the number of current-carrying electrons. Since the term is always positive, the process of minimizing Ẽ tends to pull the orbital outside the bias window towards the window to increase N n and E n , which ultimately decreases Ẽ . On the other hand, this nonequilibrium effect induced ‘pulling’ (which we name ‘nonequilibrium pulling’ in this paper) also tends to increase the energy of the steady state, E SS , by causing a further upshift of the HOMO and HOMO−1 peaks in the energy axis. The minimizing process of Ẽ in SS-DFT is therefore a competition between these two energies, E SS and E n . The current-carrying electron density, ρ n , mainly accumulates around the source contact ( Fig. 5c ) where the reflection of current-carrying electrons takes place. The nonequilibrium effects therefore are most significant for the states localised at the source contact, which is the reason why we see clear ‘pulling’ of the HOMO−1 peaks towards the bias window from SS-DFT in Fig. 5b .
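The competition between E SS and E n described above can be caricatured with a two-variable toy model: take a steady-state energy that is minimal at zero current-carrying density, subtract a bias-dependent term that grows with that density, and minimize the result. The functional forms and coefficients below are invented purely to illustrate the mechanism; they are not derived from SS-DFT:

```python
# Toy model of 'nonequilibrium pulling'. x stands in for the total-density
# degree of freedom, y for the current-carrying density (y >= 0).
# E_ss is minimal at y = 0 (equilibrium); the effective energy subtracts
# a bias-dependent term proportional to y, mimicking eqn (1).

def e_ss(x, y):
    return (x - 1.0)**2 + 2.0 * y**2   # invented quadratic landscape

def e_eff(x, y, bias=0.0):
    return e_ss(x, y) - bias * y       # Etilde = E_ss minus a term that grows with y

def minimize(bias, steps=5000, lr=0.01):
    """Plain gradient descent on the effective energy."""
    x, y = 0.0, 0.0
    h = 1e-6
    for _ in range(steps):
        gx = (e_eff(x + h, y, bias) - e_eff(x - h, y, bias)) / (2 * h)
        gy = (e_eff(x, y + h, bias) - e_eff(x, y - h, bias)) / (2 * h)
        x, y = x - lr * gx, max(0.0, y - lr * gy)
    return x, y

x0, y0 = minimize(bias=0.0)   # zero bias: the minimum stays on the y = 0 axis
x1, y1 = minimize(bias=0.4)   # finite bias: pulled to y = bias/4 > 0 here
print(round(y0, 3), round(y1, 3))
```

At zero bias the search never leaves the equilibrium axis (mirroring SS-DFT reducing to DFT), while at finite bias the subtracted term drags the minimum to a non-zero current-carrying density, even though that raises E SS slightly, which is the essence of the pulling effect sketched in Fig. 6a.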
For HOMO peaks residing in the molecular centre, however, the nonequilibrium effects are much weaker, and the ‘pulling’ induced increase of E n cannot compensate for the increase of E SS , thus the pulling is trivial. It is the nonequilibrium pulling of the HOMO−1 peak of Si 7 –S–M that significantly enhances the transmission in the bias window at 0.2 and 0.4 V (as can be seen in the insets of Fig. 5a and b ), leading to the trend reversal of differential conductance discussed earlier. To see the difference between the SS-DFT and the DFT-based methods more clearly, we show schematically the searching paths of the two methods for the transport state in Fig. 6a . The steady-state energy, E SS [ ρ t , ρ n ], is represented by a two-dimensional (2d) colour contour map. SS-DFT searches for the minimum of Ẽ [ ρ t , ρ n ] (instead of E SS ) in the 2d plane and shall end up with a state near the minimum of E SS , while the DFT-based method searches for the stable transport state along the ρ t axis. The extent of the 2d SS-DFT searching along the ρ n axis, δ ρ n , can be measured by the number of current-carrying electrons in the molecular centre, N n , which can be calculated by the integral of ρ n in the centre. When N n is small (not far from equilibrium), the SS-DFT searching trajectory is close to the ρ t axis as shown in Fig. 6a , therefore yielding similar results to that of DFT. The Si 4 –NH 2 –M and the Si 7 –S–M junctions at small biases (≤0.1 V) belong to this category (see Fig. 6b ). When N n is significant, SS-DFT finds the stable steady state far away from equilibrium in the 2d plane, which is the case for Si 7 –S–M at biases ≥ 0.2 V ( Fig. 6b ). Fig. 6 Conceptual differences between the SS-DFT and DFT-based methods. (a) A schematic plot of the different searching paths of the SS-DFT and the DFT-based methods. The 2d color contour represents the steady-state energy in SS-DFT, E ss , which is a functional of two densities, ρ n and ρ t . 
The SS-DFT searches for the transport state with minimum effective energy Ẽ in the 2d plane that shall be located around the minimum E ss . The DFT method searches the most stable transport state along the axis of total electron density ρ t . When δ ρ n is small, the searching path of SS-DFT is close to the ρ t axis, which generates similar results to the DFT method. (b) The number of current-carrying electrons in the molecular centre, N n , for Si 7 –S–M and Si 4 –NH 2 –M (M = Au or Ag) at different bias voltages. Conclusions In summary, with both the DFT-based approach and the SS-DFT method, we studied the transport properties of the amine- and thiol-terminated silanes with Au and Ag contacts. We found that all three prevalent implementations of the DFT-based approach qualitatively agree with each other and predict that the Au contact is always more conducting than the Ag contact regardless of the linker. In contrast, while the SS-DFT agrees with the standard DFT method for the amine-terminated silane junctions, it predicts a striking trend reversal of low-bias conductance in thiol-terminated silane junctions that the Ag contact yields a significantly higher conductance than that of the Au contact, which is consistent with experimental observations. Detailed analysis suggests that a novel type of nonequilibrium effect, referred to as “nonequilibrium pulling” in this work, in which the conducting channels mainly reside on the source contact and are pulled towards the bias window, plays an essential role in the observed conductance trend reversal. Further analysis indicates that when the device is near equilibrium, the DFT-based approach is an excellent approximation of SS-DFT, but when there are conducting channels at the source contact that are close to the bias window, ‘nonequilibrium pulling’ could generally exist, causing the failure of the standard DFT approach, and then SS-DFT becomes necessary in modelling the device. 
These findings significantly broaden our understanding of electron transport at the nanoscale and provide guidelines for future computational studies of nanoscale devices. Methods Computational details Structure optimizations of all junctions were done using SIESTA 25 and transport calculations were performed with the SS-DFT, 19 TranSIESTA, 16 SMEAGOL, 17 and ATK-2013 20 packages. In all calculations, the generalised gradient approximation (GGA) in PBE format, 26 double- ζ polarised basis and 4 × 4 × 4 k -point sampling in the Brillouin zone for bulk materials were employed. A 4 × 4 × 1 k -point sampling in the device region was adopted in the transport calculations. While ATK uses its own built-in pseudopotentials, all other computational packages use the norm-conserving pseudopotentials in the Troullier–Martins scheme 27 with the same set of parameters. 28 Scalar-relativistic effects were considered for Au and Ag. Nonequilibrium corrections to the exchange functional in an analytic form 29,30 were included in the SS-DFT calculations. Energy and force convergence criteria were set to 10 −4 eV and 0.03 eV Å −1 , respectively. To test the validity of the parameters used, we compare the band structures of the Au/Ag bulk and the HOMO–LUMO gaps of silanes calculated from SIESTA with those from VASP. 31 VASP calculations employed a plane-wave basis set with a 450 eV energy cut-off, the PAW pseudopotentials 32 and GGA in PBE format. The results are shown in Fig. S8 and Table S2 in the ESI, † where we can see that SIESTA and VASP gave almost exactly the same band structures for the Au and Ag bulk and quite similar HOMO–LUMO gaps for the silanes. It is worth mentioning here that the accuracy of the DFT-calculated energy level alignment of molecular states with the Fermi level of a metal substrate can be significantly improved using the DFT + ∑ approach.
33 Energies in SS-DFT The central physical quantity in SS-DFT is the effective energy, Ẽ [ ρ t , ρ n ], that is a functional of ρ t and ρ n , and can be calculated by subtracting the bias-dependent nonequilibrium energy, E n [ ρ n ] , from the energy of the steady state, E SS [ ρ t , ρ n ], as shown in eqn (1) . The integral N n = ∫ ρ n ( r )d r can be interpreted as the number of current-carrying electrons. The steady-state energy functional E SS [ ρ t , ρ n ] is defined in the following equation, E SS [ ρ t , ρ n ] = T + ∫ V ext ( r ) ρ t ( r )d r + (1/2)∫ V H ( r ) ρ t ( r )d r + ∫ e xc [ ρ t , ρ n ]( r )d r (2) In eqn (2) , T is kinetic energy, V ext , V H , and e xc are the external and Hartree potentials and the exchange–correlation (XC) energy density, respectively. The functional E SS [ ρ t , ρ n ] is the same as the conventional DFT energy functional, E DFT [ ρ t ], except that the steady-state XC energy is a functional of two densities, ρ t and ρ n . Minimizing Ẽ [ ρ t , ρ n ] leads to two Kohn–Sham like mean-field equations, one for current-carrying electrons and another for ‘equilibrium’ electrons, which can be solved in a self-consistent way. More details can be found in ref. 20 . Data availability All data supporting the findings of this study are available within the article and its ESI, † or from the corresponding author upon reasonable request. Author contributions CZ conceived the project. ZJ and KY did most calculations. KY and CZ wrote the paper. All authors contributed to data analysis and finalization of the paper. Conflicts of interest The authors declare no competing interests. Acknowledgements We acknowledge the support from the Ministry of Education of Singapore (R-723-000-029-112), the NUS academic research fund (R-144-000-410-114; R-265-000-691-114) and the NUS green energy program (R-143-000-A63-114). Computational works were performed at the NUS Graphene Research Centre computing cluster facilities.
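The differential conductance values discussed in the paper (for example at 0.2 V in Fig. 3d) are derivatives of the computed I–V curves. A minimal sketch of how dI/dV can be estimated by central differences on discrete I–V points; the voltages, currents and coefficients below are invented for illustration, not the published data:

```python
# Differential conductance dI/dV by central differences on discrete I-V data.
# Synthetic example: I = G0*V + c*V**3 with invented coefficients.
G0, c = 2.0, 5.0
voltages = [0.0, 0.1, 0.2, 0.3, 0.4]
currents = [G0 * v + c * v**3 for v in voltages]

def didv(vs, is_, k):
    """Central-difference estimate of dI/dV at interior grid point k."""
    return (is_[k + 1] - is_[k - 1]) / (vs[k + 1] - vs[k - 1])

g_02 = didv(voltages, currents, 2)   # numerical estimate at V = 0.2
exact = G0 + 3 * c * 0.2**2          # analytic dI/dV of the synthetic curve
print(g_02, exact)
```

On a coarse voltage grid the central difference carries a discretization error (here 2.65 versus the analytic 2.6), which is one reason linear fitting over several bias points, as done for Fig. 3d, is a robust way to report low-bias conductance.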
| NUS scientists have predicted a new type of nonequilibrium effect that could generally exist in nanoscale electronic devices, and successfully explained a recent puzzling experiment using it. Understanding bias-induced nonequilibrium effects on the electron transport properties of nanoscale junctions is the central issue in computational nanoscience. The standard density functional theory (DFT)-based first-principles method that combines DFT and nonequilibrium Green's function techniques has been widely used in modeling nonequilibrium nanoscale devices. It provides qualitative understanding of experiments by relating the measured conductance to tunneling of electrons through "molecular" orbitals of the devices. A recent experiment, however, reported surprising transport phenomena through silane junctions that cannot be understood by the standard DFT method. The conductance of various silane molecules connected with two different linker groups (amine or thiol) to either gold (Au) or silver (Ag) metal electrodes was measured. It was found that, when using the amine linker, the Au electrode generates a much higher conductance than an Ag electrode. With the thiol linker, this trend reverses and the Ag electrode is significantly more conducting than the Au electrode. In contrast, DFT-based calculations predict that the Au electrode is always more conducting than the Ag electrode, regardless of the type of linker. This contradiction between theoretical and experimental results presents the community of computational nanoscience with an exciting challenge. To address this challenge, the research group led by Prof Zhang Chun from the Department of Physics and the Department of Chemistry, National University of Singapore, studied the theoretical transport properties of silane junctions building on the steady-state DFT technique that was proposed by Prof Zhang himself back in 2015.
The steady-state DFT considers nonequilibrium effects in full by employing nonequilibrium quantum statistics. They found that underlying the puzzling experimental observations is a novel type of nonequilibrium effect (named "nonequilibrium pulling" in their work) that exists in silane junctions with thiol linkers. Their theoretical calculations show that, when the junction is near equilibrium, the standard DFT method is an excellent approximation of steady-state conditions. However, at low biases around 0.2 volts, the "nonequilibrium pulling" effect drives the thiol-terminated silanes far away from equilibrium, thus resulting in the reversal of conductance values observed in experiments. Prof Zhang says that "further analysis suggests that these nonequilibrium effects could generally exist in nanoscale devices in which there are conducting channels mainly residing at the source contact and located close to the bias window. These findings significantly broaden our fundamental understanding of electron transport at the nanoscale." | 10.1039/d1nh00293g |
Biology | It takes patience to restore watercourses | Christer Nilsson et al. How Do Biota Respond to Additional Physical Restoration of Restored Streams?, Ecosystems (2016). DOI: 10.1007/s10021-016-0020-0 Journal information: Ecosystems | http://dx.doi.org/10.1007/s10021-016-0020-0 | https://phys.org/news/2016-10-patience-watercourses.html | Abstract Restoration of channelized streams by returning coarse sediment from stream edges to the wetted channel has become a common practice in Sweden. Yet, restoration activities do not always result in the return of desired biota. This study evaluated a restoration project in the Vindel River in northern Sweden in which practitioners further increased channel complexity of previously restored stream reaches by placing very large boulders (>1 m), trees (>8 m), and salmonid spawning gravel from adjacent upland areas into the channels. One reach restored with basic methods and another with enhanced methods were selected in each of ten different tributaries to the main channel. Geomorphic and hydraulic complexity was enhanced but the chemical composition of riparian soils and the communities of riparian plants and fish did not exhibit any clear responses to the enhanced restoration measures during the first 5 years compared to reaches restored with basic restoration methods. The variation in the collected data was among streams instead of between types of restored reaches. We conclude that restoration is a disturbance in itself, that immigration potential varies across landscapes, and that biotic recovery processes in boreal river systems are slow. We suggest that enhanced restoration has to apply a catchment-scale approach accounting for connectivity and availability of source populations, and that low-intensity monitoring has to be performed over several decades to evaluate restoration outcomes.
Introduction Restoration of deteriorated streams and rivers aims to improve biodiversity, recreation, and mitigation of impacts from direct anthropogenic alterations and climate change. The development of restoration methods is currently getting a boost, as it is supported by national and international directives (Bullock and others 2011 ; Aronson and Alexander 2013 ; Pander and Geist 2013 ). Recent findings, that stream restoration is economically profitable (Acuna and others 2013 ), may further contribute to its development. The increase in different ways to restore streams and rivers is also clearly reflected in a steadily increasing number of scientific papers reporting on their results. For example, a recent (27 April 2016) search for ‘(stream* OR river*) AND restoration’ in the Core Collection of Web of Science generated 9134 hits and the number of papers quadrupled between years 2000 and 2014. Although most stream restoration projects are completed without or with very little evaluation of the outcome (Kondolf and Micheli 1995 ; Bernhardt and others 2005 ; Suding 2011 ), the growing body of literature and initiatives such as RiverWiki ( 2014 ) have expanded the knowledge about how restoration measures should be designed to be effective (for example, Alexander and Allan 2007 ; Kail and others 2007 ; Jähnig and others 2010 ; Palmer and others 2010 ; Fryirs and others 2013 ; Nilsson and others 2015 ; Wohl and others 2015 ). Streams can now be restored more effectively than just a few decades ago, but there is certainly room for further improvements. Hitherto, most restoration projects have been one-time events, irrespective of the nature of results achieved in any follow-up studies. At best, new restoration projects have applied knowledge gained from previous studies and applied a modified design.
For that to happen, however, close engagement with managers is required rather than publication of scientific papers (Bernhardt and others 2007 ; Wohl and others 2015 ). There are also examples of when restoration practices have been redesigned because of changing climatic conditions, that is, “adaptive restoration” in response to a moving target (Zedler 2010 ; Nungesser and others 2015 ). Another strategy would be “additional restoration,” which can mean: (1) increasing the area of restored habitat to improve the responses of the originally restored area (for example, Krause and Culmsee 2013 ; Conlisk and others 2014 ), (2) connecting the restored site to other restored sites (Aviron and others 2011 ; Crouzeilles and others 2014 ), or (3) returning to the restored site before it has fully recovered to make further adjustments based on new knowledge of the design and outcomes of monitoring (for example, Harms and Hiebert 2006 ; van Dijk and others 2007 ; Jiménez and others 2015 ). This paper will deal with the third of these strategies. Over more than a century, streams and rivers in many boreal regions have been successively straightened and simplified and even dammed to facilitate the floating of logs—a procedure summarized as channelization (Törnlund and Östlund 2002 ). In a stream restoration project that started in northern Sweden in 2002, practitioners restored streams that were previously channelized for timber-floating, while scientists described the previously used floatway structures and their geomorphic impacts, predicted environmental outcomes of restoration, and studied the biotic effects of this and even earlier restoration projects (Nilsson and others 2005a ). The return of coarse sediment formerly extracted from the channel, reopening of closed side channels and removal of splash dams made the restored reaches more complex with wider channels and increased floodplain connectivity (Polvi and others 2014 ). 
Follow-up studies of in-stream organisms 3–8 years after restoration, however, showed only very modest recovery of macroinvertebrates and fish (Lepori and others 2005 , 2006 ). Recent studies have shown that for riparian vegetation it can take at least 25 years until its status even resembles that of channelized reaches, where restoration is a disturbance that vegetation has to recover from (Hasselquist and others 2015 ), although there are local exceptions where riparian plants respond faster (Helfield and others 2007 ). The slow or absent biotic response to restoration measures fostered the idea that the physical modifications were not strong enough and that further addition of spawning gravel, big boulders, and large wood would improve the biotic recovery (Palm and others 2007 ; Rosenfeld and others 2011 ). This idea was based on the hypothesis that habitat heterogeneity favors biodiversity (Ward and Tockner 2001 ; Tews and others 2004 ; Elosegi and others 2010 ). A more recent stream restoration project in northern Sweden which began in 2010 ( ) gave practitioners an opportunity to apply such enhanced restoration techniques. They returned to some of the previously restored sites and carried out the suggested additional measures aimed at increasing geomorphological and hydraulic complexity even more, thus paving the way for an enhancement of retention capacity, habitat heterogeneity, and potentially biodiversity (see further Gardeström and others 2013 ). We formed a team of natural scientists, with expertise in hydraulics, geomorphology, riparian soil chemistry, plant ecology and fish ecology and studied the environmental outcomes of these enhanced restoration techniques during 5 years. The major scientific objective was to test whether the biotic response to enhanced restoration methods was different than the response to the original, more basic methods. 
We hypothesized that enhanced restoration of basic-restored stream reaches would increase complexity and lead to increased biotic diversity for different species groups (fish and plants). Study Sites The Vindel River is a free-flowing river system (Dynesius and Nilsson 1994 ; Nilsson and others 2005b ), which flows southeast from the border between Norway and Sweden for 450 km and joins the heavily regulated Ume River 30 km upstream of the Gulf of Bothnia. The Vindel River catchment comprises 12,654 km 2 , and 5% of this area is lakes that commonly link tributary stream segments. The region receives 500–850 mm of precipitation per year, 40% of which is snow (SMHI 2013 ). The area experiences a mean annual air temperature between 2 and 4°C with the highest mean monthly temperature in July (10–15°C) and the lowest in January (−9 to −15°C) (SMHI 2010 ), leading to accumulation of snow and ice during winter and large seasonal variations in discharge. The Vindel River and its tributaries cross the highest postglacial coastline at 240 m above the present sea level along the lower middle parts of the catchment. This former coastline separates glaciated areas from land that has been under sea level and rose because of postglacial isostatic rebound (Lambeck and others 1998 ). The channel bed sediment above the former highest coastline consists of undisturbed glacial legacy sediment, whereas below there is finer sediment, containing silt and sand, in addition to gravel, cobbles, and boulders. The tributary streams mainly have snowmelt-dominated floods and lack the ability to transport coarse glacial legacy sediment. This makes these streams quite unique in their structural complexity of somewhat randomly sorted large boulders. The tranquil reaches and lakes, which alternate with the rapids, can have a variety of substrates but in most cases have a large proportion of finer sediments such as sand and silt. 
The riparian vegetation surrounding the tributary streams is distinctly vertically zoned from the hillslope to the channel, ranging from conifer forest at the highest elevations, followed by shrubs, graminoids and amphibious plant communities closest to the water edge (Nilsson and others 1994 ). The widths of the vegetation zones are determined by hydrological conditions, with the upper end of riparian vegetation reflecting the spring-flood peak level, whereas the lower end of the vegetation zone is determined by average summer low water levels (Nilsson and others 1994 ). Understory vegetation of adjacent uplands is dominated by species-poor communities of dwarf shrubs. The average length of the annual growing season (days when the average temperature exceeds +5°C) ranges between 105 and 190 days, depending on geographic position. The fish fauna in the Vindel River is characterized by anadromous Salmo salar (Atlantic salmon) and both anadromous and resident S. trutta (brown trout). Spawning of S. salar only occurs in the main channel of the Vindel River, whereas S. trutta spawn throughout the system. Juvenile S. salar , hatched in the main channel, ascend tributaries to use them as nursery habitat. The tributary streams also contain populations of Thymallus thymallus (European grayling), Esox lucius (northern pike), Perca fluviatilis (European perch), Phoxinus phoxinus (common minnow), Cottus gobio (European bullhead), and Lota lota (burbot). From the mid-1800s to 1976, the Vindel River waterways were used for timber floating. During this period, nearly all turbulent reaches below timberline were channelized to facilitate the transportation of logs (Törnlund and Östlund 2002 ; Nilsson and others 2005a ). Channelization disrupted the clear plant zonation and led to lower cover and species richness of plants (Helfield and others 2007 ). 
It also harmed many of the valued fish populations, and therefore, restoration was initiated after timber floating was replaced by road transport. Some restoration was done already in the 1980s and 1990s, but it was not until 2002 that a more ambitious restoration project started that aimed to bring the channels and their fish populations back to more pristine states (Nilsson and others 2005a ; Gardeström and others 2013 ). Although the prime interest was to restore fish populations, awareness grew that the riparian plant communities and soil processes within the riparian zone are also important to restore, as the riparian zone is a biodiversity hotspot in the landscape and plays an important role in modifying cycles and fluxes of sediment, nutrients and organisms (Naiman and Décamps 1997 ; Naiman and others 2005 ). Methods Study Sites The study was carried out in 10 tributaries of the Vindel River (Figure 1 ; Online Appendix 1), in which pairs of reaches restored in the early 2000s were selected. One reach in each pair was subjected to enhanced restoration in 2010 (Figure 2 , Online Appendix 1). This restoration entailed that, in addition to the original restoration in which coarse sediment was returned from the channel edges to the channel, very large boulders (>1 m) and trees (>8 m) from surrounding areas were placed in the streams as a replacement for the boulders that had been fragmented by explosives. Gravel from external sources was also added to create spawning habitat for S. trutta , because original spawning sediment had been removed due to higher velocities after channelization (Gardeström and others 2013 ). We consistently call these two types of restoration “basic restoration” and “enhanced restoration” (Figure 3 ). These terms refer to the same types of restoration that Gardeström and others ( 2013 ), using EU LIFE terminology ( ), called “best-practice restoration” and “demonstration restoration”. 
Figure 1 Map of the Vindel River catchment ( gray ), showing location of study reaches on tributaries to the Vindel River ( thick black line ); lakes are shown as darker gray polygons . The black line outside of the Vindel River catchment denotes the Ume River. Inset map shows the location of the Vindel River catchment and the Vindel and Ume Rivers within Sweden. Figure 2 Timeline showing the sequence of timber-floating and restoration periods and the period of follow-up work preceding and following the restoration. A pre-restoration monitoring event took place directly before the enhanced restoration, followed by post-restoration monitoring 1, 2, 3, 4, and 5 years after enhanced restoration. Note that not all variables were monitored every year. Figure 3 Pictures showing typical examples of the different types of reaches in the tributaries of the Vindel River. ( A ) A reach in Gargån channelized for timber-floating, ( B ) a reach in Mattjokkbäcken subjected to basic restoration in 2003 and no further restoration after that, ( C ) a reach in Bjurbäcken subjected to basic restoration in 2002 and ( D ) the same reach as in C after enhanced restoration in 2010. Criteria used for selection of reaches for enhanced restoration were previous basic restoration, the presence of large boulders in the adjacent uplands and reasonably easy access for heavy machinery (Gardeström and others 2013 ). The basic-restored reach was located upstream to avoid influence from the disturbance caused by the enhanced restoration activities. In seven of the streams, the reaches were located above the former highest coastline and in three streams they were below it. Within each chosen restored reach, a 100-m long reach was selected for further study.
Wetted width was measured at 10 cross sections and flow velocity and water depth were measured at five points across each of the 10 cross sections (50 measuring points) in each reach, following the methods described by Gardeström and others ( 2013 ). These measurements were made at low and medium flow conditions on three occasions: in 2010, before the enhanced restoration, and in 2011 and 2014, after the enhanced restoration (Figure 2 ). At each survey occasion, discharge was also measured in each reach by measuring the cross-sectional area in a uniform cross section and the velocity at 0.6 of the depth at 7–11 points, depending on the wetted width. Measurements in the channel during high flow conditions were avoided for safety reasons. Channel bed slope (S 0 ) was measured in seven paired reaches with basic and enhanced restoration (Beukabäcken, Bjurbäcken, Falåströmsbäcken, Hjuksån, Mattjokkbäcken, Mösupbäcken, and Rågobäcken) using a total station (a Trimble S3 in 2012 and a Trimble S8 in 2015, with a Trimble TSC3 datalogger). Channel Bed Sediment A detailed survey of the channel bed sediment distribution was carried out at five paired reaches with basic restoration and enhanced restoration (Beukabäcken, Falåströmsbäcken, Mattjokkbäcken, Mösupbäcken, and Rågobäcken) in 2012, that is, 2 years after additional restoration (Polvi and others 2014 ). The intermediate axes of 300 clasts were measured along a random walk in equally spaced transects throughout the reach; for details on the survey method, see Polvi and others ( 2014 ). Based on the cumulative distribution curves of sediment grain sizes, the following metrics were computed: D 10 , D 90 , coefficient of variation and kurtosis. D 10 and D 90 are the 10th and 90th percentiles of the cumulative grain size distribution and are descriptors of the fine and coarse fractions of the channel bed sediment.
We computed the coefficient of variation of sediment (CV), which is a measure of the heterogeneity of the distribution (equation 1 , Baker 2009 ). $$ CV = \frac{\sqrt{D_{84}/D_{16}}}{D_{50}} $$ (1) where D 16 , D 50 , and D 84 represent the bed particle sizes corresponding to the various percentiles (16%, 50%, and 84%) of the particle size distributions. They roughly correspond to the distribution of fines (D 16 ), median (D 50 ), and coarse (D 84 ) materials. Kurtosis ( Κ) indicates the peakedness of the distribution (Briggs 1977 ). High kurtosis values indicate a well-sorted sediment, and low kurtosis values indicate poorly sorted sediment (equation 2 ). $$ K = \frac{D_{90} - D_{10}}{1.9(D_{75} - D_{15})} $$ (2) Hydraulics and Channel Roughness The analysis of the hydraulic data was performed separately for low and medium flow conditions and was carried out only for streams with complete datasets for both reach types and the three survey occasions ( n = 7, as Abmobäcken, Gargån and Olsbäcken had incomplete datasets). For each reach, we computed Manning’s n , which is a descriptor of channel roughness (equation 3 ). $$ n = \frac{S^{1/2} R^{2/3} A}{Q} $$ (3) where n is Manning’s roughness coefficient, S is the channel bed slope (m m −1 ), R is the hydraulic radius, calculated as the channel cross-sectional area A (m 2 ) divided by the wetted perimeter (m), and Q is the discharge (m 3 s −1 ). Because the hydraulic measurements taken at respective low and medium flows were relative to that year’s hydrograph, the discharges were not the same at all low flows or all medium flows. Therefore, it was not possible to compare differences in hydraulic data (water depth and velocity) between years.
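The percentile-based metrics in equations (1) and (2) are straightforward to compute from the clast measurements. The sketch below uses a hypothetical helper name (`sediment_metrics`, not the authors' code) and illustrative log-normal sample data rather than field measurements:

```python
import numpy as np

def sediment_metrics(grain_sizes_mm):
    """Heterogeneity metrics from a set of clast intermediate-axis
    measurements (e.g. the 300-clast random-walk samples in the text)."""
    d = {p: np.percentile(grain_sizes_mm, p) for p in (10, 15, 16, 50, 75, 84, 90)}
    # Equation (1): coefficient of variation of the sediment distribution
    cv = np.sqrt(d[84] / d[16]) / d[50]
    # Equation (2): kurtosis (peakedness); high = well sorted, low = poorly sorted
    k = (d[90] - d[10]) / (1.9 * (d[75] - d[15]))
    return d[10], d[90], cv, k

# Illustrative (not field) data: 300 log-normally distributed clast sizes in mm
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=4.0, sigma=1.0, size=300)
d10, d90, cv, k = sediment_metrics(sample)
```

A larger CV and a lower K for the same reach would thus indicate a more heterogeneous, more poorly sorted bed, matching the interpretation used in the Results.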
To standardize the velocity measurements to make comparisons between years possible, we calculated a dimensionless velocity by standardizing the actual velocity by the shear velocity ( U *) (equation 4 ). $$ U^{*} = \sqrt {gRS} $$ (4) where U * is the shear velocity (m s −1 ), g is the acceleration due to gravity (9.81 m s −2 ), R is the hydraulic radius (m), and S is the channel bed slope (m m −1 ). The effect of enhanced restoration on Manning’s n and dimensionless flow velocity was tested by comparing values before enhanced restoration (year 2010) and those after enhanced restoration (years 2011, 2014, and combined 2011 and 2014) at each reach, with pairwise Student’s t -tests. Similarly, pairwise Student’s t tests were also run for basic-restored reaches, where we did not expect differences between years. Although basic-restored reaches were paired with enhanced restoration reaches, comparisons between reaches with the two different types of restoration may not be valid because channel bed slopes differ substantially in some instances, which has a profound impact on hydraulic parameters. Ice Studies Along 10 cross sections at five paired reaches with basic and enhanced restoration, respectively (Beukabäcken, Falåströmsbäcken, Mattjokkbäcken, Mösupbäcken, and Rågobäcken), the spatial distribution of anchor ice, surface ice and specific ice forms and ice-related events (that is, anchor ice dams, aufeis and ice-induced floods) was mapped in 2011 and 2012. Anchor ice is usually initiated by the accumulation of tiny ice particles that have adhesive features in supercooled water and therefore attach to in-stream vegetation, coarse material and large wood (Stickler and Alfredsen 2009 ; Lind and others 2014a ). Suspended ice is created when anchor ice dams collapse or when water recedes during winter, thereby leaving ice elevated above the water surface (Prowse 1995 ; Turcotte and Morse 2013 ). 
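The roughness back-calculation of equation (3) and the shear-velocity standardization of equation (4) above amount to a few arithmetic steps per reach. A minimal sketch, with hypothetical reach numbers (these function names and values are illustrative, not the authors' code):

```python
import math

def mannings_n(slope, area_m2, wetted_perimeter_m, discharge_m3s):
    """Equation (3): back-calculate Manning's roughness coefficient n."""
    r = area_m2 / wetted_perimeter_m                  # hydraulic radius R (m)
    return slope ** 0.5 * r ** (2.0 / 3.0) * area_m2 / discharge_m3s

def dimensionless_velocity(velocity_ms, slope, area_m2, wetted_perimeter_m):
    """Equation (4): standardize velocity by shear velocity U* = sqrt(gRS)."""
    r = area_m2 / wetted_perimeter_m
    u_star = math.sqrt(9.81 * r * slope)              # shear velocity (m/s)
    return velocity_ms / u_star

# Hypothetical reach: slope 2%, cross-section 2 m^2, wetted perimeter 8 m
n = mannings_n(slope=0.02, area_m2=2.0, wetted_perimeter_m=8.0, discharge_m3s=0.5)
u_rel = dimensionless_velocity(0.3, 0.02, 2.0, 8.0)
```

Because the dimensionless velocity divides out the slope- and depth-dependent shear velocity, it allows comparisons across survey years even though the discharges at "low" and "medium" flow differed between years.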
Ice formations were drawn on maps and photographed during field visits six to nine times between November and April during the winters 2011–2012 and 2012–2013. Automatic Time Lapse Plant Cameras (Model WSCA04) were also placed at each reach and set to take three photographs per day (October–April), but they were not reliable in temperatures below −15°C because of failing batteries. These photographs were used as complements to the visual mapping to follow ice dynamics between field visits. Ice data for the two winter seasons were quantified as mean proportions of suspended ice, surface ice and maximum anchor ice, respectively, in relation to the entire reach. Ice data and distance to upstream lakes were analyzed using ANOVA. Riparian Sampling For riparian abiotic and biotic variables, ten 0.5 × 1 m plots were sampled per 100-m long reach. At every 10 m going downstream, one plot was placed at a different distance from the channel based on a random distribution applied at all reaches. The distance between each particular plot and the channel edge was determined as a percentage of the total width of the riparian zone, that is, distances ranged between 0% and 100%. The lower border of the riparian zone was determined by the water level on the day the plots were installed (in summer 2013), and the spring high-water level was determined by eye as the transition from herb and grass vegetation to small shrubs and forest. For each (riparian) sample location, the height relative to the water level was measured using a laser pointer and two level staffs. In all streams, water-level fluctuations were measured between October 2011 and August 2014 using Rugged TROLL pressure loggers (Amtele, Kungens Kurva, Sweden). Water level on the day of measurement and the water level fluctuations over time were used to calculate the average flooding duration and flooding frequency for each plot.
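Given a plot elevation and a regularly sampled logger series on the same datum, flooding duration and frequency can be derived by counting exceedances and dry-to-flooded transitions. This is a minimal sketch of that idea (`flood_metrics` and the toy series are assumptions for illustration, not the authors' processing code):

```python
def flood_metrics(levels_m, plot_elevation_m, hours_per_sample=1.0):
    """Flooding duration (hours) and frequency (number of separate flood
    events) for one riparian plot, from a regularly sampled water-level
    series recorded against the same datum as the plot elevation."""
    flooded = [w >= plot_elevation_m for w in levels_m]
    duration_h = sum(flooded) * hours_per_sample
    # a new flood event starts at each dry-to-flooded transition
    events = sum(1 for prev, cur in zip([False] + flooded[:-1], flooded)
                 if cur and not prev)
    return duration_h, events

# Toy hourly series (m above an arbitrary datum) and a plot at 0.35 m
levels = [0.10, 0.30, 0.50, 0.20, 0.40, 0.60, 0.10]
duration, events = flood_metrics(levels, plot_elevation_m=0.35)
```

Averaging such per-plot values over the logging period yields the average flooding duration and frequency used in the plot-level analyses.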
For each reach, we also calculated the duration and depth of the spring flood as the number of days and the average depth when the water level was above the yearly average between April 1 and June 30 of that year. At each sampled plot, three to five soil cores were taken from the top 5 cm of the soil in August 2014. They were analyzed for plant available N following the Devarda’s method (protocol SS-EN 15476:2009, Swedish Standard Institute, Stockholm) and plant available P after acid digestion with nitric acid (SS 028150-2), and an inductively coupled plasma-atomic emission spectrometry (ICP-AES) instrument enabling identification of low concentrations. Soil samples were shaken in demineralized water, leading to a suspension of solid material and a liquid phase consisting of dissolved substances from the soil. The soil organic content in the solid phase was determined by loss on ignition and pH was measured in the liquid phase. The vegetation in the riparian zone was surveyed in August 2013 and 2014 in the ten 0.5 × 1.0 m plots, before the soil samples were taken. The abundance of all plant species within the plots was determined using a 5-class scale (1 = present with one individual and covering <5%, 2 = present with two or more individuals in <5%, 3 = 5–25%, 4 = 25–50%, 5 = covering >50%). We also counted the number of seedlings per plot in August 2013 and 2014. In August 2014, the whole reach (100 m) was carefully searched and all species that were present in the entire reach were noted, including instream macrophytes but excluding bryophytes. Next to each vegetation plot, a 10 × 10 cm plot was cut at soil level and dried to determine aboveground biomass of mosses and higher plants separately. 
Fish Sampling The number of fish species and salmonid densities were assessed in August 2010 and 2015 by electrofishing sites that had undergone basic and enhanced restoration in six streams (Olsbäcken, Abmobäcken, Beukabäcken, Mattjokkbäcken, Rågobäcken and Mösupbäcken) in three runs using a generator-powered electroshocker (Lugab, Luleå, Sweden) that produced a constant direct current of 800 V. All fish collected were identified to species, counted, and measured (total length in mm), before being returned to the water. Body mass of each individual of S. trutta and S. salar was obtained using a length–weight model developed for the study area (D. Palm, unpublished data). Fish density standardized for area was calculated using the methodology described by Zippin ( 1956 ). Fish biomass standardized for area was obtained by dividing the biomass caught during the first electrofishing run by the area of the sampled reach. Statistical Analyses for Biotic and Riparian Variables Data on flood dynamics, soil, vegetation, and fish metrics were compared among reach types and survey occasions using linear mixed models (LMM) (Nakagawa and Schielzeth 2010 ), performed with the R package “nlme” (Pinheiro and others 2015 ). LMMs included “reach type” (basic vs. enhanced restoration) and its interaction as fixed factors and “stream” as a random factor. By including “stream” as a random factor, we accounted for possible autocorrelations due to the spatial proximity of the reaches within the same stream and to the repeated-measure design of the study. As fish data were collected both before and after enhanced restoration, LMM on fish metrics included also “survey occasion” (before enhanced restoration vs. after enhanced restoration in 2011 and 2014) and its interaction with “reach type” as fixed factors. Finally, “year” and its interaction with “reach type” were included as fixed factors in the LMM on flood data. 
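Fish density was standardized with Zippin's (1956) removal method; the exact maximum-likelihood formulation is given in that reference. As a rough illustration only, the underlying depletion model (a constant capture probability p applied over k successive passes) can be fitted by a simple grid search; `depletion_estimate` is a hypothetical sketch under that assumption, not the authors' code:

```python
def depletion_estimate(catches, grid=2000):
    """Estimate population size N and capture probability p from k-pass
    removal catches, assuming a constant p per pass (Zippin-style model).

    The expected catch on pass i (0-based) is N * p * (1-p)**i; for a
    trial p the implied N is T / (1 - (1-p)**k), and we keep the p that
    minimizes the squared error between observed and expected catches."""
    T = sum(catches)          # total catch over all passes
    k = len(catches)
    best = None
    for j in range(1, grid):
        p = j / grid
        q = 1.0 - p
        N = T / (1.0 - q ** k)
        err = sum((c - N * p * q ** i) ** 2 for i, c in enumerate(catches))
        if best is None or err < best[0]:
            best = (err, N, p)
    return best[1], best[2]

# A perfectly geometric three-pass depletion: 48, 24, 12 fish caught
N_hat, p_hat = depletion_estimate([48, 24, 12])
```

For these idealized catches the fit recovers p = 0.5 and N = 96 exactly; real electrofishing data depart from the geometric pattern, which is why the maximum-likelihood treatment in the original reference is preferred.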
Correlations among soil variables and between soil variables and seedling numbers were tested with Pearson’s correlations. Results Abiotic Variables Channel Bed Morphology Geomorphic, hydraulic, and ice variables showed a clear response to the enhanced restoration measures. The finer sediment size fraction (that is, D 10 ) was smaller at the reaches with enhanced restoration in comparison with their respective basic-restoration reaches, with the exception of one stream (Mösupbäcken) where the same value was found at the two reach types (Figure 4 A). Reaches with enhanced restoration also had coarser coarse fractions of channel bed sediments, with the exception of one stream (Beukabäcken) where the opposite was recorded (Figure 4 B). This resulted in a larger coefficient of variation and lower kurtosis of sediment distributions in reaches with enhanced restoration (Figure 4 C, D), indicating higher heterogeneity of the sediment size distribution. The only exception to this general pattern was Mattjokkbäcken, where the coefficient of variation was the same at the two reach types. Figure 4 Channel bed sediment distribution. The location of each stream within the plot is based on the values recorded at reaches with basic restoration and enhanced restoration. Symbols on the bisector line indicate no differences between the two reach types, while symbols above the bisector line indicate higher values at the reach with enhanced restoration than at the reach with basic restoration, and those below the bisector line indicate lower values at the reach with enhanced restoration than at the reach with basic restoration. See text for metric descriptions. Enhanced restoration significantly increased channel roughness, as quantified by Manning’s n , and significantly decreased dimensionless flow velocities at both medium and low flow conditions (Table 1 ).
In contrast, at basic-restored sites, channel roughness and dimensionless flow velocity remained similar during the study period, with the exception of roughness, which was higher in 2014 than in 2010 at low flow conditions, likely due to the particularly low discharges recorded that year (Table 2 ). There was a large difference in flow dynamics between years. Year 2012 had a higher spring flood compared to the other years (LMM, F 1,46 = 19.77, P < 0.001). There was, however, no significant difference between restored reach types for the length or the stage of the spring flood, nor in the average amplitude of the water level fluctuations over the entire year. Because of their location further downstream, reaches with enhanced restoration had a wider wetted width and a higher discharge than basic-restored reaches at both flow conditions (Table 2 ). Table 1 Changes in the Values of Manning’s n Roughness Coefficient and Dimensionless Flow Velocity (mean ± SE) after Enhanced Restoration (Years 2011 and 2014) in Comparison to the Values Recorded Before Enhanced Restoration (2010), at Medium and Low Flow Conditions Table 2 Discharge and Wetted Channel Width (mean ± SE) in the Reaches with Basic or Enhanced Restoration, in the Three Survey Occasions Before and After Enhanced Restoration at Low and Medium Flow Ice There was no significant difference in the proportion of surface ice or suspended ice that was formed in the basic-restored and enhanced-restored reaches (Table 3 ). There was, however, a significant difference in the maximum per cent cover of anchor ice formed between basic-restored and enhanced-restored reaches, as well as in the interaction between the type of restoration and the distance to upstream lakes (Table 3 ). There was also a significant relationship between the proportions of all types of ice formation (%) and the distance to the upstream lake, with more ice forming further away from lake outlets (Table 3 ).
Hence, most anchor ice was formed in reaches with enhanced restoration far away from upstream lakes. Table 3 Formation of Surface Ice, Suspended Ice and Anchor Ice in Relation to Type of Restoration (Basic or Enhanced), Distance to Upstream Lakes and the Interaction Between Ice Type and Distance Riparian Soil Quality There was no significant difference in average pH, amount of N, P, and organic content in the riparian soil between basic-restored reaches and enhanced reaches (Table 4). Instead, the observed variation in riparian chemistry occurred between streams (Online Appendix 2). Within the riparian zones, pH and moisture increased, and the organic content decreased with decreasing elevation (Table 4), and organic content was strongly correlated to the amounts of nutrients (Pearson’s r = 0.677, P < 0.001 and r = 0.433, P < 0.001 for N and P, respectively). Table 4 Riparian Soil Quality (Nitrogen; N, Phosphorus; P, pH and Total Organic Content; TOC) and Flood Variables per Restoration Type Plants Plant species richness was lowest in plots close to the stream channel, then increased at elevations around 40 to 60 cm above the average water level, and decreased again at higher elevations. There was a slight trend towards higher plant species richness in reaches with enhanced restoration. That is, although the average number of plant species per plot did not differ between restoration types (Figure 5), the minimum plant species richness per plot was higher for reaches with enhanced restoration (LMM, F1,9 = 5.16, P = 0.049, data not shown). Further, reaches with enhanced restoration contained slightly more species when looking at larger spatial scales. Three years after restoration, the cumulative number of species found in the 10 plots was significantly higher in reaches with enhanced restoration (LMM, F1,9 = 11.51, P = 0.008).
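Pairwise associations like the organic-content–nutrient correlations reported for the riparian soils above can be tested with SciPy; a minimal sketch using synthetic, illustrative numbers (not the study's soil data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative values only: organic content (%) and nitrogen for 20 soil plots,
# with nitrogen loosely coupled to organic content plus noise.
organic = rng.uniform(2, 15, size=20)
nitrogen = 0.05 * organic + rng.normal(0, 0.1, size=20)

r, p = stats.pearsonr(organic, nitrogen)
# r near +1 indicates organic-rich plots also hold more nitrogen; p is the
# two-sided probability of |r| this large if there were no true correlation.
```

The same call covers all the Pearson tests in this study (soil variables, seedling numbers across years); only the variable pairs change.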
Four years after restoration, the cumulative number of species was still consistently higher in enhanced reaches, but the difference was not significant and neither was the total number of plant species in the reach (Figure 5). There was no significant difference in aboveground plant and moss biomass between the two types of restored reaches 3 years after restoration (Online Appendix 3). Figure 5 Mean number of species of riparian vegetation in individual plots (P), the summed amount in the 10 plots (S), and total amount in each reach (T) in 2013 (13) and 2014 (14). Error bars indicate standard errors. Asterisks indicate statistical significance (P < 0.01). All P values are given in Online Appendix 3. There was no significant difference in average plot cover of herb species, graminoid species, shrubs or trees between basic and enhanced reaches (Online Appendix 3). Four years after restoration, we found on average about five seedlings from natural origin per vegetation plot, which translates into a density of 10 seedlings m−2. There was a significant correlation between the numbers of seedlings in the third and fourth years after restoration in the reaches with enhanced restoration (Pearson’s r = 0.61, P < 0.001), but not in the basic reaches (Pearson’s r = 0.16, P = 0.111). Interestingly, in basic-restored reaches, seedling numbers decreased with elevation and thus increased with flooding frequency and duration (r = −0.265, P = 0.009), whereas these relationships were absent in reaches with enhanced restoration (Pearson’s r = −0.093, P = 0.36). Fish Electrofishing yielded 1–47 individuals, representing 2–3 species, per study site before restoration and 1–81 individuals, representing 1–4 species, per study site 5 years after. In total, eight species of fish (S. trutta, C. gobio, S. salar, P. phoxinus, T. thymallus, E. lucius, L. lota, P. fluviatilis) and one species of lamprey (Lampetra planeri) were found across all sites. S.
trutta occurred in all of the six streams studied and was the dominant species accounting for 76% and 77% of all individuals collected before restoration and 5 years after, respectively. The second most common species, S. salar, was only found in Beukabäcken, Mattjokkbäcken and Rågobäcken. The other species were only caught sporadically and in low numbers, so further statistical analyses were not conducted. None of the tested fish variables were significantly different between the year before restoration and 5 years after in either the basic or enhanced restoration reaches (Table 5). Table 5 Mean Values and Ranges of Fish Variables (Salmo trutta and S. salar) in Reaches Restored Using Basic and Enhanced Methods in 2010 (Before Enhanced Restoration) and 2015 (5 Years After Enhanced Restoration) and Results from Linear Mixed Models Discussion Several authors (Lake and others 2007; Palmer 2009; Palmer and others 2014b) have stressed the importance of basing stream restoration on ecological theory. So far, the majority of stream restoration projects have focused on channel morphology, adopting the theory that increasing heterogeneity favors species richness (Townsend and Hildrew 1994); fewer projects have tried to restore flow and sediment dynamics that are also major determinants of the biota and ecological processes (Palmer and others 2014b). Of course, channel modification also affects biota by changing hydraulics. For example, the flood pulse concept predicts that the simplification of the hydraulic regime that results from channelization will reduce species diversity (Junk and others 1989). In the Vindel River catchment, streams have been primarily impacted by channelization and there are no other impacts on the hydraulic regime except those caused by channel reconfiguration.
As opposed to streams in more densely populated areas of the world, streams in this catchment have little to no impact of common stressors such as eutrophication, increased fine sediment inputs and flashy flow regimes due to high impervious land cover (Bernhardt and Palmer 2011 ; Woodward and others 2012 ), and available information suggests that the species pools remains largely intact (Nilsson and others 1994 ; Persson and Jonsson 1997 ). The restoration of the Vindel River tributaries was therefore built on the assumption that reconstruction of the local, physical environment in the streams would be the most important measure for stimulating biotic recovery, following the Field of Dreams hypothesis (Palmer and others 1997 ). Had the river been more impacted and with less intact species pools, the system could instead have been manipulated to maximize specific ecosystem services (Bullock and others 2011 ; Palmer and others 2014a ), without any requirement to mimic more original conditions (Hobbs and others 2011 ). Another assumption made by the practitioners responsible for the Vindel River restoration was that a complex channel configuration similar—but not necessarily identical—to pre-industrial conditions would be the best option for promoting biotic recovery owing to the fact that the industry (timber floating) has been terminated (Gardeström and others 2013 ) and given that there was little additional anthropogenic disturbance such as pollution. In regions with glacial depositional landforms, such as moraines, eskers, and drumlins, that are naturally rich in big boulders, it was not possible to restore channels back to their original conditions since much of the original coarse sediment had been blasted. 
This led to the idea of introducing enhanced restoration, in line with the above-mentioned growing consensus that increasing habitat complexity is critical in restoration (Loke and others 2015), by transporting large boulders from the uplands into the channel. Recent findings that restoration type, such as channel widening, remeandering and recreating instream structures, matters more than spatial restoration extent or time since restoration (Göthe and others 2016) provide further support for using improved restoration methods. Therefore, we hypothesized that enhanced restoration, increasing physical and hydraulic heterogeneity, would lead to higher biodiversity than would basic restoration. Abiotic Differences After Enhanced Restoration Our results clearly show that there was an increase in physical and hydraulic heterogeneity associated with reaches that had undergone enhanced restoration (Gardeström and others 2013; Polvi and others 2014; Nilsson and others 2015). As for physical heterogeneity, we found an increase in bed sediment heterogeneity following enhanced restoration, in addition to the differences in the sediment distribution. Polvi and others (2014) also reported significant differences between the two types of restored reaches with respect to longitudinal and cross-sectional channel morphology. The large boulders that were placed into the channels as an enhanced restoration measure increased grain and form roughness, which reduced flow velocities, as measured by dimensionless flow velocity. The reduction in flow velocity was particularly evident at medium flow conditions. Roughness increased significantly in enhanced restoration reaches after restoration. However, we also observed an increase in roughness in 2014 in the reaches with basic restoration at low flow conditions.
Because the basic-restored reaches also have relatively coarse sediment compared to many gravel bed rivers, at low flow conditions, the cobbles and smaller boulders will contribute an equal amount of grain roughness as the larger boulders in the enhanced-restored reaches. At medium and higher flows, the large boulders in the enhanced reaches will occupy a larger percentage of the water column than the smaller boulders and cobbles in the basic-restored reaches. Therefore, at medium flows, we expected to see a larger effect of the enhanced restoration than at low flows on hydraulic heterogeneity measures. In these stream channels with coarse glacial legacy sediment, we do not expect a large amount of adjustment of boulders and other coarse sediment during the years following the enhanced restoration. This is in contrast to channels with finer sediment that are more dynamic during annual high flows, where geomorphic recovery may also require extra time after the actual restoration measures (Fryirs and Brierley 2000 ). Geomorphic adjustments in our semi-alluvial system will be caused by organization of the medium and fine sediment fractions (fine gravel to cobbles) around the coarse boulders, but this should not alter the overall sediment heterogeneity. In addition, many ice processes, particularly ice build-up and break-up, which are common in these streams (Lind and Nilsson 2015 ), can cause sediment transport (Turcotte and others 2011 ), with transport of boulders up to 2 m in diameter recorded in the Tana River in northern Finland (Lotsari and others 2015 ). The main objective of the restoration of tributaries to the Vindel River was to enhance fish production (Gardeström and others 2013 ). Therefore, the focus was on redesigning the channel, like in most other stream restoration projects (Palmer and others 2014b ), to favor spawning and feeding of fish (Gardeström and others 2013 ). 
However, it should not be forgotten that the increased retention capacity resulting from decreased flow velocities and wider channels with increased lateral connections also serves other purposes, not least to reduce the intensity of floods caused by the extreme rain events that are expected to follow as a result of climate change (Kundzewicz and others 2014 ). This is an important side effect that speaks in favor of applying enhanced restoration at more sites. However, basic-restored sites also slow down flows, although to a smaller extent. Given that previous efforts using basic restoration covered many parts of the river system (Gardeström and others 2013 ), the increase in retention capacity following restoration should be substantial. Because the enhanced restoration resulted in decreased velocities, it was expected that the production of anchor ice would also decrease. On the contrary, there was more anchor-ice production in enhanced reaches, but only if they were located far away from an upstream lake. Although the current velocity decreased, the turbulence might have increased since large trees were placed in the enhanced-restored reaches (Timalsina 2014 ). Increased turbulence around in-stream objects is important for transporting frazil ice from the surface to the bottom, thereby creating hotspots for production of anchor ice (Stickler and Alfredsen 2005 ). It could also be that the increased retention capacity of enhanced-restored reaches leads to more drifting frazil ice being captured, thus favoring anchor-ice production. In reaches close to lake outlets there was no anchor-ice production regardless of the type of restoration, whereas anchor-ice production increased in reaches that were further downstream from a lake and more geomorphically complex, which should translate into increased turbulence. 
The absence of anchor-ice formation close to lake outlets is explained by the temperature buffering capacity of the lakes that keeps the water from freezing. There are many other factors contributing to the presence of ice such as discharge, groundwater supply, temperature, and bottom substrate, which were not included in this study. Anchor ice has been shown to have impacts on the biota such as increasing riparian plant species richness while decreasing the potential for fish survival; this implies that the potential for ice formation should be included when planning restoration (Power 1993 ; Lind and others 2014a , b ). The Lack of Biotic Response and Its Potential Causes For obvious reasons, the physical alterations were visible more or less instantly after the enhanced restoration. The nature and rate of subsequent chemical and biotic changes, however, were more difficult to predict. We had expected that, eventually, the enhanced restoration measures would foster more biotically diverse reaches, but the speed and trajectory of these anticipated changes were impossible to predict. Given that restoration is a disturbance in itself, either a decrease or an increase in biotic variables could have been a possible outcome during the first few years following restoration (Zedler and Callaway 1999 ). Contrary to the physical differences between reaches subjected to basic and enhanced restoration, there were almost no biotic differences between reaches subjected to basic or enhanced restoration; species richness and abundance remained the same. Nor were there any chemical differences in the riparian soils between the two types of restored sites. Dietrich and others ( 2014 ) observed an increase in riparian soil fertility following basic restoration of channelized reaches suggesting that this initial restoration is more important than additional enhanced restoration of previously restored sites. 
Although chemical differences related to type of restoration may develop with time, at this stage, differences among streams were still larger than those between types of restored reaches. Therefore, we can assume that any general, biotic response to the enhanced restoration should be caused primarily by the observed increase in the heterogeneity of channel morphology and flow and ice patterns and not by chemical influence. Despite the physical and hydraulic changes, the biotic responses to enhanced restoration were practically nonexistent during the course of this study. Does this mean that our paper will add to the quite abundant literature that reports null results following restoration, without being able to explain the outcome? Before responding to this question, we point out that attempts to predict longer-term outcomes can be centered on either time- or dispersal-related factors. In reality, these factors are intertwined, but for clarity’s sake we here discuss them separately. As regards time, there are reasons to believe that, eventually, the biotic response of the enhanced reaches will be stronger than in the basic reaches. For example, although our vegetation survey indicated that reaches with basic and enhanced restoration were practically the same with no differences in vegetation cover or composition, there was a slight tendency for a higher species richness in enhanced reaches. This latter observation was made although the basic restoration took place about 5–8 years before the enhanced restoration and may indicate a rise in species immigration following the increase in morphological heterogeneity in enhanced reaches. If so, it would be in line with the augmented propagule retention observed by Engström and others (2009) following (basic) restoration of channelized stream reaches.
Provided that the regional species pool is reasonably diverse and not dispersal-limited (Brederveld and others 2011; Sundermann and others 2011; Tonkin and others 2014), due to increased retention capacity, enhanced reaches may become more species rich than basic reaches given more time. The higher seedling numbers found in enhanced reaches support this reasoning. However, most riparian plants need a high spring flood to be caught by the flowing water and be transported, and spring flood intensity may vary considerably among years (Balke and others 2014). The fact that different immigration times may result in different floras (Sarneel and others 2016) makes predictions difficult. Another complicating factor is that many plant species in northern regions have a poor or infrequent seed production and may not be able to disperse any seeds when an opportunity arises (Molau and Larsson 2000). Therefore, even if our observations may suggest an ongoing differentiation between the restored reach types, it may take a long time for plants to establish in newly restored sites, even though these sites may be perfectly suitable for them (Hanski 2000). Predictions of dispersal success in heterogeneous landscapes are challenging because many factors need to work together for an effective immigration of plant propagules to occur (Gustafson and Gardner 1996; Mouquet and Loreau 2003). First, turbulent reaches have a flora that is largely different from that of tranquil reaches as well as uplands (Nilsson and others 1994, 2010). Turbulent reaches can be situated quite far apart, and the chances for a floating plant propagule to be dispersed from an upstream turbulent reach to a restored turbulent reach further downstream—and be deposited there—are rather small simply due to distance. For example, lakes are common in these systems and lakes—as well as other tranquil, wide water bodies—are efficient seed traps (Brown and Chenoweth 2008).
When a plant propagule is finally stranded at a restored site, factors such as water levels, soil moisture and temperature need to be favorable to promote establishment (Merritt and others 2010 ), and such conditions are not likely to occur every year (Balke and others 2014 ). For example, Riis ( 2008 ) studied dispersal and colonization of aquatic plants in a stream reach in Denmark and concluded that primary colonization was the major factor constraining vegetation development in restored (vegetation-free) sites. Given these circumstances, the floristic development of enhanced-restored reaches may not be predictable simply based on reach-scale heterogeneity but requires an analysis of the landscape context for its understanding. For example, an enhanced-restored reach close to an upstream rapid section may encounter a much more favorable recovery than a similar reach downstream of a lake. To compensate for such obstacles, many stream restoration projects therefore introduce propagules to enhance vegetation recovery (Kiehl and others 2010 ). The data on salmonid fish numbers and biomass were variable but—as for plants—did not show any general differences between the start and end of the project or with respect to whether sites were restored by basic or enhanced methods. There are several possible reasons for this lack of response. First, the scale of restoration could have been too small to lead to a general response of the fish population. Salmo trutta , which was the dominant salmonid species, typically has large home ranges and extended migration within and between tributary streams is common (Carlsson and others 2004 ; Palm and others 2009 ). Therefore, the reaches subjected to enhanced restoration were most likely too short to have a direct effect on the fish (but see Gowan and Fausch 1996 ; Palm and others 2009 ). Had entire subcatchments been restored using enhanced methods, it is more likely that the fish community would have shown a stronger response. 
For example, Roni and others ( 2010 ) found that all the available habitat would need to be restored to reach 95% certainty of achieving 25% more smolt production (that is, downstream migrating juveniles) for two species of Pacific salmon ( Oncorhynchus kisutch and O. mykiss ). A second possible reason is that enhanced restoration could have made electrofishing more difficult by the increased wetted area of the channels (Kolz 2006 ). In other words, it cannot be excluded that there were fish responses but that they were missed because of methodological shortcomings (compare Nilsson and others 2015 ). Third, the main food resource for S. trutta —benthic invertebrates—may not yet have recovered well enough to feed a larger S. trutta population (Muotka and others 2002 ; Louhi and others 2011 ). Fourth, a higher amount of anchor-ice accumulation in enhanced reaches may cause freezing of fish eggs and displacement, habitat exclusion and increased movement of fish, which can cause direct mortality (Power 1993 ; Weber and others 2013 ). A fifth reason could be added, that if fish populations, that is, the actual parental stocks, are small after restoration, they may require many years to recover to their potential sizes (Albanese and others 2009 ). Conclusions To conclude, our follow-up study showed that, although enhanced restoration changed the physical and hydraulic conditions of previously restored stream reaches, their biota had not responded 5 years after this restoration. This finding is consistent with other studies that have failed to find biotic responses of increasing physical heterogeneity in restored stream reaches (Jähnig and others 2010 ; Palmer and others 2010 ; Nilsson and others 2015 ). 
However, when evaluating this study it is important to keep in mind that we compared two types of restored reaches, restored during different time periods, and both of which are improved compared to channelized reaches (Gardeström and others 2013 ; Hasselquist and others 2015 ; Nilsson and others 2015 ). Had we compared still channelized reaches with enhanced reaches a different biotic outcome might have been found (for example, Helfield and others 2007 ). Hasselquist and others ( 2015 ) observed that riparian vegetation needed 25 years or more to recover solely from restoration disturbance, implying that any further recovery would take even longer. This suggests that many years may remain until the biotic effects of enhanced restoration actions can be accurately evaluated, so the size of a potential “species credit” (that is, species to come, Hanski 2000 ) is yet unknown. Do our biotic results mean that this is yet another attempt at stream restoration with null results? We propose that our results in this unique environment, without catchment-scale disturbances and with naturally geomorphic complex channel structures, lend support to three important take-home messages for restoration. First, restoration is a disturbance in itself and the disturbance by restoring basic-restored reaches once more is negligible after 5 years, second, immigration potential varies across catchments and may form an important bottleneck, and third, recovery rate is slow in boreal streams. A landscape-scale approach, taking into consideration other impacts in the catchment, such as migration barriers and lack of source populations, is necessary in planning restoration and for judging success (for example, Simenstad and others 2006 ; Lake and others 2007 ; Nilsson and others 2015 ). 
Seed dispersal is complicated by a number of factors: source populations are at other similar types of reaches that can be separated by lakes, a high spring flood is necessary for efficient seed dispersal and appropriate conditions for seed production are required to make dispersal possible. Similarly, connectivity of sites and spatial extent of restoration need to be sufficient to enable migratory fish populations to recover. In some ways, a landscape-scale approach has been taken in the case of basic restoration, for which most turbulent reaches in the tributaries have been restored, many side channels have been opened up and many migration obstacles, such as splash dams, have been removed. Any attempts to strengthen source populations by actively introducing organisms have however not been made. Although studies on the effects of restoration may seem only worth sharing if they provide positive results, we encourage critical examination of restoration results regardless of the outcome in order to further explore how to plan future restoration projects. Nilsson and others ( 2016 ) demonstrate that there is evaluation in all phases of ecological restoration and call for thorough documentation of evaluation steps to make such planning possible. Thus, for now we can conclude that the enhanced restoration did not exert negative impacts on biota, and therefore future restoration managers may choose to adopt the enhanced restoration methods directly, rather than using a stepwise approach, thereby limiting the effect of disturbance by the restoration activities themselves (such as by machines). A one-time (enhanced) restoration effort will also be cheaper than two (stepwise) restoration events. The climatic constraints of the study area (short growing seasons) mean that recovery is likely to be slow. 
However, the very low percentage of alien, invasive species in the area (Dynesius and others 2004 ), and the limited pressures of other anthropogenic activities, mean that it should be possible to await recolonization without further measures. This provides an ideal opportunity for monitoring and studying natural recovery and colonization processes. If monitoring and evaluations of restoration results are required by funding agencies, there are usually set budgets. The most effective way to administer such a resource for follow-up work would be low-intensity monitoring over several decades instead of high-intensity monitoring during only the first few years following restoration. Such a long-term strategy, located to encompass the varying conditions within catchments, would reduce the risk for inaccurate conclusions about the success of a restoration and strengthen the influence of evaluations when future restoration actions are planned (Nilsson and others 2016 ). Finally, we would like to recall that the slow payoff of restoration is a very important reason to protect rivers from future damage. | A common way to restore Swedish streams previously used for timber-floating has been to return rocks. A group of researchers at Umeå University has studied the effects of improved methods that also add large boulders and trees. Creating complex channels and watercourses is easy, but reintroducing plants and aquatic animals is a challenge, according to Umeå researchers. The results have been published in Ecosystems. "Restoration is in itself a disturbance that watercourses need to recover from. We've seen that the ability for various species to re-enter varies depending on landscape and happens at a much slower pace in northern streams," says Christer Nilsson, professor in landscape ecology at Umeå University. 
Beginning in the 19th century and for more than a hundred years, humans cleared, straightened, channelized and dammed Swedish watercourses to facilitate timber-floating. When timber-floating was discontinued in the 1970s, work began on restoring the watercourses. So far, most restoration projects have been one-off activities without follow-up evaluation. Conservationists have only in a few cases been able to improve their own methods based on lessons from earlier studies. The novel method in this project, named Vindel River LIFE and funded by the EU Life Foundation, was to collect very large boulders and trees from the adjacent upland areas to place in the water, as well as adding gravel to spawning beds in reaches of tributaries that had previously been restored using more basic methods. In that way, the restored reaches became more complex, which ought to improve biodiversity. Researchers later compared the more thoroughly restored river reaches with reaches where only basic restoration had been done. "It was obvious that the shape of the reach and its hydraulics showed an increased variation. Riparian soil chemistry, riparian vegetation and fish fauna, however, showed no evident reaction to the restoration in the first five years when the two types of river reaches were compared. The variation was greater between the various tributaries than between the different types of restoration," says Christer Nilsson. However, vegetation along river reaches where only basic restoration had been done showed improvements in comparison to river reaches that had never been restored. The expectation was therefore for the new restoration to yield an even better result. "The fact that it takes a long time for plants to establish can depend on the lack of seeds in the vicinity," says Christer Nilsson. When it comes to fish, it is possible that restoring only parts of reaches is not sufficient to yield clear results. As we know, many fish move across large areas.
It can also be due to prey not having recovered, or to anchor ice, i.e. ice anchored to the bottom, killing spawn and small fish. "To increase the chances of successful restorations, we recommend restoring entire river basins. Furthermore, follow-up studies are required for decades to be able to follow the result. The slow recovery of ecosystems after restorations in northern streams is also an important reason to protect them from future harm," concludes Christer Nilsson. | 10.1007/s10021-016-0020-0
Chemistry | Combining pneumatics with a hydrogel to create a baromorph—for soft robotics | Emmanuel Siéfert et al. Bio-inspired pneumatic shape-morphing elastomers, Nature Materials (2018). DOI: 10.1038/s41563-018-0219-x Journal information: Nature Materials | http://dx.doi.org/10.1038/s41563-018-0219-x | https://phys.org/news/2018-11-combining-pneumatics-hydrogel-baromorphfor-soft.html | Abstract Shape-morphing structures are at the core of future applications in aeronautics 1 , minimally invasive surgery 2 , tissue engineering 3 and smart materials 4 . However, current engineering technologies, based on inhomogeneous actuation across the thickness of slender structures, are intrinsically limited to one-directional bending 5 . Here, we describe a strategy where mesostructured elastomer plates undergo fast, controllable and complex shape transformations under applied pressure. Similar to pioneering techniques based on soft hydrogel swelling 6 , 7 , 8 , 9 , 10 , these pneumatic shape-morphing elastomers, termed here as ‘baromorphs’, are inspired by the morphogenesis of biological structures 11 , 12 , 13 , 14 , 15 . Geometric restrictions are overcome by controlling precisely the local growth rate and direction through a specific network of airways embedded inside the rubber plate. We show how arbitrary three-dimensional shapes can be programmed using an analytic theoretical model, propose a direct geometric solution to the inverse problem, and illustrate the versatility of the technique with a collection of configurations. Main Morphing a thin plate into a programmed shape is a challenging problem, as highlighted by Gauss: if the distances along the surface are not modified, the Gaussian curvature cannot be changed, and only a limited family of three-dimensional (3D) surfaces is achievable, as commonly observed with bilayer sheets 16 , 17 . However, nature overflows with examples of geometrically complex thin objects, such as leaves or organ epithelia 11 . 
For instance, differential growth induces the elegant shape of flower petals 12 , or may crinkle an initially flat leaf when the growth rate is deregulated 13 . While the growth process may be spatially homogeneous, the orientations of cellulose fibres may also induce anisotropic growth, which leads to the hygroscopic actuation of wheat awns 14 or to the chiral shape of some seed pods 15 . Inspired by biological morphogenesis, pioneering experiments have been carried out with hydrogel plates where inhomogeneous isotropic 6 , 7 , 18 or anisotropic 9 , 10 swelling properties are spatially distributed to obtain various 3D shapes after being immersed in a bath of hot water. Nevertheless, the experimental realizations developed so far involve slow diffusive swelling processes and very soft objects that generally cannot sustain their own weight. In contrast, pneumatic actuation presents strong advantages, such as large work load, reversibility, controllability and fast actuation, which have led to the recent development of multiple soft robotics actuators for twisting, contracting, expanding or bending motions 19 . However, such actuators generally rely on bilayers 5 or are limited to surface texturing effects 20 ; this imposes strong constraints on the achievable states. Our approach bridges these two emerging fields—bio-inspired shape-morphing and pneumatic soft robotics—with a new easy-to-build, easy-to-control object, referred to as a baromorph. Baromorphs consist of elastomer plates embedding a network of airways (see Methods for fabrication details), which give rise to a programmed family of shapes under air inflation or suction. Such structures can be viewed as pneumatic metamaterials 21 . When the inner pressure is increased (or decreased), the elongated channels tend to inflate (or deflate) anisotropically 22 . The length of an inflated channel remains almost unchanged, while its width increases (Fig. 1a ).
This anisotropy results in a controllable modification of the effective rest lengths of the baromorph, mimicking the anisotropic growth of a biological tissue, with a magnitude that depends on the geometry of the inner channels and on the applied pressure. The plate deforms according to the new target metric imposed by the network of airways, and may buckle out of plane to reach an equilibrium 3D shape that minimizes the total elastic energy (the sum of stretching and bending energies). Fig. 1: Principle of pressure-actuated baromorph plate. a , Schematic of actuation: the pressure inside the airways induces anisotropic inflation of the plate (higher strain normal to the airways than along the channels). b , 3D printed mould used to cast the baromorph illustrated in c . c , Actuation of the plate: suction (left) tends to contract the plate in the azimuthal direction, leading to a bowl (positive Gaussian curvature), while inflation (right) leads to an excess angle and a transformation into a saddle shape (negative Gaussian curvature). Scale bars, 1 cm. d , Evolution of the cap of an Acetabularia alga from a bowl to a saddle shape due to preferential growth in the azimuthal direction. Adapted from ref. 23 ,Springer ( d ). Full size image Figure 1c presents the deformation of a baromorph plate with radial channels obtained by casting the 3D printed template shown in Fig. 1b : the target expansion is mainly circumferential. Following suction and consequent azimuthal contraction, the plate adopts a bowl shape (with positive Gaussian curvature). Conversely, inflation induces an azimuthal expansion and leads to an excess angle in the plate, which destabilizes into a surface of negative Gaussian curvature. These transformations are reminiscent of the morphing evolution of Acetabularia (Fig. 1d ) 11 , 23 . 
After initiation, the cap of this unicellular alga evolves from a bowl to a flat configuration, and eventually to a saddle shape, essentially for the same reason as our baromorph: biological growth in the cap is stronger in the circumferential direction than along the radius. A first step in understanding and programming a baromorph is to predict the local deformation of airways in the absence of external geometrical constraints, that is, the target in-plane strains orthogonal (ε_⊥^t) and parallel (ε_∥^t) to the airway direction. The channel geometry (Fig. 2a ) can be reduced to two relevant parameters in our minimal model: the relative channel height with respect to the total thickness of the sheet Ψ = h /( h + 2 e ) and the in-plane channel density Φ = d /( d + d w ), where h , e , d and d w are geometrical parameters of the structure (Fig. 2b ). Balancing stresses and making simplifying assumptions (for details see Supplementary Text and Supplementary Fig. 2 ), the target strains are then deduced following Hooke’s law: $$\epsilon_\parallel^{\rm t} = \frac{p}{E}\,\frac{\Psi\Phi}{1-\Psi\Phi}\left(1-2\nu\right) = 0$$ $$\epsilon_\perp^{\rm t} = \frac{p}{E}\left(2-\Phi\right)\Phi\left(\frac{\Psi}{1-\Psi} - \frac{\nu\Psi\Phi}{1-\Psi\Phi}\left[1 + \nu\left(\frac{1-\Phi}{\Phi\left(1-\Psi\right)}\right)\right]\right)$$ (1) where E and ν are Young’s modulus and the Poisson ratio of the elastomer, respectively. We measured the target strains (Fig. 2b ) by inflating a ring composed of only a few channels, therefore free of radial constraint. Within our crude hypotheses, the longitudinal strain is zero for ν = 1/2, as expected for an incompressible elastomer, and we do observe that the longitudinal strain is much smaller than the transverse strain.
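As a sanity check, equation (1) can be evaluated directly. The sketch below is a minimal implementation for general ν, with illustrative parameter values (a p/E of 0.1 and the Ψ, Φ quoted for Fig. 2b; the function name is ours, not the authors'):

```python
def target_strains(p_over_E, psi, phi, nu=0.5):
    """Target strains of a pressurized channel layer, following equation (1).

    p_over_E -- applied pressure normalized by Young's modulus, p/E
    psi      -- relative channel height, h/(h + 2e)
    phi      -- in-plane channel density, d/(d + d_w)
    nu       -- Poisson ratio (1/2 for an incompressible elastomer)
    """
    eps_par = p_over_E * psi * phi / (1.0 - psi * phi) * (1.0 - 2.0 * nu)
    eps_perp = p_over_E * (2.0 - phi) * phi * (
        psi / (1.0 - psi)
        - nu * psi * phi / (1.0 - psi * phi)
        * (1.0 + nu * (1.0 - phi) / (phi * (1.0 - psi)))
    )
    return eps_par, eps_perp

# Illustrative values close to those quoted for Fig. 2b: psi = 0.69, phi = 0.5
eps_par, eps_perp = target_strains(p_over_E=0.1, psi=0.69, phi=0.5, nu=0.5)
```

For ν = 1/2 the parallel strain vanishes identically, and suction (negative p/E) flips the sign of ε_⊥^t, which is the contraction regime that produces bowl shapes.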
We can account for the evolution of the parameters ( Ψ , Φ ) due to the deformation of the channels under pressure, as described in the Supplementary Information , and input the actual values into equation (1) . The resulting nonlinear prediction for the target strain is in very good agreement with experimental data, without any fitting parameter, as illustrated in Fig. 2b (note that the calculations remain within the framework of Hookean linear elasticity: material stiffening at large strain is not considered in this simplified model). Fig. 2: Characterization of baromorph expansion and deformation. a , Schematic vertical cut of the baromorph structure. The geometry of the channels can be reduced to two non-dimensional parameters: the relative height Ψ = h /( h + 2 e ) and the channel density Φ = d /( d + d w ), where d is the width of the channels, d w is the width of the walls, h is the height of the channels and e is the thickness of the covering membrane. b , Dependence of the targeted perpendicular and longitudinal strains on pressure for different values of Φ with Ψ = 0.69 ± 0.05 and for different values of Ψ with Φ = 0.5 ± 0.02. Solid lines correspond to the model without any fitting parameter (in our simplified model ε ∥ vanishes). c , Baromorph programmed to be a cone when pressurized. A radial segment of length dl in the rest state will elongate to (1 + ϵ r ) dl upon inflation, leading to a slope angle α . Scale bar, 1 cm. d , Experimental (symbols) and theoretical (solid lines, no fitting parameter) evolution of α as a function of applied pressure for baromorphs of different parameters: red diamonds ( Ψ = 0.78 ± 0.05, Φ = 0.5, R = 50 mm, H = 3.8 ± 0.2 mm); blue triangles ( Ψ = 0.74, Φ = 0.5, R = 40 mm, H = 5.4 mm); purple flags ( Ψ = 0.68, Φ = 0.2, R = 50 mm, H = 6 mm); green squares ( Ψ = 0.6, Φ = 0.5, R = 40 mm, H = 6.7 mm). Full size image We now employ the concept of the anisotropic target metric to program 3D shapes.
As a first application, we targeted an axisymmetric shape, a cone. A configuration made of concentric and regularly spaced circular air channels is expected to induce a uniform radial target strain ε t r , while the azimuthal target strain remains null. Following elementary geometry (Fig. 2c ), both target strains are satisfied if the airways keep their initial radii and the plate adopts a conical shape of slope α , with $${\mathrm{cos\alpha }} = \frac{1}{{1 + {{\epsilon }}_{\mathrm{r}}^{\mathrm{t}}}}$$ (2) After a buckling transition, conical shapes with a tip regularized by the finite bending stiffness of the plate are observed for high applied pressures (Fig. 2d and Supplementary Fig. 4 ). As in traditional buckling of slender structures, the finite bending stiffness of the plate indeed prevents out-of-plane buckling for small strains 24 (Supplementary Fig. 5 ). Both the buckling threshold and the evolution of the angle can be rationalized when inserting the incompatible target strain from equation (1) within the Föppl–Von Karman equations for plates 25 (see Supplementary Text for a derivation and Supplementary Fig. 3 ). In Fig. 2d , the results from numerical integration of these equations for incompatible plates without any fitting parameter are plotted as solid lines, and match the experimental angles. Equation (1) is not limited to uniform channels, but is also valid locally if the channels’ distribution follows a gradient. In the configuration illustrated in Fig. 3 , the channel density decreases with the radius, which results in a spiky structure when inflated. Conversely, the channels tend to collapse under suction, leading to a negative value of ε t r . As a consequence, the structure adopts a saddle shape with negative Gaussian curvature, as theoretically predicted by Efrati and colleagues 25 . A continuous family of shapes is thus obtained when adjusting the pressure. Each shape corresponds to an equilibrium state and can be easily reached on demand. 
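Equation (2) converts a radial target strain directly into the slope of the buckled cone; a minimal sketch (the 10% strain is an illustrative value, not one of the measured data points):

```python
import math

def cone_slope_deg(eps_r):
    """Slope angle alpha of the buckled cone from equation (2): cos(alpha) = 1/(1 + eps_r)."""
    return math.degrees(math.acos(1.0 / (1.0 + eps_r)))

# A 10% radial target strain already produces a pronounced cone
alpha = cone_slope_deg(0.10)   # about 24.6 degrees
```

Because the slope grows as roughly the square root of the strain near zero, even small radial expansions lift the plate noticeably once the buckling threshold is passed.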
Fig. 3: Equilibrium states and dynamical response. a , Continuous family of equilibrium states obtained for a baromorph under different pressures. b , Corresponding network of channels embedded in the plate. Channels are more concentrated in the central region of the disk, which leads to a spiky structure once inflated. c , Dynamical response: actuation at approximately 3 Hz of the pneumatic system (Supplementary Video 6 ). Scale bars, 2 cm. Full size image Another key advantage of baromorphs relies on their fast pneumatic actuation 26 . In the example described in Fig. 3c , a reversible transformation of the plate could be achieved with a frequency of 3 Hz (Supplementary Video 1 ). The structures presented in this study have an initial diameter on the order of 10 cm and channels with a section close to 1 mm 2 . Baromorphs are, however, not limited to this size. For a given shape of the structure, the static mechanical response is independent of scale (Supplementary Fig. 6 and Supplementary Video 2 ). Provided with a sufficient power input, the actuation velocity is, in this case, limited by the natural frequency of the plate, ω ∼ h ( E / ρ ) 1/2 / R 2 , where ρ is the elastomer density, leading to a typical frequency of 10 Hz. For small-scale structures, a poroelastic timescale could limit the actuation as in the context of water transport in plants 27 . Conversely, the finite compressibility of air may be a limiting issue for large baromorph structures. The actuation principle is moreover largely material-independent, so that any elastomer may be used, including relatively stiff, tough and wear-resistant rubber, allowing metre-sized structures to resist their weight. Having captured the mechanical ingredients involved in the transformation of a baromorph, we now explore shape-programming issues. Elementary shapes can be programmed through simple computations. 
For instance, purely radial (respectively azimuthal) growth of uniform intensity leads to a cone (respectively an e-cone 11 , the surface obtained geometrically when inserting an angular sector in an initially flat disk). This is confirmed by baromorphs with entirely azimuthal (Fig. 4a ) or entirely radial (Fig. 4c ) channels at constant channel density, which display zero Gaussian curvature except at the apex (for details on the scanning technique see Methods and Supplementary Fig. 1 ). Fig. 4: Collection of 3D shapes obtained by the buckling of baromorphs under pressure. a , b , Circular concentric channels: cone ( a ) and portion of a sphere ( b ). c , d , Radial channels: e-cone ( c ) and saddle ( d ). e , Truncated cone of large angle. f , Helicoid. Grey background paths represent the underlying airway network. Insets, plots of Gaussian curvature K . g , Shape programming of a face. From left to right: the target shape, the corresponding contour lines, the network of channels computed to give rise to the target metrics and two pictures of the deformed baromorph, made of Dragon Skin 10 Medium from Smooth-On. Videos of the transformations of most structures are available as Supplementary Videos 3 – 7 . Scale bars, 2 cm. Full size image A spherical cap with constant positive Gaussian curvature (Fig. 4b and Supplementary Video 3 ) is programmed by azimuthal channels with a density according to equation (2) with a varying value of angle α = arcsin( r / R ). As discussed in the Supplementary Information , less intuitive families of shapes can be simply obtained from the azimuthal growth of radial channels. Surfaces with constant negative Gaussian curvature may be programmed with radial channels with varying density (Fig. 4d and Supplementary Video 4 ). Figure 4e shows how a flat annulus almost becomes a cylinder, by expansion of the inner circumference (Supplementary Fig. 7a and Supplementary Video 5 ). Baromorphs are not limited to axisymmetric plates.
For instance, a ribbon may spontaneously buckle into a helicoid, as expected when a larger target strain is programmed along the edges (Fig. 4f , Supplementary Fig. 7b and Supplementary Video 6 ). Programming an arbitrary shape, however, involves a non-trivial inverse problem, as in other practical realizations of shape morphing. For instance, the direction of the anisotropic target growth (in nematic elastomers 8 , 28 ) or the isotropic growth factor (in swelling gels 7 or auxetic materials 29 ) corresponding to a given target shape may only be computed through a numerical optimization procedure with no formal guarantee for the existence of a solution. In the specific case of baromorphs, the possibility to select both the orientation and the density of the channels enables us to tune at each point both the direction and intensity of the local expansion. Taking advantage of this additional degree of freedom, we propose a straightforward and intuitive analytical recipe for programming a smooth surface that can be parametrized as z = h ( x , y ). In this procedure, each point of the baromorph is moving along the z axis during activation, in a simple generalization of the axisymmetric case (Fig. 2c ). Contour lines (curves with equal h projected onto the reference plane z = 0) are conserved in the process, and no growth occurs along these curves, which we choose as the centreline for the baromorph channels. The local slope angle α measured on the target surface perpendicular to the contour lines, tan α = ∥ ∇ h ∥ , determines through equation (2) the lateral target strain $${\it{\epsilon }}_ \bot ^{\mathrm{t}} = \sqrt {1 + \left( {\frac{{\partial h}}{{\partial x}}} \right)^2 + \left( {\frac{{\partial h}}{{\partial y}}} \right)^2} - 1$$ (3) which in turn sets the width of the airways (that is, Φ ) for the desired pressure p using equation (1) . 
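A minimal numpy sketch of this contour-line recipe, assuming the target surface is sampled as a height map on a regular grid (the cone test case is only a convenient check of equation (3), since its slope, and hence its strain, is uniform):

```python
import numpy as np

def lateral_target_strain(h, dx):
    """Equation (3): strain perpendicular to the contour lines of a height map h(x, y)."""
    hy, hx = np.gradient(h, dx)            # finite-difference partial derivatives
    return np.sqrt(1.0 + hx**2 + hy**2) - 1.0

# Check on a cone of slope 0.5: equation (3) should give the uniform strain sqrt(1.25) - 1
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x)
strain = lateral_target_strain(0.5 * np.hypot(X, Y), dx=x[1] - x[0])
```

Away from the apex the computed strain agrees with equation (2): with tan α = 0.5, cos α = 1/(1 + ε_⊥^t) gives the same value sqrt(1.25) − 1. In the design procedure this strain field would then set the local airway width Φ through equation (1).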
This arrangement ensures that, in the geometric limit (that is, for thin enough plates), the baromorph will follow the target metrics ( Supplementary Text ). Figure 4g shows the programming and realization of a face following the method described above. The results are qualitatively in good agreement with the target shape, except for the finest details (such as the eyes), which are smoothed out by bending elasticity (Supplementary Video 7 ). Indeed, the size of the eyes is of the same order as the thickness of the sheet, and bending rigidity cannot be neglected at this scale. Baromorphs constitute an efficient and versatile tool to transform 2D sheets into complex 3D structures reversibly with fast actuation. Numerous extensions of this architectured active material are possible for practical applications: cuts 29 can be made in the plate to release some bending constraints and improve the shape programming (Supplementary Fig. 8 and Supplementary Video 8 ). The target curvature tensor can also be programmed 25 , 30 using a bilayer composed of two independent networks of airways. In such a configuration, the homogenized sheet is free of constraint and the actuation does not involve any threshold. Snapping instabilities can also be triggered with out-of-phase actuation of the layers, as shown in Supplementary Video 9 . More generally, several intertwined networks may be embedded in one plate to program various shapes on demand. Rather than simple channels, controlled cavities with different sizes and shapes may be embedded in the plate to impose all three components of the target growth tensor, in contrast with current morphing techniques 6 , 7 , 10 , 28 (Supplementary Fig. 9 and Supplementary Video 10 ). Altogether, our study opens pathways in the numerous areas where shape morphing is anticipated to find new innovative applications, such as minimally invasive surgery, bioprinting, flow optimization, architecture or more generally smart materials. 
Methods Making baromorph plates The baromorph plates were made of polyvinyl siloxane (Elite Double 8 from Zhermack or Dragon Skin 10 Medium from Smooth-On) by mixing equal quantities of catalyst and base liquids. The mixture was then poured onto a 3D printed mould designed using OpenScad software and printed with a Form2 printer from Formlabs. If necessary, the entire set-up was placed in a vacuum chamber to efficiently remove trapped air bubbles. Curing required, respectively, 20 min and 3 h. At the same time, a sheet of thickness e of the same elastomer was spread on a flat surface and cured. The structure removed from the mould was finally closed by ‘gluing’ the flat sheet on top of the moulded sheet using a thin layer of uncured mixture of the same material. Experimental strain data Azimuthal and radial strains were measured experimentally using a digital image correlation (DIC) program, CorreliQ4, on Matlab 31 . A random pattern of dots was generated on the surface by spraying paint. Top-view pictures of the baromorph structure were taken at different pressures and the program tracked in-plane strain with respect to a chosen reference image. Mean strains perpendicular and parallel to the channels were then extracted. 3D scanning and computation of Gaussian curvature The surface topography of inflated baromorph structures was measured with a 3D scanning system developed in the laboratory and based on the work of Cobelli and colleagues 32 . Basically, the 3D shape was inferred from the distortion of a pattern of stripes projected on the structure (Supplementary Fig. 1a,b ). The local height was thus deduced from the phase shift of the periodic pattern. Although Fourier transform is generally used to extract the phase, the lack of periodic boundaries prevented us from using this method. We instead used a phase shifting profilometry technique, as detailed in ref. 33 . Four patterns were successively projected and recorded with the camera, each time shifted by π/2. 
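The four-step phase extraction described above can be sketched as follows. The synthetic fringes assume one particular sign convention for the π/2 steps; any fixed convention only shifts the recovered phase by a constant, which cancels when taking Δφ against the reference image:

```python
import numpy as np

def phase_map(I1, I2, I3, I4):
    """Wrapped phase from four fringe images shifted by pi/2 (phase-shifting profilometry).

    arctan2 resolves the full (-pi, pi] range rather than the (-pi/2, pi/2) of arctan.
    """
    return np.arctan2(I4 - I2, I3 - I1)

# Synthetic check: build four shifted fringe patterns with a known phase and recover it
true_phase = np.linspace(-3.0, 3.0, 7)
A, B = 100.0, 50.0  # fringe offset and contrast (arbitrary intensity units)
I1, I2, I3, I4 = (A - B * np.cos(true_phase - k * np.pi / 2) for k in range(4))
recovered = phase_map(I1, I2, I3, I4)   # equals true_phase for this convention
```

In the full pipeline the wrapped phase would next be unwrapped and converted to height via the triangulation relation of equation (5).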
The local phase φ ( x , y ) could thus be directly computed as $${\varphi }\left( {x,y} \right) = {\mathrm{arctan}}\left( {\frac{{I_4 - I_2}}{{I_3 - I_1}}} \right)$$ (4) where I 1−4 are the local fringe intensities at pixel ( x , y ) for the different phase-shifted patterns. The local phase was defined at each pixel modulo π, then unwrapped using a 2D unwrap Matlab code written by M.F. Kasim (2D Weighted Phase Unwrapping) based on the work of Ghiglia and Romero 34 . The local surface height can be deduced from the phase shift with respect to reference image ∆ φ ( x , y ) using basic geometrical optics: $$h\left( {x,y} \right) = \frac{{\Delta {{\varphi L}}}}{{\Delta {{\varphi }} - 2{\mathrm{\pi D}}/t}}$$ (5) where ∆ φ = φ ( x , y ) − φ 0 ( x , y ), D is the distance between the video projector and camera, L is the height of both instruments with respect to the flat surface of the reference, and t is the spatial wavelength of the fringed pattern (Supplementary Fig. 1c ). The Gaussian curvature was finally deduced from a local quadratic fit of the surface (Supplementary Fig. 1d ). Data availability The data supporting the findings of this study are available within the paper and its Supplementary Information files and from the corresponding author upon reasonable request. | A small team of researchers at ESPCI Paris has come up with a way to combine pneumatics with a hydrogel to create a baromorph for soft robotics applications—a baromorph is a soft material that self-configures when inflated. In their paper published in the journal Nature Materials, the group describes their research and their geometric creations. Efi Efrati with the Weizmann Institute of Science has written a News and Views piece on the work done by the team in the same journal issue. In the never-ending quest to create robots that are ever more capable, roboticists have often been inspired by creatures that nature has designed. 
Attempting to mimic humans is a popular research area, as is copying four-legged creatures. Efrati notes that most such animals have one thing in common—stiff parts of their anatomy working against other stiff parts produce motion—bones in joints, for example. But as Efrati also notes, there is another area of research focused on the development of softer components that are manipulated without stiffer components—like jellyfish, for example, or flowers. Unfortunately, progress in this area has been rather slow—soft machines tend to respond slowly due to actuation issues. And they also tend to have limited degrees of motion and wear out quickly. In this new effort, the researchers have come up with a novel approach to creating soft machines—combining the bendability of hydrogels with the power of air pressure. They call their creations baromorphs, and they have demonstrated that they can be used to create soft machines in a wide variety of shapes. Each baromorph is essentially a sheet of hydrogel with channels inside of it. In its initial relaxed state, it is typically flat. When air is pumped in, it is routed through the channels in such a way as to inflate the baromorph into a desired shape. The channels are designed using a computer program, which also handles the formation of the resultant product. To prove the viability of their method, the researchers created baromorphs that were shaped like bowls, a saddle and even a human face. | 10.1038/s41563-018-0219-x |
Nano | Slowly cooled DNA transforms disordered nanoparticles into orderly crystal | Paper: dx.doi.org/10.1038/nature12739 Journal information: Nature | http://dx.doi.org/10.1038/nature12739 | https://phys.org/news/2013-11-slowly-cooled-dna-disordered-nanoparticles.html | Abstract Crystallization is a fundamental and ubiquitous process much studied over the centuries. But although the crystallization of atoms is fairly well understood 1 , 2 , it remains challenging to predict reliably the outcome of molecular crystallization processes that are complicated by various molecular interactions and solvent involvement. This difficulty also applies to nanoparticles: high-quality three-dimensional crystals 3 , 4 , 5 , 6 are mostly produced using drying and sedimentation techniques that are often impossible to rationalize and control to give a desired crystal symmetry, lattice spacing and habit (crystal shape). In principle, DNA-mediated assembly of nanoparticles offers an ideal opportunity for studying nanoparticle crystallization 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 : a well-defined set of rules have been developed to target desired lattice symmetries and lattice constants 8 , 9 , 18 , and the occurrence of features such as grain boundaries and twinning in DNA superlattices and traditional crystals comprised of molecular or atomic building blocks suggests that similar principles govern their crystallization. But the presence of charged biomolecules, interparticle spacings of tens of nanometres, and the realization so far of only polycrystalline DNA-interconnected nanoparticle superlattices, all suggest that DNA-guided crystallization may differ from traditional crystal growth. Here we show that very slow cooling, over several days, of solutions of complementary-DNA-modified nanoparticles through the melting temperature of the system gives the thermodynamic product with a specific and uniform crystal habit. 
We find that our nanoparticle assemblies have the Wulff equilibrium crystal structure that is predicted from theoretical considerations and molecular dynamics simulations, thus establishing that DNA hybridization can direct nanoparticle assembly along a pathway that mimics atomic crystallization. Main The crystallization of nanoparticles mediated by DNA typically involves initial assembly of a disordered aggregate, which upon thermal annealing slightly below its melting temperature transforms into an ordered superlattice ( Fig. 1a , blue arrows) 12 . Transmission electron microscopy (TEM) images and the presence of rings in the small-angle X-ray scattering (SAXS) data show that all superlattices formed using this approach thus far are polycrystalline, with ordered micrometre-sized domains randomly oriented with respect to one another 9 , 19 . Considering that traditional crystallization techniques for atoms and molecules typically rely on slow cooling through the melting temperature 20 , we hypothesized that such a slow cooling approach applied to DNA-based assembly strategies might yield faceted crystals. DNA-functionalized nanoparticle solutions were therefore heated to above the melting temperature of the DNA links designed to connect particles and then slowly cooled to room temperature ( Fig. 1a , red arrows), in a process which typically took two to three days to complete. A key parameter known to the researcher before doing the experiments is the aggregate melting temperature, which is well defined and directly correlated with the nucleic acid sequences used for assembly 12 . Figure 1: Superlattices formed by the slow-cooling method. a , Two approaches for DNA-mediated nanoparticle crystallization are designated by arrows on a thermal melting curve of a DNA-linked gold nanoparticle aggregate. 
The traditional method (blue arrow), in which the aggregate is annealed a few degrees below the melting temperature, produces polycrystalline superlattices with no defined shape. The slow-cooling method (red arrow), in which the aggregate is heated above its melting temperature and cooled at a rate of 0.01 °C min −1 , produced well-defined, faceted microcrystals in each of the dozens of experiments conducted using these conditions. Extinction wavelength measured from the ultraviolet–visible spectrum, 520 nm (a.u.). b , Representative one-dimensional (top) and two-dimensional (bottom) SAXS data for b.c.c. (i) and CsCl (ii) superlattices synthesized from the slow-cooling technique. In the one-dimensional data, the red trace is the experimentally obtained scattering pattern and the black trace is the predicted scattering pattern for a perfect lattice. SAXS data are shown as plots of superlattice structure factor ( S ( q ), in arbitrary units) versus scattering vector ( q , in units of Å −1 ). c , TEM images of shape-controlled b.c.c. (i)–(ii) and CsCl (iii)–(iv) microcrystals. Scale bars are 1.5 µm for (i), 1.0 µm for (ii), 0.5 µm for (iii) and 0.5 µm for (iv). d , SEM image of a representative b.c.c. microcrystal with visible faceting where constituent nanoparticles can readily be seen (20-nm gold nanoparticles shown; scale bar is 1 µm). The inset shows a high-magnification (×52,500) view of the crystal facet with labelled surface defects. PowerPoint slide Full size image The slow cooling of the combination of two sets of gold nanoparticles functionalized with complementary DNA linker strands 9 produces superlattices with the expected body-centred cubic (b.c.c.) packing when using 20-nm gold nanoparticles and CsCl packing when using 20-nm and 15-nm gold nanoparticles, as confirmed by the radially averaged one-dimensional SAXS data ( Fig. 1b , (i) and (ii)). To enable direct visualization, the superlattices were also stabilized by embedding in silica 19 . 
TEM images of these structures reveal uniform crystals with square- and hexagonal-shaped domains for both the b.c.c. ( Fig. 1c , (i) and (ii)) and the CsCl ( Fig. 1c , (iii) and (iv)) particle packing symmetries, whereas scanning electron microscopy (SEM) allows us to observe surface features and the overall crystal habit ( Fig. 1d ). Evidently, the slow-cooling process enables DNA-driven assembly and crystallization that favours faceted rhombic dodecahedron microcrystals over the polycrystalline assemblies obtained by annealing below the melting temperature. Although single-crystal formation by annealing below the melting temperature may in principle be possible, the kinetics of reorganization from an irregularly shaped crystal into a well-defined microcrystal are likely to be too slow to be observed experimentally. SEM images of the microcrystals in different orientations on the substrate are all consistent with rhombic dodecahedron formation ( Fig. 2a ). Closer inspection of one of the crystals ( Fig. 2a , bottom) reveals extraordinarily well ordered nanoparticles at the surface, as well as the presence of common surface defects including ‘particle adatoms’ (a surface defect in which an atom is adsorbed on the surface of a crystal plane) and step edges. The nanoparticle orientation is consistent with a crystal that is enclosed by (110) planes, as expected for rhombic dodecahedra, and which is also the closest-packed plane in a b.c.c. unit cell. A tilting experiment was conducted in the TEM on a single microcrystal to observe the different morphologies that are consistent with a rhombic dodecahedron crystal habit. It is important to note that although these microcrystals exhibited a wide size distribution ( Fig. 2c ), faceted crystals were the predominant product of slow cooling, and no shapes other than rhombic dodecahedron microcrystals were observed. 
Figure 2: Structural determination and electron microscope observation of 20-nm gold nanoparticle microcrystals. a , SEM images of rhombic dodecahedron microcrystals viewed from various orientations. A schematic representation of each crystal orientation is shown on the top right corner of each of the top four images (scale bars, clockwise starting from top left, are 2 µm, 1 µm, 1 µm and 2 µm). At the bottom, a close-up view of the region enclosed by the orange box of one of the crystals reveals a nanoparticle orientation consistent with the (110) facet (scale bar is 1 µm). b , A TEM tilting experiment on a single microcrystal showing the square- and hexagonal-type shapes of the crystal when viewed at different angles in transmission mode. All scale bars are 2 µm. c , A TEM image showing the size variation and shape uniformity of the rhombic dodecahedron microcrystals (both scale bars are 5 µm). PowerPoint slide Full size image In contrast to several prior examples of microcrystals grown from nanoparticle building blocks, in which the overall crystal shape was largely dependent on factors including nanoparticle size and length of the ligand shell 3 , 4 , 21 , the shape of the microcrystals we report here was fairly independent of such parameters: microcrystals made from 5-nm, 10-nm and 20-nm gold nanoparticles all exhibit overall rhombic dodecahedron shapes and b.c.c. packing with lattice parameters of 25.7 nm, 29.1 nm and 39.5 nm, respectively (compare Fig. 3a, b and d ). Oligonucleotide length was kept constant in these experiments. Furthermore, rhombic dodecahedron microcrystals were observed for a binary system consisting of 20-nm and 15-nm gold nanoparticles arranged in a CsCl lattice symmetry ( Fig. 3c ). Thus, we conclude that the rhombic dodecahedron is the thermodynamically most favourable crystal shape for this system over a range of particle sizes and interparticle distances. 
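The lattice parameters quoted above follow from the position of the first-order SAXS peak: for a b.c.c. lattice the first allowed reflection is (110), so d110 = 2π/q0 and a = √2·d110. A short sketch (the q0 value below is back-computed for illustration and is not read from the published scattering data):

```python
import math

def bcc_lattice_parameter_nm(q0_inv_angstrom):
    """b.c.c. lattice constant from the first-order SAXS peak q0 (in inverse angstroms).

    The first allowed b.c.c. reflection is (110): d_110 = 2*pi/q0 and a = sqrt(2)*d_110.
    Returns the lattice constant in nanometres.
    """
    return math.sqrt(2.0) * 2.0 * math.pi / q0_inv_angstrom / 10.0

# Illustrative: q0 near 0.0346 A^-1 is consistent with the 25.7 nm lattice
# reported for the 5-nm particle superlattice
a_nm = bcc_lattice_parameter_nm(0.0346)
```

The same relation, with the appropriate (hkl) indexing of the first peak, converts each trace in Fig. 3d into the lattice parameter labelled next to it.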
Furthermore, molecular dynamics simulations on a colloid model predicted a rhombic dodecahedron equilibrium crystal structure, fully consistent with experimental observations ( Fig. 3e ). Figure 3: Rhombic dodecahedron microcrystals with varying unit cell compositions. a-c , TEM and SEM images of rhombic dodecahedron microcrystals synthesized from a b.c.c. lattice of 5-nm gold nanoparticles (scale bars, left to right, are 1 µm, 1 µm and 2 µm) ( a ), from a b.c.c. lattice of 10-nm gold nanoparticles (scale bars, left to right, are 1 µm, 2 µm and 4 µm) ( b ) and from a CsCl lattice of 20-nm and 15-nm gold nanoparticles (scale bars, left to right, are 0.5 µm, 1 µm and 2 µm) ( c ). d , SAXS data for a b.c.c. crystal made from 5-nm (black trace), 10-nm (red trace), and 20-nm (blue trace) gold nanoparticles. First-order scattering peak q 0 and corresponding lattice parameter values are indicated next to the respective scattering pattern for each crystal. e , Molecular dynamics simulation of a binary set of particles exhibiting interactions modelled for the DNA–gold nanoparticle system produces a rhombic dodecahedron microcrystal that is consistent with experimental observations. The formation of rhombic dodecahedron microcrystals from a b.c.c. packing of nanoparticles can be rationalized in terms of the surface energy γ of the exposed facets 22 . Rhombic dodecahedra are enclosed by (110) facets (bottom panel of Fig. 2a ), which is the closest-packed plane for a b.c.c. or CsCl lattice. When using the standard broken-bond model approximation for surface energy, exposing the closest-packed plane is thermodynamically favoured: it requires breaking the smallest number of particle-to-particle interactions per unit area and thus exposes the lowest-surface-energy facet 23 . From this model, the relative surface energies for b.c.c. metal facets should exhibit a ratio of γ (110) :γ (111) :γ (100) = 1:1.22:1.41.
Similarly, the relative surface energies for face-centred cubic (f.c.c.) metal facets should be γ (111) :γ (100) :γ (110) = 1:1.15:1.22. These calculations thus predict the Wulff polyhedron, the equilibrium crystal structure, to be a rhombic dodecahedron enclosed by (110) facets for a b.c.c. metal and a truncated octahedron enclosed by (111) and (100) facets for a f.c.c. metal 1 . In many systems the expected Wulff polyhedron is not always formed, and the validity of the assumptions and approximations made must be analysed for each individual case. We therefore calculated actual surface energy values for our DNA–nanoparticle system using recently developed molecular dynamics simulations that accurately predict the crystallization behaviour of DNA-assembled nanoparticles (see Supplementary Information ) 24 , 25 . In these calculations, the surface energy is defined as the excess energy at the surface of a material compared to the energy of the bulk system. To calculate γ, periodic boundary conditions were removed from the modelled bulk crystal along the z -axis to expose the facet of interest on two sides ( Fig. 4 ). The energy of the bulk crystal E bulk was subtracted from the energy of the exposed facet E surface exposed , and then divided by twice the area (because two surfaces were exposed and the surface charge density is close to zero 26 ) to give the surface energy γ of the exposed facet. Figure 4: Method for calculating surface energy values for a b.c.c. DNA–gold nanoparticle superlattice. Periodic boundary conditions along the z -axis are removed from the bulk crystal to expose the surface of interest, for example, (110) or (100) as shown. Absolute surface energy values calculated from this approach are found in Table 1 . γ = ( E surface exposed − E bulk )/(2 × area). Table 1 summarizes absolute surface-energy values for the facets of a b.c.c. crystal and a f.c.c.
crystal consisting of DNA-assembled nanoparticles calculated using this model, with the binding strength of complementary sticky ends scaled to 42.3 kJ mol −1 (ref. 27 ) to match the strength of the DNA sequences (TTCCTT) used in our experiments. For the b.c.c. system, the calculated ratios of γ (100) :γ (110) = 1.46 ± 0.02 and γ (111) :γ (110) = 1.24 ± 0.02 are in good agreement with the theoretically predicted ratio described above. Evidently, the observation of uniform rhombic dodecahedron crystals from a b.c.c. arrangement of nanoparticles follows the crystallization behaviour expected for a b.c.c. arrangement of atoms 28 . The expected Wulff polyhedron was observed for the b.c.c. nanoparticle system, but no truncated octahedra or other uniform shapes were observed in either experiment or simulation among the faceted crystals obtained for the f.c.c. system. This is probably because the surface energies of the two most stable surfaces in a f.c.c. crystal are too close in energy (predicted and calculated ratio γ (100) :γ (111) = 1.15) for one to be favoured predominantly over the other ( Table 1 ). Furthermore, the SAXS data for the f.c.c. crystals showed evidence of stacking faults and twinning in the lattice structure, defects which may have prevented the formation of uniform crystal shapes (a more in-depth discussion of the f.c.c. system can be found in the Supplementary Information ). Nonetheless, the consistency between the experimental observations and the simulation results provides convincing evidence that the broken-bond approximation used for describing surface energy and crystal growth for atomic systems can similarly be used to describe the crystallization of nanoparticles using DNA interactions; and, hence, that the DNA-guided assembly of nanoparticles provides a nanometre-scale analogue to the crystallization behaviour exhibited by atomic crystals. 
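The broken-bond ratios quoted above follow from a short counting argument: the areal density of nearest-neighbour bonds cut by a plane with unit normal n̂ is proportional to the sum of |b · n̂| over the distinct bond directions b (in both the b.c.c. and f.c.c. lattices, every bond direction carries equal weight). A minimal sketch reproducing both sets of ratios — our own illustration, not the authors' simulation code:

```python
import math

def cut_bond_density(bond_dirs, normal):
    """Sum of |b . n_hat| over nearest-neighbour bond directions, which is
    proportional to broken bonds per unit area for a cut along this plane."""
    norm = math.sqrt(sum(c * c for c in normal))
    nhat = [c / norm for c in normal]
    return sum(abs(sum(bi * ni for bi, ni in zip(b, nhat))) for b in bond_dirs)

# b.c.c.: 8 nearest neighbours at (+/-1, +/-1, +/-1)a/2 -> 4 distinct directions
bcc = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]
# f.c.c.: 12 nearest neighbours at (+/-1, +/-1, 0)a/2 etc. -> 6 distinct directions
fcc = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]

for name, bonds, faces in [("bcc", bcc, [(1, 1, 0), (1, 1, 1), (1, 0, 0)]),
                           ("fcc", fcc, [(1, 1, 1), (1, 0, 0), (1, 1, 0)])]:
    d = [cut_bond_density(bonds, f) for f in faces]
    print(name, [round(x / d[0], 2) for x in d])
```

The b.c.c. loop prints ratios 1 : 1.22 : 1.41 for (110):(111):(100) (that is, 1 : √(3/2) : √2), and the f.c.c. loop prints 1 : 1.15 : 1.22 for (111):(100):(110), matching the text.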
Table 1 Surface energy values calculated for DNA–gold nanoparticle superlattices The experimental observation of the Wulff equilibrium crystal structure, coupled with computational models, demonstrates the utility of DNA for controlling not only the recognition properties and surface energy of individual nanoparticles, but also the surface energies of the macroscopic nanoparticle assembly in such a way that a specific structure can be deliberately programmed and realized in the laboratory. The challenge now for both the experimental and theoretical communities is to build on the principles we have described here to identify and synthesize crystal habits that maximize surface-energy differences, and to create single microcrystals with useful properties that may find practical use such as in photonic and catalytic applications. Methods Summary All oligonucleotides used in this work were synthesized on a solid-support MM48 synthesizer using reagents purchased from Glen Research. Sequences can be found in the Supplementary Information . Nanoparticles were functionalized and assembled according to published literature protocols. After particle assembly, slow cooling to room temperature was conducted in a temperature cycler (Life Technologies) at a starting temperature above the aggregate melting temperature and typically at a rate of 0.01 °C min −1 unless otherwise specified. Superlattices were characterized by synchrotron SAXS experiments conducted at the Advanced Photon Source at Argonne National Laboratory. Superlattices were transferred to the solid state using a silica embedding method 19 for visualization by TEM (Hitachi HD2300) and SEM (Hitachi SU8030). To reproduce the shapes with molecular dynamics simulations, a colloidal model was validated by computing the interaction potential with simulations 24 , 26 with explicit DNA chains and simulations were performed with the LAMMPS package (available at ).
See the Supplementary Videos for a visualization of the simulated formation of microcrystals from a b.c.c. and a f.c.c. system of interacting particles modelled as a single bead. To estimate surface-energy values, a scale-accurate coarse-grained model was used 24 and molecular dynamics simulations were performed on the HOOMD-Blue package (available at ). More molecular dynamics simulation details and assumptions can be found in the Supplementary Information . | Nature builds flawless diamonds, sapphires and other gems. Now a Northwestern University research team is the first to build near-perfect single crystals out of nanoparticles and DNA, using the same structure favored by nature. "Single crystals are the backbone of many things we rely on—diamonds for beauty as well as industrial applications, sapphires for lasers and silicon for electronics," said nanoscientist Chad A. Mirkin. "The precise placement of atoms within a well-defined lattice defines these high-quality crystals. "Now we can do the same with nanomaterials and DNA, the blueprint of life," Mirkin said. "Our method could lead to novel technologies and even enable new industries, much as the ability to grow silicon in perfect crystalline arrangements made possible the multibillion-dollar semiconductor industry." His research group developed the "recipe" for using nanomaterials as atoms, DNA as bonds and a little heat to form tiny crystals. This single-crystal recipe builds on superlattice techniques Mirkin's lab has been developing for nearly two decades. In this recent work, Mirkin, an experimentalist, teamed up with Monica Olvera de la Cruz, a theoretician, to evaluate the new technique and develop an understanding of it. Given a set of nanoparticles and a specific type of DNA, Olvera de la Cruz showed they can accurately predict the 3-D structure, or crystal shape, into which the disordered components will self-assemble. Mirkin is the George B. 
Rathmann Professor of Chemistry in the Weinberg College of Arts and Sciences. Olvera de la Cruz is a Lawyer Taylor Professor and professor of materials science and engineering in the McCormick School of Engineering and Applied Science. The two are senior co-authors of the study. The results will be published Nov. 27 in the journal Nature. The general set of instructions gives researchers unprecedented control over the type and shape of crystals they can build. The Northwestern team worked with gold nanoparticles, but the recipe can be applied to a variety of materials, with potential applications in the fields of materials science, photonics, electronics and catalysis. DNA is used as both the blueprint and the basic building block for the construction of well-defined crystals. Through the use of programmed DNA interactions, nanoparticles are assembled into ordered lattices which form the structural components that make up three-dimensional crystals with a well-defined shape. Credit: Evelyn Auyeung/Ting Li/Chad A. Mirkin/Monica Olvera de la Cruz A single crystal has order: its crystal lattice is continuous and unbroken throughout. The absence of defects in the material can give these crystals unique mechanical, optical and electrical properties, making them very desirable. In the Northwestern study, strands of complementary DNA act as bonds between disordered gold nanoparticles, transforming them into an orderly crystal. The researchers determined that the ratio of the DNA linker's length to the size of the nanoparticle is critical. "If you get the right ratio it makes a perfect crystal—isn't that fun?" said Olvera de la Cruz, who also is a professor of chemistry in the Weinberg College of Arts and Sciences. "That's the fascinating thing, that you have to have the right ratio. We are learning so many rules for calculating things that other people cannot compute in atoms, in atomic crystals." 
The ratio affects the energy of the faces of the crystals, which determines the final crystal shape. Ratios that don't follow the recipe lead to large fluctuations in energy and result in a sphere, not a faceted crystal, she explained. With the correct ratio, the energies fluctuate less and result in a crystal every time. "Imagine having a million balls of two colors, some red, some blue, in a container, and you try shaking them until you get alternating red and blue balls," Mirkin explained. "It will never happen. "But if you attach DNA that is complementary to nanoparticles—the red has one kind of DNA, say, the blue its complement—and now you shake, or in our case, just stir in water, all the particles will find one another and link together," he said. "They beautifully assemble into a three-dimensional crystal that we predicted computationally and realized experimentally." To achieve a self-assembling single crystal in the lab, the research team reports taking two sets of gold nanoparticles outfitted with complementary DNA linker strands. Working with approximately 1 million nanoparticles in water, they heated the solution to a temperature just above the DNA linkers' melting point and then slowly cooled the solution to room temperature, which took two or three days. The very slow cooling process encouraged the single-stranded DNA to find its complement, resulting in a high-quality single crystal approximately three microns wide. "The process gives the system enough time and energy for all the particles to arrange themselves and find the spots they should be in," Mirkin said. The researchers determined that the length of DNA connected to each gold nanoparticle can't be much longer than the size of the nanoparticle. In the study, the gold nanoparticles varied from five to 20 nanometers in diameter; for each, the DNA length that led to crystal formation was about 18 base pairs and six single-base "sticky ends." 
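The two-to-three-day cooling time quoted here is consistent with the 0.01 °C min⁻¹ ramp reported in the Methods. A quick sanity check — the starting and final temperatures below are illustrative assumptions, since the actual aggregate melting temperature depends on the DNA design:

```python
# Hypothetical numbers: the ramp starts just above the aggregate melting
# temperature (assumed here to be ~55 C) and ends at room temperature.
start_temp_c = 55.0
end_temp_c = 25.0
rate_c_per_min = 0.01  # cooling rate reported in the Methods

minutes = (start_temp_c - end_temp_c) / rate_c_per_min
days = minutes / (60 * 24)
print(f"{minutes:.0f} min, about {days:.1f} days")
```

A 30 °C ramp at this rate takes about two days, in line with the "two or three days" reported.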
"There's no reason we can't grow extraordinarily large single crystals in the future using modifications of our technique," said Mirkin, who also is a professor of medicine, chemical and biological engineering, biomedical engineering and materials science and engineering and director of Northwestern's International Institute for Nanotechnology. The title of the paper is "DNA-mediated nanoparticle crystallization into Wulff polyhedra." In addition to Mirkin and Olvera de la Cruz, authors of the paper are Evelyn Auyeung (first author), Ting I. N. G. Li, Andrew J. Senesi, Abrin L. Schmucker and Bridget C. Pals, all from Northwestern. | dx.doi.org/10.1038/nature12739 |
Physics | Student's physics homework picked up by Amazon quantum researchers | J. Pablo Bonilla Ataides et al. The XZZX surface code, Nature Communications (2021). DOI: 10.1038/s41467-021-22274-1 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-021-22274-1 | https://phys.org/news/2021-04-student-physics-homework-amazon-quantum.html | Abstract Performing large calculations with a quantum computer will likely require a fault-tolerant architecture based on quantum error-correcting codes. The challenge is to design practical quantum error-correcting codes that perform well against realistic noise using modest resources. Here we show that a variant of the surface code—the XZZX code—offers remarkable performance for fault-tolerant quantum computation. The error threshold of this code matches what can be achieved with random codes (hashing) for every single-qubit Pauli noise channel; it is the first explicit code shown to have this universal property. We present numerical evidence that the threshold even exceeds this hashing bound for an experimentally relevant range of noise parameters. Focusing on the common situation where qubit dephasing is the dominant noise, we show that this code has a practical, high-performance decoder and surpasses all previously known thresholds in the realistic setting where syndrome measurements are unreliable. We go on to demonstrate the favourable sub-threshold resource scaling that can be obtained by specialising a code to exploit structure in the noise. We show that it is possible to maintain all of these advantages when we perform fault-tolerant quantum computation. Introduction A large-scale quantum computer must be able to reliably process data encoded in a nearly noiseless quantum system. 
To build such a quantum computer using physical qubits that experience errors from noise and faulty control, we require an architecture that operates fault-tolerantly 1 , 2 , 3 , 4 , using quantum error correction to repair errors that occur throughout the computation. For a fault-tolerant architecture to be practical, it will need to correct for physically relevant errors with only a modest overhead. That is, quantum error correction can be used to create near-perfect logical qubits if the rate of relevant errors on the physical qubits is below some threshold, and a good architecture should have a sufficiently high threshold for this to be achievable in practice. These fault-tolerant designs should also be efficient, using a reasonable number of physical qubits to achieve the desired logical error rate. The most common architecture for fault-tolerant quantum computing is based on the surface code 5 . It offers thresholds against depolarising noise that are already high, and encouraging recent results have shown that its performance against more structured noise can be considerably improved by tailoring the code to the noise model 6 , 7 , 8 , 9 , 10 . While the surface code has already demonstrated promising thresholds, its overheads are daunting 5 , 11 . Practical fault-tolerant quantum computing will need architectures that provide high thresholds against relevant noise models while minimising overheads through efficiencies in physical qubits and logic gates. In this paper, we present a highly efficient fault-tolerant architecture design that exploits the common structures in the noise experienced by physical qubits. Our central tool is a variant of the surface code 12 , 13 , 14 where the stabilizer checks are given by the product XZZX of Pauli operators around each face on a square lattice 15 . 
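The XZZX check structure just described is easy to verify in code: in the binary symplectic representation, two Pauli operators commute exactly when their symplectic inner product vanishes, so one can build the face operators on a toy lattice and confirm they all commute. A minimal sketch — our own illustration; it uses periodic boundaries for brevity, whereas the paper mainly treats open boundaries:

```python
import numpy as np

L = 4          # toy L x L lattice with periodic boundaries (a torus)
n = L * L      # one qubit per vertex

def q(r, c):
    """Qubit index at vertex (r, c), wrapping around the torus."""
    return (r % L) * L + (c % L)

def xzzx_stabilizer(r, c):
    """Binary symplectic vector (x | z) for the face with top-left corner (r, c):
    Pauli-X on two opposite corners, Pauli-Z on the other two - the same
    XZZX pattern at every face."""
    s = np.zeros(2 * n, dtype=int)
    s[q(r, c)] = 1           # X
    s[q(r + 1, c + 1)] = 1   # X
    s[n + q(r, c + 1)] = 1   # Z
    s[n + q(r + 1, c)] = 1   # Z
    return s

def commute(a, b):
    # Symplectic inner product x1.z2 + z1.x2 (mod 2) is 0 iff the Paulis commute.
    return (a[:n] @ b[n:] + a[n:] @ b[:n]) % 2 == 0

stabs = [xzzx_stabilizer(r, c) for r in range(L) for c in range(L)]
assert all(commute(a, b) for a in stabs for b in stabs)
print("all", len(stabs), "face stabilizers commute")
```

Neighbouring faces overlap on two qubits with conjugate Paulis (X against Z), so the two sign flips cancel; diagonal faces overlap on one qubit with identical Paulis. Either way every pair commutes.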
This seemingly innocuous local change of basis offers a number of significant advantages over its more conventional counterpart for structured noise models that deviate from depolarising noise. We first consider preserving a logical qubit in a quantum memory using the XZZX code. While some two-dimensional codes have been shown to have high error thresholds for certain types of biased noise 7 , 16 , we find that the XZZX code gives exceptional thresholds for all single-qubit Pauli noise channels, matching what is known to be achievable with random coding according to the hashing bound 17 , 18 . It is particularly striking that the XZZX code can match the threshold performance of a random code, for any single-qubit Pauli error model, while retaining the practical benefits of local stabilizers and an efficient decoder. Intriguingly, for noise that is strongly biased towards X or Z, we have numerical evidence to suggest that the XZZX threshold exceeds this hashing bound, meaning this code could potentially provide a practical demonstration of the superadditivity of coherent information 19 , 20 , 21 , 22 , 23 . We show that these high thresholds persist with efficient, practical decoders by using a generalisation of a matching decoder in the regime where dephasing noise is dominant. In the fault-tolerant setting when stabilizer measurements are unreliable, we obtain thresholds in the biased-noise regime that surpass all previously known thresholds. With qubits and operations that perform below the threshold error rate, the practicality of scalable quantum computation is determined by the overhead, i.e. the number of physical qubits and gates we need to obtain a target logical failure rate. Along with offering high thresholds against structured noise, we show that architectures based on the XZZX code require very low overhead to achieve a given target logical failure rate. 
Generically, we expect the logical failure rate to decay like \(O({p}^{d/2})\) at low error rates p where \(d=O(\sqrt{n})\) is the distance of a surface code and n is the number of physical qubits used in the system. By considering a biased-noise model where dephasing errors occur a factor η more frequently than other types of errors we demonstrate an improved logical failure rate scaling like \(O({(p/\sqrt{\eta })}^{d/2})\) . We can therefore achieve a target logical failure rate using considerably fewer qubits at large bias because its scaling is improved by a factor \(\sim {\eta }^{-d/4}\) . We also show that near-term devices, i.e. small-sized systems with error rates near to threshold, can have a logical failure rate with quadratically improved scaling as a function of distance; \(O({p}^{{d}^{2}/2})\) . Thus, we should expect to achieve low logical failure rates using a modest number of physical qubits for experimentally plausible values of the noise bias, for example, 10 ≲ η ≲ 1000 24 , 25 . Finally, we consider fault-tolerant quantum computation with biased noise 26 , 27 , 28 , and we show that the advantages of the XZZX code persist in this context. We show how to implement low-overhead fault-tolerant Clifford gates by taking advantage of the noise structure as the XZZX code undergoes measurement-based deformations 29 , 30 , 31 . With an appropriate lattice orientation, noise with bias η is shown to yield a reduction in the required number of physical qubits by a factor of \(\sim {\mathrm{log}}\,\eta\) in a large-scale quantum computation. These advantages already manifest at code sizes attainable using present-day quantum devices. Results The XZZX surface code The XZZX surface code is locally equivalent to the conventional surface code 12 , 13 , 14 , differing by a Hadamard rotation on alternate qubits 32 , 33 . The code parameters of the surface code are invariant under this rotation.
The XZZX code therefore encodes k = O (1) logical qubits using \(n=O({d}^{2})\) physical qubits where the code distance is d . Constant factors in these values are determined by details such as the orientation of the square-lattice geometry and boundary conditions. See Fig. 1 for a description. This variant of the surface code was proposed in ref. 15 , and has been considered as a topological memory 34 . To contrast the XZZX surface code with its conventional counterpart, we refer to the latter as the CSS surface code because it is of Calderbank-Shor-Steane type 35 , 36 . Fig. 1: The XZZX surface code. Qubits lie on the vertices of the square lattice. The codespace is the common +1 eigenspace of its stabilizers S f for all faces of the lattice f . a An example of a stabilizer S f associated with face f . We name the XZZX code according to its stabilizer operators that are the product of two Pauli-X terms and two Pauli-Z terms. Unlike the conventional surface code, the stabilizers are the same at every face. b A boundary stabilizer. c A logical operator that terminates at the boundary. d Pauli-Z errors give rise to string-like errors that align along a common direction, enabling a one-dimensional decoding strategy. e The product of stabilizer operators along a diagonal give rise to symmetries under an infinite bias dephasing noise model 10 , 37 . f Pauli-X errors align along lines with an orthogonal orientation. At finite bias, errors in conjugate bases couple the lines. g Pauli-Y errors can be decoded as in ref. 10 . h A convenient choice of boundary conditions for the XZZX code are rectangular on a rotated lattice geometry. Changing the orientation of the lattice geometry means high-rate Pauli-Z errors only create strings oriented horizontally along the lattice. We can make a practical choice of lattice dimensions with d Z > d X to optimise the rates of logical failure caused by either low- or high-rate errors.
Small-scale implementations of the XZZX code on rectangular lattices may be well suited for implementation with near-term devices. i In the limit where d X = 1 we find a repetition code. This may be a practical choice of code given a limited number of qubits that experience biased noise. j The next engineering challenge beyond a repetition code is an XZZX code on a rectangle with d X = 2. This code can detect a single low-rate error. Together with a choice of code, we require a decoding algorithm to determine which errors have occurred and correct for them. We will consider Pauli errors \(E\in {\mathcal{P}}\) , and we say that E creates a defect at face f if S f E = (−1) E S f with S f the stabilizer associated to f . A decoder takes as input the error syndrome (the locations of the defects) and returns a correction that will recover the encoded information with high probability. The failure probability of the decoder decays rapidly with increasing code distance, d , assuming the noise experienced by the physical qubits is below some threshold rate. Because of the local change of basis, the XZZX surface code responds differently to Pauli errors compared with the CSS surface code. We can take advantage of this difference to design better decoding algorithms. Let us consider the effect of different types of Pauli errors, starting with Pauli-Z errors. A single Pauli-Z error gives rise to two nearby defects. In fact, we can regard a Pauli-Z error as a segment of a string where defects lie at the endpoints of the string segment, and where multiple Pauli-Z errors compound into longer strings, see Fig. 1 d. A key feature of the XZZX code that we will exploit is that Pauli-Z error strings align along the same direction, as shown in Fig. 1 d. We can understand this phenomenon in more formal terms from the perspective of symmetries 10 , 37 . Indeed, the product of face operators along a diagonal such as that shown in Fig. 1 e commutes with Pauli-Z errors.
This symmetry guarantees that defects created by Pauli-Z errors will respect a parity conservation law on the faces of a diagonal oriented along this direction. Using this property, we can decode Pauli-Z errors on the XZZX code as a series of disjoint repetition codes. It follows that, for a noise model described by independent Pauli-Z errors, this code has a threshold error rate of 50%. Likewise, Pauli-X errors act similarly to Pauli-Z errors, but with Pauli-X error strings aligned along the orthogonal direction to the Pauli-Z error strings. In general, we would like to be able to decode all local Pauli errors, where error configurations of Pauli-X and Pauli-Z errors violate the one-dimensional symmetries we have introduced, e.g. Fig. 1 f. As we will see, we can generalise conventional decoding methods to account for finite but high bias of one Pauli operator relative to others and maintain a very high threshold. We finally remark that the XZZX surface code responds to Pauli-Y errors in the same way as the CSS surface code. Each Pauli-Y error will create four defects on each of their adjacent faces; see Fig. 1 g. The high-performance decoders presented in refs. 7 , 8 , 10 are therefore readily adapted for the XZZX code for an error model where Pauli-Y errors dominate. Optimal thresholds The XZZX code has exceptional thresholds for all single-qubit Pauli noise channels. We demonstrate this fact using an efficient maximum-likelihood decoder 38 , which gives the optimal threshold attainable with the code for a given noise model. Remarkably, we find that the XZZX surface code achieves code-capacity threshold error rates that closely match the zero-rate hashing bound for all single-qubit Pauli noise channels, and appears to exceed this bound in some regimes. 
We define the general single-qubit Pauli noise channel $${\mathcal{E}}(\rho )=(1-p)\rho +p({r}_{X}X\rho X+{r}_{Y}Y\rho Y+{r}_{Z}Z\rho Z)$$ (1) where p is the probability of any error on a single qubit and the channel is parameterised by the stochastic vector r = ( r X , r Y , r Z ), where r X , r Y , r Z ≥ 0 and r X + r Y + r Z = 1. The surface of all possible values of r parametrise an equilateral triangle, where the centre point (1/3, 1/3, 1/3) corresponds to standard depolarising noise, and vertices (1, 0, 0), (0, 1, 0) and (0, 0, 1) correspond to pure X , Y and Z noise, respectively. We also define biased-noise channels, which are restrictions of this general noise channel, parameterised by the scalar η ; for example, in the case of Z -biased noise, we define η = r Z /( r X + r Y ) where r X = r Y , such that η = 1/2 corresponds to standard depolarising noise and the limit η → ∞ corresponds to pure Z noise. The hashing bound is defined as R = 1 − H ( p ) with R an achievable rate, k / n , using random codes and H ( p ) the Shannon entropy for the vector p = p r . For our noise model, for any r there is a noise strength p for which the achievable rate via random coding goes to zero; we refer to this as the zero-rate hashing bound, and it serves as a useful benchmark for code-capacity thresholds. We estimate the threshold error rate as a function of r for both the XZZX surface code and the CSS surface code using a tensor-network decoder that gives a controlled approximation to the maximum-likelihood decoder 7 , 8 , 38 ; see Methods for details. Our results are summarised in Fig. 2 . We find that the thresholds of the XZZX surface code closely match or slightly exceed (as discussed below), the zero-rate hashing bound for all investigated values of r , with a global minimum p c = 18.7(1)% at standard depolarising noise and peaks p c ~ 50% at pure X , Y and Z noise. 
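The zero-rate hashing bound used as the benchmark here can be computed directly by solving H( p ) = 1 bit for the four-outcome distribution (1 − p, p r X , p r Y , p r Z ). A small sketch — our own illustration of the benchmark, not the paper's decoder:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def zero_rate_hashing_bound(r):
    """Solve 1 - H(p) = 0 by bisection for bias vector r = (rX, rY, rZ);
    H is increasing in p on (0, 0.5] for the channels considered here."""
    lo, hi = 1e-9, 0.5
    for _ in range(100):
        mid = (lo + hi) / 2
        probs = [1 - mid] + [mid * ri for ri in r]
        if entropy_bits(probs) < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Depolarising noise (eta = 1/2): the bound is ~18.9%, close to the 18.7(1)%
# threshold estimated for the XZZX code at this point.
print(zero_rate_hashing_bound((1 / 3, 1 / 3, 1 / 3)))

# Strongly Z-biased noise (eta = 1000): the bound approaches 50%.
eta = 1000
rz = eta / (eta + 1)
rx = ry = (1 - rz) / 2
print(zero_rate_hashing_bound((rx, ry, rz)))
```

For pure Z noise the distribution is binary, so the bound is exactly 50%, matching the peaks described above.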
We find that the thresholds of the CSS surface code closely match this hashing bound for Y -biased noise, where Y errors dominate, consistent with prior work 7 , 8 , as well as for channels where r Y < r X = r Z such that X and Z errors dominate but are balanced. In contrast to the XZZX surface code, we find that the thresholds of the CSS surface code fall well below this hashing bound as either X or Z errors dominate with a global minimum p c = 10.8(1)% at pure X and pure Z noise. Fig. 2: Optimal code-capacity thresholds over all single-qubit Pauli channels. Threshold estimates p c are found using approximate maximum-likelihood decoding for a the XZZX surface code and b the CSS surface code with open boundaries (as in Fig. 1 b). The grey triangle represents a parametrisation of all single-qubit Pauli channels, where the centre corresponds to depolarising noise, the labeled vertices correspond to pure X and Z noise, and the third vertex corresponds to pure Y noise. For the XZZX code, estimates closely match the zero-rate hashing bound (not shown) for all single-qubit Pauli channels. For the CSS code, estimates closely match the hashing bound for Y -biased noise but fall well below for X - and Z -biased noise. All estimates use d × d codes with distances d ∈ {13, 17, 21, 25}. In some cases, our estimates of XZZX surface code thresholds appear to exceed the zero-rate hashing bound. The discovery of such a code would imply that we can create a superadditive coherent channel via code concatenation. To see why, consider an inner code with a high threshold that exceeds the hashing bound, p c > p h.b. , together with a finite-rate outer code with rate R out = K out / N out > 0 that has some arbitrary nonzero threshold against independent noise 39 , 40 , 41 , 42 . Now consider physical qubits with an error rate p below the threshold of the inner code but above the hashing bound, i.e. p h.b. < p < p c .
We choose a constant-sized inner code using N in qubits such that its logical failure rate is below the threshold of the outer code. Concatenating this inner code into the finite-rate outer code will give us a family of codes with rate \(R^{\prime} ={R}_{{\rm{out}}}/{N}_{{\rm{in}}}\,> \; 0\) and a vanishing failure probability as N out → ∞ . If both codes have low-density parity checks (LDPCs) 41 , 42 , the resulting code provides an example of a superadditive LDPC code. Given the implications of a code that exceeds the zero-rate hashing bound we now investigate our numerics in this regime further. For the values of r investigated for Fig. 2 , the mean difference between our estimates and the hashing bound is \(\overline{{p}_{c}-{p}_{\text{h.b.}}}=-0.1(3)\) % and our estimates never fall more than 1.1% below the hashing bound. However, for high bias, η ≥ 100, we observe an asymmetry between Y -biased noise and Z -biased (or, equivalently, X -biased) noise. In particular, we observe that, while threshold estimates with Y -biased noise match the hashing bound to within error bars, threshold estimates with highly biased Z noise significantly exceed the hashing bound. Our results with Z -biased noise are summarised in Fig. 3 , where, since thresholds are defined in the limit of infinite code distance, we provide estimates with sets of increasing code distance for η ≥ 30. Although the gap typically reduces, it appears to stabilise for η = 30, 100, 1000, where we find p c − p h.b. = 1.2(2)%, 1.6(3)%, 3.7(3)%, respectively, with the largest code distances; for η = 300, the gap exceeds 2.9% but has clearly not yet stabilised. This evidence for exceeding the zero-rate hashing bound appears to be robust, but warrants further study. Fig. 3: Estimates of optimal XZZX surface code thresholds relative to the hashing bound. 
a Threshold estimates p c for the XZZX and CSS surface codes as a function of bias η with Z-biased (or, by code symmetry, X-biased) noise using approximate maximum-likelihood decoding and codes with open boundaries (as in Fig. 1b). The solid line is the zero-rate hashing bound for the associated Pauli noise channel, where the entropy of the channel equals 1 bit. For high bias, η ≥ 30, the estimates for the XZZX code exceed the hashing bound. To investigate this surprising effect, estimates for the XZZX code with 30 ≤ η ≤ 1000 use large d × d codes with distances d ∈ {65, 69, 73, 77}; other estimates use distances d ∈ {13, 17, 21, 25} (as used for Fig. 2). b Difference between threshold estimates for the XZZX code with Z-biased noise and the hashing bound, p c − p h.b., as a function of the code distances used in the estimation. Data are shown for biases η = 30, 100, 300, 1000. Threshold estimates exceed the hashing bound in all cases. The gap reduces, in most cases, with sets of greater code distance, but it persists and appears to stabilise for η = 30, 100 and 1000. In both plots, error bars indicate one standard deviation relative to the fitting procedure.

Finally, we evaluate threshold error rates for the XZZX code with rectangular boundaries using a minimum-weight perfect-matching decoder; see Fig. 4. Matching decoders are very fast, and so allow us to explore very large system sizes; they are also readily generalised to the fault-tolerant setting, as discussed below. Our decoder is described in Methods. Remarkably, the thresholds we obtain closely follow the zero-rate hashing bound at high bias. This is despite using a sub-optimal decoder that does not use all of the syndrome information. Again, our data appear to marginally exceed this bound at high bias.

Fig. 4: Thresholds for the XZZX code using a matching decoder. Code-capacity thresholds p c for the XZZX code with rectangular boundary conditions (as in Fig.
1h) shown as a function of noise bias η using a matching decoder. The threshold error rates for the XZZX code experiencing Pauli-Z-biased noise (blue) significantly outperform those found using the matching decoder presented in ref. 10 for the CSS surface code experiencing Pauli-Y-biased noise (red). For the XZZX code, we evaluated separate thresholds for logical Pauli-X and Pauli-Z errors, with the lower of the two shown here (although the discrepancy between the two thresholds is negligible). Data points are found with ~10^5 Monte-Carlo samples for each physical error rate sampled and each lattice size used. We study the XZZX code for large lattices with d Z = A η d X, where aspect ratios take values 1 ≤ A η ≤ 157 such that A 1/2 = 1 and A 1000 = 157. We find the XZZX code matches the zero-rate hashing bound at η ~ 10 (solid line). For larger biases the data appear to exceed the hashing bound. For instance, at η = 100 we found p c − p h.b. ~ 1%. We obtained this threshold using code sizes d X = 7, 11, 15 and A 100 = 23. Error bars indicate one standard deviation obtained by jackknife resampling over code distance.

Fault-tolerant thresholds

Having demonstrated the remarkable code-capacity thresholds of the XZZX surface code, we now demonstrate how to translate these high thresholds into practice using a matching decoder 14, 43, 44. We find exceptionally high fault-tolerant thresholds, i.e. allowing for noisy measurements, with respect to a biased phenomenological noise model. Moreover, for unbiased noise models we recover the standard matching decoder 14, 45. To detect measurement errors we repeat measurements over a long time 14. We can interpret measurement errors as strings that align along the temporal axis with a defect at each endpoint. This allows us to adapt minimum-weight perfect matching for fault-tolerant decoding. We explain our simulation in Fig. 5a–d and describe our decoder in Methods.

Fig. 5: Fault-tolerant thresholds for the XZZX code. a–d Spacetime where stabilizer measurements are unreliable. Time t progresses upwards and stabilizers are measured at moments marked by black vertices. We identify a defect when a stabilizer measurement differs from its previous outcome. a The dashed line shows the worldline of one qubit. If a Pauli-Z error occurs in the time interval Δ, a horizontal string is created in the spacetime with defects at its endpoints at the following round of stabilizer measurements. b Measurement errors produce two sequential defects that we interpret as strings that align along the vertical direction. c Pauli-X errors create string-like errors that align orthogonally to the Pauli-Z errors and measurement errors. d In general, errors compound to make longer strings. In the limit where there are no Pauli-X errors, all strings are confined to the square lattice we show. e Fault-tolerant threshold error rates p c as a function of noise bias η and measurement error rate q = p h.r. + p l.r.. The results found using our matching decoder for the XZZX code experiencing Pauli-Z-biased noise (blue) are compared with the results found using the matching decoder presented in ref. 10 for the CSS surface code experiencing Pauli-Y-biased noise (red). Equivalent results to the red points are obtained with Pauli-Z-biased noise using the tailored code of ref. 7. The XZZX code significantly outperforms the CSS code for all noise biases. At a fixed bias, data points are found with 3 × 10^4 Monte-Carlo samples for each physical error rate sampled and for each square lattice with distance d ∈ {12, 14, …, 20} at finite bias and d ∈ {24, 28, …, 40} at infinite bias. Error bars indicate one standard deviation obtained by jackknife resampling over code distance. The solid line shows the threshold of the conventional matching decoder for the CSS surface code undergoing phenomenological noise, where bit-flip and dephasing errors are decoded independently.
Specifically, it follows the function p h.r. + p l.r. = 0.029, where ~2.9% is the phenomenological threshold 45. We note that our decoder is equivalent to the conventional matching decoder at η = 1/2.

We evaluate fault-tolerant thresholds by finding logical failure rates using Monte-Carlo sampling for different system parameters. We simulate the XZZX code on a d × d lattice with periodic boundary conditions, and we perform d rounds of stabilizer measurements. We regard a given sample as a failure if the decoder introduces a logical error to the code qubits, or if the combination of the error string and its correction returned by the decoder includes a non-trivial cycle along the temporal axis. It is important to check for temporal errors, as they can cause logical errors when we perform fault-tolerant logic gates by code deformation 46. The phenomenological noise model is defined such that qubits experience errors with probability p per unit time. These errors may be either high-rate Pauli-Z errors that occur with probability p h.r. per unit time, or low-rate Pauli-X or Pauli-Y errors, each occurring with probability p l.r. per unit time. The noise bias with this phenomenological noise model is defined as η = p h.r. /(2 p l.r.). One time unit is the time it takes to make a stabilizer measurement, and we assume we can measure all the stabilizers in parallel 5. Each stabilizer measurement returns the incorrect outcome with probability q = p h.r. + p l.r.. To leading order, this measurement error rate is consistent with a measurement circuit where an ancilla is prepared in the state \(\left|+\right\rangle\) and subsequently entangled to the qubits of S f with bias-preserving controlled-not and controlled-phase gates before its measurement in the Pauli-X basis. With such a circuit, Pauli-Y and Pauli-Z errors on the ancilla will alter the measurement outcome. At η = 1/2 this noise model interpolates to a conventional noise model where q = 2 p /3 47.
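The relations above, together with the biased-channel split p h.r. = pη/(η + 1) and p l.r. = p/(2(η + 1)) used later in Methods, can be checked numerically. The short sketch below also computes the zero-rate hashing bound (the error rate at which the channel entropy reaches 1 bit, as used for Figs. 2–4); the function names are ours, not the paper's, and the bisection assumes the entropy grows monotonically with p on [0, 1/2] for η ≥ 1/2.

```python
import math

def biased_rates(p, eta):
    # Split a total error probability p into a high-rate (Pauli-Z) part and
    # two low-rate (Pauli-X, Pauli-Y) parts with bias eta = p_hr / (2 * p_lr).
    p_hr = p * eta / (eta + 1)
    p_lr = p / (2 * (eta + 1))
    return p_hr, p_lr

def measurement_error_rate(p, eta):
    # q = p_hr + p_lr, as for the ancilla readout circuit described above.
    p_hr, p_lr = biased_rates(p, eta)
    return p_hr + p_lr

def channel_entropy(p, eta):
    # Entropy (bits) of the single-qubit Pauli channel (I, X, Y, Z).
    p_hr, p_lr = biased_rates(p, eta)
    return -sum(q * math.log2(q) for q in (1 - p, p_lr, p_lr, p_hr) if q > 0)

def hashing_bound(eta, tol=1e-12):
    # Zero-rate hashing bound: bisect for the p at which the entropy is 1 bit.
    lo, hi = tol, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if channel_entropy(mid, eta) < 1 else (lo, mid)
    return (lo + hi) / 2

# At eta = 1/2 the noise is depolarising: q = 2p/3 and p_h.b. ~ 18.9%.
assert abs(measurement_error_rate(0.03, 0.5) - 0.02) < 1e-12
assert abs(hashing_bound(0.5) - 0.1893) < 5e-4
# The bound approaches 50% with increasing bias, as the solid line in Fig. 3.
assert hashing_bound(0.5) < hashing_bound(100) < hashing_bound(1000) < 0.5
```

This reproduces the familiar ≈18.9% depolarising hashing bound and the trend of the bound towards 50% at large bias against which the threshold estimates are compared.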
We also remark that hook errors 47 , 48 , i.e. correlated errors that are introduced by this readout circuit, are low-rate events. This is because high-rate Pauli-Z errors acting on the control qubit commute with the entangling gate, and so no high-rate errors are spread to the code. Intuitively, the decoder will preferentially pair defects along the diagonals associated with the dominant error. In the limit of infinite bias at q = 0, the decoder corrects the Pauli-Z errors by treating the XZZX code as independent repetition codes. It follows that by extending the syndrome along the temporal direction to account for the phenomenological noise model with infinite bias, we effectively decode d decoupled copies of the two-dimensional surface code, see Fig. 5 . With the minimum-weight perfect-matching decoder, we therefore expect a fault-tolerant threshold ~ 10.3% 14 . Moreover, when η = 1/2 the minimum-weight perfect-matching decoder is equivalent to the conventional matching decoder 14 , 45 . We use these observations to check that our decoder behaves correctly in these limits. In Fig. 5 e, we present the thresholds we obtain for the phenomenological noise model as a function of the noise bias η . In the fault-tolerant case, we find our decoder tends towards a threshold of ~10% as the bias becomes large. We note that the threshold error rate appears lower than the expected ~10.3%; we suggest that this is a small-size effect. Indeed, the success of the decoder depends on effectively decoding ~ d independent copies of the surface code correctly. In practice, this leads us to underestimate the threshold when we perform simulations using finite-sized systems. Notably, our decoder significantly surpasses the thresholds found for the CSS surface code against biased Pauli-Y errors 10 . We also compare our results to a conventional minimum-weight perfect-matching decoder for the CSS surface code where we correct bit-flip errors and dephasing errors separately. 
As we see, our decoder for the XZZX code is equivalent to the conventional decoding strategy at η = 1/2 and outperforms it for all other values of noise bias.

Overheads

We now show that the exceptional error thresholds of the XZZX surface code are accompanied by significant advantages in terms of the scaling of the logical failure rate as a function of the number of physical qubits n when error rates are below threshold. Improvements in scaling will reduce the resource overhead, because fewer physical qubits will be needed to achieve a desired logical failure rate. The XZZX code with periodic boundary conditions on a lattice with dimensions d × (d + 1) has the remarkable property that it possesses only a single logical operator that consists of only physical Pauli-Z terms. Moreover, this operator has weight n = d(d + 1). Based on the results of ref. 8, we can expect that the XZZX code on such a lattice will have a logical failure rate that decays like \(O({p}_{\,\text{h.r.}\,}^{{d}^{2}/2})\) at infinite bias. Note that we can regard this single logical-Z operator as a string that coils around the torus many times such that it is supported on all n qubits. As such, this model can be regarded as an n-qubit repetition code whose logical failure rate decays like O(p^{n/2}). Here we use the XZZX code on a periodic d × (d + 1) lattice to test the performance of codes with high-weight Pauli-Z operators at finite bias. We find, at high bias and error rates near to threshold, that a small XZZX code can demonstrate this rapid decay in logical failure rate. In general, at more modest biases and at lower error rates, we find that the logical failure rate scales like \(O({(p/\sqrt{\eta })}^{d/2})\) as the system size diverges. This scaling indicates a significant advantage in the overhead cost of architectures that take advantage of biased noise. We demonstrate both of these regimes with numerical data.
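Both regimes above can be explored with short numerical sketches (standard formulas, not the paper's simulations): the exact majority-vote failure probability of the n = d(d + 1) repetition-code picture at infinite bias, and the distance implied by inverting \(\overline{P} \sim {(p/\sqrt{\eta })}^{d/2}\), neglecting entropy prefactors.

```python
import math
from math import comb

def repetition_failure(n, p):
    # Majority-vote failure of an n-qubit repetition code under i.i.d.
    # dephasing with probability p: more than n/2 qubits must be flipped.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def required_distance(p_target, p, eta=1.0):
    # Invert P ~ (p / sqrt(eta))**(d/2) for the distance reaching p_target
    # (entropy prefactors neglected; illustration only).
    return 2 * math.log(p_target) / math.log(p / math.sqrt(eta))

# Infinite bias: failure decays very rapidly with d through n = d(d + 1).
fails = [repetition_failure(d * (d + 1), 0.2) for d in (3, 5, 7)]
assert fails[2] < fails[1] < fails[0] < 1e-2

# Finite bias: eta = 100 at p = 1% roughly halves the required distance
# (and hence roughly quarters the qubit count ~ d^2).
assert required_distance(1e-12, 0.01, eta=100) < required_distance(1e-12, 0.01)
```

With p = 1% and a target failure rate of 10^−12, the unbiased estimate gives d ≈ 12 while η = 100 gives d ≈ 8, illustrating the overhead saving claimed in the text.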
In practice, it will be advantageous to find low-overhead scaling using codes with open boundary conditions. We finally argue that the XZZX code with rectangular open boundary conditions will achieve comparable overhead scaling in the large-system-size limit. Let us examine the different failure mechanisms for the XZZX code on the periodic d × (d + 1) lattice more carefully. Restricting to Pauli-Z errors, the weight of the only non-trivial logical operator is d(d + 1). This means the code can tolerate up to d(d + 1)/2 dephasing errors, and we can therefore expect failures due to high-rate errors to occur with probability

$${\overline{P}}_{{\rm{quad.}}} \sim {N}_{{\rm{h.r.}}}{p}_{\,\text{h.r.}\,}^{{d}^{2}/2},$$ (2)

below threshold, where \({N}_{{\rm{h.r.}}} \sim {2}^{{d}^{2}}\) is the number of configurations that d^2/2 Pauli-Z errors can take on the support of the weight-d^2 logical operator to cause a failure. We compare this failure rate to the probability of a logical error caused by a string of d/4 high-rate errors and d/4 low-rate errors. We thus consider the ansatz

$${\overline{P}}_{{\rm{lin.}}} \sim {N}_{{\rm{l.r.}}}{(1-p)}^{{d}^{2}-d/2}{({p}_{\text{h.r.}}+{p}_{\text{l.r.}})}^{d/4}{(2{p}_{\text{l.r.}})}^{d/4}$$ (3)

where N l.r. ~ 2^{γd} is an entropy term with 3/2 ≲ γ ≲ 2 49. We justify this ansatz and estimate γ in Methods. This structured noise model thus leads to two distinct regimes, depending on which failure process is dominant. In the first regime, where \({\overline{P}}_{{\rm{quad.}}}\gg {\overline{P}}_{{\rm{lin.}}}\), we expect that the logical failure rate will decay like \(\sim {p}_{\,\text{h.r.}\,}^{{d}^{2}/2}\). We find this behaviour with systems of a finite size and at high bias where error rates are near to threshold. We evaluate logical failure rates using numerical simulations to demonstrate the behaviour that characterises this regime; see Fig. 6a.
Our data show good agreement with the scaling ansatz \(\overline{P}=A{e}^{B{d}^{2}}\). In contrast, our data are not well described by a scaling \(\overline{P}=A{e}^{Bd}\).

Fig. 6: Sub-threshold scaling of the logical failure rate with the XZZX code. a Logical failure rate \(\overline{P}\) at high bias near to threshold, plotted as a function of code distance d. We use a lattice with coprime dimensions d × (d + 1) for d ∈ {7, 9, 11, 13, 15} at bias η = 300, assuming ideal measurements. The data were collected using N = 5 × 10^5 iterations of Monte-Carlo (MC) samples for each physical error rate sampled and for each lattice dimension used. The physical error rates used are, from the bottom to the top curves in the main plot, p = 0.19, 0.20, 0.21, 0.22 and 0.23. Error bars represent one standard deviation for the Monte-Carlo simulations. The solid lines are a fit of the data to \({\overline{P}}_{{\rm{quad.}}}=A{e}^{B{d}^{2}}\), consistent with Eq. (2), and the dashed lines a fit to \({\overline{P}}_{{\rm{lin.}}}=A{e}^{Bd}\), consistent with Eq. (3), where we would expect \(B=\mathrm{log}\,(p/(1-p))/2\); see Methods. The data fit the former very well; for the latter, the gradients of the best-fit dashed lines, shown in the inset plot as a function of \(\mathrm{log}\,(p/(1-p))\), give a linear slope of 0.61(3). Because this slope exceeds the value of 0.5, we conclude that the sub-threshold scaling is not consistent with \({\overline{P}}_{{\rm{lin.}}}=A{e}^{Bd}\). b Logical failure rates \(\overline{P}\) at modest bias far below threshold, plotted as a function of the physical error rate p. The data (markers) were collected at bias η = 3 and coprime d × (d + 1) code dimensions of d ∈ {5, 7, 9, 11, 13, 15}, assuming ideal measurements. Data are collected using the Metropolis algorithm and splitting method presented in refs. 76, 77. The solid lines represent the prediction of Eq. (3).
The data show very good agreement with the single-parameter fitting for all system sizes as p tends to zero.

We observe the regime where \({\overline{P}}_{{\rm{lin.}}}\gg {\overline{P}}_{{\rm{quad.}}}\) using numerics at small p and modest η. In this regime, logical errors are caused by a mixture of low-rate and high-rate errors that align along a path of weight O(d) on some non-trivial cycle. In Fig. 6b, we show that the data agree well with the ansatz of Eq. (3), with γ ~ 1.8. This remarkable correspondence to our data shows that our decoder is capable of decoding up to ~d/4 low-rate errors, even with a relatively large number of high-rate errors occurring simultaneously on the lattice. In summary, for either scaling regime, we find that there are significant implications for overheads. We emphasise that the generic case for fault-tolerant quantum computing is expected to be the regime dominated by \({\overline{P}}_{{\rm{lin.}}}\). In this regime, the logical failure rate of a code is expected to decay as \(\overline{P} \sim {p}^{d/2}\) below threshold 5, 50, 51. Under biased noise, our numerics show that failure rates \(\overline{P} \sim {(p/\sqrt{\eta })}^{d/2}\) can be obtained. This additional decay factor ~η^{−d/4} in our expression for the logical failure rate means we can achieve a target logical failure rate with far fewer qubits at high bias. The regime dominated by \({\overline{P}}_{{\rm{quad.}}}\) scaling is particularly relevant for near-term devices that have a small number of qubits operating near the threshold error rate. In this situation, we have demonstrated a very rapid decay in logical failure rate like \(\sim {p}^{{d}^{2}/2}\) at high bias, if they can tolerate ~d^2/2 dephasing errors. We finally show that we can obtain a low-overhead implementation of the XZZX surface code with open boundary conditions using an appropriate choice of lattice geometry.
As we explain below, this is important for performing fault-tolerant quantum computation with a two-dimensional architecture. Specifically, with the geometry shown in Fig. 1h, we can reduce the length of one side of the lattice by a factor of \(O(1/{\mathrm{log}}\,\eta )\), leaving a smaller rectangular array of qubits. This is because high-rate error strings of the biased-noise model align along the horizontal direction only. We note that d X (d Z) denotes the weight of the least-weight logical operator composed of only Pauli-X (Pauli-Z) terms. We can therefore choose d X ≪ d Z without compromising the logical failure rate of the code due to Pauli-Z errors at high bias. This choice may have a dramatic effect on the resource cost of large-scale quantum computation. We estimate that the optimal choice is

$${d}_{Z}\approx {d}_{X}\left(1-\frac{{{\mathrm{log}}}\,\eta }{{{\mathrm{log}}}\,p}\right)$$ (4)

using approximations that apply at low error rates. To see this, let us suppose that the logical failure rate due to high(low)-rate errors is \({\overline{P}}_{{\rm{h.r.}}}\approx {p}^{{d}_{Z}/2}\) (\({\overline{P}}_{{\rm{l.r.}}}\approx {(p/\eta )}^{{d}_{X}/2}\)), where we have neglected entropy terms and assumed p h.r. ~ p and p l.r. ~ p/η. Equating \({\overline{P}}_{\text{l.r.}}\) and \({\overline{P}}_{\text{h.r.}}\) gives us Eq. (4). Similar results have been obtained with other codes, see, e.g. refs. 16, 26, 52, 53, 54. Assuming an error rate far below threshold, e.g. p ~ 1%, and a reasonable bias, e.g. η ~ 100, we find an aspect ratio d X ~ d Z /2.

Low-overhead fault-tolerant quantum computation

As with the CSS surface code, we can perform fault-tolerant quantum computation with the XZZX code using code deformations 29, 30, 31, 55, 56, 57.
Here we show how to maintain the advantages that the XZZX code demonstrates as a memory experiencing structured noise, namely its high threshold error rates and its reduced resource costs, while performing fault-tolerant logic gates. A code deformation is a type of fault-tolerant logic gate where we manipulate encoded information by changing the stabilizer group we measure 55, 57. These altered stabilizer measurements project the system onto another stabilizer code where the encoded information has been transformed or 'deformed'. These deformations allow for Clifford operations with the surface code; Clifford gates are universal for quantum computation when supplemented with the noisy initialisation of magic states 58. Although initialisation circuits have been proposed to exploit a bias in the noise 59, here we focus on fault-tolerant Clifford operations and the fault-tolerant preparation of logical qubits in the computational basis. Many approaches for code deformations have been proposed that, in principle, could be implemented in a way that takes advantage of structured noise using a tailored surface code. These approaches include braiding punctures 55, 56, 57, 60, lattice surgery 29, 30, 61, 62 and computation with twist defects 30, 63, 64. We focus on a single example based on lattice surgery as in refs. 31, 62; see Fig. 7a. We will provide a high-level overview and leave open all detailed questions of implementation and threshold estimates for fault-tolerant quantum computation to future work.

Fig. 7: Generalised lattice surgery. Details of generalised lattice surgery are given in refs. 31, 62. a Pairs of qubits are encoded on surface codes with six twist defects lying on their boundaries 30 (i). Entangling operations are performed by making parity measurements with an ancillary surface code (ii). Circled areas are described in terms of the microscopic details of the architecture in parts b and c of the figure, respectively.
b Initialising a hexon surface code. Red (blue) vertices are initialised in the Pauli-X (Pauli-Z) basis. The system is prepared in an eigenstate of the stabilizers shown on the shaded faces (iii) and of the logical Pauli-Z operators (iv). This initialisation strategy is robust to biased noise. Pauli-Z errors that can occur on red vertices are detected by the shaded faces (v). We can also detect low-rate Pauli-X errors on blue vertices with this method of initialisation (vi). We can decode all of these initialisation errors on this subset of faces using the minimum-weight perfect-matching decoder, in the same way that we decode the XZZX code as a memory. c The hexon surface code fused to the ancillary surface code to perform a logical Pauli-Y measurement. The lattice surgery procedure introduces a twist in the centre of the lattice. We show the symmetry with respect to Pauli-Z errors by lightly coloured faces. Again, decoding this model in the infinite-bias limit reduces to decoding one-dimensional repetition codes, except at the twist, where there is a single branching point.

Our layout for fault-tolerant quantum computation requires the fault-tolerant initialisation of a hexon surface code, i.e. a surface code with six twist defects at its boundaries 30; see Fig. 7b. We can fault-tolerantly initialise this code in eigenstates of the computational basis through a process detailed in Fig. 7. We remark that the reverse operation, where we measure qubits of the XZZX surface code in this same product basis, will read the code out while respecting the properties required to be robust to the noise bias. Using the arguments presented above for the XZZX code with rectangular boundaries, we find a low-overhead implementation with dimensions related as d Z = A η d X, where we might choose an aspect ratio \({A}_{\eta }=O({\mathrm{log}}\,\eta )\) at low error rates and high noise bias.
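The aspect-ratio estimate of Eq. (4) behind this choice can be checked directly; a minimal sketch (the function name is ours, not the paper's):

```python
import math

def optimal_dz(d_x, p, eta):
    # Eq. (4): d_Z ≈ d_X * (1 - log(eta) / log(p)).
    return d_x * (1 - math.log(eta) / math.log(p))

# p ~ 1% and eta ~ 100 give d_X ~ d_Z / 2, as stated in the text.
assert abs(optimal_dz(11, 0.01, 100) - 22) < 1e-9
# For any eta > 1 below threshold (p < 1), the Z distance exceeds d_X.
assert optimal_dz(11, 0.01, 100) > 11
```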
We briefly confirm that this method of initialisation is robust to our biased-noise model. Principally, this method must correct high-rate Pauli-Z errors on the red qubits, as Pauli-Z errors act trivially on the blue qubits in eigenstates of the Pauli-Z operator during preparation. Given that the initial state is already in an eigenstate of some of the stabilizers of the XZZX surface code, we can detect these Pauli-Z errors on red qubits, see, e.g. Fig. 7 (v). The shaded faces will identify defects due to the Pauli-Z errors. Moreover, as we discussed before, strings created by Pauli-Z errors align along horizontal lines using the XZZX surface code. This, again, is due to the stabilizers of the initial state respecting the one-dimensional symmetries of the code under pure dephasing noise. In addition to robustness against high-rate errors, low-rate errors as in Fig. 7 (vi) can also be detected on blue qubits. The bit-flip errors violate the stabilizers we initialise when we prepare the initial product state. As such we can adapt the high-threshold error-correction schemes we have proposed for initialisation to detect these errors for the case of finite bias. We therefore benefit from the advantages of the XZZX surface code under a biased error model during initialisation. Code deformations amount to initialising and reading out different patches of a large surface code lattice. As such, performing arbitrary code deformations while preserving the biased-noise protection offered by the XZZX surface code is no more complicated than what has already been demonstrated. This is with one exception. We might consider generalisations of lattice surgery or other code deformations where we can perform fault-tolerant Pauli-Y measurements. In this case, we introduce a twist to the lattice 63 and, as such, we need to reexamine the symmetries of the system to propose a high-performance decoder. We show the twist in the centre of Fig. 
7c together with its weight-five stabilizer operator. A twist introduces a branch in the one-dimensional symmetries of the XZZX surface code. A minimum-weight perfect-matching decoder can easily be adapted to account for this branch. Moreover, should we consider performing fault-tolerant Pauli-Y measurements, we do not expect that a branch at a single location on the lattice will have a significant impact on the performance of the code experiencing structured noise. Indeed, even with a twist on the lattice, the majority of the lattice is decoded as a series of one-dimensional repetition codes in the infinite-bias limit.

Discussion

We have shown how fault-tolerant quantum architectures based on the XZZX surface code yield remarkably high memory thresholds and low overhead as compared with the conventional surface code approach. Our generalised fault-tolerant decoder can realise these advantages over a broad range of biased error models representing what is observed in experiments for a variety of physical qubits. The performance of the XZZX code is underpinned by its exceptional code-capacity thresholds, which match the performance of random coding (hashing) theory, suggesting that this code may be approaching the limits of what is possible. In contrast to this expectation, the XZZX surface code threshold is numerically observed to exceed this hashing bound for certain error models, opening the enticing possibility that random coding is not the limit for practical thresholds. We note that for both code capacities and fault-tolerant quantum computing, the highest achievable error thresholds are not yet known. We emphasise that the full potential of our results lies not just in the demonstrated advantages of using this particular architecture, but rather in the indication that further innovations in codes and architectures may still yield significant gains in thresholds and overheads.
We have shown that substantial gains on thresholds can be found when the code and decoder are tailored to the relevant noise model. While the standard approach to decoding the surface code considers Pauli-X and Pauli-Z errors separately, we have shown that a tailored non-CSS code and decoder can outperform this strategy for essentially all structured error models. There is a clear avenue to generalise our methods and results to the practical setting involving correlated errors arising from more realistic noise models as we perform fault-tolerant logic. We suggest that the theory of symmetries 10 , 37 may offer a formalism to make progress in this direction. Because our decoder is based on minimum-weight matching, there are no fundamental obstacles to adapt it to the more complex setting of circuit noise 47 , 56 , 65 . We expect that the high numerical thresholds we observe for phenomenological noise will, when adapted to circuit level noise, continue to outperform the conventional surface code, especially when using gates that preserve the structure of the noise 27 , 28 . We expect that the largest performance gains will be obtained by using information from a fully characterised Pauli noise model 66 , 67 , 68 that goes beyond the single-qubit error models considered here. Along with high thresholds, the XZZX surface code architecture can yield significant reductions in the overheads for fault-tolerant quantum computing, through improvements to the sub-threshold scaling of logical error rates. It is in this direction that further research into tailored codes and decoders may provide the most significant advances, bringing down the astronomical numbers of physical qubits needed for fault-tolerant quantum computing. A key future direction of research would be to carry these improvements over to codes and architectures that promise improved (even constant) overheads 39 , 40 , 42 . 
Recent research on fault-tolerant quantum computing using low-density parity-check (LDPC) codes that generalise concepts from the surface code 41, 69, 70, 71, 72, 73, 74 provides a natural starting point.

Methods

Optimal thresholds

In the main text, we obtained optimal thresholds using a maximum-likelihood decoder to highlight features of the codes independent of any particular heuristic decoding algorithm. Maximum-likelihood decoding, which selects a correction from the most probable logical coset of error configurations consistent with a given syndrome, is, by definition, optimal. Exact evaluation of the coset probabilities is, in general, inefficient. An algorithm due to Bravyi, Suchara and Vargo 38 efficiently approximates maximum-likelihood decoding by mapping coset probabilities to tensor-network contractions. Contractions are approximated by reducing the size of the tensors during contraction through Schmidt decomposition and retention of only the χ largest Schmidt values. This approach, appropriately adapted, has been found to converge well with modest values of χ for a range of Pauli noise channels and surface code layouts 8, 38. A full description of the tensor network used in our simulations with the rotated CSS surface code is provided in ref. 8; adaptation to the XZZX surface code is a straightforward redefinition of tensor-element values for the uniform stabilizers. Figure 2, which shows threshold values over all single-qubit Pauli noise channels for the CSS and XZZX surface codes, is constructed as follows. Each threshold surface is formed using Delaunay triangulation of 211 threshold values. Since both the CSS and XZZX square surface codes are symmetric under the exchange of Pauli-X and Z, 111 threshold values are estimated for each surface. Sample noise channels are distributed radially such that the spacing reduces quadratically towards the sides of the triangle representing all single-qubit Pauli noise channels; see Fig. 8a.
Each threshold is estimated over four d × d codes with distances d ∈ {13, 17, 21, 25}, at least six physical error probabilities, and 30,000 simulations per code distance and physical error probability. In all simulations, a tensor-network decoder approximation parameter of χ = 16 is used to achieve reasonable convergence over all sampled single-qubit Pauli noise channels for the given code sizes.

Fig. 8: Optimal threshold sample distribution and decoder convergence. a Distribution of 211 samples over the surface of all single-qubit Pauli noise channels. To construct each threshold surface of Fig. 2, by code symmetry, thresholds are estimated for 111 of these samples. b Tensor-network decoder convergence for the 77 × 77 XZZX surface code with Z-biased noise, represented by the shifted logical failure rate f χ − f 24, as a function of truncated bond dimension χ at a physical error probability p near the zero-rate hashing bound for the given bias η. Each data point corresponds to 30,000 runs with identical errors generated across all χ for a given bias.

Figure 3, which investigates threshold estimates exceeding the zero-rate hashing bound for the XZZX surface code with Z-biased noise, is constructed as follows. For bias 30 ≤ η ≤ 1000, where XZZX threshold estimates exceed the hashing bound, we run compute-intensive simulations; each threshold is estimated over sets of four d × d codes with distances up to d ∈ {65, 69, 73, 77}, at least fifteen physical error probabilities, and 60,000 simulations per code distance and physical error probability. Interestingly, for the XZZX surface code with Z-biased noise, we find the tensor-network decoder converges extremely well, as summarised in Fig. 8b for code distance d = 77, allowing us to use χ = 8. For η = 30, the shift in logical failure rate between χ = 8 and the largest χ shown is less than one fifth of a standard deviation over 30,000 simulations, and for η > 30 the convergence is complete.
All other threshold estimates in Fig. 3 are included for context and use the same simulation parameters as described above for Fig. 2 . All threshold error rates in this work are evaluated using the critical exponent method of ref. 45 . The minimum-weight perfect-matching decoder Decoders based on the minimum-weight perfect-matching algorithm 43 , 44 are ubiquitous in the quantum error-correction literature 5 , 14 , 37 , 45 , 75 . The minimum-weight perfect-matching algorithm takes a graph with weighted edges and returns a perfect matching using the edges of the input graph such that the sum of the weights of the edges is minimal. We can use this algorithm for decoding by preparing a complete graph as an input such that the edges returned in the output matching correspond to pairs of defects that should be locally paired by the correction. To achieve this we assign each defect a corresponding vertex in the input graph and we assign the edges weights such that the proposed correction corresponds to an error that occurred with high probability. The runtime of the minimum-weight perfect-matching algorithm can scale like O ( V 3 ) where V is the number of vertices of the input graph 44 , and the typical number of vertices is V = O ( p d 2 ) for the case where measurements always give the correct outcomes and V = O ( p d 3 ) for the case where measurements are unreliable. The success of the decoder depends on how we choose to weight the edges of the input graph. It is convenient to define an alternative coordinate system that follows the symmetries of the code. Denote by \(f\in {{\mathcal{D}}}_{j}\) sets of faces aligned along a diagonal line such that \(S={\prod }_{f\in {{\mathcal{D}}}_{j}}{S}_{f}\) is a symmetry of the code with respect to Pauli-Z errors, i.e. S commutes with Pauli-Z errors. One such diagonal is shown in Fig. 1 (e).
Let also \({{\mathcal{D}}}_{j}^{\prime}\) be the diagonal sets of faces that respect symmetries introduced by Pauli-X errors. Let us first consider the decoder at infinite bias. We find that we can decode the lattice as a series of one-dimensional matching problems along the diagonals \({{\mathcal{D}}}_{j}\) at infinite bias. Any error drawn from the set of Pauli-Z errors \({{\mathcal{E}}}^{Z}\) must create an even number of defects along diagonals \({{\mathcal{D}}}_{j}\) . Indeed, \(S={\prod }_{f\in {{\mathcal{D}}}_{j}}{S}_{f}\) is a symmetry with respect to \({{\mathcal{E}}}^{Z}\) since operators S commute with errors \({{\mathcal{E}}}^{Z}\) . In fact, this special case of matching along a one-dimensional line is equivalent to decoding the repetition code using a majority vote rule. As an aside, it is worth mentioning that the parallelised decoding procedure we have described vastly improves the speed of decoding in this infinite bias limit. We next consider a finite-bias error model where qubits experience errors with probability p . Pauli-Z errors occur at a higher rate, p h.r. = p η /( η + 1), and Pauli-X and Pauli-Y errors both occur at the same low error rate p l.r. = p /2( η + 1). At finite bias, string-like errors can now extend in all directions along the two-dimensional lattice. Again, we use minimum-weight perfect matching to find a correction by pairing nearby defects with the string operators that correspond to errors that are likely to have created the defect pair. We decode by giving a complete graph to the minimum-weight perfect-matching algorithm where each pair of defects u and v are connected by an edge of weight \(\sim -{\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\) , where prob( E u , v ) is the probability that the most probable string E u , v created defects u and v . It remains to evaluate \(-{\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\) . 
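As noted above, at infinite bias matching along each diagonal is equivalent to decoding a repetition code by majority vote. A minimal sketch of that one-dimensional special case (an illustration, not the authors' implementation):

```python
def decode_repetition(noisy_bits):
    """Majority-vote correction for a length-d repetition code; at
    infinite bias, matching along each diagonal reduces to this
    one-dimensional problem."""
    return 1 if sum(noisy_bits) > len(noisy_bits) / 2 else 0

# A distance-5 repetition code protects the encoded bit against two flips.
assert decode_repetition([0, 1, 0, 1, 0]) == 0
assert decode_repetition([1, 1, 0, 1, 1]) == 1
```

Because each diagonal can be decoded independently, this limit also parallelises naturally, which is the speed-up remarked on above.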
For the uncorrelated noise models we consider, \(-{\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\) depends, anisotropically, on the separation of u and v . We define orthogonal axes \(x^{\prime}\) ( \(y^{\prime}\) ) that align along (run orthogonal to) the diagonal line that follows the faces of \({{\mathcal{D}}}_{j}\) . We can then define separation between u and v along axes \(x^{\prime}\) and \(y^{\prime}\) using the Manhattan distance with integers \({l}_{x^{\prime} }\) and \({l}_{y^{\prime} }\) , respectively. On large lattices then, we choose \(-{\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\propto {w}_{{\rm{h.r.}}}{l}_{x^{\prime} }+{w}_{{\rm{l.r.}}}{l}_{y^{\prime} }\) where $${w}_{{\rm{l.r.}}}=-{{\mathrm{log}}}\,\left(\frac{{p}_{\text{l.r.}}}{1-p}\right),\quad {w}_{{\rm{h.r.}}}=-{{\mathrm{log}}}\,\left(\frac{{p}_{\text{h.r.}}}{1-p}\right).$$ (5) The edges returned from the minimum-weight perfect-matching algorithm 43 , 44 indicate which pairs of defects should be paired. We note that, for small, rectangular lattices with periodic boundary conditions, it may be that the most probable string E u , v is caused by a large number of high-rate errors that create a string that wraps around the torus. It is important that our decoder checks for such strings to achieve the logical failure rate scaling like \(O({p}_{\,\text{h.r.}\,}^{{d}^{2}/2})\) . We circumvent the computation of the weight between two defects in every simulation by creating a look-up table from which the required weights can be efficiently retrieved. Moreover, we minimise memory usage by taking advantage of the translational invariance of the lattice. We finally remark that our minimum-weight perfect-matching decoder naturally extends to the fault-tolerant regime. 
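The weight assignment of Eq. (5) can be illustrated with a small self-contained example. Here an exhaustive search over pairings stands in for the polynomial-time blossom algorithm (fine for a handful of defects, though not O(V³)); all function names are our own, and the defect coordinates are assumed to already be in the rotated (x′, y′) frame.

```python
import math

def edge_weights(p, eta):
    """High- and low-rate edge weights of Eq. (5)."""
    p_hr = p * eta / (eta + 1)
    p_lr = p / (2 * (eta + 1))
    return -math.log(p_hr / (1 - p)), -math.log(p_lr / (1 - p))

def defect_weight(u, v, w_hr, w_lr):
    """Anisotropic Manhattan weight between defects u and v in the
    rotated (x', y') coordinates."""
    return w_hr * abs(u[0] - v[0]) + w_lr * abs(u[1] - v[1])

def min_weight_pairing(defects, w_hr, w_lr):
    """Exact minimum-weight perfect matching by exhaustive recursion;
    a stand-in for blossom, usable only for small defect sets."""
    if not defects:
        return []
    best_total, best_pairs = float('inf'), []
    first, rest = defects[0], defects[1:]
    for i, partner in enumerate(rest):
        sub_pairs = min_weight_pairing(rest[:i] + rest[i + 1:], w_hr, w_lr)
        total = defect_weight(first, partner, w_hr, w_lr) + sum(
            defect_weight(a, b, w_hr, w_lr) for a, b in sub_pairs)
        if total < best_total:
            best_total, best_pairs = total, [(first, partner)] + sub_pairs
    return best_pairs

# Strongly Z-biased noise makes pairing along x' (high-rate direction) cheap.
w_hr, w_lr = edge_weights(0.05, 100)
pairs = min_weight_pairing([(0, 0), (3, 0), (0, 4), (3, 4)], w_hr, w_lr)
```

With bias η = 100 the high-rate weight is much smaller than the low-rate weight, so the decoder pairs the defects horizontally, reflecting the anisotropy of likely error strings.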
We obtain this generalisation by assigning weights to edges connecting pairs of defects in the 2 + 1-dimensional syndrome history such that $$-{{\mathrm{log}}}\,{\rm{prob}}({E}_{u,v})\propto {l}_{x^{\prime} }{w}_{{\rm{h.r.}}}+{l}_{y^{\prime} }{w}_{{\rm{l.r}}}+{l}_{t}{w}_{t},$$ (6) where now we have l t the separation of u and v along the time axis, \({w}_{t}=-{\mathrm{log}}\,\left(\frac{q}{1-q}\right)\) and q = p h.r. + p l.r. . In the limit that η = 1/2 our decoder is equivalent to the conventional minimum-weight perfect-matching decoder for phenomenological noise 45 . Ansatz at low error rates In the main text we proposed a regime at low error rates where the most common cause of logical failure is a sequence of ~ d /4 low-rate and ~ d /4 high-rate errors along the support of a weight d logical operator; see Fig. 9 . Here we compare our ansatz, Eq. ( 3 ), with numerical data to check its validity and to estimate the free parameter γ . Fig. 9: A low-weight error that causes a logical failure. The error consists of ~ d /4 high-rate errors and ~ d /4 low-rate errors along the support of a weight d logical operator. We take the logarithm of Eq. ( 3 ) to obtain $${{\mathrm{log}}}\,{\overline{P}}_{{\rm{lin.}}} \sim \;n{\mathrm{log}}\,(1-p)+\gamma d{\mathrm{log}}\,2+\frac{d}{2}{\mathrm{log}}\,\left[\frac{p}{1-p}\right]+\frac{d}{4}{\mathrm{log}}\,\left[\frac{\eta +1/2}{{(\eta +1)}^{2}}\right].$$ (7) Neglecting the small term \(n{\mathrm{log}}\,(1-p)\) we can express this equation as \({\mathrm{log}}\,\overline{P}\approx G(p,\eta )d\) where we have the gradient $$G(p,\eta )=\frac{1}{2}{\mathrm{log}}\,\left[\frac{p}{1-p}\right]+\gamma {\mathrm{log}}\,2+\frac{1}{4}{\mathrm{log}}\,\left[\frac{\eta +1/2}{{(\eta +1)}^{2}}\right].$$ (8) In Fig. 10 a we plot the data shown in the main text in Fig. 6 b as a function of d to read the gradient G ( p , η ) from the graph.
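Equation (8) can be checked mechanically: G is linear in β = log[p/(1 − p)] with slope exactly 1/2, which is the gradient read off the inset of Fig. 10a. A small numerical sketch (the function names are ours; γ = 1.8 is the fitted value reported in the text):

```python
import math

def G(p, eta, gamma=1.8):
    """Gradient ansatz of Eq. (8); gamma = 1.8 is the fitted value."""
    return (0.5 * math.log(p / (1 - p))
            + gamma * math.log(2)
            + 0.25 * math.log((eta + 0.5) / (eta + 1) ** 2))

def beta(p):
    return math.log(p / (1 - p))

# The slope of G with respect to beta is 1/2, independent of eta and gamma.
p1, p2 = 1e-4, 2e-4
slope = (G(p2, 3) - G(p1, 3)) / (beta(p2) - beta(p1))
```

The same check at the intercept, G(p → 0 extrapolated) versus log[(η + 1/2)/(η + 1)²], gives the slope 1/4 used to extract γ below.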
We then plot G ( p , η ) as a function of \(\beta ={\mathrm{log}}\,[p/(1-p)]\) in the inset of Fig. 10 a. The plot reveals a gradient ~0.5, consistent with our ansatz where we expect a gradient of 1/2. Furthermore, at p = 0 we define the restricted function $$I(\eta )\equiv G(p=0,\eta )=\gamma {\mathrm{log}}\,2+\frac{1}{4}{\mathrm{log}}\,\left[\frac{\eta +1/2}{{(\eta +1)}^{2}}\right].$$ (9) We estimate I ( η ) from the extrapolated p = 0 intercepts of our plots, such as shown in the inset of Fig. 10 a, and present these intercepts as a function of \({\mathrm{log}}\,[(\eta +1/2)/{(\eta +1)}^{2}]\) ; see Fig. 10 b. We find a line of best fit with gradient 0.22 ± 0.03, which agrees with the expected value of 1/4. Moreover, from the intercept of this fit, we estimate γ = 1.8 ± 0.06, which is consistent with the range 3/2 ≤ γ ≤ 2 that we expect 49 . Thus, our data are consistent with our ansatz, that typical error configurations lead to logical failure with ~ d /4 low-rate errors. Fig. 10: Analysis of the XZZX code at low error rates. a Plot showing logical failure rate \(\overline{P}\) as a function of code distance d for data where noise bias η = 3. The physical error rates used are, from the bottom to the top curves in the main plot, 0.0001, 0.0002, 0.0005, 0.001 and 0.002. We estimate G ( p , η ) for each physical error rate p by taking the gradient of each line. Low error rate data are collected using the method proposed in refs. 76 , 77 . The inset plot shows the gradients G ( p , η ) as a function of \(\mathrm{log}\,[p/(1-p)]\) for η = 3. Values for G ( p , η ), see Eq. ( 8 ), are estimated using the linear fits. The gradient of the line of best fit to these data points is 0.504(4), in agreement with the expected gradient 1/2. b Plot showing the intercepts I ( η ) as a function of \(\mathrm{log}\,[(\eta +1/2)/{(\eta +1)}^{2}]\) . The intercept function is defined in Eq. ( 9 ) and estimated from the intercept of lines such as that shown in the inset of plot a .
Error bars indicate one standard deviation relative to the fitting procedure. Full size image Data availability The data that support the findings of this study are available at . Code availability Software for all simulations performed for this study is available at and released under the OSI-approved BSD 3-Clause licence. This software extends and uses services provided by qecsim 78 , 79 , a quantum error-correction simulation package, which leverages several scientific software packages 44 , 80 , 81 , 82 . | A simple yet elegant change to code studied for more than 20 years could shorten timeline to achieve scalable quantum computation and has attracted the attention of quantum computing programs at Amazon Web Services and Yale University. What started out as a second-year physics project is making its way into Amazon Web Service's (AWS) quantum computing program. University of Sydney science undergraduate Pablo Bonilla Ataides has tweaked some computing code to effectively double its capacity to correct errors in the quantum machines being designed in the emerging technology sector. The simple but ingenious change to quantum error correcting code has grabbed the attention of quantum researchers at the AWS Center for Quantum Computing in Pasadena, California, and the quantum technology programs at Yale University and Duke University in the United States. "Quantum technology is in its infancy, partly because we haven't been able to overcome the inherent instability in the machines that produce so many errors," 21-year-old Mr Bonilla said. "In second-year physics I was asked to look at some commonly used error correcting code to see if we could improve it. By flipping half of the quantum switches, or qubits, in our design, we found we could effectively double our ability to suppress errors." The research is published today in Nature Communications. The results of the study, co-authored by Dr. 
Steve Flammia, who has recently moved from the University of Sydney to AWS's quantum computing effort, are to feature in the tech company's arsenal of error correction techniques as it develops its quantum hardware. Dr. Earl Campbell is a senior quantum research scientist at AWS. He said: "We have considerable work ahead of us as an industry before anyone sees real, practical benefits from quantum computers. "This research surprised me. I was amazed that such a slight change to a quantum error correction code could lead to such a big impact in predicted performance. "The AWS Center for Quantum Computing team looks forward to collaborating further as we explore other promising alternatives to bring new, more powerful computing technologies one step closer to reality." Quantum errors Errors are extremely rare in the digital transistors, or switches, that classical computers use to run our phones, laptops and even the fastest supercomputers. However, the 'switches' in quantum computers, known as qubits, are particularly sensitive to interference, or 'noise', from the external environment. In order to make quantum machines work, scientists need to produce a large number of high-quality qubits. This can be done by improving the machines so they are less noisy and by using some capacity of the machines to suppress qubit errors below a certain threshold in order for them to be useful. That is where quantum error correction comes in. Assistant Professor Shruti Puri from the quantum research program at Yale University said her team is interested in using the new code for its work. "What amazes me about this new code is its sheer elegance. Its remarkable error-correcting properties are coming from a simple modification to a code that has been studied extensively for almost two decades," Assistant Professor Puri said. "It is extremely relevant for a new generation of quantum technology being developed at Yale and elsewhere.
With this new code, I believe, we have considerably shortened the timeline to achieve scalable quantum computation." Co-author Dr. David Tuckett from the School of Physics said: "It's a bit like playing battleships with a quantum opponent. Theoretically, they could place their pieces anywhere on the board. But after playing millions of games, we know that certain moves are more likely." Retrofit for industry Co-author and Associate Dean (Research) in the Faculty of Science Professor Stephen Bartlett said: "What's great about this design is that we can effectively retrofit it to the surface codes being developed across the industry. "Having the code work on a two-dimensional surface is ideal for application in an industry that has historically produced 2D chip designs. We are optimistic that this work will help the industry build better experimental devices." Co-author Dr. Ben Brown from the University of Sydney Nano Institute and School of Physics worked closely with Mr Bonilla on the project. He said: "Building a functional quantum computer is a bit like trying to build the Wright Brothers' plane, and we haven't even gotten off the ground yet. "Experimentalists are producing the strong, light-weight materials to build the plane, and we've just come up with a more aerodynamic design for the wings that have more lift. We might have just come up with the design that will help large-scale quantum computing take off." | 10.1038/s41467-021-22274-1 |
Nano | Entropy measurements reveal exotic effect in 'magic-angle' graphene | Entropic evidence for a Pomeranchuk effect in magic-angle graphene, Nature (2021). DOI: 10.1038/s41586-021-03319-3 Journal information: Nature | http://dx.doi.org/10.1038/s41586-021-03319-3 | https://phys.org/news/2021-04-entropy-reveal-exotic-effect-magic-angle.html | Abstract In the 1950s, Pomeranchuk 1 predicted that, counterintuitively, liquid 3 He may solidify on heating. This effect arises owing to high excess nuclear spin entropy in the solid phase, where the atoms are spatially localized. Here we find that an analogous effect occurs in magic-angle twisted bilayer graphene 2 , 3 , 4 , 5 , 6 . Using both local and global electronic entropy measurements, we show that near a filling of one electron per moiré unit cell, there is a marked increase in the electronic entropy to about 1 k B per unit cell ( k B is the Boltzmann constant). This large excess entropy is quenched by an in-plane magnetic field, pointing to its magnetic origin. A sharp drop in the compressibility as a function of the electron density, associated with a reset of the Fermi level back to the vicinity of the Dirac point, marks a clear boundary between two phases. We map this jump as a function of electron density, temperature and magnetic field. This reveals a phase diagram that is consistent with a Pomeranchuk-like temperature- and field-driven transition from a low-entropy electronic liquid to a high-entropy correlated state with nearly free magnetic moments. The correlated state features an unusual combination of seemingly contradictory properties, some associated with itinerant electrons—such as the absence of a thermodynamic gap, metallicity and a Dirac-like compressibility—and others associated with localized moments, such as a large entropy and its disappearance under a magnetic field. 
Moreover, the energy scales characterizing these two sets of properties are very different: whereas the compressibility jump has an onset at a temperature of about 30 kelvin, the bandwidth of magnetic excitations is about 3 kelvin or smaller. The hybrid nature of the present correlated state and the large separation of energy scales have implications for the thermodynamic and transport properties of the correlated states in twisted bilayer graphene. Main Systems of strongly interacting fermions exhibit competition between localization, which minimizes the potential energy, and itineracy, which minimizes the kinetic energy. The advent of two-dimensional moiré systems, such as magic-angle twisted bilayer graphene 2 , 3 , 4 , 5 , 6 (MATBG), allows this physics to be studied by controlling the ratio between the electronic interactions and bandwidth in a highly tunable way. When this ratio is large, electrons tend to localize and form Mott insulators 7 , 8 . When the bandwidth dominates, a Fermi liquid state is formed in which electrons are itinerant. MATBG is at the boundary between these two extremes, showing a host of fascinating electronic phases, including correlated insulators 3 , 9 , 10 , Chern insulators 11 , 12 , 13 , superconductors 4 , 9 , 10 and ferromagnets 14 , 15 . Scanning tunnelling spectroscopy 16 , 17 , 18 , 19 and electronic compressibility measurements 20 , 21 indicate that in this system the strengths of the Coulomb interaction and the kinetic energy are indeed comparable. In this regime, there is an inherent tension between localized and itinerant descriptions of the physics. Moreover, the topological character 22 , 23 , 24 of the nearly-flat bands in MATBG implies that a simple ‘atomic’ description, in which electrons are localized to individual moiré lattice sites, may not be appropriate. Instead, a picture analogous to that of quantum Hall ferromagnetism has been proposed 25 , 26 , 27 . 
Understanding this interplay between itineracy and localization, and the new physics that emerges from it, remains a major challenge. In this work we find that, surprisingly, the correlated state in MATBG above a filling of one electron per moiré site has a hybrid nature, with some properties resembling those of an itinerant system, and others resembling those of localized electrons. At temperatures of a few kelvin we measure unusually large excess entropy, which is rapidly suppressed by a moderate in-plane magnetic field. This suggests that even at such low temperatures, there are strongly fluctuating magnetic moments in the system, a behaviour typically associated with local moments. On the other hand, our measurements find that this state is metallic and has no thermodynamic gap, naturally fitting an itinerant picture. The presence of fluctuating moments at temperatures much below the electronic bandwidth indicates the existence of a new, anomalously small energy scale associated with the bandwidth of magnetic excitations, which is an order of magnitude smaller than the scale where a jump appears in the compressibility 21 , 28 . This jump marks the boundary between the new state at filling factor ν > +1 and the state at lower densities. By tracking the dependence of this boundary on temperature and magnetic field, we find that it exhibits an electronic analogue 29 , 30 , 31 , 32 of the Pomeranchuk effect 1 in 3 He. In that system, a transition from a Fermi liquid to a solid occurs on increasing the temperature, driven by the high nuclear spin entropy of the atoms in the solid. Similarly, we find that the new state above ν = +1 is favoured relative to the metallic state at ν < +1 on raising the temperature, owing to the former’s high magnetic entropy. The transition near ν = +1 can also be driven by an in-plane magnetic field that polarizes the free moments. (A related effect near ν = −1 was proposed very recently, on the basis of transport measurements 33 .) 
The hybrid state observed here, with itinerant electrons coexisting with strongly fluctuating magnetic moments, calls for a new understanding of electron correlations in MATBG. Our data are measured using two independent techniques on two conceptually different devices. The bulk of the results are obtained from local measurements of the electronic entropy 34 , 35 and compressibility using a scanning nanotube single-electron transistor (SET) that probes a hexagonal boron nitride (hBN)-encapsulated MATBG device (device 1, Fig. 1a ). We focus on a large (5 μm × 4 μm) region with an extremely homogenous twist angle that is close to the theoretical magic angle θ = 1.130° ± 0.005°. Similar results are obtained from global entropy measurements using a monolayer graphene sensor (device 2; see section ‘Global measurements of the entropy’). Both methods have been described elsewhere 21 , 36 . Fig. 1: Experimental setup and device characterization. a , A nanotube-based single electron transistor (SET) is used to measure the local electronic compressibility and entropy of magic-angle twisted bilayer graphene (MATBG). The MATBG is encapsulated between top and bottom h-BN layers (not shown) and has a metallic back gate. By monitoring the current through the SET, we track changes in the MATBG chemical potential, d μ , in response to a density modulation, d n , produced by an a.c. voltage on the back gate 21 , δ V BG . A d.c. back-gate voltage, V BG , sets the overall carrier density in the MATBG, n . Some of the measurements are performed in a parallel magnetic field, B || (indicated). b , Inverse compressibility, d μ /d n , measured as a function of the moiré lattice filling factor, ν = n /( n s /4), at T = 15 K ( n s is the density that corresponds to four electrons per moiré site). 
Measurements are done on a large spatial domain (5 μm × 4 μm) throughout which the twist angle is extremely homogenous, θ = 1.130° ± 0.005° (measured by spatial mapping of the V BG that corresponds to n s , as in refs. 21 , 41 ). As seen previously 21 , a jump of d μ /d n appears near all integer filling factors. This jump corresponds to a Fermi surface reconstruction, in which some combination of the spin/valley flavours filling is reset back to near the charge neutrality point, and correspondingly d μ /d n shows a cascade of sawtooth features as a function of density. The trace is measured at T = 15 K, showing that even at this high temperature this sawtooth cascade is well developed. c , Two-probe resistance, R , measured as a function of ν and temperature (see key). Notice that unlike the inverse compressibility, which measures a local quantity, the resistance gives an averaged result over domains with different twist angle. Therefore, the resistance maxima are slightly shifted from the usual integer ν values, probably because another domain with a small difference in twist angle dominates the transport characteristics globally. Full size image Electronic compressibility and transport The inverse compressibility, d μ /d n , measured in device 1 at T = 15 K as a function of the filling factor, ν = n /( n s /4) (where μ is the chemical potential, n is the carrier density, and n s corresponds to four electrons per moiré superlattice unit cell), is shown in Fig. 1b . As reported previously 21 , sharp jumps in d μ /d n are observed close to integer values of ν , reflecting Fermi surface reconstructions. These were termed Dirac revivals as they were interpreted as resets of partially filled energy bands back to near the Dirac point, leading to the decreased compressibility. The cascade of revivals is already very prominent at this relatively high temperature. Measurements of the two probe resistance, R , versus ν at various temperatures T (Fig. 
1c ) show insulating behaviour at ν = 2, 3 and semi-metallic behaviour at ν = 0. As previously noted 37 , R shows a step-like increase across ν ≈ 1, which gradually disappears with decreasing temperature, very different to the behaviour at other integer values of ν . The unusual physics near ν = 1 is revealed by the dependence of d μ /d n on T and parallel magnetic field, B || . At low temperature and B || = 0 T (Fig. 2a ), the jump in d μ /d n occurs at ν slightly larger than 1. Increasing the temperature moves the jump towards lower ν , and surprisingly, increases the magnitude of the jump rather than smearing it. Similar measurements with B || = 12 T at low T (Fig. 2b ) show a much larger jump, which is also closer to ν = 1. With increasing temperature, this jump remains close to ν = 1, but in contrast to the B || = 0 T case, its amplitude is reduced and its width increased. Fig. 2: Measurement of large magnetic entropy above ν = 1. a , Inverse compressibility, d μ /d n , as a function of ν , near ν = 1, measured at zero parallel magnetic field, B || = 0 T, and at several temperatures (see key). With increasing T , the jump in d μ /d n moves towards lower ν and becomes stronger. b , Same measurement as a but done at B || = 12 T. Here, opposite to the zero-field case, increasing T reduces the magnitude of the d μ /d n jump, as expected from thermal smearing. c , The chemical potential μ (relative to that of the charge neutrality point) versus ν at B || = 0 T, obtained by integrating the d μ /d n signal in a with respect to n . Inset, Δ μ (defined as μ ( T , ν ) − μ ( T = 2.8 K, ν )) as a function of temperature for ν = 0.2 (blue) and ν = 0.9 (red). At ν = 0.2 the chemical potential is nearly temperature independent, whereas at ν = 0.9 it is roughly constant until T ≈ 4 K and then starts decreasing approximately linearly with T . d , Similar to c but at B || = 12 T. 
In contrast to the zero-field case, here, below ν ≈ 0.9, μ decreases with T while above ν ≈ 0.9, μ increases with T . e . The electronic entropy s (in units of k B per moiré unit cell) as a function of ν at T ≈ 10 K and at various parallel magnetic fields, B || = 0, 4, 8, 12 T (see key). To obtain the entropy we determine the partial derivative \({(\partial \mu /\partial T)}_{\nu ,{B}_{\parallel }}\) from a linear fit to the measured μ versus T in the range T = 4.5–15 K. The entropy per moiré cell is then obtained by integrating Maxwell’s relation, \({(\partial s/\partial \nu )}_{T,{B}_{\parallel }}=-{(\partial \mu /\partial T)}_{\nu ,{B}_{\parallel }}\) , over ν (see Supplementary Information for details ). At B || = 0, the entropy climbs rapidly near ν = 1 to a value of about 1.2 k B per moiré cell. Inset, the difference between the entropies at low and high fields, s ( B || = 0 T) − s ( B || = 12 T). The purple shading shows the estimated standard deviation due to the measurement accuracy. Full size image Local measurements of electronic entropy The chemical potential, μ ( ν , T ) (measured relative to that at charge neutrality), can be obtained by integrating d μ /d n over carrier density (Fig. 2c, d ). Visibly, μ depends strongly on T for a range of values of ν . This is clearly seen when we plot μ versus T at two representative ν values (Fig. 2c , inset). At ν = 0.2, μ is practically independent of T (blue). In contrast, at ν = 0.9 (red) μ is nearly constant until T ≈ 4 K, and then decreases approximately linearly with T . At ν > 1.15, μ is again nearly temperature independent. Comparison of μ at B || = 0 T (Fig. 2c ) with that at B || = 12 T (Fig. 2d ) reveals a clear contrast: whereas for B || = 0 T, μ is a decreasing function of temperature for 0.4 < ν <1.15, for B || = 12 T, μ decreases with T for ν < 0.9 and increases for ν >0.9. 
These measurements allow us to directly determine the entropy of the system, by integrating Maxwell’s relation, \({(\frac{\partial s}{\partial \nu })}_{T}=-{(\frac{\partial \mu }{\partial T})}_{\nu }\) , to obtain s ( ν , T ) (where s is the entropy per moiré unit cell). For more details on this procedure, see Supplementary Information section 1 . Figure 2e shows s ( ν ) at T ≈ 10 K (obtained from the slope of μ versus T in the range T = 4.5–15 K), for B || = 0 T, 4 T, 8 T and 12 T. At B || = 0 T, the entropy is small at low ν , climbs close to ν ≈ 1, remains roughly constant between ν ≈ 1 and 2 at s ≈ 1.2 k B , drops rapidly near ν ≈ 2, and decreases towards zero after ν ≈ 3. Clearly, the ν dependence of the entropy is qualitatively different from that of the compressibility: whereas the latter drops sharply near ν ≈ 1 (Fig. 2a ), the former remains at a high value. An important insight into the origin of this large entropy is given by its magnetic field dependence. As seen in Fig. 2e , the entropy above ν ≈ 1 depends strongly on B || . In particular, at B || = 12 T, most of the entropy between ν ≈ 1 and 2 is quenched. The inset shows \(s({B}_{\parallel }=0\,{\rm{T}})-s({B}_{\parallel }=12\,{\rm{T}})\) versus ν (the purple shading indicates error bars; see Supplementary Information section 1 ). The entropy difference increases sharply near ν ≈ 1, reaching a maximum of (0.85 ± 0.1) k B between ν ≈ 1 and 2. To appreciate the importance of this value, we recall that an entropy of k B ln2 ≈ 0.7 k B corresponds to two degenerate states on each moiré unit cell. Moreover, in a Fermi liquid, we would expect a much weaker change of the entropy with B || (see Supplementary Information section 4 ), of the order of k B times the ratio of the Zeeman energy (about 1 meV at B || = 12 T) to the bandwidth, estimated to be W ≈ 30 meV (see below). Finally, we observe that at B || = 12 T the entropy shows a cascade of drops following each integer ν . 
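The extraction just described, differentiating μ with respect to T at fixed ν and then integrating −∂μ/∂T over ν, can be sketched numerically. This is a schematic illustration with invented toy data, not the authors' analysis code (which uses linear fits of μ versus T over T = 4.5–15 K):

```python
def entropy_from_mu(mu, nus, temps):
    """Entropy s(nu) (in units of k_B per moire cell, with k_B = 1) from a
    grid mu[i][j] of chemical potential at filling nus[i] and temperature
    temps[j], via the Maxwell relation (ds/dnu)_T = -(dmu/dT)_nu,
    integrated over nu with the trapezoid rule."""
    # dmu/dT at each filling from a two-point finite difference in T
    dmu_dT = [(row[-1] - row[0]) / (temps[-1] - temps[0]) for row in mu]
    s_vals = [0.0]
    for i in range(1, len(nus)):
        avg = -(dmu_dT[i] + dmu_dT[i - 1]) / 2
        s_vals.append(s_vals[-1] + avg * (nus[i] - nus[i - 1]))
    return s_vals

# Toy input: mu falls linearly with T at unit slope at every filling,
# so the entropy should grow linearly with filling: s(nu) = nu - nu_0.
temps = [5.0, 10.0, 15.0]
nus = [0.0, 0.5, 1.0]
mu = [[-t for t in temps] for _ in nus]
s = entropy_from_mu(mu, nus, temps)
```

The real data differ in that ∂μ/∂T is large only in a window of ν, which is what concentrates the entropy rise near ν ≈ 1.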
These drops are similar to the revival drops observed in the compressibility (Supplementary Information section 5 ) and are reproduced by mean-field calculations (Supplementary Information section 3 ). The strong quenching of entropy by moderate B || strongly suggests a magnetic origin. Global measurements of the entropy To test the robustness of our results, we measured the entropy in a completely different setup, in which a sheet of monolayer graphene senses the chemical potential of MATBG, averaged over the entire device 36 (Fig. 3a ). Figure 3b shows the entropy extracted in three different temperature ranges. We see (inset) that the globally measured entropy for T = 4–16 K is in good agreement with the locally measured one over a similar range of temperatures, both in the overall shape, the magnitude of s ( ν ), and the detailed features. At elevated temperatures, the minimum in the entropy at ν = 0 gradually fills in, evolving from a double-dome structure at low T (corresponding to the valence and conduction flat bands) to a single dome at high T . This dependence is qualitatively reproduced by a naïve calculation for a system of non-interacting electrons, whose density of states rises linearly from the charge neutrality point until the band edges (Fig. 3c ). The merging of the domes in s ( ν ) occurs when the temperature exceeds a fraction of the bandwidth. Calibrating the bandwidth using the measured entropy at T ≈ 55 K gives W ≈ 30 meV (where W is the full bandwidth—from valence band bottom to conduction band top), in rough agreement with scanning tunnelling microscopy 16 , 17 , 18 , 19 and compressibility 36 experiments. This free-electron picture is of course invalid at low temperatures, where interactions are important. The measured s ( ν ) in the valence band is approximately a mirror image of s ( ν ) in the conduction band (Fig. 3b ), although it is smaller and has less pronounced features. 
This is consistent with the weaker d μ /d n revivals observed in the valence band relative to the conduction band 21 , 36 (Supplementary Information section 9 ). Fig. 3: Temperature dependence of the entropy. a , Experimental setup for measuring the global entropy, averaged over the entire device 36 . The device consists of MATBG and a monolayer graphene (MLG) sensor layer, separated by an ultrathin (1 nm) layer of h-BN (not shown), as well as top and back metallic gates. By balancing the electrochemical potential of the adjacent layers in the device, we can obtain the relationship between the carrier density and chemical potential of MATBG and MLG and the gate voltages applied to the system (top-gate voltage, V TG ; back-gate voltage, V BG ). In the special case where the carrier density of MLG is zero, tuned by V MLG , that is, at its charge neutrality point, the chemical potential of MATBG is directly proportional to the voltage applied to the top gate. This technique allows us to reliably extract the chemical potential and entropy of MATBG at temperatures up to 70 K. b , The measured entropy, in units of k B per moiré unit cell, as a function of ν at three different temperature ranges (see key). The entropy derivative, d s /d ν , is obtained from a linear fit to μ versus T in the corresponding temperature range, and is then integrated over ν to yield the entropy per moiré unit cell (similar to Fig. 2e ). Inset, comparison between the ν dependence of the entropy, measured at the low temperature range, as obtained from local and global measurements. c , The entropy as a function of ν and T calculated for a system of four degenerate non-interacting Dirac bands (whose density of states climbs linearly with energy from the Dirac point to the end of the conduction or the valence band). The colour-coded lines (see key) show the curves whose temperatures correspond to the mean of the temperature ranges of the experimental curves. 
The grey lines represent the entire evolution from zero temperature to high temperature, where the entropy saturates at a value of 8ln2 ≈ 5.5, where the factor 8 reflects the total number of energy bands. A bandwidth of W = 30 meV is chosen such that the calculated value of the entropy at the highest temperature roughly matches the one obtained from the measured curve at the same temperature. Full size image Mapping the phase diagram So far, we have shown the change in the magnetic entropy and compressibility near ν = 1. This change may be due to a continuous build-up of electronic correlations. Alternatively, it could be interpreted as an underlying first-order phase transition between two distinct phases. Naively, one would then expect a discontinuous jump in thermodynamic properties and hysteretic behaviour across the transition, which are not observed. However, we note that a true first-order phase transition is forbidden in two dimensions in the presence of disorder or long-range Coulomb interactions 38 , as these broaden the transition into a mesoscale coexistence region (Supplementary Information section 10 ). Experimentally, although the revival transition is very sharp and may be consistent with a Coulomb- and/or disorder- smeared first-order transition, we cannot rule out a sharp crossover or a higher-order phase transition. Nevertheless, the sharpness of the rise of d μ /d n at the revival transition allows us to precisely track its filling factor, ν = ν R (Fig. 4a ), and map a phase diagram, which is naturally explained when this feature is interpreted as a proxy for a first-order transition. Fig. 4: Experimental phase diagram. a , The inverse compressibility, d μ /d n , measured as a function of ν near ν = 1, at several values of parallel magnetic field, B || . We track the filling factor that corresponds to the centre of the jump in d μ /d n (labelled ν R ). The application of B || is seen to push ν R to lower values. 
b , Left panel, measured ν R as a function of B || and T , plotted as dots in ( ν , B || , T ) space (the dots are coloured by their temperature as in c ; the dashed lines are polynomial fits to the dots at constant B || or constant T ). Right panel, the same surface calculated from a simple model that assumes a transition between a Fermi liquid and a metallic phase that contains one free moment per moiré site (see text). c , Projection of the data in b onto the ( ν , B || ) plane, showing the dependence of ν R on B || for various temperatures. At low fields, ν R is independent of field but becomes linear in B || at high fields, a behaviour expected from the field polarization of free moments (see text). Inset, curves calculated from the model. d , Projection onto the ( ν , T ) plane, showing the dependence of ν R on T for various magnetic fields. At B || = 0 T, ν R is linear in T at small values of T and then curves up at higher values of T . At high magnetic field, the dependence of ν R on T becomes non-monotonic. Inset, curves calculated from the model. Full size image The measured ν R versus B || and T forms a surface in ( ν , B || , T ) space (Fig. 4b ), whose projections onto the ( ν , B || ) and ( ν , T ) planes are shown in Fig. 4c, d . At T = 2.8 K and at low B || , ν R depends weakly on B || , but decreases linearly above B || ≈ 4 T (Fig. 4c , blue). A similar crossover is observed at higher temperatures, but with a crossover B || that increases with temperature. The T dependence of ν R at B || = 0 T (Fig. 4d ) is linear at low temperatures and curves up at higher temperatures. As B || increases, the curve shifts towards smaller values of ν , and simultaneously its slope at low temperatures changes sign. At B || = 12 T, ν R first increases with T , reaches a maximum at T ≈ 9 K, and then decreases. The phenomenology seen in Fig. 
4b–d can be understood in terms of a first-order phase transition at ν = ν R between a Fermi liquid phase below ν R , and a ‘free moment’ phase above it. The latter has a high concentration of free moments (about one per moiré site), coexisting with a low density of itinerant electrons. Within this framework, the shift of ν R as a function of B || and T reflects the magnetization and entropy differences between the two neighbouring phases. At B || = 0 T, the free moment phase has a higher entropy than the Fermi liquid, owing to thermal fluctuations of the moments. Hence, the former becomes entropically favourable at high temperatures. This explains the observed decrease of ν R with increasing T at low fields (Fig. 4d ). Raising the temperature at fixed ν may therefore drive a transition from the Fermi liquid to the free moments phase, an electronic analogue of the Pomeranchuk effect. As B || increases and the Zeeman energy exceeds the temperature, the moments become nearly fully polarized and their entropy is quenched (as is observed directly in Fig. 2e ). Consequently, at low temperatures and sufficiently high fields, the Fermi liquid phase is favoured by raising the temperature. The trend reverses once the temperature exceeds the Zeeman energy. This explains the non-monotonic behaviour of ν R as a function of T , seen at B || = 12 T in Fig. 4d . The main features of the phase boundary are qualitatively reproduced in a thermodynamic model of the two phases (Supplementary Information section 7 , and insets of Fig. 4b–d ). Note that the experiment probes moments that couple to an in-plane field. This includes Zeeman-coupled spins and may also include the valleys if their in-plane orbital moment is non-zero. Discussion The observation of free magnetic moments at surprisingly low temperatures has profound implications for the physics of MATBG. 
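The Zeeman quenching invoked above follows the textbook entropy of a single two-level moment, s/k_B = ln(2 cosh x) − x tanh x with x = gμ_B B/2k_B T. The sketch below assumes a pure spin-1/2 moment with g = 2 (an assumption for illustration; as just noted, an in-plane orbital valley moment would change the effective moment, and the paper's actual two-phase model is in Supplementary Information section 7):

```python
import numpy as np

MU_B_OVER_KB = 0.6717  # Bohr magneton over Boltzmann constant, in K/T

def free_moment_entropy(B, T, g=2.0):
    """Entropy (units of k_B) of one free spin-1/2 moment in a field B
    (tesla) at temperature T (kelvin):

        s = ln(2 cosh x) - x tanh x,   x = g * mu_B * B / (2 k_B T)

    Assumes a pure spin moment with g-factor g (illustrative choice).
    """
    x = g * MU_B_OVER_KB * np.asarray(B, dtype=float) / (2.0 * T)
    return np.log(2.0 * np.cosh(x)) - x * np.tanh(x)
```

At T = 5 K this gives s ≈ 0.69 k_B at B|| = 0 and ≈ 0.16 k_B at B|| = 12 T, i.e. most of a ln 2 moment's entropy is quenched at the highest field, qualitatively consistent with the measured quench (the measured (0.85 ± 0.1) k_B exceeds ln 2, hinting at entropy beyond a bare doublet).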
Low-energy magnetic fluctuations are destructive for superconductivity, and may be the limiting factor for the superconducting transition temperature. Moreover, increased scattering from fluctuating moments can account for the ‘strange metal’ behaviour reported over a broad range of temperatures 39 , 40 . An important question raised by our observations concerns the origin of the free moments. Soft collective modes have been predicted in insulating states of MATBG 25 , 26 , 27 , but our experiments show metallic behaviour near ν = 1. Moreover, the energy scale associated with the appearance of free moments is strikingly low (3 K or less), much below the microscopic energy scales in the system. Understanding the state near ν = 1, which combines behaviours associated with electron localization and itinerancy, and its surprisingly low onset temperature, poses an important challenge for the theory of MATBG. Data availability The data in the main text are available at Code availability The code used in this work is available at | Most materials go from being solids to liquids when they are heated. One rare counter-example is helium-3, which can solidify upon heating. This counterintuitive and exotic effect, known as the Pomeranchuk effect, may now have found its electronic analog in a material known as magic-angle graphene, says a team of researchers from the Weizmann Institute of Science led by Prof. Shahal Ilani, in collaboration with Prof. Pablo Jarillo-Herrero's group at the Massachusetts Institute of Technology (MIT). This result, published today in Nature, comes thanks to the first ever measurement of electronic entropy in an atomically thin two-dimensional material. "Entropy describes the level of disorder in a material and determines which of its phases is stable at different temperatures," explains Ilani. "Our team set out to measure the electronic entropy in magic angle graphene to resolve some of its outstanding mysteries, but discovered another surprise."
Giant magnetic entropy Entropy is a basic physical quantity that is not easy to grasp or measure directly. At low temperatures, most of the degrees of freedom in a conducting material freeze out, and only the electrons contribute to the entropy. In bulk materials, there is an abundance of electrons, and thus it is possible to measure their heat capacity and from that deduce the entropy. In an atomically thin two-dimensional material, due to the small number of electrons, such a measurement becomes extremely challenging. So far, no experiment had succeeded in measuring the entropy in such systems. To measure the entropy, the Weizmann team used a unique scanning microscope comprising a carbon nanotube single-electron transistor positioned at the edge of a scanning probe cantilever. This instrument can spatially image the electrostatic potential produced by electrons in a material with unprecedented sensitivity. Based on Maxwell's relations, which connect the different thermodynamic properties of a material, one can use these electrostatic measurements to directly probe the entropy of the electrons. "When we performed the measurements at high magnetic fields, the entropy looked absolutely normal, following the expected behavior of a conventional (Fermi) liquid of electrons, which is the most standard state in which electrons exist at low temperatures. Surprisingly, however, at zero magnetic field, the electrons exhibited giant excess entropy, whose presence was very mysterious," says Ilani. This giant entropy emerged when the number of electrons in the system was about one per site of the artificial "superlattice" formed in magic angle graphene. Artificial "superlattice" in twisted layers of graphene Graphene is a one-atom-thick crystal of carbon atoms arranged in a hexagonal lattice.
When two graphene sheets are placed on top of each other with a small and special, or "magic," misalignment angle, a periodic moiré pattern appears that acts as an artificial "superlattice" for the electrons in the material. Moiré patterns are a popular effect in fabrics and emerge wherever one mesh overlays another at a slight angle. In magic angle graphene, the electrons come in four flavors: spin 'up' or spin 'down', and two 'valleys'. Each moiré site can thus hold up to four electrons, one of each flavor. Researchers already knew that this system behaves as a simple insulator when all moiré sites are completely full (four electrons per site). In 2018, however, Prof. Jarillo-Herrero and colleagues discovered to their surprise that it can be insulating at other integer fillings (two or three electrons per moiré site), which could only be explained if a correlated state of electrons is formed. However, near a filling of one electron per moiré site, the vast majority of transport measurements indicated that the system is quite simple, behaving as an ordinary metal. This is exactly where the entropy measurements by the Weizmann-MIT team found the most surprising results. "In contrast to the behavior seen in transport near a filling of one electron per moiré site, which is quite featureless, our measurements indicated that thermodynamically, the most dramatic phase transition occurs at this filling," says Dr. Asaf Rozen, a lead author in this work. "We realized that near this filling, upon heating the material, a rather conventional Fermi liquid transforms into a correlated metal with a giant magnetic entropy. This giant entropy (of about 1 Boltzmann constant per lattice site) could only be explained if each moiré site has a degree of freedom that is completely free to fluctuate." An electronic analog of the Pomeranchuk effect "This unusual excess entropy reminded us of an exotic effect that was discovered about 70 years ago in helium-3," says Weizmann theorist Prof.
Erez Berg. "Most materials, when heated up, transform from a solid to a liquid. This is because a liquid always has more entropy than the solid, as the atoms move more erratically in the liquid than in the solid." In helium-3, however, in a small part of the phase diagram, the material behaves completely oppositely, and the higher-temperature phase is the solid. This behavior, predicted by Soviet theoretical physicist Isaak Pomeranchuk in the 1950s, can only be explained by the existence of another "hidden" source of entropy in the system. In the case of helium-3, this entropy comes from the freely rotating nuclear spins. "Each atom has a spin in its nucleus (an 'arrow' that can point in any direction)," explains Berg. "In liquid helium-3, due to the Pauli exclusion principle, exactly half of the spins must point up and half must point down, so spins cannot freely rotate. In the solid phase, however, the atoms are localized and never come close to each other, so their nuclear spins can freely rotate." "The giant excess entropy that we observed in the correlated state with one electron per moiré site is analogous to the entropy in solid helium-3, but instead of atoms and nuclear spins, in the case of magic angle graphene we have electrons and electronic spins (or valley magnetic moments)," he says. The magnetic phase diagram To establish the relation with the Pomeranchuk effect further, the team performed detailed measurements of the phase diagram. This was done by measuring the "compressibility" of the electrons in the system; that is, how hard it is to squeeze additional electrons into a given lattice site (such a measurement was demonstrated in twisted bilayer graphene in the team's previous work). This measurement revealed two distinct phases separated by a sharp drop in the compressibility: a low-entropy, electronic liquid-like phase, and a high-entropy solid-like phase with free magnetic moments.
By following the drop in the compressibility, the researchers mapped the boundary between the two phases as a function of temperature and magnetic field, demonstrating that the phase boundary behaves precisely as expected from the Pomeranchuk effect. "This new result challenges our understanding of magic angle graphene," says Berg. "We imagined that the phases in this material were simple—either conducting or insulating, and expected that at such low temperatures, all the electronic fluctuations are frozen out. This turns out not to be the case, as the giant magnetic entropy shows." "The new findings will provide fresh insights into the physics of strongly correlated electron systems and perhaps even help explain how such fluctuating spins affect superconductivity," he adds. The researchers acknowledge that they do not yet know how to explain the Pomeranchuk effect in magic angle graphene. Is it exactly as in helium-3 in that the electrons in the solid-like phase remain at a great distance from each other, allowing their magnetic moments to stay completely free? "We are not sure," admits Ilani, "since the phase we have observed has a 'split personality': some of its properties are associated with itinerant electrons while others can only be explained by thinking of the electrons as being localized on a lattice." | 10.1038/s41586-021-03319-3
Medicine | Missing gene linked to autism | James Dachtler et al, 'Deletion of α-neurexin II results in autism-related behaviors in mice', Translational Psychiatry. DOI: 10.1038/TP.2014.123 Journal information: Translational Psychiatry | http://dx.doi.org/10.1038/TP.2014.123 | https://medicalxpress.com/news/2014-11-gene-linked-autism.html | Abstract Autism is a common and frequently disabling neurodevelopmental disorder with a strong genetic basis. Human genetic studies have discovered mutations disrupting exons of the NRXN2 gene, which encodes the synaptic adhesion protein α-neurexin II (Nrxn2α), in two unrelated individuals with autism, but a causal link between NRXN2 and the disorder remains unclear. To begin to test the hypothesis that Nrxn2α deficiency contributes to the symptoms of autism, we employed Nrxn2α knockout (KO) mice that genetically model Nrxn2α deficiency in vivo . We report that Nrxn2α KO mice displayed deficits in sociability and social memory when exposed to novel conspecifics. In tests of exploratory activity, Nrxn2α KO mice displayed an anxiety-like phenotype in comparison with wild-type littermates, with thigmotaxis in an open field, less time spent in the open arms of an elevated plus maze, more time spent in the enclosure of an emergence test and less time spent exploring novel objects. However, Nrxn2α KO mice did not exhibit any obvious changes in prepulse inhibition or in passive avoidance learning. Real-time PCR analysis of the frontal cortex and hippocampus revealed significant decreases in the mRNA levels of genes encoding proteins involved in both excitatory and inhibitory transmission. Quantification of protein expression revealed that Munc18-1, encoded by Stxbp1 , was significantly decreased in the hippocampus of Nrxn2α KO mice, which is suggestive of deficiencies in presynaptic vesicular release.
Our findings demonstrate a causal role for the loss of Nrxn2α in the genesis of autism-related behaviors in mice. Introduction Autism is a widespread cognitive disorder characterized by impairments in social interactions, communication and language development, which can be accompanied by stereotyped patterns of behavior. Autism is a highly heritable disorder with concordance rates as high as 90% for monozygotic twins, 1 but the underlying molecular and neuropathophysiological basis is unknown in most cases. However, recent genetic and genomic studies have implicated a large number of genes in autism, 2 many of which encode synaptic proteins, 3 indicating that synaptic dysfunction may have a critical role in autism. The neurexins are a family of synaptic adhesion proteins encoded by paralogous genes ( NRXN1 - 3 ) that have a key role in synaptic function. Each gene is transcribed in neurons from two independent promoters to yield longer (α) proteins with six laminin/neurexin/sex hormone (LNS) binding domains, and shorter (β) proteins with one LNS binding domain. Intracellularly, α-neurexin binds to CASK, Mint, Munc18, syntenin and synaptotagmin, suggesting a role in vesicular release. 4 , 5 , 6 Postsynaptic binding with PSD-95 or gephyrin via neuroligins (also associated with autism 7 ), leucine-rich repeat transmembrane proteins (LRRTMs) or dystroglycan can directly influence NMDA, AMPA or GABAergic receptors at the synapse, thereby altering a cell’s excitatory or inhibitory tone. 8 , 9 , 10 The promoter for α-neurexin II (Nrxn2α) transcripts lies upstream of NRXN2 exon 1, whereas the promoter for β-neurexin II is located in the intron downstream of exon 17. 11 The first evidence for a potential role of NRXN2 in autism was provided by a report of a frameshift mutation within NRXN2 exon 12 in a boy with autism and his father with severe language delay.
12 This mutation results in a truncated Nrxn2α protein that lacks the binding sites for the established postsynaptic binding partners LRRTM2 and neuroligin-2, but does not affect β-neurexin II. 12 Subsequently, a 21-year-old man with a clinical phenotype including autistic traits, such as speech and language deficits and insistence on routine, was reported to have a 570-kb de novo deletion of 24 genes at chromosome 11q13.1, including NRXN2 . 13 However, a clear causal relationship between NRXN2 and autism has not been established. To begin to test the hypothesis that Nrxn2α deficiency contributes to the symptoms of autism, we employed mice with a targeted mutation ( Nrxn2 tm1Sud ; MGI:3042719) that deletes the first exon of Nrxn2 and abolishes expression of Nrxn2α, but does not affect β-neurexin II. 14 The 30-day survival rate and gross brain anatomy of Nrxn2α null mutants are unaltered compared with wild-type littermates. 14 , 15 In light of the putative link between Nrxn2α and autism—diagnosis of which is based purely on behavioral assessment—we predicted that Nrxn2α knockout (Nrxn2α KO) mice might exhibit autism-relevant behavioral abnormalities. Herein, we report that Nrxn2α KO mice displayed altered anxiety-like and social behaviors consistent with a causal role for the loss of Nrxn2α in the genesis of autism-related behaviors. Materials and methods All the procedures were approved by the University of Leeds Animal Ethical and Welfare Review Board and were performed under the UK Home Office Project and Personal Licences. Animals B6;129- Nrxn3 tm1Sud /Nrxn1 tm1Sud /Nrxn2 tm1Sud /J mice (JAX #006377) were purchased from the Jackson Laboratory (Bar Harbor, ME, USA) as heterozygous KO at Nrxn1 , homozygous KO at Nrxn2 and wild-type at Nrxn3 . We subsequently outbred to the C57BL/6NCrl strain (Charles River, Margate, UK) to obtain mice that were Nrxn2α KO heterozygotes, but wild-type at Nrxn1 and Nrxn3 . 
Nrxn2α KO heterozygotes were then intercrossed to obtain wild-type (WT) and KO littermates. DNA extracted from ear biopsies was used for PCR-based genotyping according to the Jackson Laboratory Nrxn1 v5, Nrxn2 v5 and Nrxn3 v1 protocols ( ). Briefly, the primers 5′-GAGATGGAGAGCCAGACACC-3′ (common forward), 5′-CAGTGCCATGGACTCAGTAGC-3′ (WT reverse) and 5′-GCATCGCATTGTCTGAGTAGG-3′ (KO reverse) were used with HotShot Diamond (Clent Life Science, Stourbridge, UK) using the thermocycling program of: 94 °C for 5 min, followed by 35 cycles of 94 °C for 30 s, 64 °C for 60 s and 72 °C for 60 s, followed by 72 °C for 120 s. PCR products were visualized using electrophoresis, with a 190-bp band indicating the WT allele, and a 310-bp band indicating the KO allele. Litters were separated by sex at postnatal day 21, when mice were housed with at least one other mouse of the same sex and age, with a maximum of five mice per cage. Food and water were provided ad libitum , except for the buried food experiment. Lighting was provided in a 12:12 dark/light cycle, with the light cycle commencing at 06:00. Behavioral testing All behavioral experiments were conducted using young adults over 8 weeks of age. All the mice were extensively handled before testing. WT and KO mice were tested in the following behavioral experiments (in order): open field, elevated plus maze (EPM), forced-swim test, social interaction, emergence test, novel object exploration, prepulse inhibition (PPI), passive avoidance. For detailed methodology, see Supplementary Materials and Methods . Quantitative RT-PCR WT ( n =5) and Nrxn2α KO ( n =5) mice were killed by CO 2 asphyxiation and their brains were quickly extracted and snap frozen in liquid N 2 . Frontal cortex and hippocampus were dissected on ice, and tissue was stored in RNAlater (Ambion, Paisley, UK) at 4 °C for up to 7 days. RNAlater was removed and tissue was homogenized in 1 ml TRIzol Reagent (Invitrogen, Paisley, UK). 
RNA was extracted from the homogenate using a PureLink RNA Kit (Ambion), followed by spectrophotometric analysis of purity/integrity by 260 nm analysis of RNA concentration and 260/280 nm ratio analysis of RNA purity. Two microlitres of each RNA sample were converted into cDNA using a Quantitect reverse transcription kit (Qiagen, Manchester, UK). The cDNA was stored at −20 °C before analysis by quantitative RT-PCR. A total of 0.1 μg of cDNA (in triplicate) was used to quantify gene expression with a Quantitect SYBR Green quantitative RT-PCR kit (Qiagen) using the thermocycling program of: 95 °C for 15 mins, followed by 40 cycles of 94 °C for 15s, 55 °C for 30 s and 72 °C for 30s. All the data were normalized to a Cyc1 neuronal reference gene. To select an optimal reference gene, the stability of four genes commonly used in real-time RT-PCR studies ( Actb (β-actin), Cyc1 , Pgk1 and Rpl13a ) was tested. Three different samples (with two replicates per sample) per genotype and brain area were run for each gene. Normfinder software 16 was used with the obtained cycle threshold (Ct) values to calculate the expression stability of the four genes. Cyc1 was consistently the most stable gene in both frontal cortex and hippocampus. Using a combination of genes did not substantially improve the stability (data not shown). Thirteen transcripts in total were studied: parvalbumin ( Pvalb ), GAD 65 ( Gad2 ), GAD 67 ( Gad1 ), AMPA receptor subunit 1 ( Gria1 ), NMDA receptor subunit 1 ( Grin1 ), NMDA receptor subunit 2a ( Grin2a ), NMDA receptor subunit 2b ( Grin2b ), postsynaptic density protein 93 ( Dlg2 ), postsynaptic density protein 95 ( Dlg4 ), syntaxin-binding protein 1 ( Stxbp1 ), Homer protein 1 ( Homer1 ), vesicular glutamate transporter ( Slc17a7 ), and vesicular inhibitory amino-acid transporter ( Slc32a1 ). 
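The reference-gene screen described above can be mimicked with a much simpler stability score than NormFinder's model-based estimate: subtract each sample's mean Ct over all candidates (which absorbs global differences in RNA input between samples), then rank genes by the spread of the residuals. This is a crude stand-in, not the published NormFinder algorithm, and any Ct values fed to it here are invented for illustration:

```python
import numpy as np

def stability_rank(ct):
    """Rank candidate reference genes from most to least stable.

    ct : dict mapping gene name -> sequence of Ct values, one per sample
         (same sample order for every gene).

    Score = standard deviation, across samples, of each gene's Ct after
    subtracting the per-sample mean over all candidates. Lower = more
    stable. (Simplified stand-in for NormFinder's stability measure.)
    """
    genes = sorted(ct)
    mat = np.array([ct[g] for g in genes], dtype=float)  # genes x samples
    resid = mat - mat.mean(axis=0)       # remove per-sample Ct offset
    scores = resid.std(axis=1, ddof=1)   # cross-sample variability
    order = np.argsort(scores)
    return [(genes[i], float(scores[i])) for i in order]
```

Lower scores indicate more stable candidates; in the paper's actual screen, Cyc1 was consistently the most stable gene in both frontal cortex and hippocampus.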
Primers for Stxbp1 , Grin2b , Homer1 , Pgk1 , Rpl13a and Cyc1 were designed using the Roche Universal Probe Library ProbeFinder version 2.50 ( Table 1 ) and were synthesized by Sigma (Haverhill, UK). The remaining primers were QuantiTect Primer Assays purchased from Qiagen. Table 1 Primer sequences for genes that were designed in-house Full size table Analysis was carried out using the 2^-ΔΔCt method 17 and data are displayed as relative quantification values, relative to WT levels. Western blotting WT ( n =4) and Nrxn2 KO mice ( n =4), different to those used for the RT-PCR, were killed by CO 2 asphyxiation and their brains were quickly extracted, divided into hemispheres, snap frozen in liquid N 2 and subsequently stored at −80 °C. The cortex and hippocampus were dissected under a microscope and homogenized at a concentration of 333 mg ml −1 in RIPA Lysis Buffer with 0.5% sodium orthovanadate, 0.5% PMSF, 0.5% protease inhibitor cocktail (Santa Cruz, Heidelberg, Germany) and 1 × Phos-STOP (Roche, Welwyn Garden City, UK) on ice. The homogenate was centrifuged at 4 °C, the supernatant was aliquoted and protein concentration was measured by Bradford assay. Samples were stored at −80 °C. Aliquots of 30 μg total protein were prepared for loading by the addition of Laemmli sample buffer (Bio-Rad, Hemel Hempstead, UK) with 5% β-mercaptoethanol and incubated at 95 °C for 5 min. Samples were subjected to gradient SDS–polyacrylamide gel electrophoresis (100 V, 1.5 h) on polyacrylamide gels (4–15%) (Mini-PROTEAN TGX, Bio-Rad), transferred to BioTrace PVDF transfer membranes (Pall, Portsmouth, UK) (100 V 1.5 h on ice), and blocked for either 1–2 h at room temperature or overnight at 4 °C in 5% skimmed milk in 1 × phosphate-buffered saline with 0.05% Tween-20.
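Returning to the qRT-PCR analysis above: the 2^-ΔΔCt relative quantification is compact enough to write out in full. A minimal sketch of the standard formula, assuming roughly 100% amplification efficiency for both primer pairs; the reference gene is Cyc1 and the calibrator is the WT group, as in the text:

```python
def fold_change_ddct(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ddCt method.

    ct_target, ct_ref         : mean Ct of the gene of interest and of the
                                reference gene (here Cyc1) in the test
                                sample (e.g. Nrxn2a KO tissue)
    ct_target_cal, ct_ref_cal : the same two Cts in the calibrator (WT)

    Returns expression relative to the calibrator (WT = 1.0).
    Assumes ~100% amplification efficiency for both assays.
    """
    delta_ct = ct_target - ct_ref              # normalise to the reference
    delta_ct_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** -(delta_ct - delta_ct_cal)   # 2^-ddCt
```

A ΔΔCt of +1 (the KO target one cycle further from the reference than in WT) corresponds to a relative quantification of 0.5, i.e. a twofold decrease.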
Membranes were incubated with primary antibodies in 5% milk for 1 h at room temperature or 4 °C overnight at the following concentrations: Munc18-1 (sc-14557; Santa Cruz) 1:1000; parvalbumin (SAB4200545; Sigma) 1:1000; PSD-95 (sc-32290; Santa Cruz) 1:1000; GluN2A (sc-31542; Santa Cruz) 1:100. Anti-goat (sc-2020; Santa Cruz) and anti-mouse (sc-2371; Santa Cruz) HRP-linked secondary antibodies were incubated in 5% milk for 1 h at room temperature. Bound peroxidase-conjugates were visualized using ECL western blotting substrate (Promega, Southampton, UK). To confirm equal loading, membranes were immersed in stripping buffer (69 mM SDS, 63 mM Tris, 0.7% β-mercaptoethanol, pH 6.8) at 50 °C for 30 min before incubating with anti-β-actin (A1978; Sigma, Poole, UK) 1:5000. All western blots were repeated a minimum of three times. Densitometry was performed using ImageJ (v1.46; ), with expression normalized to the β-actin loading control. Data analysis All the data are expressed as mean±s.e.m. To assess the differences between the variables and their impact upon performance, two-sample t -tests or analyses of variance were conducted. Performance across time bins was analyzed by repeated measures analysis of variance. If there were significant interactions between variables, tests of simple main effects were performed (Bonferroni corrected), followed by post hoc analysis where necessary. All analyses were performed using SPSS version 20. In all cases, α was set at ⩽ 0.05. Graphs were drawn using GraphPad Prism version 6. Statistical significance within the figures is represented as: *** P <0.0001, ** P <0.01 and * P <0.05. Results Nrxn2α KO mice display deficits in social behavior In view of the putative link between Nrxn2α and autism, we assessed whether Nrxn2α KO mice with a predominantly C57BL/6 genetic background exhibit autism-related behavioral abnormalities.
As impaired sociability is one of the core diagnostic criteria for autism, 18 we examined the social interaction of Nrxn2α KO mice in a three-chambered assay for sociability, in which mice were given a choice between spending time in the side with an unfamiliar mouse enclosed in a wire cage (Stranger 1) or in the side with an empty wire cage. 19 Unlike their WT littermates, Nrxn2α KO mice failed to show a significant preference for the unfamiliar conspecific. However, both genotypes spent an equivalent amount of time in proximity to the empty cage ( Figure 1a ). Following the sociability assay, subjects were given a test for social novelty preference, with a choice between the original unfamiliar mouse (Stranger 1) and a new unfamiliar mouse (Stranger 2). WT mice showed a clear preference for exploration of Stranger 2, whereas no such preference was shown by Nrxn2α KO mice, although both genotypes spent a similar time in proximity to Stranger 1 ( Figure 1b ). There was no genotypic difference in general ambulation in the three-chambered arena, as Nrxn2α KO and WT mice traveled similar distances during each phase of testing ( Supplementary Figure S1 ). To determine whether the reduced social exploration time in Nrxn2α KO mice was related to potential anxiety caused by the presence of a novel conspecific, we tested the preference for exploring soiled versus clean bedding in the same three-chambered arena using a previously untested cohort of mice. WT mice spent a greater proportion of time in proximity to the cage that contained the soiled bedding, compared with that containing the clean bedding, whereas Nrxn2α KO mice showed no bias towards either cage ( Figure 1c ). To test whether the lack of sociability could be related to a potential deficit in olfaction in the Nrxn2α KO mice, we examined their ability to locate buried food ( Supplementary Figure S1d ). There was no significant difference between the genotypes in latency to find the food. 
Three KO mice required the full length of the experiment to find the food, but two of these had previously shown a preference for exploring soiled bedding rather than clean bedding, suggesting that an olfactory deficit is unlikely. Figure 1 Nrxn2α KO mice exhibit deficits in sociability and social memory. During the first phase ( a ), whereby the test mouse had to discriminate between a novel mouse and an empty but identical cage, Nrxn2α KO mice ( n =16) display no significant preference for exploring the novel mouse compared with the empty cage, whereas WT ( n =31) showed a very clear discrimination (repeated measure analysis of variance (RM ANOVA), significant genotype × discrimination interaction; F (1,45) =14.89, P <0.0001). Tests of simple main effects found a significant effect of genotype when exploring the novel mouse (F (1,45) =18.51, P <0.0001) but not the empty cage (F (1,45) <1, P >0.05). In stage 2 ( b ), the preference of the test mouse to discriminate between the previously explored mouse (Stranger 1) and a second novel mouse (Stranger 2) was measured. Nrxn2α KO mice spent a similar time as WT exploring Stranger 1, but showed significantly less exploration of Stranger 2 (RM ANOVA, significant genotype × discrimination interaction; F (1,45) =8.08, P =0.007). Tests of simple main effects confirmed a significant effect of genotype on time exploring the novel mouse (F (1,45) =8.92, P =0.005) but not on time exploring the previously explored mouse (F (1,45) <1, P >0.05). Nrxn2α KO mice were also unable to discriminate between exploring soiled vs clean bedding ( c ). Nrxn2α KO mice ( n =13) showed no preference for either cage, whereas WT ( n =11) spent a proportionately longer time exploring the cage containing the soiled bedding (RM ANOVA, significant genotype × discrimination interaction; F (1,22) =8.37, P =0.008). 
Tests of simple main effects found a significant effect of genotype on time exploring the soiled bedding (F (1,22) =5.01, P =0.036) but not the clean bedding (F (1,22) =3.81, P >0.05). KO, knockout; Nrxn2α, α-neurexin II; NS, not significant; WT, wild type. * P <0.05; ** P <0.01; *** P <0.0001. Full size image Nrxn2α KO mice display increased anxiety in tests of exploratory activity In addition to the core symptoms of the disorder, autism is characterized by a high prevalence of all diagnostic subtypes of anxiety. 20 Indeed, anxiety-related dysfunction can often be as significant as, or even greater than, the difficulties arising from the core symptoms. 21 Anxiety in Nrxn2α KO mice was assessed in four tests of exploratory activity: the open field, EPM, emergence and novel object tests. The open field is a measure of anxiety dependent upon the natural aversion of mice for a novel, brightly lit open arena. In this situation, mice spontaneously prefer the periphery to activity in the central parts of the open field. This wall-hugging tendency, known as thigmotaxis, is bidirectionally sensitive to anxiogenic (increased) and anxiolytic (decreased) drugs, and is used as a measure of anxiety in mice. 22 Over 30 min of free exploration in the open field, the total distance traveled was not significantly different between Nrxn2α KO and WT mice ( Figure 2a ). However, Nrxn2α KO mice spent significantly more time in the peripheral zone near the walls ( Figure 2b ), significantly less time in an intermediate zone ( Figure 2c ) and showed a trend approaching significance ( P =0.085) for less time in the center of the arena ( Figure 2d ), but there were no genotypic differences in the total number of zone entries ( Supplementary Figures S2a and c ). Nrxn2α KO mice also spent more time rearing than WT mice, but there was no genotypic difference in the amount of time spent self-grooming ( Supplementary Figures S2d and e ). Figure 2 Activity of Nrxn2α KO mice in the open field. 
( a ) Nrxn2α KO mice ( n =16) display a marginal but non-significant increase in locomotion compared with WT mice ( n =33) over 30 min of free exploration (two-way repeated measures analysis of variance, main effect of time block (F (5,235) =18.12, P <0.0001), no effect of genotype (F (1,47) =2.44, P >0.05) or interactions (F (5,235) <1, P >0.05)). During the trial, the arena floor was divided into three zones (outer, intermediate, center) and the mice were tracked automatically. Nrxn2α KO mice spent significantly more time within the outer zone ( b ; t (47) =2.54, P =0.015; thigmotaxis) and significantly less time in the intermediate zone ( c ; t (47) =2.59, P =0.013). There was also a trend for Nrxn2α KO mice to spend less time in the center zone ( d ; t (47) =1.76, P =0.085). KO, knockout; Nrxn2α, α-neurexin II; WT, wild type. * P <0.05. Full size image The EPM test exploits the conflict between the tendency of mice to investigate a novel environment, and to avoid brightly lit open areas. In this test, Nrxn2α KO mice spent significantly less time in the open arms ( Figure 3a ) and more time in the closed arms ( Figure 3b ) than WT littermates. Nrxn2α KO mice also made significantly fewer exploratory head dips from the center and open arms ( Figure 3c ) and spent significantly less time on the central square ( Supplementary Figure S3a ). Although Nrxn2α KO mice made significantly fewer total entries, and traveled significantly less overall than WT mice ( Supplementary Figures S3b and e ), it is unlikely that hypoactivity alone can explain their EPM behavior, as their ambulation in the other tests of exploratory activity was unaltered. Figure 3 Nrxn2α KO mice show an anxiety-like phenotype. In the elevated plus maze, KO mice ( n =16) spent significantly less time in the open arms ( a ; t (47) =2.62, P =0.012) and significantly more time in the closed arms ( b ; t (47) =6.84, P <0.0001) compared with WT ( n =33). 
KO mice also make significantly fewer head dips ( c ; t (47) =4.68, P <0.0001). In the emergence test, the latency to emerge from an enclosed shelter into an open arena was significantly longer in Nrxn2α KO mice ( n =16) compared with WT ( n =33) ( d ; t (47) =4.16, P <0.0001) and, overall, they spent significantly less time out of the enclosed shelter over the 15 min trial ( e ; two-way repeated measures analysis of variance, main effect of time block (F (2,94) =14.34, P <0.0001) and genotype (F (1,47) =12.30, P =0.001), no significant interactions (F (2,94) =2.01, P >0.05)). In a familiar environment, Nrxn2α KO mice also spent significantly less time engaging with novel objects ( f ; t (47) =2.86, P =0.006). KO, knockout; Nrxn2α, α-neurexin II; WT, wild type. * P <0.05; ** P <0.01; *** P <0.0001. Full size image In the emergence test, mice were placed inside a small enclosure, and evaluated for the time taken to emerge from it into a larger, brightly lit open arena. Emergence latencies reflect anxiety levels, being shorter in rodents injected with diazepam. 23 Nrxn2α KO mice took a substantially (3.7 times) longer time than WT mice to emerge from the enclosure ( Figure 3d ). Over the 15 min trial, Nrxn2α KO mice also spent significantly more time in the enclosure ( Figure 3e ), and made significantly fewer entries into the open arena ( Supplementary Figure S3f ) than WT mice. State anxiety is a transient emotional response related to exposure to a threatening stimulus, whereas trait anxiety is an enduring feature determining propensity for anxiety. 24 The EPM and open field tests have been described as measures of state anxiety, whereas the novel object test in a familiar environment is proposed to assess trait anxiety. 24 , 25 In the novel object test, Nrxn2α KO mice spent significantly less time than WT littermates exploring a novel object ( Figure 3f ). 
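The genotype × condition interaction statistics reported throughout these results come from 2 (genotype, between-subjects) × 2 (condition, within-subjects) mixed-design ANOVAs. For this design, the interaction test is mathematically equivalent to an independent-samples t-test on each animal's within-subject difference score (with F = t²). A minimal sketch with synthetic exploration times (not the study's data; `scipy` assumed):

```python
import numpy as np
from scipy import stats

def interaction_via_difference_scores(wt_pairs, ko_pairs):
    """Test the genotype x condition interaction of a 2 (between) x 2 (within)
    mixed ANOVA via the equivalent t-test on within-subject difference scores.

    Each *_pairs argument is an (n, 2) array: time with the social stimulus and
    time with the empty cage for each mouse. Returns (t, p); the ANOVA F = t**2.
    """
    wt_diff = wt_pairs[:, 0] - wt_pairs[:, 1]  # discrimination score per WT mouse
    ko_diff = ko_pairs[:, 0] - ko_pairs[:, 1]  # discrimination score per KO mouse
    return stats.ttest_ind(wt_diff, ko_diff)

# Illustrative synthetic exploration times in seconds -- not the study's data.
rng = np.random.default_rng(0)
wt = np.column_stack([rng.normal(120, 15, 31), rng.normal(60, 15, 31)])  # WT prefer stranger
ko = np.column_stack([rng.normal(85, 15, 16), rng.normal(80, 15, 16)])   # KO show no preference
t, p = interaction_via_difference_scores(wt, ko)
print(f"interaction: F(1,{31 + 16 - 2}) = {t**2:.2f}, p = {p:.4f}")
```

The difference-score formulation makes explicit what a significant interaction means here: the genotypes differ in *preference* (stranger minus empty cage), not necessarily in raw exploration time.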
However, locomotor activity was similar between genotypes during both the habituation and test phases ( Supplementary Figures S3g and h ). To assess depression-related behaviors in Nrxn2α KO mice, we used the Porsolt forced-swim test and the tail suspension test, both of which involve measurement of escape attempts and behavioral despair. In each of these tests, Nrxn2α KO and WT mice were statistically indistinguishable ( Supplementary Figures S4a and b ). Nrxn2α KO mice exhibit normal PPI and passive avoidance learning PPI is a robust operational measure of sensorimotor gating, a process important for filtering extraneous sensory information from the external environment. In the few studies conducted to date, autism patients have not shown consistent deficits in PPI, 26 , 27 with only one study reporting decreased PPI in autistic subjects under specific testing conditions. 28 We found that Nrxn2α KO and WT mice showed similar startle responses to varying intensities of sound, and similar magnitudes of PPI ( Supplementary Figures S4c and d ), thereby suggesting normal sensorimotor gating in Nrxn2α KO mice. Up to 40% of individuals with autism have IQ scores low enough (<35) to be classified within the range of severe-to-profound intellectual disability. 29 Conversely, a man with speech problems, autistic traits and deletion of the whole NRXN2 gene was reported to have an IQ of 113 in a nonverbal intelligence test, suggesting that his mental impairment was primarily restricted to speech and language. 13 Therefore, we examined long-term (24-h) memory in Nrxn2α KO mice using step-through passive avoidance, a fear-motivated test that requires the subject to refrain from entering a specific environment (a dark chamber) in which an aversive stimulus (a mild electric shock) has previously been experienced. 
We found that Nrxn2α KO and WT mice had similar retention latencies 24 h after the electric shock was given ( Supplementary Figure S4e ), indicating normal cognitive performance in this hippocampus-dependent test. 30 Nrxn2α KO mice show a decrease in hippocampal Munc18-1 Mice with deletion of other genes implicated in autism have shown differences in the level of synaptic proteins, 31 so we used real-time RT-PCR analysis to measure the mRNA levels of 13 genes encoding synaptic proteins to ascertain whether their expression was altered by Nrxn2α deficiency. These genes were chosen on the basis of either known direct interactions with neurexin at the presynapse (for example, Stxbp1 ) or indirectly via neuroligins at the postsynapse (for example, Dlg4 , Pvalb ). We examined mRNA levels in two brain regions: the frontal cortex and hippocampus, both of which have links to autism. 32 , 33 Dlg4, encoding PSD-95, was the only transcript tested that had altered mRNA levels in both the frontal cortex and hippocampus, with expression significantly decreased in Nrxn2α KO mice. In the hippocampus, the mRNA levels of genes that encode proteins involved in both inhibitory ( Pvalb ; parvalbumin; Figure 4a ) and excitatory ( Grin2a ; NMDA receptor subunit 2a; Figure 4b ) transmission were significantly decreased in Nrxn2α KO mice. The mRNA level of Stxbp1 , encoding Munc18-1, was also significantly reduced in the Nrxn2α KO hippocampus ( Figure 4c ). Munc18-1 has been shown to interact presynaptically with neurexins to facilitate presynaptic vesicular release. 34 Figure 4 Altered mRNA transcript and protein expression levels in the frontal cortex and hippocampus of Nrxn2α KO brain. The 13 genes examined were divided into groups of inhibitory-related ( a ), excitatory-related ( b ) and synaptic scaffold-related ( c ). 
Within the frontal cortex, Nrxn2α KO mice showed significant reductions in the mRNA level of Dlg4 , whereas in the hippocampus, Pvalb , Grin2a , Stxbp1 and Dlg4 were also reduced. Gad1 and Slc17a7 (VGlut1) both had reductions approaching significance (note in a and b , # P =0.059 and # P =0.086, respectively). ( d ) Summary of the significantly altered genes (unpaired t -test). ( e and f ) Within the hippocampus, western blotting confirmed a decrease in the protein expression of Munc18-1 ( Stxbp1 ) ( t (22) =2.31, P =0.031), although there was no significant difference in the cortex ( P >0.05). KO, knockout; Nrxn2α, α-neurexin II; RQ, relative quantification; WT, wild type. * P <0.05. Full size image To determine whether these transcriptional changes led to detectable changes in protein abundance, we tested homogenates of frontal cortex and hippocampus by western blotting. Within the hippocampus, there was a significant reduction in the abundance of Munc18-1 ( Figures 4e and f ), but there was no significant difference in the frontal cortex. None of the other genes with significantly different mRNA levels in Nrxn2α KO mice showed detectable differences in protein abundance ( Supplementary Figure S5 ). Discussion Although there has been increasing focus upon the etiology of autism, its genetic basis remains poorly defined. Despite this, an increasing body of evidence has implicated the neurexin gene family in autism. Although deletions of NRXN1 are associated with autism, 35 recent studies have also discovered deletions affecting NRXN2 in autism patients, 12 , 13 although a causative link between NRXN2 and autism has not been established. In the present study, we found that deletion of the Nrxn2α gene in mice can replicate some of the core symptoms of autism. We found that Nrxn2α KO mice show reduced sociability, while also exhibiting an anxiety phenotype in the open field, EPM and emergence tests. 
Following quantification of mRNA extracts and protein expression, we found that deletion of Nrxn2α is associated with decreased expression of the presynaptic protein Munc18-1, which may potentially contribute to the altered behavioral state of Nrxn2α KO mice. The diagnosis of autism in humans is made upon assessment of aberrant behavioral phenotypes, typically social, communication, repetitive and stereotyped behaviors. We found that Nrxn2α KO mice fail to show sociability with novel conspecifics or a preference for exploring social odors. This behavioral phenotype is thus consistent with one of the core symptoms of autism. A similar phenotype is shown by Shank3 KO mice, which also genetically model a mutation found in autism, with deficits in sociability and social recognition. 35 However, given the lack of initial sociability in Nrxn2α KO mice, it is difficult to determine to what extent social memory was actually affected. In contrast, mice null for another gene in the neurexin family, α-neurexin I (Nrxn1α), have shown heightened sociability, with significantly more time spent exploring the stranger mouse in the three-chamber social approach test and more aggression towards juvenile conspecifics, 36 although another study observed unaltered sociability. 37 Nrxn2α KO mice may thus model social deficits associated with autism better than Nrxn1α KO mice. Although it is conceivable that the generalized anxiety phenotype of Nrxn2α KO mice could have influenced their performance in the three-chamber social approach test, the similar total locomotion of Nrxn2α KO and WT mice across all phases of the test indicates that there was not an effect of hypoactivity ( Supplementary Figure S1 ). Nrxn2α KO mice did not model other core symptoms of autism in the behavioral tests that we carried out. Within the open field, they did not exhibit stereotyped repetitive behaviors, as have been observed in Shank3 KO mice, 38 , 39 16p11.2 deletion mice 40 and Nrxn1α KO mice. 
37 Nrxn2α KO mice may simply not exhibit this phenotype, or the anxiogenic effect of the open field may have reduced the chance of observing repetitive behaviors. Altered communication is also a hallmark of autism. Changes in ultrasonic vocalization have been found in Shank3 KO mice, 39 but we did not test for this phenotype. Autism is frequently comorbid with reduced intellectual ability. 41 To assess long-term memory, we used the fear-motivated passive avoidance test, but found no impairment in Nrxn2α KO mice. It is possible that Nrxn2α deletions do not directly impair intellectual capacity, as the male patient with a whole gene deletion of NRXN2 had an IQ of 113, although he did have deficiencies in speech and language. 13 Similarly, Nrxn1α KO mice did not show cognitive impairments in the Morris water maze. 37 However, as Nrxn1α deletions have been found in mentally retarded subjects without autism, 42 further work is warranted to understand the role of the neurexins in cognitive processes. We observed an anxiety phenotype in Nrxn2α KO mice across several different paradigms. In autism patients, it has been noted that anxiety can exacerbate other symptoms, and treatment of anxiety by cognitive behavioral therapy can improve the social skills of patients. 20 , 21 Nrxn2α KO mice provide a tool to further explore the links between autism and anxiety. Nrxn1α KO mice exhibit milder anxiety-like behavior, making fewer transitions in a light/dark box 36 while showing no abnormalities in the EPM. 37 Other mouse models of autism have also shown anxiety phenotypes, including Shank3 KO mice that spent less time in the open arms of the EPM. 38 At the gene transcript level, we used quantitative RT-PCR to discover that various genes, normally associated with excitatory and inhibitory transmission, were downregulated in the Nrxn2α KO brain. However, only Munc18-1 showed detectable alterations at the protein level in western blotting assays. 
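The relative transcript levels (RQ) in Figure 4 come from real-time RT-PCR. The excerpt does not state the exact quantification method, so the sketch below assumes the standard Livak 2^−ΔΔCt approach, which converts threshold-cycle (Ct) values into a fold change relative to a reference gene and a control (e.g., WT) group:

```python
def relative_quantification(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Livak 2^-ddCt relative quantification (standard method; assumed here,
    not stated in the excerpt). Inputs are mean threshold cycles from qPCR.
    """
    d_ct_sample = ct_target - ct_reference            # normalize target to reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control                # compare with the control group
    return 2.0 ** (-dd_ct)                            # fold change; 1.0 = no change

# Hypothetical Ct values: a KO sample whose target amplifies one cycle later
# than in WT (roughly half the transcript) with an unchanged reference gene.
rq = relative_quantification(ct_target=25.0, ct_reference=18.0,
                             ct_target_ctrl=24.0, ct_reference_ctrl=18.0)
print(rq)  # 0.5
```

Because amplification is exponential, a one-cycle delay corresponds to an approximately two-fold reduction in starting transcript, which is why RQ values near 0.5 indicate halved expression.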
It is unclear why the decreased transcript levels of the other genes were not reflected in the abundance of their encoded proteins. Western blotting assays are, at best, only semi-quantitative, limited in their ability to detect small differences in protein levels. In accordance with the primary antibody suppliers’ recommendations, total protein samples of 30 μg were loaded into each lane. Although loading less total protein or just the synaptosomal fraction might, conceivably, increase the sensitivity of western blotting, 40 μg of whole-brain homogenates were previously used to reveal the differences in synaptic proteins in Nlgn1 KO mice deficient in neuroligin-1, 31 a transmembrane protein that complexes with β-neurexin to form a functional excitatory synapse. 43 Moreover, changes in gene expression level are frequently not reflected at the protein level; 44 for example, a study using both RNA sequencing and quantitative mass spectrometry to calculate absolute mRNA and protein copy numbers in the same mouse fibroblasts found that mRNA levels explained only ~40% of protein level variation. 45 A 21% decrease in the abundance of Munc18-1 in the brain has previously been found in Nlgn1 KO mice that display impaired spatial memory and increased repetitive behavior. 31 Interestingly, a genome-wide copy number variation analysis has implicated NLGN1 as a candidate gene in autism susceptibility. 46 A significant decrease in expression of Munc18-1 could conceptually be important for both excitatory and inhibitory transmission. Munc18-1 is found at the presynapse and holds a critical role in facilitating presynaptic vesicular release through its interactions with syntaxin-1. 34 Furthermore, Munc18-1 can link to neurexins via its cytoplasmic tail through a complex that involves Mint1. 
4 In Munc18-1 KO mice, there is a complete loss of neurotransmitter secretion from synaptic vesicles 47 by a reduction in docked vesicles at the active zone, 48 whereas in heterozygotes the readily releasable pool is more easily depressed at glutamatergic and GABAergic synapses. 49 Munc18-1 heterozygotes also show enhanced anxiety and impaired emotional learning. 50 As altered Munc18-1 expression is likely just one of many synaptic modifications caused by the loss of Nrxn2α during development, further work is warranted to understand the molecular pathways that underpin the behaviors observed in Nrxn2α KO mice. The robust deficit in social interaction and the heightened anxiety of Nrxn2α KO mice are consistent with a causal role for the loss of Nrxn2α in the genesis of autism-related behaviors, as suggested by the previous finding of deletions affecting exons of the NRXN2 gene in two unrelated individuals with autism. 12 , 13 Nrxn2α KO mice may thus provide a useful experimental system for the exploration of disease mechanisms and novel treatments in autism. | Researchers at the University of Leeds have shed light on a gene mutation linked to autistic traits. The team already knew that some people with autism were deficient in a gene called neurexin-II. To investigate whether the gene was associated with autism symptoms, the Leeds team studied mice with the same defect. They found behavioural features that were similar to autism symptoms, including a lack of sociability or interest in other mice. Dr Steven Clapcote, Lecturer in Pharmacology in the University's Faculty of Biological Sciences, who led the study published in the journal Translational Psychiatry today, said: "In other respects, these mice were functioning normally. The gene deficiency mapped closely with certain autism symptoms." Dr Clapcote added: "This is exciting because we now have an animal model to investigate new treatments for autism." 
The researchers also looked at how the absence of neurexin-II was affecting the brain. Co-author Dr James Dachtler, Wellcome Trust Junior Investigator Development Fellow in the Faculty of Biological Sciences at Leeds, said: "We found that the affected mice had lower levels of a protein called Munc18-1 in the brain. Munc18-1 usually helps to release neurotransmitter chemicals across synaptic connections in the brain, so neurotransmitter release could be impaired in the affected mice and possibly in some cases of autism." Research by Professor Thomas Südhof, a Nobel prize-winning biochemist at Stanford University, previously established a link between autism symptoms and neuroligin-1, another gene associated with synapse signalling. The Leeds-led study is the first to find a connection with neurexin-II. Dr Clapcote said: "Not all people with autism will have the neurexin-II defect, just as not all will have the neuroligin defect, but we are starting to build up a picture of the important role of genes involved in these synapse communications in better understanding autism." | 10.1038/TP.2014.123 |
Medicine | Small RNA identified that offers clues for quieting the 'voices' of schizophrenia | Thalamic miR-338-3p mediates auditory thalamocortical disruption and its late onset in 22q11.2 microdeletion models, Nature Medicine, nature.com/articles/doi:10.1038/nm.4240 Journal information: Nature Medicine | http://nature.com/articles/doi:10.1038/nm.4240 | https://medicalxpress.com/news/2016-11-small-rna-clues-quieting-voices.html | Abstract Although 22q11.2 deletion syndrome (22q11DS) is associated with early-life behavioral abnormalities, affected individuals are also at high risk for the development of schizophrenia symptoms, including psychosis, later in life. Auditory thalamocortical (TC) projections recently emerged as a neural circuit that is specifically disrupted in mouse models of 22q11DS (hereafter referred to as 22q11DS mice), in which haploinsufficiency of the microRNA (miRNA)-processing-factor-encoding gene Dgcr8 results in the elevation of the dopamine receptor Drd2 in the auditory thalamus, an abnormal sensitivity of thalamocortical projections to antipsychotics, and an abnormal acoustic-startle response. Here we show that these auditory TC phenotypes have a delayed onset in 22q11DS mice and are associated with an age-dependent reduction of miR-338-3p, a miRNA that targets Drd2 and is enriched in the thalamus of both humans and mice. Replenishing depleted miR-338-3p in mature 22q11DS mice rescued the TC abnormalities, and deletion of Mir338 (which encodes miR-338-3p) or reduction of miR-338-3p expression mimicked the TC and behavioral deficits and eliminated the age dependence of these deficits. Therefore, miR-338-3p depletion is necessary and sufficient to disrupt auditory TC signaling in 22q11DS mice, and it may mediate the pathogenic mechanism of 22q11DS-related psychosis and control its late onset. 
Main Thalamocortical projections to the auditory cortex (ACx), a brain region implicated in auditory hallucinations 1 , 2 , 3 , 4 , have emerged as a circuit that is specifically disrupted 5 in mouse models of 22q11DS 6 . This disorder, the most common microdeletion syndrome in humans 7 , 8 , is caused by a hemizygous microdeletion (1.5–3 Mb) on the long arm of chromosome 22 (ref. 9 ). 22q11DS is considered to be a leading genetic cause of schizophrenia 10 , 11 , 12 . Schizophrenia develops in 23–43% of individuals with 22q11DS 13 , 14 , 15 , 16 , 17 , 18 , most of whom experience psychosis 19 , 20 . Furthermore, 30–50% of individuals who do not have schizophrenia but have 22q11DS demonstrate subthreshold symptoms of psychosis 21 . Nonpsychotic behavioral abnormalities are present from early life in patients with 22q11DS 22 , 23 , but psychotic symptoms and schizophrenia are delayed; the median age of psychosis onset is 21 years 18 , 24 , 25 . In patients with schizophrenia, auditory hallucinations and other psychotic symptoms are similarly delayed until late adolescence or early adulthood 26 , 27 , present in 60–90% of cases 28 , and often alleviated by treatment with antipsychotics that inhibit D2 dopamine receptors (DRD2s) 29 , 30 . Given the germline occurrence of deleted genes in 22q11DS, it is unclear why the onset of positive symptoms is delayed. Recently, Dgcr8 emerged as a culprit gene responsible for several neuronal phenotypes in mouse models of 22q11DS 31 , 32 , including the disruption of synaptic transmission at TC projections to the ACx 5 . Dgcr8 is part of the microprocessor complex that mediates the biogenesis of miRNAs, small RNAs that negatively regulate the stability of target mRNAs, as well as protein translation 33 . 
Dgcr8 haploinsufficiency in individuals with 22q11DS leads to depletion of miRNAs and the resultant upregulation of respective targets, which in turn disrupts synaptic transmission, synaptic plasticity, and proper functioning of neural circuits 34 . In adult mouse models of 22q11DS, Dgcr8 haploinsufficiency is sufficient to upregulate Drd2 mRNA and Drd2 protein in the auditory thalamus, causing auditory abnormalities that include decreased glutamatergic synaptic transmission at TC projections to the ACx and deficient prepulse inhibition (PPI) of the acoustic-startle response 5 . Abnormally high levels of Drd2 in the thalamus of 22q11DS mice increase TC projection sensitivity to Drd2 antagonists, including antipsychotics. Consequently, auditory synaptic and behavioral abnormalities of 22q11DS mice are rescued by treatment with antipsychotics 5 . Here we tested whether TC disruption follows the same age-dependent trajectory as psychosis in patients with 22q11DS or schizophrenia and determined the molecular underpinnings of TC disruption in 22q11DS mice. Results Delayed disruption of TC synaptic transmission in mouse models of 22q11DS We compared basal synaptic transmission in young (2-month-old) and mature (4-month-old) Df(16)1 /+ mice, a murine model of 22q11DS 6 ( Fig. 1a ), and their wild-type (WT) littermates. Mice between the ages of 3 and 6 months correspond to mature human adults between the ages of 20 and 30 years 35 . Using whole-cell voltage-clamp recordings, we measured TC excitatory postsynaptic currents (EPSCs) from thalamorecipient ACx cortical layer (L) 3/4 pyramidal neurons 36 while stimulating TC projections in acute brain slices containing the auditory thalamus (i.e., the ventral part of the medial geniculate nucleus (MGv)) and the ACx ( Fig. 1b ). 
The input–output relationship between stimulation intensity and TC EPSC, a measure of basal synaptic transmission at TC projections, was deficient in older but not younger mutant mice as compared to that in WT controls ( Fig. 1c,d ). Consistent with the notion that the increased amount of Drd2 in thalamic relay neurons reduces glutamatergic synaptic transmission at auditory TC projections in Df(16)1 /+ mice 5 , the Drd2 mRNA level was elevated in the MGv of older but not younger Df(16)1 /+ mice ( Fig. 1e ). Figure 1: Adult onset of sensitivity to antipsychotics and of synaptic transmission disruption in auditory TC projections in mouse models of 22q11DS. ( a ) Map of mouse chromosome 16, showing 22q11DS-associated genes that are deleted in Df(16)1 /+ mice. ( b ) Illustration of voltage-clamp recordings of thalamorecipient L3/4 pyramidal neurons in TC slices. TC projections are shown in red. ACx, auditory cortex; TC, thalamocortical; MGv, ventral part of the medial geniculate nuclei. ( c , d ) Input–output relations between stimulation intensity and EPSCs at TC projections in the ACx of 2-month-old ( F (1,37) = 0.967; P = 0.338) ( c ) or 4-month-old ( F (1,46) = 11.56; P < 0.001) ( d ) WT (20 and 23 neurons, respectively) and Df(16)1 /+ mice (19 and 25 neurons, respectively). ( e ) Drd2 transcript levels in the MGv of 2-month-old ( n = 6 mice per genotype; measured in triplicates; U = 126; P = 0.261) and 4-month-old ( n = 5 mice per genotype, measured in triplicates, t (28) = −5.78; * P < 0.001) WT and Df(16)1 /+ mice. ( f , g ) Representative effects of haloperidol on TC EPSCs in 2-month-old (of 6 experiments) ( f ) and 4-month-old (of 8 experiments) ( g ) WT and Df(16)1/+ littermates. Haloperidol-induced percentage change (ΔH) in the slope of TC EPSCs relative to that at baseline (before haloperidol application; dashed line). ( h ) The ΔH as a function of mouse age in WT and Df(16)1/+ littermates.

The number of cells recorded at each age is shown in parentheses above the plots. * P < 0.01 by two-tailed t -test or Mann–Whitney rank-sum U test depending on whether data were normally distributed. ( i , j ) Representative effects of haloperidol on TC EPSCs in 2-month-old (of 6 experiments) ( i ) and 4-month-old (of 12 experiments) ( j ) WT and Dgcr8 +/− littermates. In f , g , i , j , scale bars, 50 pA, 10 ms. Insets show representative EPSCs before (1) and after (2) haloperidol application. ( k ) The ΔH as a function of mouse age in WT and Dgcr8 +/− littermates. The number of cells recorded at each age is shown in parentheses above the plots. * P < 0.01 by two-tailed t -test or Mann–Whitney rank-sum U test. ( l ) Average Drd2 mRNA levels normalized to those of glyceraldehyde-3-phosphate dehydrogenase ( Gapdh ) in the auditory thalamus of 2-month-old ( n = 7 mice per genotype) and 4-month-old (WT, n = 4 mice; Dgcr8 +/− , n = 5 mice) WT and Dgcr8 +/− littermates (measured in triplicates). t (24) = −2.44; * P = 0.022. ( m , n ) Mean PPI of maximal acoustic-startle response in 2-month-old (WT, n = 23 mice; Dgcr8 +/− , n = 22 mice) ( m ) and 4-month-old (WT, n = 36 mice; Dgcr8 +/− , n = 41 mice) ( n ) WT and Dgcr8 +/− littermates. * P < 0.05 by two-tailed t -test or Mann–Whitney rank-sum U test. SPL (sound pressure level). In c , d , h , k , m , n , data are represented as the mean ± s.e.m. In e , l – n , horizontal lines represent the mean values. Throughout, t values indicate the use of t tests, U values indicate the use of the Mann–Whitney U rank-sum test, and F values indicate the use of one-way analysis of variance (ANOVA). Full size image Elevated Drd2 levels in older Df(16)1 /+ mice mediate the abnormal sensitivity of mutant TC projections to antipsychotics; thus, we tested the time course of this sensitivity at different ages of mice (1.5–7 months). 
In brief, we stimulated the thalamic radiation in brain slices of WT mice to evoke TC EPSCs with a rise slope of ∼ 100 pA/ms. We compared the effect of the antipsychotic agent haloperidol (1 μM) on TC EPSC 30 min after its bath application to the preapplication baseline TC EPSC (ΔH, a measure of haloperidol sensitivity). We determined that ΔH was significantly higher in Df(16)1 /+ mice than in WT littermates, but only beginning at 3 months of age ( Fig. 1f–h ). In older mice, a similar intensity of thalamic stimulation (Online Methods ) evoked substantially smaller TC EPSCs in Df(16)1 /+ mice than in WT controls, and treatment with haloperidol rescued that deficit ( Fig. 1g ). By contrast, TC projections in younger mutant mice were not sensitive to haloperidol treatment ( Fig. 1f,h ). Consistent with the notion that Dgcr8 haploinsufficiency underlies the TC deficiency in 22q11DS 5 , TC projections in Dgcr8 +/− mice older than 3 months were sensitive to haloperidol, whereas those in WT mice were not. TC projections in younger mice were not sensitive to haloperidol ( Fig. 1i–k ). Drd2 mRNA levels were also elevated in the thalamus of only the older Dgcr8 +/− mice ( Fig. 1l ). Furthermore, PPI, a measure of sensorimotor gating that is typically reduced in patients with schizophrenia 37 , 38 , was deficient in older but not younger Dgcr8 +/− mice ( Fig. 1m,n ). miR-338-3p mediates the disruption of TC synaptic transmission in 22q11DS Because Dgcr8 mediates miRNA processing 33 , we sought to identify the miRNA(s) mediating the Dgcr8 – Drd2 -dependent mechanism of TC deficiency. To this end, we performed miRNA microarray analysis of the auditory thalamus of 2- and 4-month-old mice ( Supplementary Table 1 ). 
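The microarray screen above is a per-miRNA differential-expression comparison, visualized as volcano plots in Figure 2 with a P < 0.01 significance criterion. A minimal sketch of that computation on synthetic expression matrices (numpy/scipy assumed; Welch's t-test used here as one reasonable choice, not necessarily the study's exact statistic):

```python
import numpy as np
from scipy import stats

def volcano_stats(wt, mut, alpha=0.01):
    """Per-miRNA log2 fold change and Welch t-test p value between genotypes.

    wt, mut: (n_mice, n_mirnas) expression matrices. A miRNA is flagged when
    p < alpha, mirroring the P < 0.01 criterion used for the volcano plots.
    """
    log2fc = np.log2(mut.mean(axis=0) / wt.mean(axis=0))
    _, p = stats.ttest_ind(wt, mut, axis=0, equal_var=False)
    return log2fc, p, p < alpha

# Synthetic data: 3 miRNAs, the first roughly halved in mutants.
rng = np.random.default_rng(1)
wt = rng.normal([100, 50, 20], 5, size=(8, 3))
mut = rng.normal([50, 50, 20], 5, size=(8, 3))
log2fc, p, hits = volcano_stats(wt, mut)
print(np.round(log2fc, 2), hits)
```

In a volcano plot, each miRNA is then placed at (log2 fold change, −log10 p), so depleted, significant miRNAs appear in the upper-left quadrant.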
Among the miRNAs that potentially target the Drd2 transcript (on the basis of the miRWalk and TargetScan miRNA-target-prediction algorithms), only five miRNAs (miR-337-3p, miR-337-5p, miR-335-5p, miR-335-3p, and miR-338-3p) were depleted in the auditory thalamus of Df(16)1 /+ or Dgcr8 +/− mice ( Fig. 2a–d ). Because miR-185, which is not a Drd2 -targeting miRNA, is encoded within the Df(16)1 microdeletion, its depletion in Df(16)1/+ mice served as a positive control ( Fig. 2a,b ). The qRT–PCR analysis verified that all five Drd2 -targeting miRNAs were depleted in Dgcr8 +/− mice ( Supplementary Fig. 1a ). The expression of miRNAs decreased with age, regardless of genotype. The miRNA levels in older mice were lower than those in young WT or Dgcr8 +/− mice. However, because Dgcr8 haploinsufficiency depleted these miRNAs at both ages, the age-dependent decline in miRNA expression was exacerbated in mutants and reached minimal values at 4 months in Dgcr8 +/− mice ( Supplementary Fig. 1a ). Of the five miRNAs predicted to target Drd2 , overexpression of miR-337-3p, miR-337-5p, miR-335-3p, and miR-338-3p, but not miR-335-5p, decreased DRD2 mRNA in vitro in human SH-SY5Y cells ( Supplementary Fig. 1b ). Figure 2: Identification of Drd2 -targeting miR-338-3p in the auditory thalamus. ( a – d ) Volcano plots of miRNA microarray data from the auditory thalamus of 2-month-old ( a , c ) and 4-month-old ( b , d ) WT (2 months, n = 6 mice; 4 months, n = 8 mice) and Df(16)1 /+ (2 months, n = 4 mice; 4 months, n = 8 mice) ( a , b ) or WT (2 months, n = 7 mice; 4 months, n = 7 mice) and Dgcr8 +/− (2 months, n = 9 mice; 4 months, n = 7 mice) ( c , d ) male littermates. The difference between miRNA levels in WT and mutants was considered significant if P < 0.01. Symbol size represents the miRNA expression level in the microarray. Of note, miR-338-3p had the highest expression among all of the predicted Drd2 -targeting miRNAs.
( e ) Diagram of the mouse Drd2 3′ UTR ( XM_006509996.2 ) with seed sites for the five miRNAs indicated. ( f ) Experimental design of a recombinant AAV encoding a chimeric construct overexpressing a miRNA of interest (top) injected into the mouse MGv (bottom). ( g ) Representative image (of n = 3 images) of GFP expression specifically in the auditory TC projections after in vivo injection of recombinant AAV. Scale bar, 500 μm. ( h ) Haloperidol sensitivity of TC projections in 4-month-old WT and Df(16)1 /+ mice after injection with AAVs encoding different miRNAs or GFP. The number of cells recorded is shown in parentheses. * P < 0.01 by two-tailed t -test or Mann–Whitney rank-sum U test depending on whether data pass or fail the normality test. ( i ) Relative average levels of miRNAs in the thalamus, hippocampus, and cortex of WT mice ( n = 5 mice for each brain region, run in triplicates). Data were normalized to the average expression of three housekeeping genes: Rnu6 (encoding U6 snRNA), Snord68 , and Snord70 . Only miR-338-3p shows enrichment in the thalamus. H (2) = 18.85, * P < 0.001. ( j ) Mean relative miR-338-3p levels (normalized to those of Rnu6 ) in the post-mortem MGv and ACx tissues from healthy controls and patients with schizophrenia (SCZ) (MGv: n = 7 controls; n = 7 patients; t (40) = 4.56; * P < 0.001; ACx: n = 8 controls; n = 8 patients; U = 278; P = 0.845; measured in triplicates). Throughout, t values indicate the use of t tests, U values indicate the use of the Mann–Whitney U rank-sum test, F values indicate the use of one-way analysis of variance (ANOVA), and H values indicate the use of the Kruskal–Wallis one-way ANOVA on ranks H test followed by a multiple comparison procedure (Dunn's method).

To identify which of the miRNA(s) that target the Drd2 3′ untranslated region (UTR) ( Fig.
2e ) regulate the Dgcr8 – Drd2 -dependent mechanism of TC deficiency in vivo , we performed a screen based on the abnormal sensitivity of TC projections to haloperidol. We overexpressed mature miRNAs in excitatory thalamic neurons by injecting adeno-associated viruses (AAVs) encoding GFP and miR-337-5p, miR-338-3p, miR-335-5p, miR-337-3p, or miR-335-3p under the control of the promoter of the excitatory-neuron-specific gene Camk2a into the MGv of Df(16)1 /+ and WT mice ( Fig. 2f,g ). Overexpression of individual miRNAs in 4-month-old Df(16)1 /+ mice not only replenished the depleted miRNAs but also raised their levels above those in WT mice ( Supplementary Fig. 2a–e ). However, of the five miRNAs, only miR-338-3p overexpression rescued the abnormal haloperidol sensitivity observed in Df(16)1 /+ mice ( Fig. 2h and Supplementary Fig. 2f–k ). Overexpression of miR-338-3p in the MGv of Df(16)1 /+ mice decreased Drd2 mRNA levels in the MGv by 47.6 ± 10.2% as compared to those in the MGv treated with control virus ( n = 6 mice for AAV-GFP-miR-338-3p; n = 6 mice for AAV-GFP; t (10) = 3.27, P = 0.008 by two-tailed t -test), confirming that miR-338-3p regulates Drd2 levels. miR-338-3p, miR-335-3p, and miR-335-5p have conserved seed sites in the mouse and human Drd2 3′ UTR. Consistent with the notion that only abundant miRNAs effectively regulate their target transcript(s) 39 , miR-338-3p seemed to be more crucial for Drd2 regulation in the auditory thalamus than did miR-337-5p, miR-335-5p, miR-337-3p, or miR-335-3p. Indeed, miR-338-3p was enriched in the thalamus as compared to the other four miRNAs; miR-337-5p, miR-337-3p, miR-335-5p, and miR-335-3p levels were approximately 0–1% of those of miR-338-3p ( Fig. 2i ). Moreover, miR-338-3p was enriched in the thalamus as compared to other tested brain regions ( Fig. 2i ), suggesting that depletion of this Drd2 -regulating miRNA in subjects with 22q11DS mainly affects thalamic function.
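The figure legends above apply a two-tailed t-test or the Mann–Whitney rank-sum U test depending on whether the data pass a normality test. A minimal sketch of that decision rule with scipy is below; the choice of Shapiro–Wilk as the normality test is an assumption (the paper does not name its normality test), and the helper name is hypothetical:

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Two-tailed t-test when both samples pass a normality test,
    Mann-Whitney rank-sum U test otherwise (as in the figure legends).
    Shapiro-Wilk is assumed here as the normality test."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        result = stats.ttest_ind(a, b)
        return "t-test", result.statistic, result.pvalue
    result = stats.mannwhitneyu(a, b, alternative="two-sided")
    return "Mann-Whitney U", result.statistic, result.pvalue
```

`compare_groups(group1, group2)` returns the test chosen together with its statistic and P value, mirroring how the legends report either t or U values.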
Similarly, miR-338-3p was enriched in the MGv as compared to the ACx (Brodmann area 41) in post-mortem tissue samples from human subjects ( Fig. 2j and Supplementary Table 2 ). Moreover, miR-338-3p levels were significantly decreased in the thalamus but not the ACx of patients with schizophrenia as compared to those in age- and sex-matched controls ( Fig. 2j ). We previously showed that the DRD2 protein level is elevated in MGv samples 5 .

Replenishing miR-338-3p in the MGv rescues the TC deficits in 22q11DS mice

To rescue the disruption of TC synaptic transmission in Df(16)1 /+ mice, we replenished miR-338-3p in the thalamic relay neurons by injecting AAV-GFP-miR-338-3p into the MGv in vivo , using an approach similar to that used in previous experiments ( Fig. 2h ); AAV-GFP was used as a control ( Fig. 3a ). After 3 to 4 weeks, we found robust expression of GFP in the MGv neurons, and the GFP-labeled projections were clearly visible in the L3/4 thalamorecipient layer of the ACx ( Fig. 3b ). We recorded the input–output relations at TC projections in 4- to 5-month-old WT and Df(16)1 /+ mice that were injected with either AAV-GFP-miR-338-3p or AAV-GFP. As in previous experiments 5 ( Fig. 1d ), we observed a substantial deficit in TC synaptic transmission in Df(16)1 /+ mice as compared to WT mice that were injected with AAV-GFP. WT mice injected with AAV-GFP-miR-338-3p did not show a significant increase in synaptic transmission as compared to control mice. However, the TC synaptic transmission deficit was rescued in Df(16)1 /+ mice that were injected with AAV-GFP-miR-338-3p ( Fig. 3c ). Similarly, the presynaptic TC deficit observed in Df(16)1 /+ mice 5 was rescued by replenishing miR-338-3p. Injections of AAV-GFP-miR-338-3p but not of AAV-GFP into the MGv rescued a deficient paired-pulse ratio (PPR) of two consecutive TC EPSCs in Df(16)1 /+ mice without affecting TC PPR in WT mice at all of the measured interpulse intervals ( Fig. 3d ).
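The paired-pulse ratio reported above is, in its conventional form, the size of the second of two closely spaced EPSCs divided by the first; a larger ratio generally accompanies a lower initial release probability. The sketch below uses that textbook definition with hypothetical numbers, not the authors' exact analysis (their PPR measurement follows ref. 5):

```python
def paired_pulse_ratio(epsc1, epsc2):
    """Conventional PPR: amplitude (or slope) of the second EPSC divided by
    the first. Values > 1 indicate facilitation, values < 1 depression."""
    return epsc2 / epsc1

# Hypothetical amplitudes: 50 pA then 75 pA → facilitation.
print(paired_pulse_ratio(50.0, 75.0))  # → 1.5
```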
These results indicate that depletion of miR-338-3p is a necessary component for developing TC deficits in mouse models of 22q11DS.

Figure 3: Replenishment of miR-338-3p in the auditory thalamus rescues deficits in synaptic transmission and presynaptic neurotransmitter release at TC projections in 22q11DS mouse models.
( a ) In vivo infection of MGv relay neurons with AAV-GFP-miR-338-3p or AAV-GFP. ( b – d ) Representative images (of n values reported in c , d ) of GFP expression (green) in cell bodies in the MGv (scale bar, 100 μm) (left) and in projections to the thalamorecipient L3/4 layer of the ACx (scale bar, 20 μm) (right) (a patch pipette and part of an L3/4 pyramidal neuron filled with Alexa 594 are shown in red) ( b ) and input–output relationships between stimulation intensity and EPSCs ( n = 16 WT;GFP neurons; n = 10 WT;miR-338-3p neurons; n = 20 Df(16)1 /+;GFP neurons; n = 13 Df(16)1 /+;miR-338-3p neurons; F (3,24) = 268.7; * P < 0.001) ( c ) or PPR ( n = 17 WT;GFP neurons; n = 12 WT;miR-338-3p neurons; n = 19 Df(16)1 /+;GFP neurons; n = 14 Df(16)1 /+;miR-338-3p neurons; F (3,4) = 107.2; * P < 0.001) ( d ) at TC projections in the ACx of 4- to 5-month-old WT and Df(16)1 /+ mice that were injected with either AAV-GFP-miR-338-3p or AAV-GFP. Insets in d show representative EPSCs. Scale bars, 20 ms and 50 pA. In c , d , data are represented as the mean ± s.e.m.

miR-338-3p depletion in the MGv or Mir338 knockout recapitulates the auditory TC synaptic abnormalities of 22q11DS mice

To test whether miR-338-3p depletion is sufficient to trigger TC deficits, we used two strategies. First, we constructed a miR-338-3p sponge by using a previously described strategy 40 . Second, we generated Mir338 -knockout (KO) mice ( Fig. 4 ). The miR-338-3p sponge efficiency was verified in vitro by using the luciferase assay. The sponge with 12 seed sites substantially and specifically depleted miR-338-3p levels in vitro ( Supplementary Fig. 3 ).
On the basis of these data, we constructed AAVs expressing either the miR-338-3p sponge or a scrambled control vector under the control of the Camk2a promoter ( Fig. 4a ). An AAV expressing the miR-338-3p sponge injected into the MGv of WT mice was sufficient to increase levels of the Drd2 mRNA ( Fig. 4b ) and render the TC projections sensitive to haloperidol ( Fig. 4c ).

Figure 4: The depletion of miR-338-3p or knockout of Mir338 replicates the TC deficiency of Df(16)1 /+ mice.
( a ) Schematic of the AAV expressing a miR-338-3p sponge construct, with multiple binding sites to miR-338-3p in the GFP 3′ UTR under the control of the Camk2a promoter. Sequences for the miR-338-3p sponge and the scrambled control are shown below. Seed site sequences are indicated. ( b ) Relative Drd2 mRNA levels after infection of MGv excitatory neurons in WT mice with an AAV encoding a scrambled control ( n = 5 mice, run in triplicates) or a miR-338-3p sponge ( n = 6 mice, run in triplicates). U = 53, and * P = 0.005. ( c ) Normalized mean TC EPSCs before and after application of haloperidol in WT mice after infection of MGv neurons with AAVs encoding a scrambled control ( n = 6 neurons) or a miR-338-3p sponge ( n = 14 neurons). Insets show representative EPSCs. * P < 0.001 by two-tailed t -test. ( d ) Schematic of the Aatk locus in WT (top) and Mir338 -knockout (KO) mice (bottom). ( e ) Normalized levels of miR-338-3p and Drd2 mRNA in the auditory thalamus of WT (miR-338-3p, n = 3 mice; Drd2 , n = 5 mice), Mir338 +/− (miR-338-3p, n = 4 mice; Drd2 , n = 6 mice) and Mir338 −/− mice (miR-338-3p, n = 4 mice; Drd2 , n = 4 mice) (run in triplicates). miR-338-3p: H (2) = 26.5, * P < 0.001; Drd2 : H (2) = 25.2, * P < 0.001. ( f ) Normalized levels of Drd2 protein in the auditory thalamus ( H (2) = 23.3, * P < 0.01), cortex ( H (2) = 1.21, P = 0.544), and hippocampus (Hipp.) ( t (2) = 0.4, P = 0.674) of WT ( n = 3), Mir338 +/− ( n = 4), and Mir338 −/− ( n = 4) mice.
Samples were run in duplicates (cortex and hippocampus) or triplicates (MGv). ( g ) Schematic showing simultaneous recordings of EPSCs in L3/4 pyramidal neurons evoked by electrical stimulation of the thalamocortical (TC) and corticocortical (CC) projections. ( h , i ) Input–output relationships between electrical stimulation intensity and EPSCs at TC projections ( F (1,37) = 26.9; * P < 0.001) ( h ) or CC projections ( F (1,37) = 0.002; P = 0.964) ( i ) in the ACx of 4-month-old WT mice ( n = 19 neurons) and Mir338 +/− mice ( n = 20 neurons). ( j – m ) PPR ( j , k ) and NMDAR/AMPAR ratio ( l , m ) of electrically evoked EPSCs measured at TC projections ( j , l ) and CC projections ( k , m ) of 4-month-old WT (26 neurons, 23 neurons, 10 neurons, 9 neurons, respectively) and Mir338 +/− mice (22 neurons, 19 neurons, 12 neurons, 10 neurons, respectively). In j , * P < 0.001 by two-tailed t -test; in k , P > 0.05 by two-tailed t -test; in l , t (20) = 0.03, P = 0.974 by two-tailed t -test; in m , t (17) = −0.576, P = 0.572 by two-tailed t -test. ( n ) Optogenetic experiments in TC slices. ChR2 was expressed in the MGv under control of the Camk2a promoter. ( o – q ) Input–output relationships ( o ), PPR ( p ), and NMDAR/AMPAR ratio ( q ) of optically evoked EPSCs (oEPSC) measured at TC projections of 4-month-old WT (10, 16, 14 neurons, respectively) and Mir338 +/− mice (9, 16, 17 neurons, respectively). In o , F (1,17) = 11.25, * P = 0.004; in p , * P < 0.001 by two-tailed t -test; in q , U = 98, P = 0.296. Insets show representative AMPAR-mediated (−70 mV holding membrane potential) and NMDAR-mediated (+40 mV holding membrane potential) EPSC and oEPSC traces. For insets in panels j – m , p , q , scale bars, 20 ms, 50 pA. In c , h – k , o , p , data are represented as the mean ± s.e.m. In b , e , f , l , m , q , horizontal lines represent the mean values.

We then generated a mutant mouse lacking Mir338 ( Mir338 -KO mice) ( Fig. 4d ).
The Mir338 locus is within the seventh intron of the apoptosis-associated tyrosine kinase ( Aatk ) gene. However, unlike miR-338-3p, Aatk expression in the MGv was not affected by age or Mir338 deletion ( Supplementary Fig. 4a,b ). The Mir338 -KO mice lacked miR-338-3p and miR-338-5p, as well as miR-3065-3p and miR-3065-5p, whose genomic loci overlap with that of Mir338 . However, miR-338-5p, miR-3065-3p, and miR-3065-5p were not predicted to target Drd2 by the miRNA-target-prediction algorithms, and their expression levels in the auditory thalamus were 0% to 2.5% of that of miR-338-3p ( Supplementary Fig. 4c ). The Mir338 +/− and Mir338 −/− mice developed normally ( Supplementary Fig. 4d,e ). Their Drd2 mRNA levels were inversely correlated with miR-338-3p levels in the auditory thalamus ( Fig. 4e ). Moreover, Drd2 protein levels were elevated in the MGv but not in the cortex or hippocampus of the Mir338 +/− or Mir338 −/− mice ( Fig. 4f ), further indicating that miR-338-3p regulates Drd2 expression in the auditory thalamus. Because miR-338-3p is depleted but not eliminated in Df(16)1 /+ mice, we tested TC synaptic properties in 4-month-old Mir338 +/− mice. As in Df(16)1 /+ mice 5 , synaptic transmission at TC projections was substantially disrupted in Mir338 +/− mice. The input–output function, which we tested by electrical stimulation of TC projections, showed a decrease in TC EPSCs in Mir338 +/− mice as compared to that in WT mice ( Fig. 4g,h ). This disruption was specific to TC projections: the input–output function tested by electrical stimulation of corticocortical (CC) projections in the same slices did not differ between Mir338 +/− and WT mice ( Fig. 4i ). The PPR of two consecutive electrically evoked EPSCs was substantially altered in TC but not CC projections of Mir338 +/− mice as compared to that in WT controls ( Fig. 4j,k ).
By contrast, the ratio of NMDA-receptor-mediated current to AMPA-receptor-mediated current (NMDAR/AMPAR ratio, a measure of postsynaptic function) was normal in both TC and CC projections of Mir338 +/− mice ( Fig. 4l,m ). Because electrical stimulation of the thalamic radiation may affect circuits other than TC projections 41 , we activated TC projections more selectively by using an optogenetic approach. To that end, we injected AAVs expressing the light-activated cation channel channelrhodopsin 2 (ChR2) under the control of the Camk2a promoter into the MGv of Mir338 +/− and WT littermates. We then activated TC projections by using 473-nm light pulses ( Fig. 4n ). The input–output relations and PPR (but not the NMDAR/AMPAR ratio) of optically evoked EPSCs were substantially decreased in 4-month-old Mir338 +/− mice as compared to those in WT littermates ( Fig. 4o–q ), which recapitulated the TC disruption in mouse models of 22q11DS.

Mir338 haploinsufficiency disrupts TC transmission by decreasing the glutamate-release probability of thalamic projections

We previously showed that the TC disruption of synaptic plasticity in mouse models of 22q11DS was due to defective presynaptic function, which was in turn caused by the reduced probability of glutamate release from thalamic projections 5 . Abnormalities in the input–output relation and PPR at TC projections of Mir338 +/− mice also suggested a deficit in presynaptic function at TC glutamatergic synapses. To understand the nature of this deficit, we performed two-photon calcium imaging of dendritic spines, the sites of thalamic inputs onto thalamorecipient neurons in the ACx. We loaded L3/4 pyramidal neurons with the calcium indicator Fluo-5F and the cytoplasmic dye Alexa 594 ( Fig. 5a ) and identified dendritic spines that responded to electrical stimulation of the thalamic radiation ( Fig. 5b ).
This method enabled us to measure three factors that may contribute to the TC disruption: the distribution of synaptic inputs on the dendritic trees of postsynaptic neurons, the amplitudes of calcium transients, and the probability of calcium transients at individual dendritic spines (a proxy for the probability of neurotransmitter release measured at a single synaptic input) 42 , 43 . The distribution of active TC inputs on dendritic trees and the peak amplitudes of postsynaptic calcium transients in Mir338 +/− mice were comparable to those in WT mice ( Fig. 5c–e ), suggesting that TC development, pathfinding, synaptic targeting of cortical neurons by TC projections, and postsynaptic glutamatergic receptor function were not compromised in Mir338 +/− mice. Although this negative result shows the lack of a deficit in TC morphology, it does not rule out potential morphological deficits on a finer subsynaptic scale. Notably, the probability of calcium transients in dendritic spines of thalamorecipient neurons in response to a low-frequency (0.1 Hz) train of stimuli was deficient in Mir338 +/− mice ( Fig. 5f ). This result indicates that the depletion of miR-338 decreased the probability of glutamate release at TC projections, which underlies the TC disruption in individuals with 22q11DS. This deficit in the probability of glutamate release was rescued in slices that were treated with haloperidol ( Fig. 5f ). Haloperidol treatment also effectively rescued the deficit in the probability of glutamate release within the same dendritic spines: the probability of detecting a calcium transient increased in Mir338 +/− (but not WT) dendritic spines after haloperidol application ( Fig. 5g,h ). This finding further demonstrates that haloperidol rescues the presynaptic deficit of TC synaptic transmission in Mir338 -deficient mice.

Figure 5: Probability of glutamate release is reduced at TC projections from Mir338 +/− mice.
( a ) Representative images of an L3/4 pyramidal neuron filled with Fluo-5F and Alexa 594 through a patch pipette (left) to visualize synaptically evoked calcium transients inside dendritic spines (right). Yellow line represents the line scan. Scale bars, 10 μm (left), 1 μm (right). ( b ) Calcium transients in a dendritic spine in response to a single thalamic stimulation (arrows) repeated ten times at 0.05–0.10 Hz. Scale bar, 200 ms. ( c ) Location of active TC inputs (spines) on dendritic trees of L3/4 pyramidal neurons. (0;0), soma coordinates (apical dendrites pointing upwards). ( d – f ) Average distances from the soma to active TC inputs ( d ), calcium transient peak amplitudes ( e ), and probabilities ( f ) in response to 10 to 20 single TC stimulations in WT and Mir338 +/− slices that were not treated (WT, n = 27 spines; Mir338 +/− , n = 32 spines) or treated (WT, n = 32 spines; Mir338 +/− , n = 41 spines) with haloperidol. In d , F (3) = 1.05, P = 0.373; in e , H (3) = 0.644, P = 0.886; in f , H (3) = 31.51, * P < 0.001. ( g , h ) Representative traces ( g ) and average probability ( h ) of calcium transients in the same dendritic spines before and after haloperidol application in WT ( n = 5 spines) and Mir338 +/− ( n = 9 spines) mice. H (3) = 14.5, * P = 0.002. In g , scale bar, 0.1 ΔG/R, 200 ms. In d – f , h , data are represented as the mean (white lines), median (yellow lines), and 10th, 25th, 75th, and 90th percentiles.

Mir338 deletion or miR-338-3p depletion eliminates the age dependency of TC disruption and PPI

The deletion of Mir338 was sufficient to upregulate Drd2 expression in the thalamus, which suggested that depletion of only miR-338-3p underlies the abnormal sensitivity of TC projections in individuals with 22q11DS to treatment with antipsychotics. To test this hypothesis, we compared the sensitivity of TC projections in Mir338 +/− mice and WT mice.
First, we determined that TC projections of Mir338 +/− (but not WT) mice were sensitive to the Drd2-specific antagonist L-741,626 (20 nM) ( Fig. 6a ). In Mir338 +/− mice, TC EPSCs substantially increased in response to L-741,626, but that increase was not further elevated by application of haloperidol. Treatment with haloperidol alone increased TC EPSCs in 4-month-old Mir338 +/− (but not in WT) mice to magnitudes similar to those observed in Df(16)1 /+ or Dgcr8 +/− mice, suggesting that haloperidol's effect in mutant TC projections was mediated by elevated expression of Drd2 receptors ( Fig. 6b and Supplementary Fig. 5 ). Similarly, treatment with other antipsychotics (such as clozapine and olanzapine) increased TC EPSCs in Mir338 +/− but not WT mice ( Supplementary Fig. 6 ).

Figure 6: Deletion of Mir338 in mice eliminates age dependency for sensitivity to antipsychotics and replicates 22q11DS phenotypes.
( a ) Average TC EPSCs before (1) and during (2 and 3) application of the Drd2-specific inhibitor L-741,626 and haloperidol in 4-month-old WT ( n = 6 neurons) and Mir338 +/− ( n = 9 neurons) mice. ( b ) Mean haloperidol sensitivity (ΔH) in WT (10, 7, 6 neurons at 1.5, 2, and 4 months, respectively) and Mir338 +/− mice (12 neurons at each age) between 1.5 and 4 months of age. * P < 0.001 by two-tailed t -test. ( c , d ) Mean TC EPSCs before (1) and after (2) haloperidol in 2-month-old WT (control siRNA, n = 10 neurons; Drd2 -specific siRNA, n = 9 neurons) and Mir338 +/− (control siRNA, n = 14 neurons; Drd2 -specific siRNA, n = 10 neurons) mice that received control ( c ) or Drd2 -specific siRNA ( d ) injected into their MGv. Insets show representative EPSCs. In a – d , data are represented as the mean ± s.e.m. ( e – g ) Mean PPI of maximal acoustic-startle response in 1.5-month-old ( e ), 2-month-old ( f ), and 4-month-old ( g ) WT (22, 22, and 21 mice, respectively) and Mir338 +/− littermates (21, 21, and 20 mice, respectively).
In e , F (5) = 21.648, * P < 0.001; in f , H (5) = 39.887, * P < 0.001; in g , H (5) = 17.348, * P = 0.004. SPL, sound pressure level. ( h ) Model of TC disruption in individuals with 22q11DS. DGCR8-dependent depletion of the thalamus-enriched miR-338-3p leads to an increase in DRD2 levels in the auditory thalamus (MGv) and disruption of thalamocortical synaptic transmission to the auditory cortex (ACx) later in life. The red cross (X) represents disruption of synaptic transmission in TC projections.

Unlike Df(16)1 /+ or Dgcr8 +/− mice, Mir338 +/− mice were sensitive to haloperidol treatment in an age-independent manner ( Fig. 6b and Supplementary Fig. 5 ). Furthermore, in young (2-month-old) WT mice, TC projections became sensitive to haloperidol when the miR-338-3p sponge was expressed in the MGv ( Supplementary Fig. 7 ), indicating that depletion of miR-338-3p in the MGv is sufficient for sensitivity to antipsychotics. TC sensitivity to haloperidol in 2-month-old Mir338 +/− mice was eliminated by expression of a small interfering RNA (siRNA) specific for Drd2 (but not a control siRNA) in the MGv ( Fig. 6c,d ). The specificity of the Drd2 -specific siRNA has been characterized previously 5 . These experiments further indicated that miR-338-3p is sufficient to regulate Drd2 in the thalamus, regardless of age. Similarly, Mir338 +/− mice were deficient in PPI as compared to WT control mice, and this deficit was observed at all of the time points tested (1.5, 2, and 4 months) ( Fig. 6e–g ). The defect in PPI was not caused by peripheral hearing defects because acoustic-brainstem-response testing showed no differences between Mir338 +/− and WT mice at these ages ( Supplementary Fig. 8 ).
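The PPI values in Fig. 6e–g are conventionally computed as the percent reduction of the startle response when the startle pulse is preceded by a prepulse. The sketch below uses that standard formula with hypothetical startle amplitudes; the authors' exact acoustic parameters and averaging are described in their methods, not here:

```python
def prepulse_inhibition(startle_pulse_alone, startle_prepulse_pulse):
    """Standard PPI (%): fractional reduction of the startle amplitude when
    the pulse is preceded by a prepulse, relative to the pulse alone."""
    return 100.0 * (1.0 - startle_prepulse_pulse / startle_pulse_alone)

# Hypothetical startle amplitudes (arbitrary units).
print(prepulse_inhibition(1.0, 0.4))  # → 60.0
```

Under this convention, a PPI deficit (as in the mutants) appears as a smaller percentage for the same prepulse intensity.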
Discussion

The recent identification of disrupted glutamatergic synaptic transmission at thalamic inputs to the ACx in 22q11DS mice 5 suggests that TC disruption could be a pathogenic mechanism that mediates the susceptibility to positive psychotic symptoms in 22q11DS-related schizophrenia, for the following reasons. (i) TC disruption in 22q11DS mice is rescued by treatment with antipsychotic medications that are Drd2 antagonists and that effectively treat predominantly psychotic symptoms but are substantially less effective against cognitive or negative symptoms of schizophrenia 44 , 45 , 46 . This disruption was specific to auditory TC projections and was not observed at other glutamatergic projections (i.e., hippocampal, corticocortical, or corticofugal projections) that may be involved in cognitive, social, or motivational tasks. (ii) TC disruption in 22q11DS mice is caused by abnormal elevation of Drd2 mRNA and Drd2 protein levels in the thalamus, a brain region previously linked to psychotic symptoms of schizophrenia 47 , 48 . An increase in dopamine signaling in the thalamus has also been described in patients with schizophrenia 49 , and studies have indicated that drug-naive patients with schizophrenia have elevated levels of DRD2s in other brain regions 50 , 51 . Furthermore, theoretical and empirical studies have proposed that deficient connectivity and abnormal patterns of activity in TC projections contribute to the pathogenesis of the disease 52 , 53 , 54 , 55 . Moreover, a local ischemic infarction that disrupts auditory TC projections can cause auditory hallucinations in a patient without psychosis 56 .
(iii) Abnormal sensitivity to antipsychotics is observed in the auditory but not the visual or somatosensory TC projections of 22q11DS mice 5 , which is consistent with clinical observations of the substantially higher prevalence of auditory hallucinations, as compared to the hallucinations observed in other sensory modalities, in individuals with schizophrenia 28 . Neuroimaging and electrophysiological studies in patients with schizophrenia have shown abnormal activation of the auditory thalamus and the ACx during auditory hallucinations 1 , 3 , 21 . (iv) An increase in Drd2 levels only in the auditory thalamus of 22q11DS mice was sufficient to reduce the PPI of the acoustic-startle response 5 , the behavioral endophenotype characteristic of patients with psychiatric diseases, including 22q11DS and schizophrenia 38 , 57 . Here we showed that the disruption of synaptic transmission at auditory TC projections recapitulates another prominent feature of psychotic symptoms. The TC disruption in 22q11DS mice becomes evident only after 3 months of age, which is equivalent to ∼ 20 years of age in humans 35 . These data correspond well with the age of onset of clinical manifestations of psychosis in patients with 22q11DS (median age, 21 years) 25 or schizophrenia during late adolescence or early adulthood, which is typically between the ages of 16 and 30 years 26 , 58 . This age-dependent TC decrease in synaptic function is evident in Df(16)1 /+ mice, which carry a large microdeletion, and in Dgcr8 +/− mice, further strengthening the case that Dgcr8 is the culprit gene and that its haploinsufficiency underlies auditory abnormalities in individuals with 22q11DS. Previous work established that the deletion of one copy of Dgcr8 leads to increased levels of Drd2 in the auditory thalamus 5 . Because Dgcr8 is part of the miRNA-processing machinery, we hypothesized that a Dgcr8 –miRNA– Drd2 mechanism underlies the disruption of TC synaptic transmission. 
Here we identified miR-338-3p as the mediator of this mechanism ( Fig. 6h ). We also showed that miR-338-3p negatively regulates the level of Drd2 in the thalamus. Replenishing miR-338-3p in the thalamus eliminates deficient TC synaptic transmission and abnormal antipsychotic sensitivity of TC projections in 22q11DS mice, and the deletion of Mir338 or auditory-thalamus-specific knockdown of miR-338-3p expression mimics TC disruption of synaptic transmission and antipsychotic sensitivity in WT mice. Depletion of miR-338-3p is therefore necessary and sufficient to upregulate Drd2 in the thalamus, which in turn reduces glutamate release from thalamic projections, reduces TC synaptic transmission, and renders TC projections sensitive to antipsychotics ( Fig. 6h ). Because miR-338-3p is enriched in the auditory thalamus and more abundant miRNAs more effectively regulate the targeting transcripts 39 , miR-338-3p reduction in the thalamus (but not in tissues where it is weakly expressed) would upregulate thalamic Drd2 and provide tissue specificity. TC disruption in 22q11DS mice and Mir338 -deficient mice correlates with deficits in PPI, an impaired sensorimotor gating feature that occurs in individuals with schizophrenia and several other neuropsychiatric disorders. The importance of the auditory thalamus for PPI is well known 59 , but the effect of disrupting TC projections on PPI deficits is not. Therefore, other projections emanating from the auditory thalamus and containing depleted miR-338-3p and upregulated Drd2 might cause PPI deficits in 22q11DS mice. One copy of Dgcr8 is deleted in individuals with 22q11DS, in all cells at all ages, so it is unclear why synaptic disruption occurs in projections emanating only from the thalamus and only later in life. The regional specificity most likely arises from the fact that miR-338-3p is substantially enriched in the auditory thalamus as compared to other brain regions, such as the cortex or hippocampus. 
Explaining why miR-338-3p is thalamus-enriched will require further investigation. We also determined that the expression of miR-338-3p is regulated in an age-dependent manner. Although miR-338-3p is depleted in the auditory thalamus in 22q11DS mice at all ages, as compared to WT mice, it declines further with age in both 22q11DS and WT mice. Therefore, miR-338-3p expression may be controlled by a combination of Dgcr8 - and age-dependent mechanisms. Although we can assume that Dgcr8 haploinsufficiency reduced the levels of miRNAs, the mechanism of age-dependent miRNA decline is unknown. In the context of Drd2 regulation, a minimal threshold of miR-338-3p expression probably triggers the overexpression of Drd2. In WT mice, miR-338-3p expression declines during the first few months of life, but it may not reach that threshold. However, in 22q11DS mice, a combination of Dgcr8 haploinsufficiency and age-dependent decline in miRNA production drives the miR-338-3p level below this threshold, triggers the increase in Drd2 levels in the thalamus, and causes TC synaptic and behavioral deficiencies. In summary, our data implicate thalamus-enriched miR-338-3p as the key mediator of the disruption of synaptic transmission at TC projections and the late onset of auditory symptoms in individuals with 22q11DS. Our data also suggest that replenishment of miR-338-3p in the thalamus could be a more tolerable therapeutic approach for positive symptoms. Current therapy relies on antipsychotics to alleviate psychosis in patients with schizophrenia through the systemic inhibition of DRD2, which is accompanied by multiple, and sometimes devastating, side effects 27 , 60 . Given that the seed sites of miR-338-3p are conserved between humans and mice and that miR-338-3p is enriched in the thalamus of both species and becomes depleted in the thalamus of mouse models of 22q11DS and of patients with schizophrenia, this strategy is potentially applicable to patients. 
Thus, our results suggest that miR-338-3p is a potential therapeutic target for treating positive symptoms of 22q11DS and related cases of schizophrenia.

Methods

Animals. Mice of both sexes were used for all experiments. Df(16)1 /+ and Dgcr8 +/− mouse strains were reported previously 6 , 31 and were backcrossed onto the C57BL/6J genetic background for at least ten generations. Mice ranging in age from 1.5 to 7 months were used, which approximately corresponds to 13 to 35 years of age in humans 35 . The Mir338 +/− and Mir338 −/− mice were generated from embryonic stem cells from C57BL/6N-A tm1Brd mice that were purchased from the Mutant Mouse Regional Resource Center (MMRRC; clone #034476-UCD). These cells were tested and found to be negative for mycoplasma contamination. C57BL/6 blastocyst injections were performed by the Transgenic/Gene Knockout Shared Resource at St. Jude Children's Research Hospital (St. Jude). Chimeric mice were genotyped according to MMRRC protocols by using the following primers: 5′ common reverse (ATAGCATACATTATACGAAGTTATCACTGG), 5′ gene-specific (CTTCACTACACTCTCCCTAGTACAGTCTC), 3′ common forward (TCTAGAAAGTATAGGAACTTCCATGGTC), and 3′ gene-specific (AGGAGACTCATAGTTCTCTGTATCATAGC). PCR was performed under the following conditions: 93 °C for 3 min; then 93 °C for 15 s and 68 °C for 9 min for 8 cycles; and then 93 °C for 15 s, 60 °C for 30 s, and 68 °C for 9 min for 32 cycles. The mutant allele generated a 6.1-kb band with the 5′ common-reverse and 5′ gene-specific primers and a 4.1-kb band with the 3′ common-forward and 3′ gene-specific primers. The wild-type (WT) allele did not generate a band with either primer set. Subsequent genotyping was performed at Transnetyx (Cordova, Tennessee). For the majority of experiments, mice were divided into groups according to genotype or viral injections, and the experimenters were blinded to the genotype or treatment. The care and use of animals were reviewed and approved by the St.
Jude Institutional Animal Care and Use Committee. Whole-cell electrophysiology. Acute primary thalamocortical (TC) slices (400-μm thick) containing the left auditory cortex (ACx) and the left ventral part of the medial geniculate nucleus (MGv) of the thalamus were prepared as previously described 5 , 61 . Briefly, mouse brains were quickly removed and placed in cold (4 °C) dissecting artificial cerebrospinal fluid (ACSF) containing 125 mM choline chloride, 2.5 mM KCl, 0.4 mM CaCl 2 , 6 mM MgCl 2 , 1.25 mM NaH 2 PO 4 , 26 mM NaHCO 3 , and 20 mM glucose (300–310 mOsm), with 95% O 2 and 5% CO 2 . Primary TC slices were obtained from the left hemisphere by using a slicing angle of 15°. After a 1-h incubation in ACSF (125 mM NaCl, 2.5 mM KCl, 2 mM CaCl 2 , 2 mM MgCl 2 , 1.25 mM NaH 2 PO 4 , 26 mM NaHCO 3 , 20 mM glucose (300–310 mOsm), with 95% O 2 and 5% CO 2 ) at room temperature, the slices were transferred into the recording chamber and superfused (2–3 mL/min) with warm (30–32 °C) ACSF. Whole-cell recordings were obtained from cell bodies of layer (L) 3/4 thalamorecipient neurons in the ACx. Mice were chosen in a pseudorandom order, without the experimenter's prior knowledge of genotype or treatments. Patch pipettes (open-pipette resistance, 3.5–5.0 MΩ) were filled with an internal solution containing 125 mM CsMeSO 3 , 2 mM CsCl, 10 mM HEPES, 0.1 mM EGTA, 4 mM MgATP, 0.3 mM NaGTP, 10 mM Na 2 creatine phosphate, 5 mM QX-314, 5 mM tetraethylammonium chloride (pH 7.4 adjusted with CsOH; 290–295 mOsm). Voltage-clamp recordings were made using a Multiclamp 700B, digitized (10 kHz), and recorded using the pCLAMP 10.0 software (Molecular Devices). Excitatory postsynaptic currents (EPSCs) were recorded at holding membrane potentials of −70 mV. In all experiments, membrane potentials were corrected for a liquid junction potential of −10 mV. TC EPSCs were evoked by current pulses (duration, 100 μs) delivered to the thalamic radiation via tungsten bipolar electrodes. 
The stimulation intensities were similar in experiments in Figure 1f,g (2 months: 527 ± 61 μA, 19 neurons in WT; 568 ± 50 μA, 21 neurons in Df(16)1/+ mice; P > 0.05; 4 months: 523 ± 42 μA, 31 neurons in WT; 550 ± 39 μA, 30 neurons in Df(16)1/+ mice; P > 0.05) and Figure 1i,j (2 months: 569 ± 44 μA, 16 neurons in WT; 582 ± 37 μA, 24 neurons in Dgcr8 +/− mice; P > 0.05; 4 months: 543 ± 43 μA, 36 neurons in WT; 551 ± 43 μA, 37 neurons in Dgcr8 +/− mice; P > 0.05). Paired-pulse ratio (PPR) of TC and corticocortical (CC) EPSCs and the NMDAR/AMPAR ratio were measured as described previously 5 . The slope of the first 2 ms of the EPSC was measured as an accurate indicator of monosynaptic strength at TC synapses 62 . Changes in the EPSC slope correlated with changes in the EPSC amplitude and EPSC charge 5 . To ensure consistent access resistance of the recording electrode during long-term experiments, we monitored the peak amplitude of a brief (10-ms) hyperpolarizing test pulse (−5 mV), which was given 250 ms after a stimulus. Access resistance in recorded neurons was typically 10 to 25 MΩ. Recordings were discarded if the access resistance was higher than 25 MΩ, or if it changed more than 15% during the course of the whole-cell recording. Two-photon imaging. Two-photon laser-scanning microscopy was performed using an Ultima imaging system, a Ti:sapphire Chameleon Ultra femtosecond-pulsed laser, and 60× (0.9 numerical aperture) water-immersion infrared objectives. Synaptically evoked calcium transients were measured in dendritic spines, the site of thalamic inputs, as described previously 43 . Briefly, Alexa Fluor 594 (30 μM) and Fluo-5F (300 μM) were included in the internal pipette solution (see above) and were excited at 820 nm. Synaptically evoked changes in fluorescence of both fluorophores were measured in the line-scan mode (750 Hz) in spine heads and the parent dendritic shaft.
Line scans were analyzed as changes in green (G, Fluo-5F) fluorescence normalized to red (R, Alexa Fluor 594) fluorescence (Δ G / R ). The amplitude and probability of calcium transients were measured in response to 10 to 20 stimulations delivered at 0.1 Hz to the thalamic radiation. Distance (angular) of the active thalamic inputs from the center of the soma was calculated by using maximum-intensity projections of z -scan images of the entire cell collected at lower magnification. Optogenetics. In optogenetic experiments, we expressed the light-activated cation channel ChR2 in the MGv by using adeno-associated virus (AAV) and evoked optically induced EPSCs by briefly illuminating TC slices with a 473-nm light 63 . AAVs were generated from the pAAV- Camk2a -hChR2(H134R)-YFP-WPRE-pA (which we refer to as CamKIIα-ChR2-YFP) plasmid and produced commercially (UNC Vector; serotype 2/1; 4 × 10 12 infectious units per mL). AAVs were injected into the MGv as described previously 41 . Adult mice were anesthetized with isoflurane in pure oxygen, and a 200- to 400-nL sample of virus was slowly pressure-injected into the MGv (from the bregma: anterior–posterior, −3.0 mm; medial–lateral, ± 2.0 mm; dorsal–ventral, 3.1 mm). Approximately 21–28 d after virus injection, the mice were decapitated, and TC slices were prepared. Confocal imaging of YFP in the MGv was used to verify on-target infection of CamKIIα-ChR2-YFP viruses. Short light pulses (10–200 mW) from a 473-nm laser were directed to the slices through the visible light photoactivation module or through the objective. miRNA microarray. Total RNA was isolated from the thalami containing MGv of 2- and 4-month-old male WT, Df(16)1/+ , and Dgcr8 +/− mice by using the mirVana RNA isolation kit (Life Technologies, Carlsbad, CA). 
Total RNAs (100 ng) were labeled using the miRNA Complete Labeling and Hyb Kit (Agilent, Santa Clara, CA), followed by hybridizing to the Mouse miRNA v19 Microarray (Agilent-046065) that contains 3,105 unique probes targeting 1,247 mature miRNAs, according to the mouse miRBase version 19.0 (August 2012). Microarrays were scanned using an Agilent array scanner (G2565CA) at 3-μm resolution. Microarray data were extracted by Agilent Feature Extraction software (v.10.5.1.1) with the miRNA_107_Sep09 protocol. Data processing was performed using Partek software (St. Louis, MO). After quantile normalization among arrays, each probe was summarized by averaging intensities into a single normalized intensity value. The Student's t -test was used to determine statistical significance between sets of replicates from different experimental groups. A miRNA was considered significantly differentially expressed when the P value was less than 0.01 for more than one probe targeting the mature form of the miRNA. The mRNAs targeted by differentially expressed miRNAs were predicted using the bioinformatics tools miRWalk 64 and TargetScan. Quantitative RT–PCR. Total RNA was isolated from various brain regions (i.e., the auditory thalamus containing the MGv, hippocampus, or cortex) or from SH-SY5Y cells (ATCC, CRL-2266) by using the mirVana RNA Isolation Kit (Life Technologies). The iScript kit (Bio-Rad, Hercules, California) was used to synthesize cDNA from mRNA, and the miRNA First-Strand cDNA Synthesis Kit (Agilent) was used to synthesize cDNA from miRNA. The experiments were performed using SYBR Green (Life Technologies).
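The probe-level significance rule used for the microarray analysis above (a miRNA is called differentially expressed only when P < 0.01 for more than one probe targeting its mature form) can be sketched as follows; the probe names, P values, and probe-to-miRNA mapping here are invented for illustration:

```python
def significant_mirnas(probe_pvalues, probe_to_mirna, alpha=0.01):
    """Call a miRNA differentially expressed when P < alpha for
    more than one probe targeting its mature form."""
    hits = {}
    for probe, p in probe_pvalues.items():
        if p < alpha:
            mirna = probe_to_mirna[probe]
            hits[mirna] = hits.get(mirna, 0) + 1
    return {mirna for mirna, n_probes in hits.items() if n_probes > 1}

# Hypothetical data: two significant probes for miR-338-3p, only one for miR-x
pvalues = {"probe_1": 0.002, "probe_2": 0.008, "probe_3": 0.004, "probe_4": 0.50}
mapping = {"probe_1": "mmu-miR-338-3p", "probe_2": "mmu-miR-338-3p",
           "probe_3": "mmu-miR-x", "probe_4": "mmu-miR-x"}
hits = significant_mirnas(pvalues, mapping)  # -> {"mmu-miR-338-3p"}
```

The per-probe P values themselves would come from the Student's t-test between replicate groups described above; only the thresholding rule is shown here.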
The following forward primers were used for miRNA analysis: mmu-miR-338-3p and hsa-miR-338-3p (TCCAGCATCAGTGATTTTGTTG), mmu-miR-335-3p (TTTTTCATTATTGCTCCTGACC), mmu-miR-335-5p (TCAAGAGCAATAACGAAAAATGT), mmu-miR-337-3p (TCAGCTCCTATATGATGCCTTT), mmu-miR-337-5p (CGGCGTCATGCAGGAGTTGATT), mmu-miR-3065-5p (TCAACAAAATCACTGATGCTGG), and mmu-miR-3065-3p (TCAGCACCAGGATATTGTTGGGG). The universal reverse primer specific to the sequence tag (miRNA First-Strand cDNA Synthesis Kit) was used. The following primers were used for mRNA analysis: Drd2 forward (GGATGTCATGATGTGCACAGC), Drd2 reverse (CGCTTGCGGAGAACGATG), Aatk forward (ATGCTGGCCTGCCTGTGTTGT), and Aatk reverse (AGGGGCAGGACATACACATCGG). The following loading controls were used: U6 snRNA forward (CGCTTCGGCAGCACATATAC), U6 snRNA reverse (TTCACGAATTTGCGTGTCAT) (the same primers were used for mouse and human samples), Snord68 (CTTTTGAACCCTTTTCCATCTG), and Snord70 (TTAACAAAAATTCGTCACTACCA). The same universal reverse primer was used for SnoRNA202 and SnoRNA234. To measure DRD2 mRNA in SH-SY5Y cells, the Lipofectamine 2000 (Invitrogen) method was used to transfect these cells with pGIPZ plasmids (Open Biosystems) expressing a miRNA of interest or no miRNA (empty vector control). The following primers were used to clone hsa-miR-337-3p and hsa-miR-337-5p into the pGIPZ plasmid: hsa-miR-337-3p-1 (TCGAGGCTGTTGACAGTGAGCGACCTCCTATATGATGCCTTTCTTCTGTGAA); hsa-miR-337-3p-2, (CCATCTGTGGCTTCACGAAGAAAGGCATCATATAGGAGGTCGCTCACTGTCAACAGCC); hsa-miR-337-3p-3 (GCCACAGATGGGAAGAAAGGCATCATATAGGAGGCTGCCTACTGCCTCGGAA); hsa-miR-337-3p-4 (TCGATTCCGAGGCAGTAGGCAGC CTCCTATATGATGCCTTTCTTC); hsa-miR-337-5p-1, (TCGAGGCTGTTGACAGTGAGCGACGAACGGCTTCATACAGGAGTTTGTGAA); hsa-miR-337-5p-2, (CCATCTGTGGCTTCACAACTCCTGTATGAAGCCGTTCGTCGCTCACTGTCAACAGCC); hsa-miR-337-5p-3 (GCCACAGATGGAACTCCTGTATGAAGCCGTTCGCTGCCTACTGCCTCGGAA); hsa-miR-337-5p-4, (GCCACAGATGGAACTCCTGTATGAAGCCGTTCGCTGCCTACTGCCTCGGAA). 
For cloning hsa-miR-335-3p, hsa-miR-335-5p, and hsa-miR-338-3p (all three are conserved between mice and humans), the primers given below were used. The following primers were used in SH-SY5Y cells: DRD2 forward (GAGTGGAAATTCAGCAGGATTC); DRD2 reverse (GAAGGACAGGACCCAGACGATG); turboGFP forward (CTTCAGCTACCGCTACGAGG); and turboGFP reverse (GCTCTTGAAGTGCATGTGGC). DRD2 levels were normalized to those of turboGFP. Samples from each mouse or each well containing SH-SY5Y cells were run in triplicate. Western blotting. Mouse brain tissues were lysed in ice-cold RIPA buffer (50 mM Tris-HCl (pH 7.4), 1% NP-40, 0.25% sodium deoxycholate, 150 mM NaCl, and 1 mM EDTA) that included protease inhibitor cocktail tablets. A total of 25 μg (MGv) or 30 μg (cortex and hippocampus) protein was loaded per lane. SDS–PAGE, protein transfer to polyvinylidene difluoride membranes, and western blotting were performed using standard techniques. The following primary antibodies were used: rabbit anti-DRD2 (Abcam, ab85367; 1:500) and mouse anti-β-actin (Sigma-Aldrich, A5316, 1:10,000). The following secondary antibodies were used: anti-rabbit (LI-COR Biosciences, 926-68021; 1:15,000) and anti-mouse (LI-COR Biosciences, 926-32212, 1:15,000) antibodies conjugated to IR dye 680 or 800, respectively. Blots were imaged and quantified using the Odyssey CLx infrared imaging system. Samples from each mouse were run in triplicate. Human brain tissue. Post-mortem samples of human MGv and ACx (Brodmann area 41) were obtained from The Maryland Brain Collection (Maryland Psychiatric Research Center, University of Maryland School of Medicine, Catonsville, Maryland). We tested the level of mature miR-338-3p in patients with schizophrenia and age-, race-, and sex-matched healthy controls. Only samples with an RNA integrity number >7 were used in these experiments (Agilent RNA 6000 Nano kit). 
The mean post-mortem interval was 15.3 ± 2.0 h for patients with schizophrenia and 17.2 ± 1.6 h ( P > 0.05) for healthy controls. Quantitative RT–PCR for each brain tissue sample was run in triplicate. Plasmids and viruses. To overexpress the miRNAs of interest, we generated recombinant AAVs (serotype 5) by cloning chimeric hairpins of the miRNAs of interest with hsa-miR-30a into the 3′ UTR of GFP under the control of the Camk2a promoter by using a previously described strategy 65 . The following primers were used: miR-338-3p-1 (GTACAGCTGTTGACAGTGAGCGACTCCAGCATCAGTGATTTTGTTGTGTGAA), miR-338-3p-2 (CCATCTGTGGCTTCACACAACAAAATCACTGATGCTGGAGTCGCTCACTGTCAACAGCT), miR-338-3p-3 (GCCACAGATGGCAACAAAATCTGATGCTGGAGCTGCCTACTGCCTCGGAA), miR-338-3p-4 (AGCTTTCCGAGGCAGTAGGCAGCTCCAGCATCAGATTTTGTTG), miR-337-3p-1 (GTACAGCTGTTGACAGTGAGCGACTCAGCTCCTATATGATGCCTTTTGTGAA), miR-337-3p-2 (CCATCTGTGGCTTCACAAAAGGCATCATATAGGAGCTGAGTCGCTCACTGTCAACAGCT), miR-337-3p-3 (GCCACAGATGGAAAGGCATCATAGGAGCTGAGCTGCCTACTGCCTCGGAA), miR-337-3p-4 (AGCTTTCCGAGGCAGTAGGCAGCTCAGCTCCTATGATGCCTTT), miR-337-5p-1 (GTACAGCTGTTGACAGTGAGCGACCGGCGTCATGCAGGAGTTGATTTGTGAA), miR-337-5p-2 (CCATCTGTGGCTTCACAAATCAACTCCTGCATGACGCCGGTCGCTCACTGTCAACAGCT), miR-337-5p-3 (GCCACAGATGGAATCAACTCGCATGACGCCGGCTGCCTACTGCCTCGGAA), miR-337-5p-4 (AGCTTTCCGAGGCAGTAGGCAGCCGGCGTCATGCGAGTTGATT), miR-335-3p-1 (GTACAGCTGTTGACAGTGAGCGACTTTTTCATTATTGCTCCTGACCTGTGAA), miR-335-3p-2 (CCATCTGTGGCTTCACAGGTCAGGAGCAATAATGAAAAAGTCGCTCACTGTCAACAGCT), miR-335-3p-3 (GCCACAGATGGGGTCAGGAGATAATGAAAAAGCTGCCTACTGCCTCGGAA), miR-335-3p-4 (AGCTTTCCGAGGCAGTAGGCAGCTTTTTCATTATCTCCTGACC), miR-335-5p-1 (GTACAGCTGTTGACAGTGAGCGACTCAAGAGCAATAACGAAAAATGTTGTGAA), miR-335-5p-2 (CCATCTGTGGCTTCACAACATTTTTCGTTATTGCTCTTGAGTCGCTCACTGTCAACAGCT), miR-335-5p-3 (GCCACAGATGGACATTTTTCGATTGCTCTTGAGCTGCCTACTGCCTCGGAA), and miR-335-5p-4 (AGCTTTCCGAGGCAGTAGGCAGCTCAAGAGCAATCGAAAAATGT). The miR-338-3p sponges were generated as described previously 66 , 67 . 
Twelve copies of the following sequences were inserted for the miR-338-3p sponge (CAACAAAATGCGGATGCTGGA) or the scrambled control (GACACTGTGAGCGAAGACATA) into the 3′ UTR of GFP under the control of the Camk2a promoter. Recombinant AAVs (1–2 × 10 13 infectious units per mL) were generated at the St. Jude Vector Development and Production Core and were injected into the MGv of anesthetized mice, as described previously 5 . In the luciferase assay, multiple copies of the miR-338-3p sponge and scramble control were cloned into 3′ UTR of the Renilla luciferase gene contained within the psiCHECK-2 vector (Promega). To test the effect of the sponge in cells, the plasmids were transfected into HEK 293T (ATCC, CCL-3216) or Neuro 2a (ATCC, CCL-131) cells along with control pcDNA3.1 or primary miR-338, miR-337, miR-335, and irrelevant miR-185-overexpressing plasmids. The cell lines were not authenticated or tested for mycoplasma. After 2 d in culture, Renilla and firefly luciferase activities were measured using the dual-luciferase reporter assay (Promega) according to the manufacturer's instructions. Renilla luciferase expression was normalized to firefly luciferase expression as a readout. Mouse behavioral tests. Prepulse inhibition (PPI) experiments were performed as previously described 5 . Briefly, each day before testing, the mice were transported from the animal-housing room and allowed a 1-h habituation period in the testing room. Before experiments were initiated, the mice had a 20-min acclimation period in the Plexiglas restraint chamber (6 cm × 6 cm × 4.8 cm). The mice then had a 5-min acclimation period to 65-dB background white noise, which played throughout the session. For PPI experiments, three acoustic startles (white noise (1–20 kHz), 120 dB, 40 ms) were delivered, separated by a 15-s intertrial interval. 
The testing session consisted of the following trials: pulse alone, in which the startle pulse was presented; prepulse + pulse, in which a 40-ms white-noise prepulse (74 dB, 82 dB, or 90 dB for WT and Dgcr8 +/− littermates; 70 dB, 80 dB, or 90 dB for WT and Mir338 +/− littermates) preceded the startle pulse by 100 ms; and no stimuli. Trials were separated by 15 s and presented in a pseudo-random order. PPI was calculated as follows: 100 × (pulse-alone response − (prepulse + pulse) response)/pulse-alone response. Auditory brainstem response (ABR) experiments were performed as previously described 68 . Briefly, mice were anesthetized with avertin (0.6 mg per g bodyweight, intraperitoneally), and ABR was measured using a Tucker Davis Technology (TDT) System III with RZ6 Multiprocessor and BioSigRZ software. Sounds were delivered via the MF-1 speaker in the open-field configuration. ABR waveforms were recorded using subdermal needles placed at the vertex of the skull, below the pinna of the ear, and at the base of the tail. The needles were connected to a low-impedance headstage (RA4LI, TDT) and fed into the RZ6 multiprocessor through a preamplifier (RA4PA, Gain 20×, TDT). ABR waveforms were averaged from 500 presentations of a tone (21 tones/s) in the alternating phase and were band-pass filtered (300 Hz–3 kHz). The ABR threshold was defined as the minimum sound intensity that elicited a wave above the noise level. All ABR experiments were conducted in a sound booth (Industrial Acoustic Company, IAC, Model 120A double wall). Statistical analyses. All statistical data were computed using the Sigma Plot 12.5 software. Parametric or nonparametric tests were chosen based on the normality and variance of data distribution.
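The PPI computation given above reduces to a one-line helper (a sketch; the function name and example startle amplitudes are hypothetical):

```python
def percent_ppi(pulse_alone, prepulse_plus_pulse):
    """PPI = 100 * (pulse-alone response - (prepulse + pulse) response) / pulse-alone response."""
    return 100.0 * (pulse_alone - prepulse_plus_pulse) / pulse_alone

# Example: a prepulse that reduces the startle amplitude from 800 to 200 (a.u.)
ppi = percent_ppi(800.0, 200.0)  # -> 75.0 (% inhibition)
```

A larger positive value indicates stronger inhibition of the startle by the prepulse; 0 means the prepulse had no effect.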
The statistical tests used were independent or paired two-tailed t -tests; the Mann–Whitney rank-sum U test; one-way analysis of variance (ANOVA) and the Kruskal–Wallis one-way ANOVA on ranks ( H test), followed by a multiple-comparison procedure (Dunn's method); and two-way ANOVA and two-way repeated-measures ANOVA with one-factor repetition, followed by the Holm–Sidak multiple-comparison procedure. F -values were reported for ANOVA. Differences with P < 0.05 were considered significant. Data availability. The microarray data are available in the NCBI GEO database under accession number GSE73981 . Mir338 -knockout mice are available upon request. Accession codes Primary accessions Gene Expression Omnibus GSE73981 Referenced accessions NCBI Reference Sequence XM_006509996.2 | St. Jude Children's Research Hospital scientists have identified a small RNA (microRNA) that may be essential to restoring normal function in a brain circuit associated with the "voices" and other hallucinations of schizophrenia. The microRNA provides a possible focus for antipsychotic drug development. The findings appear today in the journal Nature Medicine. The work was done in a mouse model of a human disorder that is one of the genetic causes of schizophrenia. Building on previous St. Jude research, the results offer important new details about the molecular mechanism that disrupts the flow of information along a neural circuit connecting two brain regions involved in processing auditory information. The findings also provide clues about why psychotic symptoms of schizophrenia are often delayed until late adolescence or early adulthood. "In 2014, we identified the specific circuit in the brain that is targeted by antipsychotic drugs. However, the existing antipsychotics also cause devastating side effects," said corresponding author Stanislav Zakharenko, M.D., Ph.D., a member of the St. Jude Department of Developmental Neurobiology.
"In this study, we identified the microRNA that is a key player in disruption of that circuit and showed that depletion of the microRNA was necessary and sufficient to inhibit normal functioning of the circuit in the mouse models. "We also found evidence suggesting that the microRNA, named miR-338-3p, could be targeted for development of a new class of antipsychotic drugs with fewer side effects." There are more than 2,000 microRNAs whose function is to silence expression of particular genes and regulate the supply of the corresponding proteins. Working in a mouse model of 22q11 deletion syndrome, researchers identified miR-338-3p as the microRNA that regulates production of the protein D2 dopamine receptor (Drd2), which is the prime target of antipsychotics. Individuals with the deletion syndrome are at risk for behavior problems as children. Between 23 and 43 percent develop schizophrenia, a severe chronic disorder that affects thinking, memory and behavior. Researchers at St. Jude are studying schizophrenia and other brain disorders to improve understanding of how normal brains develop, which provides insights into the origins of diseases like cancer. The scientists reported that Drd2 increased in the brain's auditory thalamus when levels of the microRNA declined. Previous research from Zakharenko's laboratory linked elevated levels of Drd2 in the auditory thalamus to brain-circuit disruptions in the mutant mice. Investigators also reported that the protein was elevated in the same brain region of individuals with schizophrenia, but not healthy adults. Individuals with the deletion syndrome are missing part of chromosome 22, which leaves them with one rather than the normal two copies of more than 25 genes. The missing genes included Dgcr8, which facilitates production of microRNAs. Working in mice, researchers have now linked the 22q11 deletion syndrome and deletion of a single Dgcr8 gene to age-related declines in miR-338-3p in the auditory thalamus. 
The decline was associated with an increase in Drd2 and reduced signaling in the circuit that links the thalamus and auditory cortex, a brain region implicated in auditory hallucination. Levels of miR-338-3p were lower in the thalamus of individuals with schizophrenia compared to individuals of the same age and sex without the diagnosis. The miR-338-3p depletion did not disrupt other brain circuits in the mutant mice, and the findings offer a possible explanation. Researchers found that miR-338-3p levels were higher in the thalamus than in other brain regions. In addition, miR-338-3p was one of the most abundant microRNAs present in the thalamus. Replenishing levels of the microRNA in the auditory thalamus of mutant mice reduced Drd2 protein and restored the circuit to normal functioning. That suggests that the microRNA could be the basis for a new class of antipsychotic drugs that act in a more targeted manner with fewer side effects. Antipsychotic drugs, which target Drd2, also restored circuit function. The findings provide insight into the age-related delay in the onset of schizophrenia symptoms. Researchers noted that microRNA levels declined with age in all mice, but that mutant mice began with lower levels of miR-338-3p. "A minimum level of the microRNA may be necessary to prevent excessive production of the Drd2 that disrupts the circuit," Zakharenko said. "While miR-338-3p levels decline as normal mice age, levels may remain above the threshold necessary to prevent overexpression of the protein. In contrast, the deletion syndrome may leave mice at risk for dropping below that threshold." | nature.com/articles/doi:10.1038/nm.4240 |
Computer | Novel semiconductor-superconductor structure features versatile gallium nitride | Rusen Yan et al. GaN/NbN epitaxial semiconductor/superconductor heterostructures, Nature (2018). DOI: 10.1038/nature25768 Journal information: Nature | http://dx.doi.org/10.1038/nature25768 | https://techxplore.com/news/2018-03-semiconductor-superconductor-features-versatile-gallium-nitride.html | Abstract Epitaxy is a process by which a thin layer of one crystal is deposited in an ordered fashion onto a substrate crystal. The direct epitaxial growth of semiconductor heterostructures on top of crystalline superconductors has proved challenging. Here, however, we report the successful use of molecular beam epitaxy to grow and integrate niobium nitride (NbN)-based superconductors with the wide-bandgap family of semiconductors—silicon carbide, gallium nitride (GaN) and aluminium gallium nitride (AlGaN). We apply molecular beam epitaxy to grow an AlGaN/GaN quantum-well heterostructure directly on top of an ultrathin crystalline NbN superconductor. The resulting high-mobility, two-dimensional electron gas in the semiconductor exhibits quantum oscillations, and thus enables a semiconductor transistor—an electronic gain element—to be grown and fabricated directly on a crystalline superconductor. Using the epitaxial superconductor as the source load of the transistor, we observe in the transistor output characteristics a negative differential resistance—a feature often used in amplifiers and oscillators. Our demonstration of the direct epitaxial growth of high-quality semiconductor heterostructures and devices on crystalline nitride superconductors opens up the possibility of combining the macroscopic quantum effects of superconductors with the electronic, photonic and piezoelectric properties of the group III/nitride semiconductor family. 
Main The experimental discovery 1 of superconductivity in 1911 predated the controllable synthesis and understanding of semiconductors 2 by nearly three decades. However, in the time it took to uncover the correlated physics behind superconductivity, rapid advances in the band-theory of semiconductors, perfection in crystal growth, and discoveries such as donor- and acceptor-doping and quantum heterostructure 3 , 4 design had unleashed their technological potential, enabling electronic amplifiers and switches, as well as light-emitting diodes and diode lasers that operate at room temperature. These solid-state devices have replaced bulky and slow vacuum tubes and table-top lasers, and have shrunk information processing, storage, and communication systems onto a chip. Today, semiconductor transistors are reaching their fundamental Boltzmann limits in terms of switching energy and power consumption in the digital von-Neumann computational architecture 5 , and communication systems are approaching their Shannon limits in terms of bandwidth and security. Quantum technologies have been envisaged to offer exponentially faster computation and guaranteed secure communications 6 , and the leading materials for these emerging technologies make use of the macroscopic manifestation of quantum properties in superconductors. Devices such as Josephson junction flux qubits 7 , lossless microwave resonators 8 , AC Josephson junction lasers 9 and superconducting single-photon detectors 10 are the building blocks of these new quantum-information systems. Substantial advances in such systems would be expected if the power of semiconductors could be combined with that of superconductors on a single epitaxial platform 11 , 12 , 13 . The group III/nitride semiconductors GaN (with a bandgap, E g , of about 3.4 eV), indium nitride (InN; E g ≈ 0.6 eV) and AlN ( E g ≈ 6.2 eV) constitute the most revolutionary semiconductor family since silicon. 
That is because they offer, in a single heterostructure material family (see Fig. 1 ), the necessary ingredients for ultrafast microwave communications 14 , ultralow-power computation 15 , high-voltage switches 16 , infrared through visible to deep-ultraviolet photonic emitters and detectors 17 , 18 , and high-frequency circuit components such as surface acoustic wave and bulk acoustic wave filters 19 . On the other hand, one of the most technologically important superconductor families comprises the nitride compounds NbN x , which have been used for superconducting radio-frequency circuits 20 , SQUID magnetometers 21 , Josephson junctions 22 , single-photon detectors 10 for quantum communications and astronomy, and a host of other applications 23 . Here, we report the successful epitaxial integration of the semiconducting and superconducting nitride families as a crucial enabler for several applications. Figure 1: Bandgap, lattice constant, crystallinity and superconductivity in epitaxial NbN x on SiC. a , Bandgap versus lattice constant for select nitride semiconductors as well as for SiC. b , Cross-section HAADF-STEM images in black/white (left) and false-colour (right) of 5-nm NbN x grown on an SiC substrate with an AlN capping layer. c , Resistance versus temperature (normalized to the resistance at 16 K), showing the superconducting phase transition of 5-nm (red) and 35-nm (blue) epitaxial NbN x on SiC. Inset, resistance measured up to 300 K. d , The Meissner effect measured on the 5-nm and 35-nm samples, showing clear magnetic-flux expulsion accompanying the superconducting phase transition. These measurements are consistent with the T c obtained in panel c . ×10 and ×0.14 indicate multiplication of the data by 10 or 0.14, respectively, to allow data of different scales to be shown on the same plot.
Figure 1a shows that the lattice constants of Nb-based nitride metals—such as hexagonal Nb 2 N and NbN, as well as cubic NbN rotated onto the (111) plane—are very close to the lattice constants of SiC, AlN and the GaN family. Wurtzite GaN and AlN can be grown on cubic (111) silicon, and hexagonal SiC serves as the substrate for the epitaxial growth of AlN- and GaN-based heterostructures for microwave transistors 24 and for quantum-well visible-light-emitting diodes 18 . Recently, we succeeded in growing crystalline epitaxial metal (epiMetal) niobium nitride layers by molecular beam epitaxy (MBE) on SiC, and further grew GaN and AlN layers on the epiMetal layers 25 , 26 . We found that the epiMetal layers retained high crystallinity and electronic conductivity down to thicknesses of a few nanometres 25 , 26 . The crystalline phases of the epilayers could be either hexagonal Nb 2 N or NbN, or cubic NbN. In this study, we have determined that our films are cubic NbN x , with x being around 0.75–0.88 as measured by secondary-ion mass spectrometry (SIMS). In what follows, we will simply refer to the phase and stoichiometry as NbN x . The use of NbN x enables an unprecedented level of epitaxial integration of buried metallic layers with wide-bandgap semiconductors and insulators. While investigating the low-temperature transport properties of the thin MBE-grown NbN x layers, we find a superconducting phase transition at critical temperatures ( T c ) ranging from 6 K to 15 K, similar to what has been found for NbN x grown by other methods 27 , 28 . Epitaxial layers of NbN x thinner than the coherence length are found to exhibit two-dimensional superconductivity, with in-plane critical magnetic fields well in excess of 20 T (out-of-plane critical fields are around 3 T). NbN x is the first epitaxial superconductor to have been integrated with a technologically relevant semiconductor system.
Growth of NbN x films by MBE Niobium nitride used in superconducting electronics and bolometers for single-photon detectors, deposited by electron-beam evaporation or sputtering on non-epitaxial substrates, is typically polycrystalline 10 , 21 . Taking advantage of advances in MBE-based control of the growth of group III/nitride semiconductor heterostructures on SiC, we grew epitaxial layers of NbN x directly on silicon-terminated, semi-insulating, four-hexagonal and six-hexagonal (4H and 6H) SiC substrates. We used a radio-frequency plasma nitrogen source of electronic-grade purity—identical to that used for AlN and GaN high-electron-mobility transistors (HEMTs), LEDs and lasers—to provide the active nitrogen atoms. We also used an electron-beam source of niobium, and monitored the growth in situ by reflection high-energy electron diffraction. Semiconducting Al(Ga)N/GaN quantum heterostructures were then grown epitaxially on top of the crystalline NbN x layers. Figure 1b shows high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) images of 5 nm NbN x epitaxial layers grown on a semi-insulating 4H-SiC substrate and capped with an AlN layer. The epitaxial NbN x layers are nearly completely cubic, with high crystalline quality over large areas. Occasional twin boundaries are seen—typically separated by about 1 μm—as would be expected from the symmetry mismatch between cubic NbN x and hexagonal SiC and AlN (see Extended Data Fig. 1 ). Figure 1b shows the epitaxial AlN on the NbN x to be of nitrogen polarity; the entire AlN layer and all subsequent nitride semiconducting layers are hexagonal. The surfaces of uncapped NbN x layers were extremely smooth, with a root-mean-square surface roughness of 0.16 nm for a 1 μm × 1 μm region, as measured by atomic force microscopy (AFM; see Extended Data Fig. 2 ). Extended Data Fig. 3 shows X-ray diffraction (XRD) images of the epitaxial NbN x . 
Electronic and magnetic properties of MBE-grown NbN x In its normal state we find that MBE NbN x films are metallic with a resistivity of about 10 −5 Ω cm, comparable to that of bulk platinum at room temperature. The measured Hall-effect carrier sign is negative, indicating electron conductivity, and the Hall-effect carrier density in three dimensions ( n 3d ) is about 2 × 10 23 cm −3 , with a mean free path ( λ ) of roughly 1 − 2 a 0 , where a 0 is the lattice constant (see Extended Data Table 1 for more metallic-state properties). Using a spherical Fermi surface approximation, the Mott–Ioffe–Regel criterion indicates that k F λ is much greater than 1, where the Fermi wavevector ( k F ) is about (3π 2 n 3d ) 1/3 , implying that the normal state transport is far above the minimum metallic conductivity regime. Although the Fermi surface is not spherical, we expect this conclusion to hold. We therefore find that our epitaxial NbN x films are best characterized as working in the dirty limit ( λ ≪ ξ , where ξ is the coherence length), where the electron mean-free path is less than the Cooper-pair coherence length extracted from superconducting measurements, as described next. Electrical transport measurements performed on the NbN x layers, for thicknesses ranging from 4 nm to 100 nm, revealed superconductivity at transition temperatures of between 6 K and 15 K. Figure 1c shows the measured resistance R ( T ) normalized to the resistance at 16 K ( R n ) for NbN x layers of thickness 5 nm and 35 nm. The resistivity of the samples exhibits a superconducting phase transition at around 7 K for the 5-nm sample, and about 9 K for the 35-nm sample. The inset shows the resistance up to 300 K for these two samples. In the metallic phase for temperatures T c < T < 300 K, the resistance shows an expected increase owing to phonon scattering.
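Under the spherical-Fermi-surface approximation used above, the Mott–Ioffe–Regel check is a short calculation. The sketch below uses the Hall density quoted in the text; the lattice constant (~0.44 nm for cubic NbN) and the mid-range mean free path of 1.5 a 0 are approximate assumed values:

```python
import math

def fermi_wavevector(n3d_m3):
    """k_F = (3 * pi^2 * n)^(1/3) for a spherical Fermi surface (n in m^-3)."""
    return (3.0 * math.pi ** 2 * n3d_m3) ** (1.0 / 3.0)

n3d = 2e23 * 1e6            # Hall density ~2e23 cm^-3, converted to m^-3
k_f = fermi_wavevector(n3d)         # ~1.8e10 m^-1
a0 = 0.44e-9                # approximate cubic-NbN lattice constant (assumed)
mfp = 1.5 * a0              # mean free path of ~1-2 lattice constants (mid value)
kf_times_mfp = k_f * mfp            # ~12, i.e. k_F * lambda >> 1
```

A value of k F λ well above unity places the normal-state transport far above the minimum metallic conductivity regime, as stated in the text.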
Figure 1d shows the Meissner effect measured on these two samples by vibrating sample magnetometry (VSM), revealing clear magnetic-flux expulsion accompanying the superconducting phase transition. The superconductivity transition temperatures measured from electron transport and the Meissner effect are found to be consistent. When the thickness of the semiconductor heterostructure quantum wells becomes smaller than the electron de-Broglie wavelength, quantum confinement drives signature two-dimensional effects such as the integer quantum Hall effect in single-particle magnetotransport 29 . Similarly, when the thickness of a superconducting layer d is less than the coherence length ξ , a high anisotropy in the upper critical field, H c2 ∥ versus H c2 ⊥ , is expected. These effects were recently reported in monolayer NbSe 2 , a transition-metal dichalcogenide superconductor 30 . Figure 2a, b shows the out-of-plane and in-plane magnetic-field-dependent normalized resistance R ( T )/ R n as a function of temperature for the 35-nm NbN x epitaxial film. The variation of the critical field with the critical temperature is shown in Fig. 2c . Both out-of-plane and in-plane magnetic fields of strengths 0–4 T are seen to lower the critical temperature approximately linearly. Figure 2: Magnetotransport measurements on 35-nm and 5-nm NbN x epitaxial films, showing two-dimensional superconductivity when the epilayer thickness is less than the coherence length. a – c , 35-nm NbN x films. d – f , 5-nm NbN x films. a , b , Temperature-dependent normalized resistance for the 35-nm sample, for various out-of-plane magnetic fields B ⊥ ( a ) and in-plane fields B ‖ ( b ). c , The critical field H c2 decreases linearly with temperature, consistent with the Ginzburg–Landau model of bulk superconductivity. d , For the 5-nm NbN x sample, the out-of-plane magnetic field destroys superconductivity easily at low fields. e , Much higher critical fields are needed when the field is in-plane.
f , This strong anisotropy of critical fields is shown, plotting the critical field H c2 versus temperature for various angles, θ , made by the magnetic field with the NbN x plane. The lines fit the linearized Ginzburg–Landau formula at θ = 0° and θ = 90°, and the lines for the intermediate angles are consistent with the Tinkham formula (see text). The behaviour of the 5-nm-thick NbN x epitaxial layer is quite different. Figure 2d, e shows that substantially stronger in-plane magnetic fields compared with out-of-plane fields are required to break superconductivity for the 5-nm sample. The 5-nm sample remains superconducting at 3 K for in-plane fields up to 14 T, whereas the 35-nm sample is far into the metallic regime at this field. The linearized Ginzburg–Landau equation for the perpendicular critical field is: μ 0 H c2 ⊥ ( T ) = [ ϕ 0 /(2π ξ 2 )](1 − T / T c ) (1), where ϕ 0 = h /2 e is the superconducting flux quantum, with h being the Planck constant and 2 e the charge of a Cooper pair; and ξ = ξ GL (0) is the extrapolation of the Ginzburg–Landau coherence length to T = 0 K. From the θ = 90° fits in Fig. 2c, f , we extract ξ ≈ 11 nm for the d = 5-nm sample, and ξ ≈ 10 nm for the d = 35-nm sample. This explains our choice of representative sample thicknesses: one sample behaves like a thin film ( d > ξ ) and one is in the two-dimensional limit ( d < ξ ). When the film thickness d is less than ξ , vortex formation under an in-plane magnetic field is severely suppressed. Because the density of Cooper pairs cannot change on a length scale shorter than ξ , vortices cannot accommodate flux for in-plane magnetic fields. Because for the d = 5-nm film d ≤ ξ /2, Cooper-pair breaking caused by orbital effects requires a higher in-plane than out-of-plane magnetic field to destroy superconductivity. We believe that the Zeeman effect for pair-breaking 31 is suppressed in our NbN x films, and that orbital-pair-breaking is the dominant mechanism responsible for the abnormally large H c2 ∥ values.
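The linearized Ginzburg–Landau relation μ 0 H c2 ⊥ ( T ) = [ ϕ 0 /(2π ξ 2 )](1 − T / T c ) can be inverted to recover the quoted coherence lengths. A sketch; the ≈2.7 T zero-temperature intercept below is an illustrative value consistent with ξ ≈ 11 nm, not a number quoted in the text:

```python
import math

h = 6.62607015e-34           # Planck constant (J s)
e = 1.602176634e-19          # elementary charge (C)
phi0 = h / (2 * e)           # superconducting flux quantum phi0 = h/2e (Wb)

def xi_from_hc2_perp(B_perp_0K):
    """Ginzburg-Landau coherence length xi_GL(0) from the T -> 0 K
    extrapolated perpendicular critical field (in tesla)."""
    return math.sqrt(phi0 / (2 * math.pi * B_perp_0K))

xi = xi_from_hc2_perp(2.7)   # illustrative T -> 0 intercept, tesla
print(f"xi_GL(0) = {xi * 1e9:.1f} nm")
```

An intercept near 2.7 T maps onto ξ ≈ 11 nm, the value extracted for the 5-nm sample.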
For in-plane critical fields, the Ginzburg–Landau formula in the two-dimensional limit is: μ 0 H c2 ∥ ( T ) = [√12 ϕ 0 /(2π ξ d )](1 − T / T c ) 1/2 (2). With ξ extracted from equation (1), the effective superconducting thickness is extracted to be d = 4.9 nm for the thin NbN x layer, in excellent agreement with the thickness measured by STEM. The extrapolation of this formula for θ = 0° in Fig. 2f to T → 0 K suggests an upper critical field of about 22 T. This is twice the value of the Pauli paramagnetic limit, H p , of about 1.86 × T c —that is, 11 T—resulting from the Bardeen–Cooper–Schrieffer theory of superconductivity 31 . Such behaviour has also been observed in ultrathin superconducting systems: atomically thin layered transition-metal dichalcogenides 30 , ultrathin metals 32 and oxide heterojunctions 33 have all shown an anomalously large H c2 ∥ . The possible reasons for this phenomenon are discussed further in the Methods. We further ascertained the importance of the orbital-pair-breaking effect rather than the Zeeman effect by measuring the angle-dependent critical field for the thin NbN x sample. The results of angle-dependent magnetotransport measurements at θ = 0°, 3°, 5°, 10°, 20° and 90° for the 5-nm sample are shown in the H c2 versus T c phase diagram in Fig. 2f ; θ is the angle that the magnetic-field vector makes with the NbN x /SiC heterointerface. The critical-field dependence on temperature changes from linear for θ = 90° to strongly nonlinear for θ = 0° for the 5-nm sample, whereas it remains linear for the 35-nm sample. As shown in Fig. 2f , the experimentally measured H c2 versus T c at intermediate angles at θ = 3°, 5°, 10° and 20° shows an exceptional agreement with the Tinkham formula 34 , 35 : | H c2 ( θ , T )sin θ / H c2 ⊥ ( T )| + [ H c2 ( θ , T )cos θ / H c2 ∥ ( T )] 2 = 1 (3), where H c2 ⊥ ( T ) and H c2 ∥ ( T ) are obtained from equations (1) and (2), and thus H c2 ( θ , T ) is obtained by solving equation (3).
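The numbers in this paragraph follow from the two-dimensional-limit relation μ 0 H c2 ∥ = [√12 ϕ 0 /(2π ξ d )]√(1 − T / T c ) and from the Tinkham interpolation | H sin θ |/ H c2 ⊥ + ( H cos θ / H c2 ∥ ) 2 = 1, which is quadratic in H . A sketch: the 22 T in-plane intercept and T c ≈ 6 K are taken from the discussion above, while the ≈2.7 T out-of-plane intercept is an illustrative value consistent with ξ ≈ 11 nm:

```python
import math

phi0 = 6.62607015e-34 / (2 * 1.602176634e-19)   # flux quantum h/2e (Wb)
xi = 11e-9        # coherence length from the perpendicular-field fits (m)
B_par0 = 22.0     # in-plane critical field extrapolated to T -> 0 (T)
B_perp0 = 2.7     # illustrative out-of-plane intercept consistent with xi (T)

# Two-dimensional limit at T = 0: mu0*Hc2_par = sqrt(12)*phi0 / (2*pi*xi*d)
d = math.sqrt(12) * phi0 / (2 * math.pi * xi * B_par0)

# Pauli paramagnetic limit, H_p ~ 1.86 T/K * Tc
B_pauli = 1.86 * 6.0          # Tc ~ 6 K gives ~11 T, half the observed 22 T

def tinkham_hc2(theta_deg):
    """Solve |H sin(t)|/B_perp0 + (H cos(t)/B_par0)^2 = 1 for H; theta from the plane."""
    a = abs(math.sin(math.radians(theta_deg))) / B_perp0
    b = (math.cos(math.radians(theta_deg)) / B_par0) ** 2
    if b < 1e-18:                                  # theta = 90 deg: equation is linear
        return 1.0 / a
    return (-a + math.sqrt(a * a + 4 * b)) / (2 * b)

print(f"d = {d * 1e9:.1f} nm, H_p = {B_pauli:.1f} T, Hc2(10 deg) = {tinkham_hc2(10):.1f} T")
```

The extracted thickness lands near 5 nm, matching the STEM value, and the angular solver interpolates smoothly between the in-plane (θ = 0°) and out-of-plane (θ = 90°) limits.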
Given that the Tinkham formula is obtained purely from the coupling between electron momentum and magnetic field 34 , the close agreement indicates that the observed pair-breaking is primarily a result of orbital effects, instead of the Zeeman effect (see Methods for further discussion). With the experimental determination of the critical temperature, coherence length and critical fields complete, we moved to the integration of nitride semiconductor heterostructures with epitaxial NbN x films. Semiconductor/superconductor heterojunctions The ability to grow epitaxial Al(Ga)N and GaN on NbN x has created an opportunity for the intimate integration of semiconductors with superconductors. To demonstrate this functionality, we have grown a GaN/AlGaN quantum-well heterostructure on the buried epitaxial NbN x superconducting layer, as shown in Fig. 3a . After epitaxial growth of 28-nm NbN x on SiC, a 22-nm AlN layer, a 1.3-μm GaN buffer layer, a 32-nm Al 0.4 Ga 0.6 N barrier, and a 32-nm GaN channel layer are grown successively by MBE in a single run without breaking vacuum. The entire AlN/GaN/AlGaN/GaN heterostructure takes a nitrogen-polar wurtzite form of high crystallinity and has a sharp heterojunction. This is confirmed by Hall-effect measurements of the mobility ( μ ) of a two-dimensional electron gas (2DEG) of about 1,350 cm 2 V −1 s −1 at 300 K and about 3,400 cm 2 V −1 s −1 at 2 K, with two-dimensional densities ( n 2d ) of about 1.3 × 10 13 cm −2 at 300 K and 1.2 × 10 13 cm −2 at 2 K. The 2DEG is formed in a triangular quantum well that is produced at the top GaN/Al 0.4 Ga 0.6 N heterojunction owing to the Berry-phase-driven spontaneous and piezoelectric polarization difference between AlGaN and GaN 36 . The high 2DEG mobility is comparable to that obtained in similar heterostructures without the NbN x buried layer, indicating a successful epitaxial integration. 
The Hall-effect measurement also proves that the 2DEG is electrically isolated from the buried NbN x metal layer. This 2DEG channel has enabled the integration of an HEMT with NbN x ; before describing this integration, we discuss the quantum-transport properties of the 2DEG channel as probed by low-temperature magnetoresistance. Figure 3: Electrical and magnetotransport characterizations of group III/nitride/NbN x heterostructures. a , Cross-section schematic (top) and scanning transmission electron microscopy (STEM) imaging (bottom left and right) of Al(Ga)N/GaN HEMTs/NbN x grown by MBE on SiC substrates. b , Four-probe resistance of the buried epitaxial NbN x layer, showing that it remains superconducting—with a T c of about 7.7 K—after the subsequent growth of the HEMT. c , Measured Δ R xx versus 1/ B at various temperatures, extracted from the longitudinal resistivity ( R xx ) versus 1/ B after background subtraction (see Methods). The resistance oscillation period, Δ(1/ B ), is 0.0038 T −1 , which can be used to estimate the carrier concentration as n s = 1.26 × 10 13 cm −2 . The numbers on the arrows indicate the Landau level indices. d , χ /sinh( χ ) as a function of temperature at B = 12.8 T. The lines are fittings made using effective masses, m *, of 0.2 m e , 0.21 m e and 0.22 m e . The inset shows Dingle plots at various temperatures, allowing extraction of the quantum-scattering time τ q . The linear fit to experimental data gives τ q = 66 fs, which translates to a momentum/quantum-scattering ratio of τ t / τ q = 5.6 >> 1—a clear indication of charged dislocations as the dominant scattering mechanism in this 2DEG 42 . Low-temperature and high-magnetic-field measurements revealed clear Shubnikov–de Haas oscillations in the magnetoresistance of the 2DEG ( Fig. 3c, d ).
These oscillations are commensurate with the magnetic-field-driven formation of Landau levels, and are used to extract the carrier concentration, electron effective mass, and quantum-scattering times 37 , 38 by using the Lifshitz–Kosevich 39 form of the magnetoresistance: Δ R xx ∝ ( χ /sinh χ )exp(−π/ ω c τ q )cos(2π 2 ħ n SdH / eB ). In this equation, the periodicity in inverse magnetic field depends only on the carrier concentration n SdH and the fundamental constants e and ħ . The measured period of Δ(1/ B ) = 0.0038 T −1 shown in Fig. 3c corresponds to a carrier concentration of 1.26 × 10 13 cm −2 , consistent with low-field Hall-effect measurements. The thermal factor χ /sinh( χ ) measures the thermal damping owing to a broadening of Landau levels, with the dimensionless factor χ = 2π 2 k B T / ħω c parametrizing the ratio of the thermal energy to the Landau-level energy separation 39 . Here k B is the Boltzmann constant, T is the temperature, and ω c = eB / m * is the cyclotron frequency with effective mass m *. Figure 3d shows the χ /sinh( χ ) factor plotted against the temperature dependence of the N = 19 Landau-level peak amplitude. The effective mass extracted from this plot is consistent with prior reports for 2DEGs in GaN 40 . Using the measured effective mass, the Dingle factor exp(−π/ ω c τ q ) reveals the quantum-scattering lifetime τ q (ref. 41 ). The inset of Fig. 3d shows that the peak amplitude varies with inverse magnetic field for various temperatures with a characteristic quantum-scattering time of about 66 fs. This value is substantially smaller than the transport-scattering time ( τ t ) extracted from the low-temperature Hall-mobility measurement; the ratio τ t / τ q = 5.6, being much greater than 1, suggests that Coulomb scattering from charged dislocations is the dominant scattering mechanism in the 2DEG 42 . Dislocations with a density of about 10 9 cm −2 are typically present in GaN/AlGaN 2DEGs grown on SiC, Si or other substrates 42 , 43 , 44 .
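The period-to-density conversion and the thermal damping factor described above can be reproduced directly. A sketch using the standard spin-degenerate relation n = 2 e /( h Δ(1/ B )) together with χ = 2π 2 k B T /( ħω c ); the 0.21 m e mass and 12.8 T field are the values used in Fig. 3d:

```python
import math

e = 1.602176634e-19; h = 6.62607015e-34; hbar = h / (2 * math.pi)
k_B = 1.380649e-23; m_e = 9.1093837015e-31

# Carrier density from the Shubnikov-de Haas period (factor 2 for spin degeneracy)
period = 0.0038                    # measured Delta(1/B), T^-1
n_sdh = 2 * e / (h * period)       # m^-2
n_cm2 = n_sdh * 1e-4               # convert to cm^-2

def thermal_factor(T, B, m_eff):
    """Lifshitz-Kosevich thermal damping chi/sinh(chi), chi = 2*pi^2*k_B*T/(hbar*omega_c)."""
    omega_c = e * B / m_eff        # cyclotron frequency
    chi = 2 * math.pi**2 * k_B * T / (hbar * omega_c)
    return chi / math.sinh(chi)

print(f"n_SdH = {n_cm2:.2e} cm^-2")
print(f"chi/sinh(chi) at 2 K, 12.8 T, 0.21 m_e: {thermal_factor(2.0, 12.8, 0.21 * m_e):.3f}")
```

The quoted 0.0038 T −1 period indeed gives ≈1.27 × 10 13 cm −2 , matching the low-field Hall density, and the damping factor falls with increasing temperature as the Landau levels thermally smear.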
We emphasize that the presence of magnetic quantum oscillations demonstrates the high-quality epitaxial growth of the GaN/AlGaN 2DEG on the superconducting NbN x film. We fabricated nitrogen-polar GaN HEMTs as described in ref. 26 . Low-resistance source/drain ohmic contacts were formed to the polarization-induced 2DEG, and 10 nm TiO 2 high-K dielectric was used before depositing the gate metal. Details of the process and device dimensions are described in the Methods. To form an electrical contact to the NbN x layer, we applied a large voltage between two adjacent metal contacts, S and S ′, that were initially isolated from each other by mesa etching ( Fig. 3a ). This process formed a low-resistance contact between S and S ′ through the epitaxial NbN x layer (dashed red line in Fig. 3a ). A four-probe resistance measurement on such contacts (see Fig. 3b and Methods) confirmed that the buried NbN x epilayer retained its superconductivity, with a transition temperature of around 7.7 K, even after the epitaxial growth of the entire nitride heterostructure on top of it and the subsequent device processing and annealing steps. Figure 4a shows the HEMT drain current ( J d ) per unit width, J d = I d / W , in logarithmic scale as a function of the gate voltage for two drain voltages at 5 K. Note that the gate voltage V gs′ (the voltage difference between gate g and source s ′) and drain voltage V ds′ (the voltage difference between drain d and source s ′) are measured with the buried NbN x layer serving as the source load of the HEMT. The gate leakage current is low, and the drain current changes by about six to seven orders of magnitude as the Fermi level of the GaN quantum-well channel is pulled from inside the conduction band at V gs′ = 0 V into the gap at V gs′ = −8 V. The high on/off ratio was also observed at room temperature, as shown in Extended Data Fig. 4 and discussed in the Methods.
Figure 4: Current–voltage characterizations of HEMTs with a superconducting source load at low temperatures. a , Drain current density versus gate–source voltage ( J d – V gs′ ) transfer curves of the HEMTs at 5 K, showing a high on/off ratio at V ds′ = 0.1 V and 1.1 V. The inset shows the equivalent circuit diagram for the device. b – d , J d versus source–drain voltage, V ds′ , for various top-gate voltages, V gs′ , of GaN HEMTs with a buried epitaxial superconductor load at the source side, at temperatures of 10 K ( b ), 7 K ( c ), and 5 K ( d ). The results show that when the NbN x layer becomes superconducting, the transistor output characteristics exhibit a negative differential resistance (NDR), as seen by the decrease in resistance with increasing V ds′ between the iso-power contours. The black solid and dashed lines in panels c , d indicate iso-power contours. To quantify the effect of the superconducting load element, we compare the current in the HEMT from the drain ( D ) to S and S ′ under varying gate voltages. The J d – V gs transfer curve measured at 5 K deviates from the J d – V gs′ transfer curve for currents of greater than 0.1 A mm −1 ( Extended Data Fig. 5 ). Below 0.1 A mm −1 , NbN x remains superconducting with R sc = 0 Ω, and therefore does not contribute to the measured transfer curve. A current larger than 0.1 A mm −1 drives the NbN x into a normal metal state with R sc ≈ 4.6 Ω. The superconductor-to-metal phase transition can occur when the magnetic field is greater than the critical magnetic field ( H c ), when the current density is higher than the critical current density ( J c ), or when the temperature is higher than the critical temperature ( T c ). According to Ampere’s law, the magnetic field resulting from the 2DEG current J d at the superconducting layer is around μ 0 J d /2 ≈ 10 −4 T (that is, much less than H c ). As shown in Extended Data Fig.
6 , we have measured the critical current density of the MBE NbN x to be J c = 10 5 A cm −2 , and for a thickness of t = 28 nm the net current density is estimated to be J = 10 3 A cm −2 (much less than J c ). Thus, we rule out the Meissner effect and high current injection as possible causes of the superconductor-to-metal transition driven by the transistor. We attribute the transition to Joule heating at the semiconductor/superconductor junction. The abrupt appearance of a resistive load lowers the measured transistor current flowing across D -to- S ′, changing the transfer curve. Further investigation of this electronic phase change in the load shows that the effect is strong enough to drive a negative differential resistance (NDR) in the transistor output characteristics. Figure 4b–d shows the measured J d – V ds′ output characteristics of the HEMT as a function of gate voltages, measured at 10 K, 7 K and 5 K, with the NbN x layer as the source load. At temperatures of 10 K (greater than T c ), the NbN x layer acts as a resistive load at all bias conditions, and J d increases monotonically with V ds′ ( Fig. 4b ). As the temperature is lowered to 7 K (less than T c ), the NbN x load drops to its zero-resistance state. This is characterized by a lower transistor on-resistance and a weak NDR ( Fig. 4c ). As the power level is increased, Joule heating warms the NbN x /AlN/GaN junction to temperatures higher than T c , turning the surrounding superconducting NbN x into a normal metal, and thus lowering the channel current. The abrupt increase in resistance caused by the superconductor-to-metal transition leads directly to the appearance of an NDR, as clearly seen in Fig. 4c, d . The transition regime of the load from the superconducting phase to the normal metal for all J d – V ds′ curves at 7 K and 5 K lies within two iso-power contours, P = I × V (solid and dashed lines in Fig. 4c, d ).
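The Ampere's-law estimate used above to rule out a field-driven transition treats the 2DEG as an infinite current sheet, which produces B = μ 0 K /2 on either side. A sketch with the quoted 0.1 A mm −1 sheet current:

```python
import math

mu0 = 4e-7 * math.pi        # vacuum permeability (T m A^-1)

# Sheet current density at which the load transition occurs: 0.1 A per mm of width
K = 0.1 / 1e-3              # A/m

# Field of an infinite current sheet on either side of the 2DEG
B = mu0 * K / 2
print(f"B = {B:.1e} T")     # of order 1e-4 T, far below the measured critical fields
```

The result is roughly 6 × 10 −5 T, i.e. of order 10 −4 T as quoted, orders of magnitude below the tesla-scale critical fields of the NbN x film.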
This is a clear indication that the phase transition is thermally induced by Joule heating, and not through the critical current or critical magnetic field of NbN x . A critical-current-mediated manifestation or a critical-magnetic-field-mediated phase transition would have caused an NDR at the same current level, not the same power level. This form of a phase-transition element attached at the source contact of a transistor has been used to demonstrate sub-Boltzmann switching in silicon and GaN transistors at room temperature 45 . In such phase-field-effect transistors, the phase change was obtained through a filamentary metal-to-insulator transition in VO 2 that was driven through a combination of thermal phase transition and Mott–Hubbard interactions by injected current. The superconducting phase transition at a low temperature in the hybrid superconductor–transistor phase-field-effect transistors and the resulting NDR behaviour has not been observed before. Conclusions The successful epitaxial integration of group III/nitride semiconductors and transistor gain elements with NbN x -based superconductors points towards several new opportunities. Just as the development of reduced surface and interface states of silicon paved the way for the metal-oxide-semiconductor field-effect transistor, so do epitaxial NbN x /group III/nitride structures offer the possibility of defect-free metal/semiconductor heterojunctions. Semiconductor transistors were instrumental in the discovery of the quantum-Hall effect 29 , which led to the discovery of topological insulators 46 and introduced topology into condensed-matter physics 47 . Epitaxial integration of semiconductor/superconductor heterostructures could enable phenomena that require both materials families, such as the Majorana zero-modes for braiding-based, topologically protected quantum computation 11 , 48 . 
Moreover, the presence of spontaneous and piezoelectric polarization induced by broken inversion symmetry in group III/nitride semiconductor crystals 49 offers the possibility of Rashba-driven topological insulators 50 . In more near-term applications, NbN x -based single-photon detectors can now be epitaxially integrated with GaN HEMT amplifiers for secure quantum communications. Finally, combining GaN HEMT microwave amplifiers with NbN x -based Josephson junctions can provide an all-epitaxial platform for superconducting qubits whereby the most desirable properties of semiconductors and superconductors are combined epitaxially in a seamless braid. Methods We describe here in detail the epitaxial growth and structural, magnetic and electronic characterization of the group III/nitride semiconductor heterostructures and NbN x superconductors. We also describe the method of fabrication, as well as measurements and characterization, of the epitaxial semiconductor transistor/superconductor heterostructures and devices. MBE growth Epitaxial NbN x films were grown at 800 °C by radio-frequency plasma-assisted MBE on three-inch-diameter, metal-polar semi-insulating 4H- and 6H-SiC substrates. The substrates had been commercially polished using chemical–mechanical polishing to an epi-ready finish, and were used as received. The reactive nitrogen was generated using a radio-frequency plasma source fed by ultrahigh-purity N 2 , which was further purified by an in-line purifier. The Nb flux was generated using an in situ electron-beam evaporator source with 3N5-pure (excluding tantalum, Ta) Nb pellets in a tungsten hearth liner. Further details regarding MBE growth conditions are in ref. 25 . Structural measurements We measured the surface morphology of the MBE-grown NbN x films using a Bruker Dimension FastScan atomic force microscope in tapping mode. 
The root-mean-square roughness of the 5-nm NbN x film on SiC is 0.15 nm in an area of 3 × 3 μm 2 , and 0.56 nm for the 35-nm film ( Extended Data Fig. 2 ). We determined the lattice constants and phase of the NbN x films through X-ray diffraction (XRD) measurements, using a Rigaku system that employs a rotating copper anode to produce Cu-Kα radiation. Structural properties and lattice parameters of NbN x on SiC are given in ref. 51 . The measured XRD spectra of the 5-nm and 35-nm films are shown in Extended Data Fig. 3 : the peaks for NbN x are seen in first- and second-order reflection of the SiC (0004) plane in the relatively thick 35-nm sample, but the peaks are absent in the 5-nm sample because of the weak XRD signal in such thin films. Transport and magnetic measurements All of our electrical transport, Hall-effect and VSM measurements were carried out in a physical property measurement system (PPMS) manufactured by Quantum Design Inc. Extended Data Table 1 summarizes the basic material parameters extracted from measurements on the 5-nm and 35-nm samples. Extended Data Fig. 6 shows that, at 300 K, the carrier density in these NbN x films is as high as n 3 d ≈ 10 23 cm −3 , but the mobility is less than 1 cm 2 V −1 s −1 , probably limited by impurities, crystal defects and phonons. In terms of the superconducting behaviour, critical temperatures extracted from electrical resistance and VSM measurements are consistent with each other, but vary slightly from sample to sample in the range 6–15 K. Characterization of the dependence of superconductivity on the in-plane magnetic field was limited by the 14 T field capabilities of our measurement system. We anticipate that our ongoing low-temperature and high-magnetic-field (up to 35 T) measurements will provide deeper insights into the orbital/spin pair-breaking mechanism and the presence of spin-orbit scattering. Critical current density Extended Data Fig. 6b, c shows the measured J c in our MBE-grown NbN x films. 
We carried out the measurements by injecting current into the film and detecting the voltage drop when the injected current density exceeded J c . The measured values are close to 10 5 A cm −2 for MBE-grown NbN x . Pair-breaking mechanisms in epitaxial thin NbN x films The in-plane critical fields measured for the 5-nm NbN x epitaxial superconducting layers are much higher than expected from the out-of-plane critical fields and the Tinkham formula. Mechanisms that could lead to this phenomenon include a modified electron g -factor 31 , the presence of spin-orbit scattering 52 , and Rashba spin-orbit coupling 32 . If the mechanism were a modified electron g -factor, then given that the measured critical in-plane field is a factor of two larger than the Pauli limit, the epitaxial NbN x would need an effective g -factor of less than 1, which we find unlikely: because NbN x is a good metal, we suspect its effective g -factor to be close to 2 (ref. 53 ). Spin-orbit scattering is possible in the MBE-grown NbN x films owing to the presence of trace amounts of Ta in the purest available Nb sources. However, the dilute concentration of Ta that we have measured in our MBE NbN x suggests that this scenario is unlikely. Finally, the presence of Rashba spin-splitting owing to broken inversion symmetry of the samples has recently been suggested 32 as a mechanism by which to suppress the Pauli paramagnetic limit. Because our films are grown in an asymmetric stack, we find this the most plausible explanation. Previous experimental and theoretical work has suggested the importance of Rashba spin-orbit coupling in identifying an anomalously large H c2 ∥ . However, our epitaxial NbN x provides a platform for testing this idea directly, because of the ability to grow nominally symmetric stacks in NbN x —a feat difficult in ultrathin lead films 32 , but potentially possible, if challenging, in two-dimensional materials. Shubnikov–de Haas oscillations Extended Data Fig.
7a shows the raw measured Shubnikov–de Haas oscillations of the GaN/AlGaN 2DEG grown epitaxially on NbN x layers. The oscillations become sharper as the temperature is lowered. We used these Shubnikov–de Haas measurements, with a fit to the Lifshitz–Kosevich form of the magnetoresistance, to extract carrier concentration, effective mass and quantum-scattering times as discussed in the main text. The magnetoresistance data were uniformly resampled over an inverse magnetic field, and then smoothed over a window of 0.00056 T −1 before background subtraction. To extract the effective mass and quantum-scattering times, we removed the non-oscillating background component of the resistance and used the oscillatory components ( Extended Data Fig. 7a inset). A non-oscillatory background of the form p (1/ B ) = a + b / B 1/2 + c / B was subtracted from the R xx data before fitting to the Lifshitz–Kosevich form. Extended Data Fig. 7b shows a Landau plot of the Shubnikov–de Haas oscillation peaks. The range of magnetic fields used in this measurement, 0–14 T, allowed the Fermi level to fill 19–25 Landau levels at a fixed 2DEG density of n 2 d ≈ 1.2 × 10 13 cm –2 . Superconductor/semiconductor transistor devices To fabricate the GaN HEMT structure ( Extended Data Fig. 4a inset), we first grew 28-nm NbN x on a 6H-SiC substrate by MBE; this was followed by nucleation with 22 nm AlN, two-step application of a 1.3-μm GaN buffer layer, and then growth of 32-nm Al 0.4 Ga 0.6 N and 32-nm GaN channel at 700 °C. After the growth, ohmic contacts with Ti/Al/Ni/Au (20/100/10/50 nm) stacks were defined by optical lithography and electron-beam evaporation. Rapid thermal annealing at 850 °C produced ohmic contacts with a contact resistance of 0.4 Ω mm −1 . Inductively coupled plasma etching with a Cl 2 /BCl 3 /Ar gas was then used to isolate separate HEMTs. 
To reduce the gate leakage current, we deposited a 10-nm-thick, high-K dielectric layer of TiO 2 by atomic-layer deposition at 300 °C; this was followed by Pt/Au (30/200 nm) electron-beam evaporation to produce the gate metal stack. Finally, the TiO 2 on top of the drain and source contacts was removed with fluorine-based plasma etching, and a second metalization of Ti/Pt/Au (25/25/400 nm) was performed. Using fabricated van der Pauw structures, we performed Hall-effect measurements on the 2DEG at the GaN/Al 0.4 Ga 0.6 N interface. We determined the electron concentration to be 1.3 × 10 13 cm −2 , with a mobility of 1,350 cm 2 V −1 s −1 at room temperature and 3,400 cm 2 V −1 s −1 at 2 K, indicating that a high-quality 2DEG channel is achieved in these heterostructures and, more importantly, that processing did not lead to performance degradation. A representative room-temperature electrical characterization of the fabricated GaN HEMTs on NbN x is shown in Extended Data Fig. 4 . For a gate length of L g = 1 μm and gate width of W = 75 μm, the transistors show an I on / I off ratio of more than 10 5 ( Extended Data Fig. 4a ). The on-current density exceeds 1 A mm −1 at V d = 3 V and V g = −1 V, with a clear current saturation ( Extended Data Fig. 4b ). Overall, the properties of the transistors studied here are similar to those of GaN HEMTs that are grown directly on SiC without the NbN x layer underneath 28 . This is, to our knowledge, the first successful direct epitaxial integration of a high-performance semiconductor transistor on a superconductor. Extended Data Fig. 5 shows the measured drain currents without the superconductor load ( J ds ; solid lines) and with the superconductor load ( J ds′ ; circles) at 5 K (less than T c ) under two different drain voltages in a linear scale.
We can see that, when V ds and V ds′ are 0.1 V, the drain currents as a function of gate voltage are identical, because the NbN x remains superconducting with a resistance of 0 Ω throughout this gate-voltage range. However, when V ds and V ds′ are 1.1 V, the J d versus V gs′ curve deviates from the J d versus V gs curve once J d exceeds 0.1 A mm −1 . This indicates the occurrence of a superconductor-to-metal phase transition of the NbN x film at the source end driven by this current (power) level. Determination of N/Nb ratio ( x ) by SIMS Extended Data Fig. 8 shows a SIMS measurement of the entire HEMT epitaxial heterostructure. Sharp and abrupt transitions of the SiC/superconductor, superconductor/AlN, AlN/GaN and AlGaN/GaN heterointerfaces are observed. The SIMS profile provides a calibrated measurement of the stoichiometry of each layer of the heterostructure. The semiconducting AlN, GaN and AlGaN layers are perfectly stoichiometric within the limits of the measurement, and the NbN x layer has an N/Nb ratio ( x ) of 43.3/56.7 = 0.762. Extended Data Table 2 shows additional N/Nb ratios measured by Rutherford back scattering (RBS) and SIMS, as well as the relation between N/Nb ratios, the residual resistance ratio (RRR) and the superconducting transition temperature. Data availability The datasets generated and analysed here are available from the corresponding author on reasonable request.
A team led by Debdeep Jena, professor of electrical and computer engineering (ECE), and David Meyer, head of the Wide Bandgap Materials and Devices section at the Naval Research Laboratory, has successfully devised a semiconductor-superconductor crystal structure featuring GaN grown directly onto a crystal of niobium nitride (NbN), a proven superconductor material used in quantum communications, astronomy and a host of other applications. The group's paper, "GaN/NbN Epitaxial Semiconductor/Superconductor Heterostructures," is being published online March 8 in Nature. Former postdoctoral researcher Rusen Yan and current postdoc Guru Khalsa are co-lead authors. Other key contributors were Grace Xing, the Richard Lundquist Sesquicentennial Professor in ECE and MSE, and David Muller, the Samuel B. Eckert Professor of Engineering in the Department of Applied and Engineering Physics. The method for combining the two materials – molecular beam epitaxy (MBE), essentially spray painting of gallium and nitrogen atoms onto the NbN in a vacuum environment – creates an extremely clean interface and is key to the success of the novel structure. This advance, the group says, opens up a range of possibilities that can now combine the macroscopic quantum effects of superconductors with the rich electronic and photonic properties of group III-nitride semiconductors. "People have tried it with other semiconductors, like silicon and gallium arsenide, but I don't think anything has been as successful as what we've managed to do with GaN," said Jena, who has a dual appointment with the Department of Materials Science and Engineering (MSE). Gallium nitride-based semiconductors have recently made major inroads in the areas of LED lighting, Blu-ray laser diodes, energy and communications. In fact, the 2014 Nobel Prize in physics was given to a trio of Japanese scientists for their invention of energy-efficient blue light-emitting diodes (LEDs) using GaN. 
Technological advances – particularly the type of MBE used in this work, which was developed at the Naval Research Laboratory – have made it possible for scientists to think about semiconductor-superconductor heterostructures such as the one Jena's group has developed. The specialized nitride MBE system includes an electron beam evaporator source, which "melts" the niobium – which has a melting point of around 4,500 degrees – but not the crucible it's in. Atoms of niobium are deposited onto a silicon carbide wafer, and the GaN semiconductor layers are then grown on top of that, also by MBE. "This new source allowed us to overcome the temperature limitations of conventional sources, and bring high-melting-point, refractory transition metals like niobium and tantalum into the picture," Meyer said. The team demonstrated for the first time the growth and fabrication of a semiconductor transistor switch, the prototypical gain element in electronics, directly on top of a crystalline superconductor layer. This heterostructure is a kind of "best of both worlds," Jena said, offering a method for devising quantum computation and highly secure communications systems. "There are some things that we would love to do with quantum systems – quantum computation and cryptography, things that are not possible in classical systems," he said. "On the other hand, there are things that classical systems are much better at than quantum systems. And there is this mesozone where you can do wonderful things by mixing and matching the two." "We think this presents a wonderful opportunity for rapid technology development of next-generation communications and computation systems," Meyer said. | 10.1038/nature25768
Medicine | An oral medication shows benefits treating Type 1 diabetes for at least two years after diagnosis | Exploratory study reveals far reaching systemic and cellular effects of verapamil treatment in subjects with type 1 diabetes, Nature Communications (2022). DOI: 10.1038/s41467-022-28826-3 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-022-28826-3 | https://medicalxpress.com/news/2022-03-oral-medication-benefits-diabetes-years.html | Abstract Currently, no oral medications are available for type 1 diabetes (T1D). While our recent randomized placebo-controlled T1D trial revealed that oral verapamil had short-term beneficial effects, their duration and underlying mechanisms remained elusive. Now, our global T1D serum proteomics analysis identified chromogranin A (CHGA), a T1D-autoantigen, as the top protein altered by verapamil and as a potential therapeutic marker and revealed that verapamil normalizes serum CHGA levels and reverses T1D-induced elevations in circulating proinflammatory T-follicular-helper cell markers. RNA-sequencing further confirmed that verapamil regulates the thioredoxin system and promotes an anti-oxidative, anti-apoptotic and immunomodulatory gene expression profile in human islets. Moreover, continuous use of oral verapamil delayed T1D progression, promoted endogenous beta-cell function and lowered insulin requirements and serum CHGA levels for at least 2 years and these benefits were lost upon discontinuation. Thus, the current studies provide crucial mechanistic and clinical insight into the beneficial effects of verapamil in T1D. Introduction Diabetes continues to grow as a chronic global health problem, affecting people of all ages. Since the discovery of insulin, a century ago, therapies have improved dramatically, but many critical needs and hurdles remain and prevent subjects with diabetes from living a truly normal life. 
In the case of type 1 diabetes (T1D), which involves autoimmune and inflammatory processes and destruction of insulin-producing pancreatic beta cells, exogenous insulin is still the only available therapy, and it is associated with the inherent risk of low blood glucose levels or hypoglycemic episodes that can be life-threatening. In addition, administration of insulin entails either continuous infusion via a pump or multiple daily injections and no oral medications are available. Recently, we reported the results of a small, phase 2, randomized placebo-controlled trial, using oral verapamil, an approved blood pressure medication, in new-onset T1D subjects 1 . Subjects receiving verapamil had improved endogenous beta cell function (as measured by a 2 h mixed-meal-stimulated C-peptide area under the curve (AUC)), lower insulin requirements, and fewer hypoglycemic events as compared to individuals getting placebo added to their standard insulin regimen 1 . Even though all subjects were normotensive, verapamil did not lead to any hypotension or any adverse events. While highly promising, these findings also raised a number of new mechanistic questions, including what exact biological changes verapamil elicits in humans with T1D, how long they may last, and how these changes and any potential associated therapeutic success could be monitored. The current studies were aimed at addressing these questions using proteomics, transcriptomics, and pathophysiological approaches. In this work, we now show that verapamil reverses T1D associated increases in serum CHGA levels, proinflammatory interleukin-21 (IL-21) levels, and T-follicular-helper (Tfh) cell markers and promotes an anti-oxidative, anti-apoptotic and immunomodulatory gene expression profile in human islets. 
In addition, our results suggest that continuous use of oral verapamil in subjects with T1D may delay loss of beta cell function and lower insulin requirements for at least 2 years post-diagnosis and that such therapeutic success or disease progression can be monitored by changes in serum CHGA. Results To assess potential systemic changes in response to verapamil treatment, we conducted a global proteomics analysis using liquid chromatography-tandem mass spectrometry (LC-MS/MS) of serum samples from subjects at baseline and after 1 year of receiving verapamil or placebo and determined the effect of treatment on changes over time. Ten subjects had sufficient usable serum for LC-MS/MS at both of these time points, resulting in 20 samples used for proteomics analysis (Supplementary Fig. 1 ). The baseline subject characteristics of this subset and of the full set of study participants are shown in Supplementary Table 1 and they demonstrate that there were no significant differences between the treatment groups with respect to age, gender, race, BMI, or HbA1C. Also, analysis of serum from the same subject before and 1 year after treatment allowed each subject to serve as their own baseline control and avoid confounding effects from inter-individual variability. Applying a global serum proteomics workflow (Supplementary Fig. 1 ) along with this tightly controlled design, we were able to quantify 31,457 unique peptides and 867 proteins (<1% false discovery rate, FDR) with TMT reporter ion intensity data across all channels without missing data in the 20 serum samples (Supplementary Fig. 2 ). The raw dataset from this study is available at ProteomeXchange (accession # PXD026601). Following statistical analysis on treatment effect using a linear regression model, 53 proteins were identified whose relative abundance over time was significantly altered ( P < 0.05) by verapamil treatment (Supplementary Table 2 ).
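The per-protein treatment-effect screen just described can be illustrated with a simplified stand-in. The paper fits a linear regression model on changes over time; the sketch below instead runs a permutation test on the baseline-to-year-1 changes between the two groups, which is not the authors' exact model but conveys the shape of the analysis (the array layout, 0/1 treatment encoding, and permutation count are all illustrative assumptions).

```python
import numpy as np

def treatment_effect_pvalues(delta_log2, treated, n_perm=5000, seed=0):
    """Sketch (not the paper's exact model): for each protein, test whether
    the baseline-to-year-1 change in log2 abundance differs between the
    verapamil and placebo groups via a permutation test on the difference
    of group means.

    delta_log2 : (n_subjects, n_proteins) array of Y1 - BL log2 abundances
    treated    : (n_subjects,) 0/1 indicator (1 = verapamil)"""
    rng = np.random.default_rng(seed)
    delta = np.asarray(delta_log2, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    observed = delta[treated].mean(axis=0) - delta[~treated].mean(axis=0)
    hits = np.zeros(delta.shape[1])
    for _ in range(n_perm):
        perm = rng.permutation(treated)           # shuffle group labels
        stat = delta[perm].mean(axis=0) - delta[~perm].mean(axis=0)
        hits += np.abs(stat) >= np.abs(observed)
    return (hits + 1) / (n_perm + 1)              # two-sided permutation p
```

With only 5 subjects per arm, the permutation null has just 252 distinct relabelings, which is why exploratory cohorts of this size need independent confirmation.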
Enrichr 2 analysis of these proteins revealed enrichment for gene ontology biological processes such as neutrophil-mediated immunity and regulation of acute inflammatory and humoral immune responses as well as regulation of cellular metabolic processes (Supplementary Table 3 ). In fact, we observed a downregulation of leukocyte immunoglobulin-like receptor subfamily A member 3 (LILRA3 aka CD85e) and of secreted and transmembrane protein 1 (SECTM1) in response to verapamil, both proteins known to be involved in immune modulation (Supplementary Table 2 ). The cluster of differentiation 81 (CD81), which associates with CD4 and CD8 on T cells and provides a costimulatory signal with CD3, was also downregulated by verapamil, as was osteoclast-associated immunoglobulin-like receptor (OSCAR), a member of the leukocyte receptor complex protein family that regulates innate and adaptive immune responses. Interestingly though, chromogranin A (CHGA) emerged as the top serum protein altered by verapamil treatment in subjects with T1D, exhibiting the most significant change in relative abundance over time in response to treatment as assessed by linear regression (Supplementary Table 2 ). In addition, two-sided t-test and paired t-test demonstrated that CHGA (unlike most other proteins) was also significantly downregulated in the verapamil group at year 1 as compared to placebo ( P = 0.007) and as compared to baseline ( P = 0.004), respectively. CHGA is a secreted glycoprotein produced by neuroendocrine cells and as such, it has been well established as a marker of neuroendocrine tumors. Within the cells, CHGA is localized in secretory granules including those of pancreatic beta cells, raising the possibility that changes in its circulating levels might also reflect alterations in beta cell integrity. Indeed, while on average the relative abundance of CHGA in serum did not significantly change during the first year of T1D in control subjects receiving placebo (Fig.
1a ), it decreased significantly in each subject receiving verapamil (Fig. 1b ). Moreover, comparison of these baseline-to-year-1 changes in the verapamil group as compared to the control group also revealed a clear and significant difference between the study groups (Fig. 1c ). To further confirm these LC-MS/MS results, we also measured serum levels of CHGA by ELISA in the full set of study subjects (Supplementary Table 1 ). The results were very much in alignment and again revealed no significant change over time in controls (Fig. 1d ), but a significant and consistent decrease in serum CHGA after 1 year of verapamil treatment (Fig. 1e ). Comparison of these baseline-to-year-1 changes in the verapamil group versus the control group again demonstrated a striking and significant difference between the study groups (Fig. 1f ). Interestingly, serum CHGA levels in healthy, non-diabetic volunteers were ~2-fold lower as compared to those in subjects with T1D, but after 1 year of verapamil treatment, there was no longer any significant difference between verapamil-treated T1D subjects and healthy individuals (Fig. 1g ). Next, we compared these results to available mixed-meal-stimulated C-peptide AUC data from the same subjects. The stimulated C-peptide AUC has remained the gold standard for assessing pancreatic beta cell function in T1D, with a decrease indicating disease progression, whereas stable levels or an increase are considered signs of successful therapeutic intervention 3 , 4 , 5 . In fact, serum CHGA showed a significant inverse correlation with C-peptide AUC (Fig. 1h ). Moreover, C-peptide AUC decreased in each individual receiving placebo, while trending up in subjects getting verapamil (Supplementary Fig. 3a, b ), providing a mirror image of the changes in CHGA observed in the same subjects (Fig. 1a, b ). In addition, individual longitudinal changes in CHGA inversely correlated with individual longitudinal changes in C-peptide AUC (Supplementary Fig. 3c ).
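The reported inverse correlation between longitudinal CHGA changes and C-peptide AUC changes is, at its simplest, a correlation of paired change scores. A plain Pearson coefficient is sketched below as an approximation; the paper itself reports a repeated-measures correlation from a mixed model, which additionally accounts for within-subject pairing.

```python
import numpy as np

def pearson_r(x, y):
    """Plain Pearson correlation coefficient (numpy only).

    A simplified stand-in for the repeated-measures correlation used in
    the paper: x could hold per-subject changes in serum CHGA and y the
    corresponding changes in C-peptide AUC; r < 0 indicates the inverse
    relationship described in the text."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])
```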
Thus, serum CHGA seems to reflect changes in beta cell function in response to verapamil treatment or T1D progression and therefore may provide a longitudinal marker of treatment success or disease worsening. This notion is supported by recent reports suggesting that CHGA may serve as a biomarker for some autoimmune diseases including T1D 6 , 7 . Also, testing for CHGA only requires a simple blood draw and therefore may provide an easy and straightforward way to monitor changes in response to therapy or T1D progression over time. This would address a critical need, as the lack of a simple longitudinal marker has been a major challenge in the T1D field. Fig. 1: Serum CHGA in response to verapamil treatment of subjects with T1D. CHGA as assessed by LC-MS/MS ( y -axis represents relative abundance levels in zero-centered log2 form) in serum at baseline (BL) or at 1 year (Y1) of individual control subjects with T1D receiving placebo (black) ( n = 5) (NS) ( a ) or verapamil (red) ( n = 5) ( t 4 = 5.966, *P = 0.0040) ( b ). Comparison of the changes in CHGA (BL to Y1) as assessed by LC-MS/MS in the verapamil and the placebo group ( t 8 = 3.674, *P = 0.0063) ( c ). Serum CHGA levels at BL or Y1 as assessed by ELISA in individual control subjects with T1D receiving placebo ( n = 6) (NS) ( d ) or verapamil ( n = 9) ( t 8 = 5.44, *P = 0.0006) ( e ). Comparison of the changes in CHGA (BL to Y1) as assessed by ELISA in the verapamil and the placebo group ( t 13 = 4.497, *P = 0.0006) ( f ). Serum CHGA levels in healthy, non-diabetic volunteers (blue) ( n = 9) as assessed by ELISA and compared to subjects with T1D at baseline (T1D BL) (white) ( n = 15), subjects with T1D getting placebo for 1 year (Control T1D Y1) ( n = 6) or receiving verapamil for 1 year (Verapamil T1D Y1) ( n = 9) ( F 3,35 = 4.392, * P = 0.01) ( g ). Correlation of C-pep AUC and serum CHGA ( R = −0.62, P = 0.0026) ( h ). Bars represent means ± SEM. For a , b , d , e , two-tailed, paired Student’s t -test. 
For c , f , two-tailed Student's t -test. For g , one-way ANOVA, and for h , repeated measures correlation coefficient by mixed model. Subject characteristics are listed in Supplementary Table 1 . Source data are provided as a Source Data file. Therefore, to determine whether longitudinal changes in serum CHGA would continue to mirror therapeutic effects or disease progression over a longer period of time, we measured CHGA levels in a small number of study subjects with T1D who had received verapamil in year 1 and continued the treatment for a second year after diagnosis, as well as in control study participants who had never received verapamil. In addition, a subset of verapamil users discontinued the treatment after completion of the 1-year study, and their serum CHGA was analyzed as well. The subject characteristics of these subgroups are shown in Supplementary Table 4 . Interestingly, we found that CHGA levels continued to decline and remained lower over the 2 years of verapamil treatment as compared to control subjects just receiving standard T1D treatment (Fig. 2a ). Most strikingly, CHGA levels rose in those subjects that discontinued verapamil in year 2 (Fig. 2a ). Inversely, C-peptide AUC remained stable over the 2-year period in subjects taking verapamil, whereas it continued to decline during the second year in the control group (Fig. 2b ). Moreover, discontinuation of verapamil led to a sharp drop in C-peptide AUC during year 2 (Fig. 2b ). Consistent with these changes, the insulin dose required to control blood glucose levels remained low and stable over the 2-year period with verapamil treatment but continued to increase in the control group (Fig. 2c ). Also, discontinuation of verapamil resulted in a clear increase in insulin requirements during year 2 (Fig. 2c ). Blood glucose control as assessed by HbA1C remained stable over the 2-year period and was similar in the different study groups (Fig. 2d ).
The corresponding individual data are shown in Supplementary Fig. 4a–d , respectively. Together, these results not only show that changes in CHGA continue to reflect alterations in beta cell function over time, providing an attractive longitudinal marker, but also demonstrate that with continuous use the beneficial effects of verapamil in subjects with T1D persist for at least 2 years and that verapamil treatment effectively keeps exogenous insulin requirements low. Fig. 2: Insulin requirements, beta cell function, and CHGA over 2 years of T1D treatment with verapamil. Changes over time in serum CHGA as assessed by ELISA ( F 4,19 = 8.723, P = 0.0003) ( a ), C-peptide AUC ( F 4,19 = 4.346, P = 0.012) ( b ), daily insulin dose ( F 4,23 = 3.094, P = 0.036) ( c ) and blood glucose control as assessed by HbA1C ( d ) in subjects with T1D receiving verapamil for 2 years (Verapamil; n = 5), discontinuing verapamil after the first year (Disc V; n = 4), or not taking any verapamil (Control; n = 6). Means ± SEM are shown, two-way repeated-measures ANOVA. Subject characteristics of these subgroups are shown in Supplementary Table 4 . Source data are provided as a Source Data file. Intriguingly, CHGA has also been identified as an autoantigen in T1D and one of its peptide fragments has been reported to be recognized as an epitope by diabetogenic T-cells 8 , 9 . Together with our global serum proteomics results and our discovery that verapamil effectively lowered serum CHGA levels and resulted in persistent beneficial effects in the context of autoimmune T1D, this raised the question of whether verapamil might also have any effects on T-cells. We, therefore, analyzed T1D-induced changes in T-cell markers using peripheral blood mononuclear cells (PBMCs) from the same study participants with T1D whose serum was analyzed by LC-MS/MS and compared them to available PBMCs from non-diabetic healthy volunteers (Supplementary Table 1 ).
General markers of CD4 T-helper (Th) cells and of proinflammatory Th1 cells such as C-X-C chemokine receptor type 3 (CXCR3, aka CD183) and signal transducer and activator of transcription 4 (STAT4), were not significantly altered in subjects with T1D as compared to healthy controls and were not affected by verapamil treatment (Fig. 3a–c ). In contrast, expression of CXCR5 (aka CD185), a surface marker of proinflammatory T follicular helper (Tfh) cells, and the Tfh signature cytokine, interleukin 21 (IL21) were significantly elevated in PBMCs of subjects with T1D as compared to healthy controls and these changes were reversed by verapamil treatment (Fig. 3d, e ). Of note, serum IL-21 levels followed the same pattern, revealing again significantly elevated levels in T1D as compared to healthy controls and a ~2-fold reduction in response to verapamil treatment (Fig. 3f ). Absolute serum IL-21 levels in healthy controls were comparable to those of healthy adults reported previously 10 , 11 . Also, the comparison of individual changes in these markers showed a similar trend with verapamil preventing the increase in CXCR5 from BL to Y1 and leading to a significant decrease in serum IL21 (Supplementary Fig. 5 ). This apparent increase in Tfh cell markers in subjects with T1D is consistent with recent reports of a Tfh cell signature in T1D that also included elevation in circulating Tfh cells and IL-21 12 , 13 , 14 , 15 . In addition, these changes have been suggested to play a role in the pathogenesis of T1D and to be potentially amenable to interventions 12 , 13 , 14 , 15 . Now our results reveal for the first time that verapamil treatment can reverse these T1D-induced changes. This suggests that verapamil, and/or the T1D improvements achieved by it, can modulate some proinflammatory cytokines and Th cell subsets, which in turn may contribute to the overall beneficial effects observed clinically. Fig. 3: Effects of T1D and verapamil treatment on T-cells. 
Expression of the T-cell markers CD4 (NS) ( a ), CXCR3 (NS) ( b ), STAT4 (NS) ( c ), CXCR5 ( H 3 = 14.011, * P = 0.003) ( d ) and IL21 ( H 3 = 11.516, * P = 0.009) ( e ) as assessed by qPCR in PBMCs from healthy, non-diabetic volunteers (blue) ( n = 7), subjects with T1D at baseline (T1D BL) (white) ( n = 10), subjects with T1D getting placebo for 1 year (Control T1D Y1) (grey) ( n = 5) or receiving verapamil for 1 year (Verapamil T1D Y1) (red) ( n = 5). Serum levels of the proinflammatory cytokine IL-21 as assessed by ELISA ( H 3 = 11.847, * P = 0.008) ( f ). Bars represent means ± SEM; Kruskal-Wallis (nonparametric ANOVA) and Dunn's multiple comparisons. Source data are provided as a Source Data file. The fact that these clinical improvements included an increase in C-peptide AUC and preservation of endogenous beta cell function suggested that verapamil might also directly affect pancreatic islets. We, therefore, next investigated the effects of verapamil on the overall gene expression profile by conducting RNA sequencing of verapamil-treated ( n = 3) or untreated ( n = 3) human islet samples from three different donors, with each donor serving as its own control. The dataset generated during the current study is available in the GEO repository (accession # GSE181328). Indeed, this transcriptomics analysis revealed that a large but comparable number of genes was up- as well as downregulated in response to verapamil (907 up and 619 down, respectively) as shown in the volcano plot (Fig. 4a ). Further global analysis of these genes by gene ontology showed enrichment for a variety of biological processes, but interestingly three of the top ten processes were related to neutrophil-mediated immunity, degranulation, and activation (Supplementary Table 5 ).
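The four-group comparisons in Fig. 3 use a Kruskal–Wallis omnibus test followed by Dunn's post hoc comparisons. The H statistic itself is straightforward to compute from midranks; a numpy-only sketch is below (Dunn's test is omitted, and the function name and test values are illustrative).

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic with the standard tie correction.

    Groups are rank-transformed jointly (ties receive midranks), then
    H = 12 / (n(n+1)) * sum_i R_i^2 / n_i - 3(n+1), divided by the tie
    correction factor."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    n = data.size
    _, inv, counts = np.unique(data, return_inverse=True, return_counts=True)
    csum = np.cumsum(counts)
    midranks = csum - (counts - 1) / 2.0   # average rank of each tied value
    ranks = midranks[inv]                  # rank of every observation
    h = 0.0
    start = 0
    for g in groups:
        size = len(g)
        h += ranks[start:start + size].sum() ** 2 / size
        start += size
    h = 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)
    tie = 1.0 - (counts ** 3 - counts).sum() / (n ** 3 - n)
    return h / tie if tie > 0 else float("nan")
```

Under the null, H is approximately chi-squared with (number of groups − 1) degrees of freedom, matching the H 3 values quoted in the caption for four groups.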
Of note, these are the same three processes identified in the Enrichr analysis of the human serum proteomics results (Supplementary Table 3 ), further supporting the notion that verapamil may also have some immune-modulatory functions. Recently, the notion of pancreas-resident or infiltrating neutrophils has been established 16 and some of the leucocyte cell surface molecules included in the neutrophil-associated processes identified by Enrichr have previously been found to be expressed in isolated pancreatic islets and islet endothelial cells 17 , 18 . This makes it tempting to speculate that changes in these islet-resident cells may have contributed to this surprising signature in the islets. Moreover, another process that was identified by multiple enriched terms was antigen processing and presentation by major histocompatibility complex (MHC) class I molecules (Supplementary Table 5 ). Intriguingly, upregulation of islet MHC class I and class II antigen expression has been suggested as a defining feature of T1D and as a marker for the associated interferon alpha response and inflammation 19 , 20 , 21 . In alignment with the normalization of proinflammatory markers found in response to verapamil, we observed downregulation of genes encoding human leucocyte antigen (HLA)-A, HLA-B, HLA-C, HLA-G (MHC class I) as well as HLA-DPA1 and HLA-DRA (MHC class II) (Fig. 4b–d ). In terms of individual genes, the most strongly upregulated gene in response to verapamil was insulin-induced gene 1 ( INSIG1 ) (Fig. 4a ), which has previously been reported to promote anti-apoptotic BCL2 and reduce beta cell apoptosis 22 , consistent with the observed protective effects of verapamil. The most significantly downregulated gene was pancreatic secretory granule membrane major glycoprotein 2 ( GP2 ) (Fig. 4a ).
GP2 is a specific cell surface marker of human pancreatic progenitors, has been associated with increased risk of type 2 and gestational diabetes, and has generally been identified as an immunomodulator 23 , 24 , 25 . Genes with the next most significant changes in expression in response to verapamil included the upregulated enzymes methylsterol monooxygenase 1 ( MSMO1 ), isopentenyl-diphosphate delta isomerase 1 ( IDI1 ), and squalene epoxidase ( SQLE ) as well as downregulated lysyl oxidase homolog 4 ( LOXL4 ), actin alpha 2 ( ACTA2 ), and glycerol-3-phosphate dehydrogenase ( GPD1 ). In addition, several genes that have previously been shown to modulate key islet processes including oxidative stress, apoptosis, and T1D autoimmunity were significantly up- or downregulated by verapamil (Fig. 4a, b ). Of note, the expression of thioredoxin-interacting protein ( TXNIP ), which we have previously found to be downregulated by verapamil in vitro and in vivo 26 , was consistently decreased in response to verapamil in all samples as shown in the heatmap (Fig. 4c ) and by the normalized read counts (Fig. 4d ). In fact, TXNIP is considered a key factor in diabetes-associated beta cell apoptosis 27 , 28 , 29 and genetic deletion of TXNIP has been shown to mimic the anti-diabetic effects of verapamil in different mouse models 26 , 29 . Downregulation of TXNIP has therefore been suggested to mediate the beneficial effects of verapamil in the context of diabetes 26 and this notion is strongly supported by the current findings in human islets. TXNIP belongs to the thioredoxin network, a cellular redox system; however, by interacting with and inhibiting thioredoxin, TXNIP promotes oxidative stress. In contrast, thioredoxin reductase (TXNRD1) and sulfiredoxin (SRXN1), two additional members of the thioredoxin signaling network, reduce oxidized thioredoxin and peroxiredoxin back to their active states and thereby preserve the cellular redox potential.
In alignment with the overall protective effects, the expression of TXNRD1 and SRXN1 was significantly upregulated by verapamil (Fig. 4a–d ). In addition, the gene encoding Bcl-2-like protein 2 ( BCL2L2 ), a pro-survival member of the bcl-2 protein family, was also significantly upregulated in response to verapamil (Fig. 4a–d ). This is consistent with the anti-apoptotic effects found with TXNIP downregulation and with verapamil treatment in diabetic mouse models 26 , 29 . Furthermore, verapamil downregulated the expression of interleukin 32 ( IL32 ) (Fig. 4a–d ), a unique proinflammatory cytokine found only in primates that is induced by oxidative stress 30 and has recently been suggested to be upregulated in pancreatic islets and play a role in T1D autoimmunity 31 , 32 . Fig. 4: Gene expression profile changes in human islets in response to verapamil treatment. RNA sequencing was performed on isolated human islets from three different individuals (A–C) treated for 24 h with or without verapamil (100 µM) with each donor serving as its own control. The volcano plot contains all genes with a baseMean expression of >500. Those genes with an adjusted DESeq2 P -value < 0.05 (calculated using a Wald test and the Benjamini–Hochberg method) are shown in blue (downregulated) and red (upregulated) ( a ). Key pathways modulated by differentially expressed genes ( b ). Heatmap showing key genes changed after treatment with verapamil (color scale represents log2 fold change) ( c ). Display of the normalized read counts for key genes before and after verapamil treatment of islets from each of the individual islet donors A–C ( d ).
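The adjusted P values in the volcano plot come from the Benjamini–Hochberg correction that DESeq2 applies to its Wald-test p-values. The procedure itself is short enough to sketch with numpy (this is the generic BH step-up, not DESeq2's full pipeline, which also filters low-count genes before adjusting).

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure).

    Sorted p-values p_(k) are scaled by n/k, then made monotone from the
    largest downwards; results are returned in the original order."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)   # p_(k) * n / k
    # enforce monotonicity: each adjusted value <= the one after it
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.clip(adj, 0.0, 1.0)
    out = np.empty(n)
    out[order] = adj
    return out
```

Genes are then called significant when the adjusted value falls below the chosen FDR threshold (0.05 in the figure).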
Discussion In summary, the results of these exploratory studies suggest that continuous use of oral verapamil in individuals with T1D may delay disease progression and lower insulin requirements for at least 2 years post-diagnosis and that this is associated with normalization of serum CHGA levels as well as of proinflammatory IL-21 levels and Tfh cell markers. In addition, they show that verapamil regulates the thioredoxin system and promotes an anti-oxidative, anti-apoptotic and immunomodulatory gene expression profile in human islets, suggesting that together these protective changes might explain the overall beneficial effects observed with verapamil. The significantly lower insulin requirements found after 2 years of verapamil treatment as compared to controls are consistent with earlier findings after 1 year of treatment 1 and with the observed preservation of beta cell function. In addition, our results showing verapamil regulating the thioredoxin system and inhibiting TXNIP expression in the islets provide a potential mechanistic explanation for these beta cell sparing effects. This is especially true when considering the beta cell protective and anti-diabetic effects observed in different mouse models in response to genetic TXNIP deletion or verapamil-induced TXNIP inhibition 26 , 29 . Moreover, in humans with T1D even a small amount of preserved endogenous insulin production as opposed to higher exogenous insulin requirements has been shown to be associated with improved outcome 33 and could help improve quality of life and lower the high costs associated with insulin use. The fact that these beneficial verapamil effects seemed to persist for 2 years, whereas discontinuation of verapamil led to disease progression, provides some additional support for this approach and its potential usefulness for long-term treatment.
Intriguingly, global proteomic profiling of this unique set of before-and-after treatment samples also led to the discovery of serum CHGA as a potential therapeutic marker. Serum CHGA requires only a simple and easy blood test; it showed good correlation with loss of beta cell function, accurately reflected changes in response to verapamil therapy or treatment discontinuation, and, as a major advantage, these properties persisted over a time period of at least 2 years. Furthermore, due to its role as a T1D antigen 8 , 9 , it is tempting to speculate that the lowering of CHGA in response to verapamil might even help dampen some of the T1D autoimmunity. In any case, our results reveal for the first time that verapamil may reverse specific T1D-induced T-cell changes and thereby may also modulate certain aspects of the immune response. Intriguingly, Tfh cells and IL-21, both suggested to be downregulated by verapamil, have recently been reported to play an important role in the autoimmunity of T1D 12 , 13 , 14 . In fact, this may help explain why verapamil treatment was so successful even in the absence of any additional bona fide immunomodulatory intervention. This provocative idea of additional immune modulatory effects is supported by the observed enrichment in gene ontology processes associated with MHC class I antigen presentation and neutrophil-mediated functions in response to verapamil. Thus, the present exploratory studies provide the first indication of some potentially sustained benefits of verapamil use in the context of T1D. They further suggest that verapamil may result in protective effects not only at the level of insulin-producing islet beta cells, but also at the level of T-cells and proinflammatory cytokines, uncovering a previously unappreciated connection between verapamil use and the immune system in T1D.
However, since the current studies were based on a very small subset of subjects, these initial findings will have to be confirmed in larger studies such as the ongoing Ver-A-T1D (NCT04545151) or CLVer (NCT04233034) verapamil trials. In addition, long-term trials with extensive sample analysis using this new knowledge will ultimately have to validate CHGA as a proper biomarker as well as the novel clinical and mechanistic insights gained in the current studies. Methods Human subjects All human studies were approved by the University of Alabama at Birmingham (UAB) Institutional Review Board, and written informed consent was obtained from all participants in accordance with the criteria set by the Declaration of Helsinki. Participants' compensation was $80 per completed MMTT. Subjects with T1D had been diagnosed within 3 months and were positive for T1D-associated autoantibodies. All continued on their standard insulin regimen but were on no other diabetes medications during the entire 2 years. They were taking randomly assigned verapamil (360 mg sustained-release daily) or placebo in a blinded fashion for one year as described in the protocol and consort table of the initial trial [ clinicaltrials.gov/ct2/show/NCT02372253 2/20/2015] 1 and then chose to be on or off verapamil and to be followed for a second year. This resulted in five subjects who continued to receive verapamil (360 mg sustained-release daily) for 2 years, four subjects who discontinued active study drug for the second year, and six participants randomized to the control arm who never received active study drug over the 2-year period. All other subjects declined the second-year follow-up and/or had no usable blood samples. The study subject demographic characteristics at baseline and at the start of year 2 are listed in Supplementary Tables 1 and 4 , respectively. Supplementary Table 6 provides an additional overview of all the individual study participants, use of their samples, and their clinical data.
Healthy volunteers were non-diabetic as confirmed by HbA1c, had not been diagnosed with any illness, were not receiving any prescription medications, and never received any study drug. Their characteristics are also shown in Supplementary Table 1 even though they only provided blood samples used as a comparison for some of the measurements of this exploratory study. Remaining beta cell function was assessed using the stimulated C-peptide AUC during a mixed-meal tolerance test (MMTT) as described previously 4 , 5 . The MMTT was only performed when fasting blood glucose levels were within the range of 3.9–11.1 mmol/L; otherwise, the test was rescheduled. Blood samples were collected at −10, 0, 15, 30, 60, 90, and 120 min for serum C-peptide. The mean C-peptide AUC (0–120 min) was calculated using the trapezoidal rule 5 and the percent change from baseline was determined for each individual. In addition, the daily insulin dose required was calculated by analyzing each patient’s mean daily insulin use during a 2-week period at baseline, year 1 and year 2. Glycemic control was monitored by measurements of HbA1c. Serum samples and PBMCs were collected and stored at −80 °C until further analysis. Proteomics/liquid chromatography-tandem mass spectrometry (LC-MS/MS) Twenty serum samples from 10 subjects, collected from each subject at baseline and after 1 year of receiving verapamil or placebo, were analyzed using a standardized workflow (Supplementary Fig. 1 ) as previously reported 34 . For sample processing, 40 µL of serum was subjected to immunodepletion of the most abundant serum proteins using a MARS Hu-14 column (Agilent Technologies, Palo Alto, CA) as previously described 35 . The flow-through fractions were concentrated using Amicon spin filters with 3 kDa molecular mass cutoffs (Millipore, Burlington, MA). Protein concentration was measured by BCA assay (Thermo Scientific, San Jose, CA) prior to protein digestion.
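The C-peptide AUC calculation named above (trapezoidal rule over 0–120 min, then percent change from baseline) can be sketched in a few lines of Python. The timepoints follow the MMTT schedule in the text, but the C-peptide values below are purely illustrative, not study data.

```python
# Minimal sketch: C-peptide AUC (0-120 min) by the trapezoidal rule, plus
# the percent change from baseline used for each individual.
# C-peptide values are invented for illustration only.

def trapezoid_auc(times, values):
    """Area under the curve by the trapezoidal rule."""
    auc = 0.0
    for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
        auc += 0.5 * (v0 + v1) * (t1 - t0)
    return auc

def percent_change(baseline, followup):
    """Percent change from baseline, as determined for each individual."""
    return 100.0 * (followup - baseline) / baseline

times = [0, 15, 30, 60, 90, 120]          # minutes (MMTT schedule from the text)
cpep  = [0.8, 1.2, 1.6, 1.9, 1.7, 1.4]    # nmol/L (illustrative values)

auc = trapezoid_auc(times, cpep)          # -> 189.0 (nmol/L * min)
mean_auc = auc / (times[-1] - times[0])   # mean C-peptide over the interval
```

The same helper then gives the percent change once a follow-up AUC is available for the same subject.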
The depleted samples were then digested using a urea-based protocol, and the peptides were desalted by solid-phase extraction (SPE) (Phenomenex, Torrance, CA) and dried in a vacuum centrifuge. 100 µg of peptides from each sample were labeled using 11-plex tandem mass tag (TMT) reagents (Thermo Fisher Scientific, Waltham, MA) following a recently published protocol 36 . A pooled sample was generated by combining a 25 µg aliquot of peptides from each sample to serve as a “universal reference”. The reference sample was included as the 11th channel for the two TMT-11 experiments (Supplementary Fig. 1 ). The TMT-labeled peptides combined from all 11 channels were further fractionated by basic pH reversed-phase LC on a C18 column (250 mm × 2.1 mm, 5 μm particles, Waters, Milford, MA) using an Agilent 1200 HPLC. Ninety-six fractions were collected and concatenated into 24 fractions as previously described 37 , dried in a vacuum centrifuge, and resuspended in 0.1% formic acid. For LC-MS/MS analysis, fractionated peptide samples were analyzed using a nanoAquity UPLC system (Waters) coupled to an Orbitrap Fusion Lumos mass spectrometer (Thermo Fisher Scientific). LC separations were performed with a custom-packed analytical C18 column (50 cm × 75 µm i.d., 3 µm particle size of Jupiter C18, Phenomenex) with a 120 min gradient. Binary mobile phases composed of buffer A (0.1% formic acid in water) and buffer B (0.1% formic acid in acetonitrile) were used at a flow rate of 300 nl/min. For peptide elution, the percentage of buffer B was increased linearly; a 10 min wash with 95% buffer B and a final 1 min wash with 100% buffer B were also included. Orbitrap full MS scans were conducted from 350 to 1800 m/z with a resolution of 60 K and an AGC target of 4 × 10 5 , followed by data-dependent higher-energy collisional dissociation (HCD) MS/MS acquisitions at a resolution of 50 K (AGC 1 × 10 5 ) and a maximum injection time (IT) of 105 ms for a total cycle time of 2 s.
The MS/MS isolation window was set to 0.7 m/z with an HCD normalized collision energy of 32. Peptide mode was selected for the monoisotopic precursor scan, and charge state screening was enabled to reject unassigned 1+, 7+, 8+, and >8+ ions, with a dynamic exclusion time of 45 s to discriminate against previously analyzed ions within ± 10 ppm. For data processing and analysis, the Thermo RAW files were first processed with mzRefinery to characterize and correct for any instrument calibration errors, and then with MS-GF+ v2109.08.26 38 to match against the UniProt human proteome database (2019.11.05 release; 20,352 entries). A decoy database from the searched fasta files was created by MS-GF+ to estimate the FDR. For the search parameters, the parent ion mass tolerance was set at 20 ppm and 2 missed cleavages were allowed. Cysteine carbamidomethylation (+57.0215) and N-terminal/lysine TMT labeling (+229.1629) were searched as static modifications, whereas methionine oxidation (+15.9949) was set as a variable modification. Spectral-peptide matches were filtered using PepQValue < 0.005 and <7 ppm, resulting in a maximum FDR of 1%. A minimum of 6 unique peptides per 1000 amino acids of protein length was then required for achieving 1% FDR at the protein level within the full data set. Post-processing of the quantitative TMT reporter ion intensity data was performed using an R package, “PlexedPiper”, for isobaric quantification [ ] as previously reported 39 . Briefly, the intensities of all 11 TMT reporter ions were extracted using MASIC software (v 3.0.7235) [ ] 40 . The reporter ion intensities from different scans and different fractions corresponding to the same gene were grouped. Relative protein abundance was calculated as the ratio of the abundance in each individual sample channel to that in the reference channel, using the summed reporter ion intensities from peptides that could be uniquely mapped to a gene.
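The quantification step described above (group reporter-ion intensities by gene, sum per channel, then take the ratio of each sample channel to the pooled-reference channel) can be sketched as follows; this is not the PlexedPiper implementation, and the intensity values are invented.

```python
# Minimal sketch of the isobaric-quantification logic described in the text:
# reporter-ion intensities from scans mapping uniquely to a gene are summed
# per TMT channel, and relative abundance is the ratio of each sample
# channel to the universal-reference channel. Data are illustrative.
from collections import defaultdict

def relative_abundance(scans, ref_channel):
    """scans: list of (gene, {channel: intensity}) from individual spectra."""
    summed = defaultdict(lambda: defaultdict(float))
    for gene, intensities in scans:
        for channel, value in intensities.items():
            summed[gene][channel] += value
    ratios = {}
    for gene, channels in summed.items():
        ref = channels[ref_channel]
        ratios[gene] = {ch: v / ref for ch, v in channels.items() if ch != ref_channel}
    return ratios

scans = [
    ("CHGA", {"s1": 100.0, "s2": 250.0, "ref": 200.0}),
    ("CHGA", {"s1": 300.0, "s2": 150.0, "ref": 200.0}),
]
rel = relative_abundance(scans, "ref")
```

Here both sample channels sum to the same total as the reference, so each relative abundance comes out as 1.0 before the log2 transform.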
The relative abundances were log2 transformed and zero-centered for each protein to obtain final relative abundance values. Statistical analyses to test protein abundance changes over time due to either drug effect or progression of T1D were performed using a mixed effects model, where treatment group and timepoint factors were modeled as fixed effect and subjects were modeled as random effect. The significance of the drug treatment was tested using interaction between the group and timepoint effects. Thus, the full model was formulated as protein ~ timepoint:group + timepoint + group + (1 | subject) . The significance of the timepoint:group was tested using the nested model approach. The test was two-tailed and was performed using lme4 package [doi:10.18637/jss.v067.i01] of R language for statistical computing [ ]. Proteins with significant changes in abundance were further analyzed using Enrichr Gene Ontology Biological Process 2021 term enrichment 2 . ELISA Serum CHGA levels were assessed using the Human Chromogranin A ELISA Kit (Epitope Diagnostics, INC., San Diego, CA). Serum IL-21 levels were measured using the Human IL-21 Proquantum Immunoassay Kit (Thermo Fisher Scientific) according to the manufacturer’s instructions (limits of quantitation: 0.32–5000 pg/mL). Quantitative real-time PCR Total RNA was extracted from PBMCs using the miRNeasy Mini Kit (Qiagen) according to the manufacturer’s instructions. RNA was reverse transcribed to cDNA using the first strand cDNA synthesis kit (Roche) and quantitative real-time PCR (qPCR) was performed on a LightCycler 480 system (Roche) as reported previously 41 . Relative expression of the proinflammatory T-cell markers CD4, CXCR3, STAT4, CXCR5, and IL21 was measured using the primers listed in Supplementary Table 7 . Assessment of markers such as IL21 by qPCR using PBMCs has been successful in the past 12 , 42 , 43 . 
All results were normalized to GAPDH, run as an internal standard, and serial template dilutions were performed to confirm comparable target and reference amplification efficiency. The data were then analyzed using the 2^(−ΔΔCT) method as previously described in detail 44 . Transcriptomics/RNA sequencing Human islets from three different donors were obtained from the Integrated Islet Distribution Program (IIDP) and, after overnight incubation at 5 mM glucose, 250 islets were handpicked per sample and incubated for 24 h in 25 mM glucose RPMI 1640 medium with or without 100 μM of verapamil prior to RNA extraction using a miRNeasy Mini Kit (Qiagen, Germantown, MD). This concentration of verapamil has been established in early studies for use in in vitro experiments using beta cells and islets 45 , 46 . RNA sequencing was performed by Exiqon/Qiagen and included preparation of libraries using the TruSeq stranded mRNA sample preparation kit (Illumina Inc., San Diego, CA); single-end sequencing was performed with an average of ~43 million reads obtained per sample. For data processing and analysis, RNA-sequencing reads were aligned to the H. sapiens reference genome (GRCh38.p7) using STAR (v2.4.2a), with an average of ~90% of reads uniquely mapped. Alignments were quantified using Salmon (v0.8.2) and differential expression analysis was performed using DESeq2. The DESeq2 model accounted for the experimental design of paired treated and untreated samples from each individual. DESeq2 was used to determine the significance of the differential expression. FDR was calculated using the Benjamini–Hochberg method. Significantly downregulated genes were further analyzed using Enrichr Gene Ontology Biological Process 2021 term enrichment 2 . Statistical analysis All available data were included in the analysis and no data from adequate samples were excluded. The statistician and experimenters were blinded to the study group allocation of samples.
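The 2^(−ΔΔCT) relative-expression calculation named above reduces to a short formula: normalize the target CT to the reference gene (GAPDH here), subtract the same quantity for the control condition, and exponentiate. The CT values in the example are invented for illustration.

```python
# Minimal sketch of the 2^(-ddCT) method referenced in the text.
# dCT normalizes the target gene to the internal standard (GAPDH);
# ddCT subtracts the control-condition dCT; fold change is 2**(-ddCT).
# All CT values below are hypothetical.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct = ct_target - ct_ref                 # normalize to GAPDH
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # same for the control sample
    dd_ct = d_ct - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Example: target CT of 28 vs GAPDH 20 in one sample, 30 vs 20 in control:
# dCT = 8, control dCT = 10, ddCT = -2, so a 4-fold higher expression.
fc = fold_change(28.0, 20.0, 30.0, 20.0)
```

Note the convention: a lower CT means more template, so a negative ΔΔCT corresponds to higher relative expression.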
Population characteristics of study subjects were summarized as means and standard errors (SEM) for continuous variables and frequencies for categorical variables. The group comparison of baseline measures was conducted using the Chi-square test, Fisher’s exact test, or t-test where appropriate. The normal distribution assumption was checked using Q–Q plots and nonparametric analyses were performed where appropriate. To evaluate the effects of time and group, two-way repeated-measures ANOVA was used. One-way ANOVA or the Kruskal–Wallis test followed by Dunn’s multiple comparison testing was used to assess the significance between multiple groups. All tests were two-sided. Statistical analyses were performed using SigmaStat 4.0 and SAS 9.4 (Cary, NC). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The proteomics data that support the findings of this study have been deposited in ProteomeXchange with accession: PXD026601 . The MS raw datasets can also be found in the online repositories: Massive.ucsd.edu with accession: MSV000087598 . The transcriptomics data have been deposited in GEO with accession: GSE181328 . Publicly available data sets used can be accessed at: [ ] (H. sapiens reference genome GRCh38.p7) and [ ] (UniProt human proteome database, 2019.11.05 release). Remaining data are available within the Article, Supplementary Information , or the Source Data provided with this paper. Source data are provided with this paper. | Use of the drug verapamil to treat Type 1 diabetes continues to show benefits lasting at least two years, researchers report in the journal Nature Communications. Patients taking the oral blood pressure medication not only required less daily insulin two years after first diagnosis of the disease, but also showed evidence of surprising immunomodulatory benefits. Continuing medication was necessary.
In the two-year study, subjects who stopped daily doses of verapamil at one year saw their disease at two years worsen at rates similar to those of the control group of diabetes patients who did not use verapamil at all. Type 1 diabetes is an autoimmune disease that causes loss of pancreatic beta cells, which produce endogenous insulin. To replace that, patients must take exogenous insulin by shots or pump and are at risk of dangerous low blood sugar events. There is no current oral treatment for this disease. The suggestion that verapamil might serve as a potential Type 1 diabetes drug was the serendipitous discovery of study leader Anath Shalev, M.D., director of the Comprehensive Diabetes Center at the University of Alabama at Birmingham. This finding stemmed from more than two decades of her basic research into a gene in pancreatic islets called TXNIP. In 2014, Shalev's UAB research lab reported that verapamil completely reversed diabetes in animal models, and she announced plans to test the effects of the drug in a human clinical trial. The United States Food and Drug Administration approved verapamil for the treatment of high blood pressure in 1981. In 2018, Shalev and colleagues reported the benefits of verapamil in a one-year clinical study of Type 1 diabetes patients, finding that regular oral administration of verapamil enabled patients to produce higher levels of their own insulin, thus limiting their need for injected insulin to regulate blood sugar levels. The current study extends on that finding and provides crucial mechanistic and clinical insights into the beneficial effects of verapamil in Type 1 diabetes, using proteomics analysis and RNA sequencing. To examine changes in circulating proteins in response to verapamil treatment, the researchers used liquid chromatography-tandem mass spectrometry of blood serum samples from subjects diagnosed with Type 1 diabetes within three months of diagnosis and at one year of follow-up. 
Fifty-three proteins showed significantly altered relative abundance over time in response to verapamil. These included proteins known to be involved in immune modulation and autoimmunity of Type 1 diabetes. The top serum protein altered by verapamil treatment was chromogranin A, or CHGA, which was downregulated with treatment. CHGA is localized in secretory granules, including those of pancreatic beta cells, suggesting that changed CHGA levels might reflect alterations in beta cell integrity. In contrast, the elevated levels of CHGA at Type 1 diabetes onset did not change in control subjects who did not take verapamil. CHGA levels were also easily measured directly in serum using a simple ELISA assay after a blood draw, and lower levels in verapamil-treated subjects correlated with better endogenous insulin production as measured by mixed-meal-stimulated C-peptide, a standard test of Type 1 diabetes progression. Also, serum CHGA levels in healthy, non-diabetic volunteers were about twofold lower compared to subjects with Type 1 diabetes, and after one year of verapamil treatment, verapamil-treated Type 1 diabetes subjects had similar CHGA levels compared with healthy individuals. In the second year, CHGA levels continued to drop in verapamil-treated subjects, but they rose in Type 1 diabetes subjects who discontinued verapamil during year two. "Thus, serum CHGA seems to reflect changes in beta cell function in response to verapamil treatment or Type 1 diabetes progression and therefore may provide a longitudinal marker of treatment success or disease worsening," Shalev said. "This would address a critical need, as the lack of a simple longitudinal marker has been a major challenge in the Type 1 diabetes field." Other labs have identified CHGA as an autoantigen in Type 1 diabetes that provokes immune T cells involved in the autoimmune disease. Thus, Shalev and colleagues asked whether verapamil affected T cells. 
They found that several proinflammatory markers of T follicular helper cells, including CXCR5 and interleukin 21, were significantly elevated in monocytes from subjects with Type 1 diabetes, as compared to healthy controls, and they found that these changes were reversed by verapamil treatment. "Now our results reveal for the first time that verapamil treatment may also affect the immune system and reverse these Type 1 diabetes-induced changes," Shalev said. "This suggests that verapamil, and/or the Type 1 diabetes improvements achieved by it, can modulate some circulating proinflammatory cytokines and T helper cell subsets, which in turn may contribute to the overall beneficial effects observed clinically." To assess changes in gene expression, RNA sequencing of human pancreatic islet samples exposed to glucose, with or without verapamil was performed and revealed a large number of genes that were either upregulated or downregulated. Analysis of these genes showed that verapamil regulates the thioredoxin system, including TXNIP, and promotes an anti-oxidative, anti-apoptotic and immunomodulatory gene expression profile in human islets. Such protective changes in the pancreatic islets might further explain the sustained improvements in pancreatic beta cell function observed with continuous verapamil use. Shalev and colleagues caution that their study, with its small number of subjects, needs to be confirmed by larger clinical studies, such as a current verapamil-Type 1 diabetes study ongoing in Europe. But the preservation of some beta cell function is promising. "In humans with Type 1 diabetes, even a small amount of preserved endogenous insulin production—as opposed to higher exogenous insulin requirements—has been shown to be associated with improved outcomes and could help improve quality of life and lower the high costs associated with insulin use," Shalev said. 
"The fact that these beneficial verapamil effects seemed to persist for two years, whereas discontinuation of verapamil led to disease progression, provides some additional support for its potential usefulness for long-term treatment." At UAB, Shalev is a professor in the Department of Medicine Division of Endocrinology, Diabetes and Metabolism, and she holds the Nancy R. and Eugene C. Gwaltney Family Endowed Chair in Juvenile Diabetes Research. Co-authors with Shalev, in the Nature Communications report "Exploratory study reveals far reaching systemic and cellular effects of verapamil treatment in subjects with type 1 diabetes," are Guanlan Xu, Tiffany D. Grimes, Truman B. Grayson, Junqin Chen, Lance A. Thielen and Fernando Ovalle, UAB Department of Medicine, Division of Endocrinology, Diabetes and Metabolism; Hubert M. Tse, UAB Department of Microbiology; Peng Li, UAB School of Nursing; Matt Kanke and Praveen Sethupathy, College of Veterinary Medicine, Cornell University, Ithaca, New York; and Tai-Tu Lin, Athena A. Schepmoes, Adam C. Swensen, Vladislav A. Petyuk and Wei-Jun Qian, Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington. | 10.1038/s41467-022-28826-3 |
Physics | Shaking the swarm—researchers explore how bees collaborate to stabilize swarm clusters | O. Peleg et al, Collective mechanical adaptation of honeybee swarms, Nature Physics (2018). DOI: 10.1038/s41567-018-0262-1 Journal information: Nature Physics | http://dx.doi.org/10.1038/s41567-018-0262-1 | https://phys.org/news/2018-09-swarmresearchers-explore-bees-collaborate-stabilize.html | Abstract Honeybee Apis mellifera swarms form large congested tree-hanging clusters made solely of bees attached to each other 1 . How these structures are maintained under the influence of dynamic mechanical forcing is unknown. To address this, we created pendant clusters and subject them to dynamic loads of varying orientation, amplitude, frequency and duration. We find that horizontally shaken clusters adapt by spreading out to form wider, flatter cones that recover their original shape when unloaded. Measuring the response of a cluster to an impulsive pendular excitation shows that flattened cones deform less and relax faster than the elongated ones (that is, they are more stable). Particle-based simulations of a passive assemblage suggest a behavioural hypothesis: individual bees respond to local variations in strain by moving up the strain gradient, which is qualitatively consistent with our observations of individual bee movement during dynamic loading. The simulations also suggest that vertical shaking will not lead to significant differential strains and thus no shape adaptation, which we confirmed experimentally. Together, our findings highlight how a super-organismal structure responds to dynamic loading by actively changing its morphology to improve the collective stability of the cluster at the expense of increasing the average mechanical burden of an individual. Main Collective dynamics allow super-organisms to function in ways that a single organism cannot, by virtue of their emergent size, shape, physiology and behaviour 2 . 
Classic examples include the physiological and behavioural strategies seen in social insects (for example, ants that link their bodies to form rafts to survive floods 3 , 4 , 5 , 6 , assemble pulling chains to move food items 7 , and form bivouacs 8 and towers 9 , as well as bridges and ladders to traverse rough terrain 10 ). Similarly, groups of ‘daddy longlegs’ (order Opiliones) huddle together and emperor penguins cluster together for thermoregulation purposes 11 . While much is known about the static forms that are seen in such situations, the stability of these forms to dynamic perturbation and their global adaptation to environmental changes are much less understood. European honeybees, Apis mellifera L., show many of these collective behaviours during their life cycle 1 . For example, colonies reproduce through colony fission, a process in which a subset of the colony’s workers and a queen leave the hive, separate from the parent colony and form a cluster on a nearby tree branch 1 . In these swarm clusters (which we will refer to as clusters), the bees adhere to each other and form a large structure made of ~10,000 individuals, hundreds of times the size of a single organism (Fig. 1a ). Generally, this hanging mass of adhered bees takes on the shape of an inverted pendant cone; however, the resultant shape is also influenced by the surface to which the cluster is clinging (see two different examples in Fig. 1a ). The cluster can stay in place for several days as scout bees search the surrounding area for suitable nest sites 1 .
c , The top panel shows the acceleration of the board versus time. The middle and bottom panels show how the bee cluster adapts its shape dynamically: elongated cluster at t = 0 (left column), spread-out cluster after horizontal shaking for 10 min and 30 min (middle columns), and elongated cluster after relaxation (right column); side and bottom views. The contact area before and after shaking is highlighted in blue and red, respectively. The colony is exposed to the environment during this stage and shows several behaviours to cope with the fluctuating thermal and mechanical environment. For instance, clusters tune their density and surface area to volume ratio to maintain a near constant core temperature despite large fluctuations in the ambient temperature 12 , 13 , 14 . Furthermore, at high temperatures, the swarm expands and forms channels that are presumed to aid in air circulation 12 . Moreover, in response to rain, bees at the surface arrange themselves to form ‘shingles’, shedding moisture efficiently from the surface of the cluster 15 . Similarly, the cluster is mechanically stable: while it sways from side to side in the wind (for example, see Supplementary Video 1 ), it would be catastrophic if the cluster were to break under a critical load, as the bees would lose the ability to minimize surface area to prevent hypothermia while remaining mechanically stable. However, the mechanism by which a multitude of bees work together to create and maintain a stable structure that handles both static gravity and dynamic shaking stimuli (for example, wind and predators) remains elusive. To understand this, we develop a laboratory experimental set-up, for ease of visualization and manipulation, to quantify the response of a honeybee cluster to mechanical shaking over short and long times. To prepare a cluster, we attach a caged queen (see Supplementary Section A ) to a board and allow a cluster to form around her (Fig. 1b ).
The bees at the base grip onto an area that is roughly circular. The board is controlled by a motor that can produce movement in the horizontal direction at different frequencies (0.5–5 Hz) and accelerations (range 0–0.1 g ). We apply both discontinuous shaking, in which the acceleration is kept constant and the frequency is varied, and continuous shaking, in which the frequency is kept constant and the acceleration is varied (see Supplementary Fig. 2 ). For the case of horizontal shaking (both discontinuous and continuous), the tall conical cluster swings to and fro in a pendular mode (one of the lowest energy modes of motion, see Supplementary Section C ), with a typical frequency of ~1 Hz. However, over longer durations (that is, minutes), the bees adapt by spreading themselves into a flatter conical form (Fig. 1b–d and Supplementary Video 2 ), while their total number remains constant (measured by the total weight of the cluster). The final shape flattens as the shaking continues for longer, or as the frequency and acceleration of shaking increase. For the discontinuous shaking, when we plot the relative extent of spreading (scaled by a constant) as measured by A ( t )/ A (0) for all different frequencies, as a function of the number of shakes, the data collapse onto a single curve (Fig. 2a ). This suggests that the cluster response scales with both the number and magnitude of shakes, but over much longer timescales than an individual event. The nature of this response is independent of the type of stimulus: when the shaking signal is continuous, we see a similar response (Fig. 2b ). The graded adaptive response, which scales with the number of shakes and is a function of the applied displacements and frequencies, together with the absence of any adaptation to very low frequencies and amplitudes (orange curves in Fig. 2b ), suggests that there is a critical relative displacement (that is, a threshold mechanical strain) needed to trigger this adaptation.
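The collapse described above amounts to a change of variable: replotting each A ( t )/ A (0) curve against the cumulative number of shakes N = f·t rather than time. The sketch below illustrates that rescaling; the saturating functional form and its parameters are assumed for illustration, not fitted to the data.

```python
# Sketch of the rescaling behind the master-curve collapse: time series
# recorded at different shaking frequencies f coincide when replotted
# against the cumulative shake count N = f * t. The saturating response
# below is an assumed illustration, not the measured A(t)/A(0).
import math

def area_ratio(n_shakes, n_scale=100.0, plateau=2.0):
    """Assumed saturating response of A(t)/A(0) versus shake count."""
    return 1.0 + (plateau - 1.0) * (1.0 - math.exp(-n_shakes / n_scale))

def collapse(times, freq):
    """Map time points at shaking frequency freq (Hz) onto (N, A(t)/A(0))."""
    return [(freq * t, area_ratio(freq * t)) for t in times]

# Two frequencies give identical points once time is rescaled to shake count
low = collapse([0, 100, 200], freq=1.0)
high = collapse([0, 50, 100], freq=2.0)
```

Under this picture, doubling the frequency simply traverses the same master curve twice as fast.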
Once the shaking stops, the cluster returns to its original elongated cone configuration over a period of 30–120 min, a time that is much larger than the time for the cluster to flatten. This reversible cluster shape change in response to dynamic loading might be a functional adaptation that increases the mechanical stability of a flattened cluster relative to an elongated one. Fig. 2: Quantifying the adaptive response of the cluster to horizontal shaking. For all shaking frequencies, the base contact area of the cluster increases monotonically until a plateau is reached. Once shaking ceases, the cluster responds by gradually reverting to its original shape, decreasing its contact area, but at a much slower rate. a , Contact area of the base of the cluster divided by its original area, A ( t )/ A (0), as a function of time, for the discontinuous case. The colours represent results for different frequencies of periodic shaking. The inset shows that the scaled base area collapses onto a master curve when plotted versus the number of shaking events. The error bars correspond to the standard deviation of three individual trials (see Supplementary Table 1 for more information about trial repetitions). b , A ( t )/ A (0) for continuous shaking shows the same qualitative behaviour; note that when the acceleration is very small (0.01 g ), there is no response (that is, there is a critical threshold of forcing below which the bees do not respond). c , Coordinate systems of the laboratory frame and the displacement coordinates of the individual bees. d , Deformation of an elongated cluster before shaking began ( t = 0, top) and a flattened cluster after shaking ( t = 30 minutes, bottom) shows that displacement at the tip of the cluster is largest. On the right: time snapshots of a string of bees along the centre of the cluster (see Supplementary Video 3 ).
e , Trajectories of individual bees during 5 min of horizontal shaking show that when the cluster spreads out, surface bees move upwards. Colour code represents time: the trajectory starts with blue and ends with yellow. Inset: probability distribution function of vertical displacement, showing a net upward trend. f , An illustration of the behavioural constitutive law: bees sense the local deformation of connections to their nearest neighbours; once the relative deformation reaches a critical value, the bees move up the gradient in relative deformation. To explore this suggestion quantitatively, we first define a laboratory-fixed coordinate system with axes as shown in Fig. 2c , with respect to which the board is at \({\bf r}_b(t)\) = [ U b , 0, W b ], the position of a bee i is defined as \({\bf r}_i(t)\) = [ X i ( t ), Y i ( t ), Z i ( t )] and its displacement is defined as [ U i ( t ), 0, W i ( t )] = \({\bf r}_i(t) - {\bf r}_i(0) - {\bf r}_b(t)\) . This allows us to track individual bees 16 on the surface of the cluster along the centreline X i (0) = 0 (Fig. 2d and Supplementary Video 3 ), over a period of oscillation. Comparing trajectories of bees in an elongated cluster and a flat cluster (that is, before and after shaking) shows that the relative displacement between the bees at the cluster tip and bees at the base is significantly larger for an elongated cluster. Snapshots of tracked bees highlight the decoupling of movement of the tip and base of the cluster; that is, local deformations such as normal and shear strains are reduced in the mechanically adapted state corresponding to a spread cluster. A similar trend is observed when the cluster is subjected to a single sharp shake (see the signal in Supplementary Fig. 2c ), as shown in Supplementary Video 4 . These measurements confirm that the adapted flattened structure is indeed more mechanically stable in the presence of dynamic horizontal loads.
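The displacement definition above translates directly into array arithmetic: subtract the bee's initial position and the board's instantaneous position from its trajectory. The sketch below assumes the board displacement satisfies r_b(0) = 0, and the trajectories are invented for illustration.

```python
# Minimal numpy sketch of the displacement definition in the text:
# U_i(t) = r_i(t) - r_i(0) - r_b(t), the motion of bee i relative to its
# initial position and the moving board. Trajectories are illustrative.
import numpy as np

def bee_displacement(r_i, r_b):
    """r_i, r_b: (T, 3) arrays of bee and board positions over time.
    Returns the (T, 3) displacement; board displacement r_b(0) = 0 assumed."""
    return r_i - r_i[0] - r_b

t = np.linspace(0.0, 1.0, 5)[:, None]
zeros = np.zeros_like(t)
# The board sways in x; a tip bee hangs 1 unit below and sways twice as far.
r_b = np.hstack([0.1 * np.sin(2 * np.pi * t), zeros, zeros])
r_i = np.hstack([0.2 * np.sin(2 * np.pi * t), zeros, -1.0 + zeros])
u = bee_displacement(r_i, r_b)   # residual x-swing of the bee beyond the board
```

By construction U vanishes at t = 0, and here only the x-component survives: the tip bee's extra swing relative to the board.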
The spreading of the cluster is a collective process, begging the question of how this collective spreading behaviour is achieved. To study this, we tracked bees on the surface of the cluster during the process of adaptive spreading, particularly at the early stages. In Fig. 2e and Supplementary Video 5 , we show how bees move from the tip regions that are subject to large relative displacements towards the base regions that are subject to small relative displacements. This suggests a simple behavioural law wherein the change in relative displacement U i between neighbouring bees is a driver of shape adaptation: individual bees sense the local deformation relative to their neighbours and move towards regions of lower U i (illustrated in Fig. 2f ). In the continuum limit, this corresponds to their ability to sense strain gradients, and move from regions of lower strain (near the free tip) towards regions of higher strain (near the fixed base). It is worth noting here that this behavioural law is naturally invariant to rigid translation and rotation of the cluster, and thus depends only on the local mechanical environment each bee experiences. However, what measure of the relative displacements might the bees be responding to? To understand this, we note that the fundamental modes 17 of a pendant elastic cone are similar to those of a pendulum swinging from side to side, and a spring bouncing up and down, and their frequencies monotonically increase as a function of the aspect ratio of the cluster (Supplementary Fig. 3 ; see Supplementary Section C for details). To quantify the deviations from this simple picture due to the particulate nature of the assemblage, we turn to a computational model of the passive dynamics of a cluster and explore the role of shape on a pendant mechanical assemblage of passive particles used to mimic bees. 
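The behavioural law proposed above (sense the local deformation signal, and once it crosses a threshold, step towards the neighbour with the larger signal) can be caricatured on a one-dimensional chain. This toy is not the paper's particle simulation: the chain geometry, the linear strain profile, and the threshold value are all illustrative assumptions.

```python
# Toy sketch of the behavioural rule: each bee compares the deformation
# signal at its site with its neighbours' and, if its own signal exceeds a
# threshold, moves to the neighbour with the larger signal (up the strain
# gradient, i.e. from the free tip towards the fixed base). The 1D chain
# and strain values are invented for illustration.

def step_up_gradient(positions, strain, threshold):
    """positions: bee sites on a 1D chain; strain[site] is the local
    deformation signal. Returns positions after one update step."""
    new_positions = []
    for site in positions:
        if strain[site] > threshold:
            nbrs = [s for s in (site - 1, site + 1) if 0 <= s < len(strain)]
            best = max(nbrs, key=lambda s: strain[s])
            if strain[best] > strain[site]:
                site = best
        new_positions.append(site)
    return new_positions

# Strain decreases from the fixed base (site 0) to the free tip (site 4);
# bees whose local signal exceeds the threshold drift towards the base.
strain = [1.0, 0.8, 0.6, 0.4, 0.2]
bees = step_up_gradient([4, 3, 2, 1], strain, threshold=0.5)   # -> [4, 3, 1, 0]
```

Iterating this rule piles bees up near the base, the 1D analogue of the cluster flattening against the board.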
We model each bee in the cluster as a spherical particle that experiences three forces: a gravitational force, an attractive force between neighbouring particles, and a force that prevents inter-particle penetration (see Supplementary Section C for further details). The bees at the base are assumed to be strongly attached to the supporting board, and those on the surface are assumed to be free. To study the passive response of the entire system, the board is oscillated at different frequencies and amplitudes, while we follow the displacement of individual particles, \(U_i\left( {\bf r}_i \right)\) , as well as the relative displacement between neighbouring bees \({\bf l}_{ij}(t)\) = \({\bf r}_i(t) - {\bf r}_j(t)\) (Fig. 3a ). Decomposing the vector \({\bf l}_{ij}(t)\) into its magnitude and direction allows us to define two local deformation measures associated with the local normal strain and shear strain. The local dynamic normal strain associated with a particle (bee) i relative to its extension at t = 0 is defined as δ l i = \(\left\langle {{\mathrm{max}}_{0 \le t \le T}\left| {\left| {{\bf l}_{ij}(t)} \right| - \left| {{\bf l}_{ij}(0)} \right|} \right|} \right\rangle\) , where T is the duration from the onset of the applied mechanical shaking until the swarm recovers its steady-state configuration, and the angle brackets represent the average over all bees j that are connected to bee i . 
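The local dynamic normal strain defined in this paragraph can be sketched as follows; the trajectory layout and neighbour representation are assumptions for illustration, not the authors' simulation code:

```python
import numpy as np

def normal_strain(traj, neighbors, i):
    """Local dynamic normal strain delta_l_i for bee i, following the text's
    definition: the mean over connected neighbours j of
        max over 0 <= t <= T of | |l_ij(t)| - |l_ij(0)| |,
    with bond vector l_ij(t) = r_i(t) - r_j(t).

    traj : (T, N, 3) positions of all N bees over T time steps
    neighbors : dict mapping bee index -> list of connected bee indices
    """
    traj = np.asarray(traj, dtype=float)
    strains = []
    for j in neighbors[i]:
        l_ij = np.linalg.norm(traj[:, i] - traj[:, j], axis=1)  # |l_ij(t)|
        strains.append(np.max(np.abs(l_ij - l_ij[0])))
    return float(np.mean(strains))
```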
The local shear strain is calculated from the changes in the angle \(\left| {\angle \left( {{\bf l}_{ij}(t) ,{\bf l}_{ik}(t)} \right)} \right|\) between \({\bf l}_{ij}(t)\) and \({\bf l}_{ik}(t)\) , connecting bees i and j , and bees i and k , respectively, with the shear strain, δ θ i defined as δ θ i = \(\left\langle {{\mathrm{max}}_{0 \le t \le T}\left| {\angle \left( {{\bf l}_{ij}(t) ,{\bf l}_{ik}(t)} \right) - \angle \left( {{\bf l}_{ij}(0) ,{\bf l}_{ik}(0)} \right)} \right|} \right\rangle\) , where the angle brackets represent the average over all pair of bees j – k that are connected to bee i . Fig. 3: Computational model of mechanical adaptation. A cluster is modelled using particles that are linked via springs in a simple triangular lattice. a , Passive simulations: clusters of different aspect ratios ( L z / L x ), shown at the extreme of a period of horizontal oscillation. The colours represent the local normal strain of each honeybee δ l i , as defined in the text. Elongated clusters (on the right) experience a larger deformation at the tip of the cluster, while flattened clusters (on the left) experience much less deformation. b , For the same state as in a , we also show the maximum shear strain, δ θ i . c , Plots of the mean normal and shear strain (δ l ( Z ) and δ θ ( Z )) as a function of the distance from the base, Z , and aspect ratio L z / L x . We see that the maximum magnitude of the strains decreases as the cluster becomes flattened. d , Active stochastic simulations: when we impose a behavioural rule that allows the bees to sense the strains around them and move in the direction of increasing strain when the magnitude crosses a threshold \(\left( \tilde {\mathrm{\delta }l_i^t}_C \right)\) , this leads to spreading. The colours represent the local integrated signal, \(\tilde {\mathrm{\delta }l_i^t}\) , and the arrows point towards higher local signal. 
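The companion shear-strain measure can be sketched in the same style, averaging the maximum change in bond angle over all connected pairs (j, k) around bee i; again the data layout is an assumption for illustration:

```python
import numpy as np

def shear_strain(traj, neighbors, i):
    """Local shear strain delta_theta_i for bee i: the mean, over pairs of
    connected neighbours (j, k), of the maximum change over time in the angle
    between bond vectors l_ij(t) and l_ik(t), relative to the angle at t = 0."""
    traj = np.asarray(traj, dtype=float)

    def angle(t, j, k):
        u = traj[t, j] - traj[t, i]
        v = traj[t, k] - traj[t, i]
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards rounding error

    nbrs = neighbors[i]
    changes = []
    for a in range(len(nbrs)):
        for b in range(a + 1, len(nbrs)):
            j, k = nbrs[a], nbrs[b]
            a0 = angle(0, j, k)
            changes.append(max(abs(angle(t, j, k) - a0) for t in range(traj.shape[0])))
    return float(np.mean(changes))
```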
e , The scaled base contact area A ( t )/ A (0) as a function of time, with the probability distribution function of vertical displacement, shows a net negative response (that is, bees move upwards on average), similar to experimental observations (see Fig. 2e ). Full size image As expected, we see that for the same forcing, the maximum amplitude of the local strains increases as the cluster becomes more elongated (Fig. 3a,b and Supplementary Video 6 ). Therefore, these local strains can serve as a signal for the bees to move, and a natural hypothesis is that once the signal is above a certain critical value, the bees move. However, how might they choose a direction? While it may be plausible for the bees to simply move upwards against gravity, it is probably difficult to sense a static force (that is, gravity) when experiencing large dynamic forcing (that is, shaking) in a tightly packed assemblage. Instead, we turn to ask whether there are any local signals that would give honeybees a sense of direction. For all clusters, the strains are largest near the base (Fig. 3a,b and Supplementary Video 6 ) and decrease away from it, but in addition, as the cluster becomes more elongated, there are large local strains along the contact line where x = ± L 1 /2, where the bees are in contact with the baseboard. This is due to the effect of the pendular mode of deformation that leads to rotation-induced stretching in these regions. To quantify how the normal and shear strain vary as a function of the distance from the base, Z , we average δ l i and δ θ i over all bees that were at a certain Z position at t = 0 and define the following mean quantities: δ l ( Z ) = \(\left\langle {{\rm \updelta} l_i} \right\rangle\) , and δ θ ( Z ) = \(\left\langle {{\rm \updelta} \theta _i} \right\rangle\) , where the angle brackets indicate the average over all spring connections at the vertical position \(r_z^i(0)\) = Z .
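The height-resolved averages δl(Z) and δθ(Z) amount to binning bees by their initial vertical position and averaging each strain measure per bin. A minimal sketch, assuming per-bee strains have already been computed (function name and bin count are illustrative assumptions):

```python
import numpy as np

def strain_profile(z0, delta_l, n_bins=10):
    """Mean strain as a function of distance Z from the base: bin bees by
    their initial height z0 and average the per-bee strain delta_l per bin.

    z0 : (N,) initial vertical positions of the bees at t = 0
    delta_l : (N,) per-bee strains (normal or shear)
    Returns (bin_centres, mean_strain_per_bin); empty bins give NaN.
    """
    z0 = np.asarray(z0, dtype=float)
    delta_l = np.asarray(delta_l, dtype=float)
    edges = np.linspace(z0.min(), z0.max(), n_bins + 1)
    idx = np.clip(np.digitize(z0, edges) - 1, 0, n_bins - 1)
    means = np.array([delta_l[idx == b].mean() if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    return 0.5 * (edges[:-1] + edges[1:]), means
```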
Similar to the experimental data, the simulations show that the displacements U i for horizontal shaking of elongated clusters are larger in comparison to flattened clusters. As both strains δ l ( Z ) and δ θ ( Z ) are largest near the base, z = 0 (Fig. 3c and Supplementary Video 6 ), and decrease away from the supporting baseboard, they may serve as local signals that bees at the tip of the cluster respond to by moving up the strain gradient (Supplementary Figs. 3 – 5 and Supplementary Videos 7 and 8 ). This passive signature of a horizontally shaken assemblage suggests a simple behavioural hypothesis: bees can sense the local variations in the normal strain above a critical threshold, and move slowly up gradients collectively. We note that mechanical strain is invariant to translation and rotation of the whole assemblage; that is, it is independent of the origin and orientation of the frame of reference, and thus a natural choice (similar to how cells and bacteria respond to mechanical stresses 18 ). This behaviour will naturally lead to spreading of the cluster and thence smaller strains on the cluster. Noting that the timescale of the response of the bees is of the order of minutes while the duration of a single period is seconds, it is natural to consider the integrated local normal strain signal: \(\tilde {\mathrm{\delta }l_i^t}\) = \(\mathop {\sum}\nolimits_{\tilde t = t - T_{\rm w}}^t {\kern 1pt} {\rm \updelta} l_i^{\tilde t} \times {\rm d}t\) , where T w is chosen to be the period of the shaking (see detailed description in Supplementary Section C ). 
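The time-integrated signal at the end of this paragraph is a trailing-window sum of the per-step strain. A minimal sketch for a single bee, assuming uniformly sampled data (the discretization and function name are illustrative assumptions):

```python
import numpy as np

def integrated_signal(delta_l_t, dt, T_w):
    """Time-integrated normal strain signal for one bee: at each step t, sum
    delta_l over the trailing window of duration T_w (taken in the text to be
    the shaking period) and multiply by the time step dt."""
    delta_l_t = np.asarray(delta_l_t, dtype=float)
    w = max(1, int(round(T_w / dt)))                 # window length in steps
    csum = np.concatenate(([0.0], np.cumsum(delta_l_t)))
    t = np.arange(len(delta_l_t))
    lo = np.maximum(0, t - w + 1)                    # start of trailing window
    return (csum[t + 1] - csum[lo]) * dt
```

The cumulative-sum trick keeps the windowed sum O(T) overall instead of recomputing each window from scratch.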
Then our behavioural hypothesis is that when \(\tilde {\mathrm{\delta }l_i^t} > \tilde {\mathrm{\delta }l_i^t}_C\) the bee becomes active, and moves in the direction of the time-integrated negative normal strain gradient (that is, the active force is directed toward a higher local normal strain) according to the simple proportional rule \(F^{{\mathrm{active}}} = - f^{{\mathrm{active}}}\tilde{ {{\rm \updelta} {{\bf l}}_i^t}}\) . We note that moving up a gradient in time-integrated normal strain would also suffice to explain the observed mechanical adaptation. We carry out our simulations of the active cluster in two dimensions for simplicity and speed (we do not expect any changes in three dimensions), allowing bonds to break and reform on the basis of proximity, similar to how bees form connections, and follow the shape of the cluster while it is shaken horizontally. We find that over time, the cluster spreads out to form a flattened cone (Fig. 3d,e and Supplementary Video 7 ), confirming that the local behavioural rule that integrates relative displacements that arise due to long-range passive coupling in the mechanical assemblage wherein bees actively move up the local gradient in normal strain δ l i is consistent with our observations. If sufficiently large dynamic normal strain gradients drive shape adaptation, different shaking protocols that result in lower local strains should limit adaptation. One way is to shake the cluster gently, and this indeed leads to no adaptation (Fig. 2b responding to 0.01 g ). Another way to test our hypothesis is to shake the cluster vertically, exciting the spring-like mode of the assemblage. For the same range of amplitudes and frequencies as used for horizontal shaking, our simulations of a passive assemblage show that vertical shaking results in particles being collectively displaced up and down, with little variations in normal strain. 
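The behavioural rule above (passive below threshold, then a force proportional to the local signal gradient, directed toward higher signal) can be sketched as follows; the proportionality constant `f_active` and the gradient-vector convention are assumptions for illustration, not fitted values from the paper:

```python
import numpy as np

def active_force(signal_i, grad_signal_i, threshold, f_active=1.0):
    """Behavioural rule sketch: a bee is passive until its time-integrated
    strain signal exceeds the critical threshold; once active, it exerts a
    force up the local gradient in the integrated signal (toward regions of
    higher normal strain, i.e. the fixed base)."""
    grad = np.asarray(grad_signal_i, dtype=float)
    if signal_i <= threshold:
        return np.zeros_like(grad)       # below threshold: no active motion
    return f_active * grad               # move toward higher local signal
```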
As expected, even in active clusters with the behavioural rule implemented, little or no adaptation occurs as the threshold normal strain gradient is not achieved (Supplementary Figs. 5 and 6 and Supplementary Video 8 ). To test this experimentally, we shake the cluster vertically. We see that, in this case, the cluster shape remains approximately constant (Fig. 4a,b ) until a critical acceleration is reached, at which time a propagating crack results in the detachment of the cluster from the board (Supplementary Video 9 ). The resulting displacements at the tip for vertical shaking and horizontal shaking are in agreement with our hypothesis that differential normal strain gradients drive adaptation (Fig. 4c and Supplementary Video 10 ). Fig. 4: Response to vertical shaking. a , Vertical shaking (maximum acceleration 0.05 g ) of the bee cluster leads to a very small displacement. This is consistent with our simulations (see Supplementary Fig. 4 and Supplementary Section D ) that vertical shakes do not destabilize the bees differentially. b , Contact area of the base of the cluster relative to its initial area A ( t )/ A (0) versus time. Areas are defined as in Fig. 1d . The colours represent results for different accelerations of continuous shaking. c , Maximum displacement at the tip of a tall cluster as a result of a single horizontal and vertical shake. Bees do not respond or change the shape of the cluster when subjected to vertical shaking (green), but do respond substantially when shaken horizontally (blue). The black dotted line represents the experimentally observed threshold value to initiate active behaviour. Full size image Our study has shown how dynamic loading of honeybee swarm clusters leads to mechanical adaptation wherein the cluster spreads out in response to repeated shaking that induced sufficiently large gradients in the relative displacements between individuals. 
We show that this adaptive morphological response increases the mechanical stability of the cluster. A computational model of the bee cluster treated as an active mechanical assemblage suggests that the active behavioural response of bees to local strain gradients can drive bee movement from regions of low strain to those of high strain and cause the cluster to flatten. This behavioural response improves the collective stability of the cluster as a whole via a reversible shape change, at the expense of increasing the time-averaged mechanical burden experienced by the individual. Reporting Summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding author upon request. | If it's a bad idea to kick a hornet's nest, it's certainly a bad idea to shake a bee swarm. Unless, of course, it's for science. A team of Harvard University researchers spent months shaking and rattling swarms of thousands of honey bees to better understand how bees collectively collaborate to stabilize structures in the presence of external loads. The research is published in Nature Physics. "Our study shows how living systems harness physics to solve complex problems on scales much larger than the individual," said L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), Professor of Organismic and Evolutionary Biology (OEB), and Professor of Physics and senior author of the study. 
"We demonstrated that bees can harness the physicality of the environment to solve a global mechanical stability problem by using local sensing and action." This research follows earlier work by the group that showed how bees can also collectively maintain the temperature of a cluster using local sensing and actuation to prevent overheating or overcooling. Bee swarms form when a queen bee strikes out with a large group of worker bees to form a new colony. While scouts look for a new nest location, the colony forms a living, breathing structure, made of their own bodies, on a nearby tree branch. These clusters maintain their structure and stability for days in the presence of wind, rain and other external loads. A team of Harvard University researchers spent months shaking and rattling swarms of thousands of honey bees to better understand how bees collectively collaborate to stabilize structures in the presence of external loads. Credit: Jacob Peters, Orit Peleg/Harvard University "The primary question of our research was, given that individual bees can likely only sense their interactions with their neighbors, how do they make changes to maintain the overall structure of the cluster?" said Orit Peleg, a former postdoctoral fellow at SEAS and co-first author of the paper. Peleg is now an Assistant Professor of Computer Science at the University of Colorado Boulder. The researchers built a bee swarm by attaching a caged queen bee to a moveable board and waiting for worker bees to cluster around her. Once the cluster was formed, the researchers simulated wind by shaking the board horizontally and vertically. They observed that the swarm starts with a cone-like structure, with a certain height and base area. When shaken horizontally, the bees create a flatter cone by decreasing the height and increasing the base area. When the shaking stops, they go back to their original shape.
The bees know which way to move because they respond to the local changes from their neighbors. "Individual bees can tell the direction of the strain based on their connection to their neighbors," said Jacob Peters, who recently defended his Ph.D. in OEB and is co-first author of the paper. "Because the strains on the swarm are highest at the top of the swarm, where it's connected to the branch—or in this case, the board—they know to move up. All the bees move up together because they're influenced by this gradient, so it leads to a coordinated movement." The experimental setup consists of a motor driving a wooden board, on which a cluster of honey bees forms around a caged queen bee. The board can be moved in the horizontal or vertical axis at different frequencies and amplitudes. Credit: Jacob Peters, Orit Peleg/Harvard University Imagine playing Ring-a-Round-the-Rosy blindfolded. You don't know which direction everyone in the circle is moving, but you do know the direction your neighbor is moving because you're holding their hand. You don't know when everyone falls down, but you know when to fall down because your neighbor falls down. Like bees in a swarm, you follow the cues associated with the local strain from your neighbor. When the cluster flattens during horizontal shaking, load sharing by individual bees increases but the colony overall is more stable—similar to crouching when the ground is shaking. The researchers were able to mimic this behavior in a computer simulation by imposing rules at the local level. The researchers also found that when the bees were shaken vertically, the cluster did not adapt its shape because the local variations in deformations were smaller. This research could have broader implications for how we think of control algorithms and collaborative machines.
"When we build machines or materials, we use simple control algorithms that are top down, where you have a centralized command that controls all of the moving parts in the machine," said Peters. "But in this system, the bees are achieving this coordinated change in shape without a central controller. Instead, they are like a set of distributed agents with their own controllers and they have to find a way to coordinate without explicit long-range communication. By studying these types of systems, it could inspire new ways of thinking about distributed control of systems as opposed to traditional centralized control." | 10.1038/s41567-018-0262-1 |
Biology | Why do scientists chase unicorns? | Drew MacKellar et al. Streptomyces thermoautotrophicus does not fix nitrogen, Scientific Reports (2016). DOI: 10.1038/srep20086 Journal information: Scientific Reports | http://dx.doi.org/10.1038/srep20086 | https://phys.org/news/2016-02-scientists-unicorns.html | Abstract Streptomyces thermoautotrophicus UBT1 has been described as a moderately thermophilic chemolithoautotroph with a novel nitrogenase enzyme that is oxygen-insensitive. We have cultured the UBT1 strain and have isolated two new strains (H1 and P1-2) of very similar phenotypic and genetic characters. These strains show minimal growth on ammonium-free media and fail to incorporate isotopically labeled N 2 gas into biomass in multiple independent assays. The sdn genes previously published as the putative nitrogenase of S. thermoautotrophicus have little similarity to anything found in draft genome sequences, published here, for strains H1 and UBT1, but share >99% nucleotide identity with genes from Hydrogenibacillus schlegelii , a draft genome for which is also presented here. H. schlegelii similarly lacks nitrogenase genes and is a non-diazotroph. We propose reclassification of the species containing strains UBT1, H1 and P1-2 as a non-Streptomycete, non-diazotrophic, facultative chemolithoautotroph and conclude that the existence of the previously proposed oxygen-tolerant nitrogenase is extremely unlikely. Introduction Biological fixation of gaseous elemental nitrogen into ammonia is an important part of the global nitrogen cycle 1 . Several bacteria and archaea are known to carry out this process, either solely for their own biosynthetic needs 2 , or else as symbiotic partners with certain eukaryotes 3 . 
In many environments, bioavailable nitrogen is limiting for net primary productivity 4 and this relevance for agriculture and ecology has ensured that the biology of nitrogen fixation has been an active area of research since its discovery in the 19 th century. Bacteria synthesize ammonia from dinitrogen via a family of homologous nitrogenase enzymes, of which the molybdenum-iron (MoFe) nitrogenase is the most common 5 . The MoFe nitrogenase protein combines with the homodimeric nitrogenase reductase Fe-protein to form an (αβγ 2 ) 2 octameric enzyme complex 6 , encoded by the structural genes nifHDK . Each enzyme complex contains two iron-sulfur (4Fe:4S) clusters, two (8Fe:7S) P-clusters and two (7Fe:Mo:9S:C:homocitrate) FeMo cofactor active sites. Several accessory proteins are required for the synthesis and insertion of these complex cofactors and thus various subsets of some 16 different additional nif genes are present in diazotrophs 7 . A set of six nif genes, nifHDKENB , has been proposed as a diagnostic criterion for diazotrophy 8 . The MoFe nitrogenase has promiscuous activity and is able to reduce a number of substrates besides dinitrogen 9 . Of particular interest is its ability to convert acetylene into ethylene, which is often used as a proxy assay for nitrogenase activity. While the exact mechanism of nitrogenase function remains unknown 10 , the stoichiometry is thought to require at least 16 molecules of ATP, as well as six low-potential electrons from ferredoxin or flavodoxin, per molecule of dinitrogen reduced 11 , plus two more electrons to generate one molecule of hydrogen, an obligatory side product of nitrogenase function. 
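The stoichiometry stated above (at least 16 ATP and 6 electrons per N2 reduced, plus 2 electrons for the obligatory H2 side product) lends itself to simple bookkeeping. A minimal sketch; the function name and dictionary layout are illustrative assumptions:

```python
def nitrogenase_cost(n_N2):
    """Minimal MoFe nitrogenase stoichiometry, per the figures in the text:
    per N2 reduced, at least 16 ATP and 6 low-potential electrons for
    N2 -> 2 NH3, plus 2 more electrons for one obligatory H2 side product
    (8 electrons total per N2)."""
    return {
        "ATP": 16 * n_N2,          # lower bound on ATP hydrolysed
        "electrons": (6 + 2) * n_N2,  # 6 for N2 reduction + 2 for H2
        "NH3": 2 * n_N2,           # ammonia produced
        "H2": 1 * n_N2,            # obligatory side product
    }
```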
In addition to this high energetic cost, nitrogenase activity is reversibly inhibited by hydrogen (H 2 ) and carbon monoxide (CO) 12 ; the former of which is generated by nitrogenase itself and the latter generated by heme oxygenase in root nodules 13 , as well as being present in significant concentrations near sites of geothermal activity or combustion. More importantly, nitrogenase is rapidly and irreversibly damaged upon exposure to oxygen 14 , due to the effect of oxygen on the enzyme’s metal cofactors 15 , perhaps exposed to the solvent upon the cyclic dissociation of the nitrogenase and the nitrogenase reductase 16 . This sensitivity has led to a proliferation of evolutionary strategies to reduce oxygen tension near active nitrogenases 17 . In addition to the MoFe nitrogenase, some bacteria possess alternative nitrogenases that substitute iron or vanadium for molybdenum in the active site 18 , presumably for use when Mo is scarce. These alternative nitrogenases introduce a δ subunit encoded by vnfG or anfG , altering the stoichiometry of the nitrogenase complex, but the structural vnfHDK and anfHDK genes are homologous to their nif counterparts and the corresponding enzymes have even greater sensitivity to O 2 , as well as lower nitrogenase activity and greater relative hydrogen production, than the MoFe enzyme. A fourth, unrelated class of nitrogenase has been reported in the thermophilic carboxydotroph Streptomyces thermoautotrophicus UBT1 19 . This strain was isolated from the soil overlying burning charcoal piles and was studied by members of the laboratory of Ortwin Meyer, at the Universität Bayreuth, Bayreuth, Germany. UBT1 was described as an obligate chemolithoautotroph, capable of growth on CO or CO 2 and H 2 , but not on complex carbon sources 20 . Nitrogen fixation in this strain was found to be significantly different from that of other diazotrophs: it was not inhibited by H 2 , CO, or O 2 . 
Further, nif genes were not detected by Southern blot and the strain was not found to reduce acetylene to ethylene, suggesting that this was the first natural diazotroph to lack a nitrogenase homologous to MoFe. Fractionation of protein lysate was used to identify structural components of the nitrogenase and the N-termini of the proteins isolated showed no homology to known nitrogenase subunits 21 . The specific activities reported for this enzyme were unusually low in comparison to traditional nitrogenases. Most interesting of all, the enzyme was found to be insensitive to oxygen in cell lysates and actually depended upon its presence for the generation of the superoxide anion (O 2 − ), which was proposed to be used as an electron donor for the reduction of dinitrogen. A catalytic mechanism was proposed whereby CODH (enzyme St3) transfers electrons from water to oxygen via the oxidation of CO, a manganese superoxide dismutase (Mn-SOD; enzyme St2) transfers electrons from the resulting superoxide anion to a molybdenum hydroxylase, likely another CODH homolog (St1), which in turn uses them to reduce dinitrogen. The CODH that generates superoxide was found to have no nitrogenase activity 21 . Degenerate oligonucleotides designed against the N-termini of the purified proteins have been used to identify the candidate nitrogenase genes and they were found to be homologous to known carbon monoxide dehydrogenases (CODH; enzyme St1 encoded by sdnMSL ) and a superoxide dismutase (enzyme St2 encoded by sdnO ) 22 . Stimulated by these early results, we have sequenced the genome of S. thermoautotrophicus UBT1, provided by the original isolator, D. Gadkari, in two independent laboratories. We have also isolated two novel strains of S. thermoautotrophicus with the described phenotypic characters: strain H1 from another burning charcoal pile and strain P1-2 from soil near an active coal seam fire. 
We have sequenced the H1 strain and find an average of 95% identity between its coding sequences and those of UBT1. The genome of strain P1-2 was not sequenced. Both draft genome sequences for strain UBT1, as well as the draft genome for strain H1, lack known nitrogenase genes ( nif , vnf , or anf ). Further, all three draft genomes lack coding sequences with high identity to the published sdn genes; instead we note that the sdn genes previously published show high identity with genes in Hydrogenibacillus schlegelii DSM2000, a draft genome of which is also presented here. Finally, we find that the H1 and UBT1 genomes possess multiple loci encoding CODH enzymes, which is also the case in another organism previously reported to be a diazotroph and whose published genome sequence similarly lacks nif genes: Pseudonocardia dioxanivorans . We accordingly included this strain in our characterization of putative novel nitrogen fixation activities. The UBT1, H1 and P1-2 strains all grow on CO and H 2 gas on mineral media, as well as on pyruvate, despite the previous description of UBT1 as an obligate chemolithoautotroph. Growth was not observed on other carbon sources tested. The H1, UBT1, P1-2 and P. dioxanivorans CB1190 strains sustain minimal growth on media lacking NH 4 , but do not incorporate heavy-isotope-labeled N 2 gas ( 15 N 2 ) into biomass as measured in attempts at different sites. These results are inconsistent with nitrogen fixation, but instead suggest that this organism can incorporate ammonia and other combined nitrogen species at exceedingly low concentrations. We propose reclassifying Streptomyces thermoautotrophicus as a non-Streptomycete, non-diazotrophic, facultative chemolithoautotroph. Results Strain UBT1 and two new strains grow largely as previously described The sources of the Streptomyces thermoautotrophicus strains used in this paper are summarized in Table 1 .
We found the morphology and general growth characteristics of all strains to be consistent with the previous description for strain UBT1 20 ( Fig. 1 ). Specifically, Streptomyces thermoautotrophicus grows rapidly on hydrogen, reaching its maximal extent of growth within three days of incubation at 60 °C on solid media in the presence of a 4:4:1:1 mix of N 2 :H 2 :CO 2 :O 2 . The strains grow more slowly on a 9:8:2:1 mix of CO:N 2 :O 2 :CO 2 , reaching maximal growth within four days of incubation at 60 °C. Limited germination could be observed on some carbon sources in the absence of H 2 or CO, but the only carbon source found to sustain growth reproducibly was pyruvate. Growth in liquid was poor, either in suspension in pyruvate or as a pellicle in the presence of H 2 or CO. Serial passage on mineral media solidified with gellan gum and in the presence of H 2 or CO was robust, as was serial passage on plates containing pyruvate and NH 4 (DSM260 media), suggesting that the initial assessment of this species as an obligate chemolithoautotroph was premature. Table 1 Sources of S. thermoautotrophicus strains used in this paper. Full size table Figure 1 Morphology of S. thermoautotrophicus . On mineral media solidified with gellan gum, growth is more robust in the presence ( A ) than the absence ( B ) of 30 mM NH 4 Cl (scale bars are 200 μm in each image). Electron microscopy shows smooth, branching substrate mycelia and hairy/rugose decorations on mature spores, which are arranged in straight, unbranched chains (( C ), magnification is 6,810x). Spore formation is robust on media containing NH 4 Cl (( D ), magnification is 3,600x), but when grown on media lacking NH 4 Cl the ratio of substrate to aerial mycelia is higher and the hyphae assume varied morphologies (( E ), magnification is 3,290x). 
In cross section, unknown vacuoles and other subcellular features appear and may be more prominent in mature spores (lower center, identifiable by rugose exterior) than in other hyphae (( F ), magnification is 20,000x). Full size image The strains did not grow above 80 °C or below 42 °C. H1 and UBT1 form a non-fragmenting, branched substrate mycelium and, after at least 24 hours of growth, produce aerial hyphae that septate into chains of grey-pigmented spores, the most mature form of which bear hairy to rugose surface decorations 23 . Substrate hyphae, aerial hyphae and free spores are all highly hydrophobic and remain buoyant atop aqueous solutions. S. thermoautotrophicus strain H1 was isolated from soil obtained from a charcoal pile in Hasselfelde, Germany, by adding soil particles to liquid media under a hydrogen atmosphere and transferring the resulting buoyant mycelia after 9 days of culture. The original liquid phase enrichment of S. thermoautotrophicus from soil retained spores that formed non-mycelial colonies, which grew well on complex media. 16S rRNA analysis of these colonies indicated that they were primarily related to thermophilic Firmicutes, including Bacillus , Brevibacillus and Geobacillus . Interestingly, liquid phase growth of this initial consortium resulted in extensive pellicle growth, which was lost upon subsequent purification of S. thermoautotrophicus after serial passage and selection of single colonies on plates. Growth of the selected strain on plates remained robust, however. S. thermoautotrophicus strain P1-2 was isolated from soil situated near an active coal seam fire in Centralia, PA, USA. Its nutritional requirements were identical to strain H1. PCR with universal RNA primers (27F and 1492R 24 ) produced a single amplicon, Sanger sequencing of the amplicon indicated 99.3% identity with one of the 16S rRNA loci present in both genomes from the H1 and UBT1 strains ( Supplementary Fig. 1 ). 
Genome sequences indicate a novel genus Streptomyces thermoautotrophicus strain UBT1 was grown and sequenced independently at two different facilities (Rosario, Argentina and Aachen, Germany) and S. thermoautotrophicus strain H1 at a third (Boston, MA, USA; Fig. 2 ). Optical mapping data (OpGen, Gaithersburg, MD) were collected for strain H1 and indicated the presence of a single, circular chromosome of 5.25 Mbp. A combination of short reads (Illumina), scaffolding reads (Pacbio) and optical mapping consolidated genome assembly of strain H1 into a single scaffold 4.90 Mbp in length (of which 4.5 Mbp is sequenced data and 0.4 Mbp is scaffolding Ns), containing an estimated 85% of the genome, with 5 additional contigs >1 kbp in length that could not be aligned to the map (and that could collectively account for another 1.3% of the genome) and one 48 kbp contig that is likely a plasmid, based on the presence of multiple genes encoding phage- and plasmid-related putative functions. The genome sequences of strain UBT1 are more fragmentary, but have similar lengths and gene content, including the possession of a 55 kbp plasmid (which shares no significant similarity with that of the putative H1 plasmid). The genome characteristics are compared to those of other Actinomycetes in Supplementary Table 1 . Comparison of the H1 and UBT1 (Aachen) genomes by the Genome Blast Distance Phylogeny (GBDP) approach, as implemented with default settings and bootstrapping on a web server ( ; date of access 21/06/2015) 25 yielded a distance estimate of 0.0067, comparable to a DNA-DNA hybridization (DDH) value of 94.9%, predicting a 97.18% probability that DDH >70% and thus that the strains belong to the same species. Figure 2 Genome map of S. thermoautotrophicus H1. 20 scaffolds from the SPAdes assembly were aligned to the optical map and gaps in the chromosome were filled in with N’s manually. Six nodes did not align; the largest of which is a predicted plasmid. 
The two outer rings represent genes on the two strands of the chromosome. The inner ring shows the GC content and prominent dips represent gaps in the assembled sequence; with length and position inferred from the optical map. Full size image The circularity and relatively modest size of the H1 genome are uncommon among Streptomycetes 26 and accordingly we undertook analysis of the nearest relatives of these strains. The UBT1 (Aachen) genome assembly contains three small subunit ribosomal RNA genes, while the more fragmentary Rosario genome assembly contains two. The H1 strain genome assembly similarly has two. Each genome contains two different 16S gene sequences that share only 90% identity between them. Between the H1 and UBT1 strains, however, each 16S sequence in one strain matches the corresponding sequence from the other strain with 99% identity. The presence of significantly different ribosomal RNA operons within a single bacterial genome has been observed before, especially in thermophiles 27 . A phylogenetic tree comparing the 16S sequences shows that one of the 16S genes clusters most closely with Thermobispora bispora and Acidothermus cellulolyticus ( Supplementary Figure 2a ). The identities with either species are 90%. The other 16S sequence in either genome has various nearest neighbors among other actinomycetes, but shares less than 93% identity over 93% of its length with any of them, as determined by BLAST. 
In the absence of conclusive evidence as to which of these divergent rRNA sequences is ancestral to the organism and which may be the product of a more recent horizontal gene transfer or duplication event, we adopted the approach of concatenating 14 well-conserved proteins (13 ribosomal proteins and one phosphatidate cytidylyltransferase) within the genome sequences of strains H1 and UBT1 and performing alignment and neighbor-joining analyses with similar sequences from all Actinobacteria with fully sequenced genomes that were available at the time of analysis 28 . The resulting tree, while possessing some nodes with low confidence based on the supporting bootstrap values, is largely in agreement with recent results that have followed the same approach 29 ( Supplementary Figure 2b ). From this approach, it appears that H1 and UBT1 are most closely related to the clade that includes the genera Acidothermus , Streptosporangium , Thermobifida , Thermobispora and Thermomonospora . By this analysis, these strains do not belong in the genus Streptomyces and instead are in the class Actinobacteria , subclass Actinobacteridae , order Actinomycetales . The suborder, family and genus are all undetermined, although nearby families include Frankineae ( Acidothermus ), Streptosporangineae ( Thermomonospora , Streptosporangium , Nocardiopsis , Thermobifida ) and Pseudonocardineae ( Thermobispora ).

Genomes lack nitrogenases; possess multiple CODHs

No nif genes are present in any of the three draft genomes, other than a single gene annotated as “Nitrogen-fixing NifU, C-terminal:Rieske [2Fe-2S] region”. This single gene is located between a predicted uptake hydrogenase and factors for the maturation of that enzyme, an arrangement homologous to similar clusters in other organisms. Proteins with C-terminal homology to NifU have predicted functions in metal cofactor synthesis, including in non-diazotrophs, and are not diagnostic for the presence of a functional nitrogenase 30 .
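The marker-concatenation step described above (conserved proteins joined per strain into one long sequence before alignment and tree building) can be sketched as follows. The family names and sequences below are invented for illustration; the real analysis used 14 conserved marker proteins per genome:

```python
def concatenate_markers(per_family, family_order):
    """per_family maps family name -> {strain: sequence}.
    Returns strain -> concatenated sequence, in the fixed family order,
    keeping only strains represented in every family."""
    strains = set.intersection(*(set(per_family[f]) for f in family_order))
    return {s: "".join(per_family[f][s] for f in family_order)
            for s in sorted(strains)}

# Toy marker families (hypothetical sequences, for illustration only):
families = {
    "rpsL": {"H1": "MPTIQQ", "UBT1": "MPTIQQ", "Bsub": "MPTINQ"},
    "rplB": {"H1": "MAVKKC", "UBT1": "MAVKKC", "Bsub": "MAIKKY"},
}
cat = concatenate_markers(families, ["rpsL", "rplB"])
print(cat["H1"])  # MPTIQQMAVKKC
```

The concatenated sequences are then what gets aligned and fed into distance-matrix and neighbor-joining software, as described in the Methods.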
The genome also contains no sequences with significant homology to anf or vnf genes for the alternative Fe- or V-nitrogenases. The superoxide-dependent nitrogenase was previously identified to involve two molybdenum hydroxylases, St1 and St3, which were homologs of CODH. St1 was reported to be the putative dinitrogen reductase and St3 a true CO dehydrogenase responsible for generation of the superoxide anion radical. Both were reported to possess the same heterotrimeric structure as typical CODH enzymes or other molybdenum hydroxylases 22 . The UBT1 genome contains four operons encoding predicted aerobic CODH and the H1 genome contains three ( Table 2 ). Two of the UBT1 operons contain large subunit ( coxL ) genes encoding the AYXCSFR motif characteristic of form I aerobic CODH 31 and are located near coxDEF or coxG accessory genes necessary for the maturation of the functional CODH complex. The other two loci contain large subunits with alternative motifs that suggest categorization with form II CODH, or other molybdenum hydroxylases 32 . The medium subunits of all four loci have sequences consistent with the ability to bind flavin adenine dinucleotide (FAD) cofactor, despite the previous finding that St1 (encoded by sdnM ) does not contain FAD 22 . One of the coxMSL loci, present in both the H1 and UBT1 genomes, shows high identity between the encoded proteins and sequences published by Ortwin Meyer in 1993 31 ( Fig. 3 ), suggesting that the UBT1 strain sequenced here is the same strain as reported on in that period.

Table 2: Predicted carbon monoxide dehydrogenase genes in strains H1 and UBT1 and H. schlegelii , compared to sdn genes.

Figure 3: Gene sequences from the H1 and UBT1 genomes match closely those reported early in the cultivation of UBT1 by the Meyer group. Protein alignments from S.
thermoautotrophicus CODH published by the Meyer group in 1993 versus corresponding sequences from the genomes of UBT1 and H1; CoxM ( A ), CoxS ( B ) and CoxL ( C ), respectively. For brevity, only the first 50 residues are shown for each alignment. N-terminal Edman degradation sequences for CoxM and CoxL are from 31 . The complete CoxS sequence was obtained from an unpublished dissertation from the Meyer laboratory; the full-length protein sequence possesses 100 percent identity with either H1 or UBT1 CoxS. Residues that are identical are black; residues that are similar to the consensus are colored green; all others are colored red.

The identity between the N-terminal protein sequences published for the St1 proteins and any of the UBT1 or H1 CODH sequences, however, is low (maximum identity between any H1 or UBT1 CODH protein and St1M, S and L respectively is 53%, 50% and 35%) 21 . Nucleotide sequences coding for the St1 subunits, based on sequencing clones from a library of DNA that hybridized with degenerate oligonucleotides derived from the N-terminal St1 sequences, have previously been determined 33 . These sequences have also recently been uploaded to the GenBank database (GI: 589823865, 589823867 and 589823869, for sdnMSL , respectively). The highest identity of any of the proteins encoded by these genes to any of the H1 or UBT1 sequences at the amino acid level is 50%, 63% and 62%, for the proteins encoded by sdnM , S and L , respectively ( Table 2 ). These proteins in our S. thermoautotrophicus genomes are all annotated as CoxM, S and L proteins. The St2 protein was previously identified as a manganese-dependent superoxide dismutase (Mn-SOD), encoded by the sdnO gene. Similarly to St1, a nucleotide sequence for the entire sdnO gene had been recorded 33 and has recently been uploaded to GenBank (GI:588295007).
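The form I versus form II distinction drawn above rests on whether the CoxL large subunit carries the AYXCSFR active-site motif (X variable). A minimal motif scan in Python; the sequence fragments are invented, not real CoxL sequences:

```python
import re

# Form I aerobic CODH large subunits carry the AYXCSFR active-site
# motif, where X is any residue; alternative motifs suggest form II
# CODH or other molybdenum hydroxylases (see text).
FORM_I_MOTIF = re.compile(r"AY.CSFR")

def classify_coxL(seq: str) -> str:
    if FORM_I_MOTIF.search(seq):
        return "form I CODH candidate"
    return "form II / other Mo-hydroxylase"

# Hypothetical fragments, for illustration only:
print(classify_coxL("MDKAYRCSFRGQ"))  # motif present
print(classify_coxL("MDKAYRGSFRGQ"))  # motif absent
```

A real classification would of course be run over the full translated coxL open reading frames, alongside checks for the coxDEF/coxG accessory genes mentioned above.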
The UBT1 and H1 genomes each contain one predicted Fe/Mn-SOD; identity with the St2 protein sequence encoded by the reported sdnO gene sequence is 38% at the amino acid level ( Table 2 ).

Superoxide-dependent nitrogenase genes match Hydrogenibacillus schlegelii

The proteins encoded by sdnMSL and sdnO have higher identities, however, with proteins from other strains which have partial sequences within the GenBank database. A BLAST search revealed that the closest match for sdnL was to a partial coxL gene sequence (GI:38679249) in Hydrogenibacillus schlegelii (formerly Bacillus schlegelii ), for which no full genome sequence was available. In order to investigate the relationship between the sdn sequences and H. schlegelii , we undertook sequencing the genome of that organism as well and report here a draft sequence for the genome of H. schlegelii strain DSM2000. This strain is a known chemolithoautotroph, with a similar optimum growth temperature to S. thermoautotrophicus and a similar capacity to grow on CO or H 2 . Accordingly, the draft genome contains genes encoding ribulose bisphosphate carboxylase/oxygenase (RuBisCO), phosphoribulokinase (PRK), CODH and uptake hydrogenase. H. schlegelii has not been described as a diazotroph. Its genome lacks nif / anf / vnf nitrogenase genes, and we found that the strain failed to grow on media lacking NH 4 Cl, with no visible turbidity in cultures incubated for up to one month, either when grown chemolithotrophically on H 2 /CO 2 , or when grown heterotrophically on pyruvate. The DSM2000 genome contains a single locus encoding a putative molybdenum CODH operon (TR75_12445-55) with >99% identity with the sdn operon at the nucleotide level. Translations of the open reading frames (ORFs) within also demonstrated >99% identity to the St proteins ( Table 2 ). It was further determined that a Mn-SOD within the H. schlegelii genome (TR75_10445) bore 100% identity at the amino acid level to the St2 protein ( Table 2 ).
Extensive overlap was also found at the nucleotide level between the DSM2000 genome sequence and the region around the coding sequence for sdnO (99.62% identity, over 7.2 kbp), previously sequenced by the Meyer group.

Strains grow on media lacking NH 4 Cl, but fail to incorporate significant 15 N 2 into biomass

All S. thermoautotrophicus strains tested were capable of continuous culture on mineral media lacking added NH 4 + , but peculiarities regarding their growth were apparent. Specifically, growth was far more robust in the presence of 30 mM NH 4 + . Furthermore, growth was only supported on plates solidified with 0.5–1.0% gellan gum, or on agar fortified with charcoal or bio-char. A CHN analysis on the gellan gum used returned a nitrogen content of 0.095%. Growth on mineral media solidified with Noble agar was not observed. In addition, growth on plates containing either gellan gum or agar plus bio-char was greatly enhanced by the presence of plates containing 30 mM NH 4 + within the same atmosphere, presumably due to the former cross-feeding on NH 3 volatilizing from the latter. The high number of CODH enzymes encoded by the S. thermoautotrophicus genomes and the purported identity of the St1 nitrogenase as a modified CODH prompted us to examine other published genomes containing high numbers of such enzymes. We identified Pseudonocardia dioxanivorans CB1190 as another strain whose sequenced genome encodes multiple molybdenum hydroxylase enzymes (10 operons, two of which are putative type I CODH enzymes 34 ) and which has been reported to be a diazotroph 35 . The strain CB1190 similarly lacks nif , anf and vnf genes, yet reportedly displayed a similar ability to grow on media lacking NH 4 + as an additive. This strain was accordingly included in subsequent experiments to test for diazotrophy using isotope labeling.
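The gellan gum nitrogen content reported above (0.095% by CHN analysis) supports a back-of-envelope comparison against the 30 mM NH 4 + supplement. A sketch assuming 1.0% w/v gellan gum; only the 0.095% figure comes from the text, the rest is standard arithmetic:

```python
# Rough comparison of trace nitrogen from the gelling agent versus the
# ammonium supplement (assumption: 1.0% w/v gellan gum in the plates).
N_MOLAR_MASS = 14.007  # g/mol

gellan_g_per_L = 10.0                              # 1.0% w/v
gellan_N_mg_per_L = gellan_g_per_L * 0.00095 * 1000  # 0.095% N by mass
nh4_mM = 30.0
nh4_N_mg_per_L = nh4_mM * N_MOLAR_MASS             # mM * g/mol = mg/L

print(f"N from gellan gum:  {gellan_N_mg_per_L:.1f} mg/L")   # 9.5 mg/L
print(f"N from 30 mM NH4+: {nh4_N_mg_per_L:.0f} mg/L")
print(f"ratio: ~{nh4_N_mg_per_L / gellan_N_mg_per_L:.0f}x")
```

On these assumptions the gelling agent alone supplies milligram-per-liter quantities of nitrogen, roughly 40-fold less than the ammonium supplement, which is consistent with the minimal-but-reproducible growth described above being supported by scavenged trace nitrogen.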
In order to determine whether the growth of our strains was independent of exogenous combined nitrogen, we undertook labeled nitrogen assays at four independent laboratories ( Table 3 ). Plates were inoculated with spores of the strains indicated and incubated in the presence of acid-washed 15 N 2 gas. After 3–5 days, biomass was scraped from plates and subjected to isotope-ratio mass spectrometry (IRMS). At three sites with thorough 15 N 2 gas washing, no enrichment of 15 N was detected in biomass relative to plates supplemented with NH 4 Cl. At one site, a slight enrichment of 15 N in biomass was observed. We note, however, that the 15 N 2 gas used in this experiment (Aldrich catalog #364584, Lot#SZ1670V) was found in a recent report 36 to contain significant amounts of contaminating 15 NH 4 + and 15 NO 3 − /NO 2 − . The small amount of 15 N detected in the biomass of our samples is much less than the amount that would be present in unwashed gas ( Supplementary Fig. 3 ); thus it is possible that not all of this contaminating fixed 15 N was captured by washing for 30 minutes in 5% H 2 SO 4 . Fixation was also not detected in P. dioxanivorans CB1190.

Table 3: Results of growth of selected strains in the presence of 15 N 2 gas.

Discussion

We conclude that there is no evidence for, and substantial evidence against, the existence of an oxygen-tolerant nitrogenase as previously described in Streptomyces thermoautotrophicus . Our major empirical lines of evidence are both genomic and phenotypic. First, using genome sequencing, we failed to find either canonical or the proposed novel nitrogenase enzymes in our S. thermoautotrophicus isolates. Rather, sequences of the proposed enzymes are present in another thermophilic, non-diazotrophic bacterium. Second, extensive multi-site fixation experiments using 15 N 2 gas failed to demonstrate incorporation into biomass. Growth experiments suggest that S.
thermoautotrophicus strains are highly effective nitrogen scavengers, but not nitrogen fixers. Finally, we point out some other inconsistencies in the original characterization of the proposed oxygen-tolerant nitrogenase system. Apart from the proposed superoxide-dependent nitrogenase, no aerobic reduction of nitrogen to ammonia is known. An unprecedented mechanism in the scheme is the use of superoxide as an electron donor for a biologically productive reaction. No other productive biological use of superoxide is known: many cells produce superoxide as a toxin, and it usually arises as a toxic byproduct of cellular metabolism. With a standard potential of −330 mV 37 it is comparable to the NADH or NADPH potential of −320 mV, but much more toxic due to its reactivity. The Fe protein of the MoFe nitrogenase has an apparent midpoint potential of −470 mV 38 and the physiological electron donor is ferredoxin or flavodoxin. Therefore it seems unlikely that superoxide is sufficiently reducing to drive the nitrogenase reaction. In addition, none of the putative superoxide-dependent nitrogenase system proteins has a plausible ATPase domain, even though ATP hydrolysis was required for the activity. In the original work, labeled nitrogen was not used during the purification of the nitrogenase; instead, an ammonia production assay 21 was used, based on the reaction of ammonia with sodium phenate and hypochlorite 39 . This assay is very sensitive, but is known to have a high background 40 , which may have contributed to a misidentification of nitrogenase activity. There is absolutely no genomic evidence that S. thermoautotrophicus is diazotrophic. No MoFe, VFe, or Fe-only nitrogenase genes are present in the draft genome sequences presented here. The UBT1 strain was previously found to lack nif genes by dot blotting 19 and its putative oxygen-tolerant nitrogenase was characterized by protein lysate fractionation and assays for ammonia production in vitro 21 .
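The redox argument above can be made quantitative with ΔG = −nFΔE. A sketch using only the potentials quoted in the text (superoxide ≈ −330 mV; nitrogenase Fe protein ≈ −470 mV):

```python
# Free-energy cost of pushing one electron from superoxide (donor)
# to the nitrogenase Fe protein (acceptor), using dG = -n*F*dE.
F = 96485.0  # Faraday constant, C/mol

def delta_G_kJ(donor_mV: float, acceptor_mV: float, n: int = 1) -> float:
    """Free energy (kJ/mol) of transferring n electrons from donor to
    acceptor; positive values mean the transfer is uphill."""
    delta_E_V = (acceptor_mV - donor_mV) / 1000.0
    return -n * F * delta_E_V / 1000.0

dG = delta_G_kJ(donor_mV=-330.0, acceptor_mV=-470.0)
print(f"{dG:.1f} kJ/mol per electron")  # positive, i.e. unfavorable
```

The transfer is uphill by roughly 13.5 kJ/mol per electron on these numbers, which is the quantitative core of the statement that superoxide is unlikely to be sufficiently reducing to drive the nitrogenase reaction.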
The identities of the three protein complexes of the proposed oxygen-tolerant nitrogenase were previously found to be two variants of molybdenum hydroxylases related to CODH and an SOD-like protein 21 , 22 . However, N-terminal sequences of the proposed heterotrimeric St1 nitrogenase and of the homodimeric St2 nitrogenase reductase, as well as the sequences of their respective genes sdnMSL and sdnO , do not match any sequences present in the draft genomes of strain UBT1 or the novel isolate H1 21 , 22 . The genes for the proposed oxygen-tolerant nitrogenase are in fact present in the genome of another thermophilic non-diazotrophic bacterium. The sequences for sdnMSL and sdnO match the coxMSL and sod gene sequences from Hydrogenibacillus schlegelii DSM2000 with identities between 99.1% and 100% at the nucleotide level; DSM2000 is an isolate closely related to H. schlegelii DSM9132 , which was studied by the Meyer group in the same period as S. thermoautotrophicus 41 , 42 , 43 . S. thermoautotrophicus and H. schlegelii overlap closely in their nutritional requirements and optimal growth temperature, and contamination of cultures of S. thermoautotrophicus is therefore a possibility. If the Meyer group did contaminate their liquid culture of S. thermoautotrophicus with H. schlegelii , these are the sequence data we would observe. We conclude that the St1 and St2 proteins are in fact the aerobic CODH and superoxide dismutase proteins from a strain of Hydrogenibacillus schlegelii. In addition to the absence of the putative oxygen-tolerant nitrogenase genes, our extensive phenotypic characterization does not support the claim that S. thermoautotrophicus is diazotrophic. The strain of S. thermoautotrophicus we have received from D. Gadkari represents the closest known link to that which was claimed to be diazotrophic in 1992.
Independent samples of that strain were cultured by different personnel at two independent sites, found to grow poorly on media lacking NH 4 Cl, grew reproducibly only on media containing additives that may contribute combined nitrogen and did not incorporate 15 N 2 into biomass. We fail to see convincing evidence for diazotrophy in this strain. In addition, strain H1, isolated from a burning charcoal pile using procedures detailed for the isolation of UBT1 by the original authors 20 and strain P1-2, isolated from soil overlying a coal seam fire, share morphology and physiology with UBT1 that are indistinguishable by any test performed thus far. They qualify as members of the same species as UBT1, according to whole-genome prediction of DNA-DNA hybridization and 16S rRNA sequence identity for strains H1 and P1-2, respectively. While there is no guarantee that independent isolates of a particular species of bacteria will share identical metabolic capabilities, it is striking that the H1 and P1-2 strains, which were not cultivated for prolonged periods in the laboratory before being subjected to 15 N 2 incorporation assays, show a similar capacity for minimal but reproducible growth on media lacking added NH 4 Cl, yet also fail to show significant incorporation of acid-washed, labeled dinitrogen into biomass. Similar to our results in S. thermoautotrophicus , the abundance of CODH paralogs in the genome of P. dioxanivorans failed to predict functional diazotrophy in that strain, as measured by 15 N 2 assays. Subsequent dialogue with the authors responsible for the isolation and initial characterization of the P. dioxanivorans species confirmed that, contrary to early reports, the sequenced strain lacks known nitrogenase genes and that the strain now available is not diazotrophic (L. Alvarez-Cohen, personal communication). 
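The 15 N 2 incorporation measurements discussed above are typically reported as δ15N values against atmospheric N 2 and converted to atom% 15 N when assessing enrichment. A sketch of that bookkeeping, assuming the standard atmospheric 15N/14N ratio of ~0.0036765 (the delta values below are hypothetical examples):

```python
# IRMS bookkeeping: delta-15N (per mil vs. air) <-> atom% 15N.
R_AIR = 0.0036765  # assumed 15N/14N ratio of atmospheric N2

def atom_percent_15N(delta_15N_permil: float) -> float:
    """Convert a delta-15N value (per mil vs. air) to atom% 15N."""
    r = R_AIR * (1.0 + delta_15N_permil / 1000.0)
    return 100.0 * r / (1.0 + r)

natural = atom_percent_15N(0.0)    # unlabeled biomass, ~0.366 atom%
labeled = atom_percent_15N(50.0)   # hypothetical slight enrichment
print(f"natural abundance: {natural:.4f} atom% 15N")
print(f"delta +50 per mil: {labeled:.4f} atom% 15N")
```

A genuine diazotroph grown under a few percent 15 N 2 shifts far above natural abundance, which is why even a slight, contamination-scale enrichment (as at the one site discussed above) is distinguishable from true fixation.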
Other researchers have identified bacteria capable of growth on trace environmental sources of combined nitrogen 44 , 45 and previous claims of diazotrophy in various strains have subsequently been found to be premature 46 , 47 . A high-affinity pathway for assimilation of scarce combined nitrogen may prove a viable alternative to the maintenance of a complex pathway for nitrogen fixation when ammonia and nitrate are scarce. Efficient scavenging of combined nitrogen is certainly a more parsimonious explanation for our data than a novel nitrogenase that is orthogonal to all known systems and insensitive to oxygen. While it remains possible that novel nitrogenases will be uncovered as the resources for querying the genomes of unculturable bacteria advance, any such enzymes must be supported by sensitive functional assays in order to verify their activities. Among such assays, measuring the incorporation of isotopically-labeled dinitrogen gas into biomass by mass spectrometry remains the most sensitive and specific 48 , 49 . The high sensitivity of isotope-ratio mass spectrometry, however, can also be a liability, as when the input reagent may be contaminated with fixed 15 N, a fact that may have contributed to the confusion regarding the status of UBT1 and other strains as putative diazotrophs. We cannot rule out the possibility that the strains used in the previous publications cited possessed a novel nitrogenase, which was subsequently lost upon prolonged cultivation in the laboratory. Any such enzyme, however, would likely not be constituted of subunits with the sequences previously published, as their presence in H. schlegelii does not correspond with the ability to fix nitrogen, nor does their absence in UBT1 limit its ability to grow on media containing no added NH 4 Cl. Should a viable sample of diazotrophic UBT1 be recovered, we welcome the chance to characterize it further. 
However, given the evidence in hand, we conclude that the existence of the proposed oxygen-tolerant nitrogenase system is extremely unlikely. Regarding the reassignment of the phylogeny of this species: the identity between the 16S rRNA sequences in the H1, UBT1 and P1-2 strains is sufficiently high to warrant their classification as members of a single species. When compared to other Actinobacteria, however, identity falls below the commonly suggested guidelines for placing these strains within a known genus 50 . More sensitive protein sequence alignments support this divergence and suggest an early-branching member of the order Streptosporangiales. It is unclear at the current level of resolution available from full genome sequences whether these strains best fit as members of the family Nocardiopsaceae, Thermomonosporaceae, or a new family. In any case, the level of divergence from categorized species is sufficient to recommend that Streptomyces thermoautotrophicus be reassigned to a new genus and described as a non-diazotroph on the basis of this work. The UBT1 (Rosario), H1 and P1-2 strains have been deposited in the Leibniz-Institut DSMZ (Deutsche Sammlung von Mikroorganismen und Zellkulturen GmbH) culture collection, with catalogue numbers DSM 100163, DSM 100164 and DSM 100422, respectively.

Methods

Strain isolation and cultivation

S. thermoautotrophicus strain UBT1 was kindly provided by the original author Dilip Gadkari, Bayreuth, Germany, to research groups at the Universidad Nacional de Rosario, Argentina and the RWTH Aachen, Germany. At the Universidad Nacional de Rosario, S. thermoautotrophicus strain UBT1 was grown in DSMZ Medium 811, in a defined atmosphere of 60% CO, 20% CO 2 , 20% air. Plates were grown on 1.5% agar supplemented with 0.5% activated charcoal. At the RWTH Aachen, S.
thermoautotrophicus strain UBT1 was grown in NFIX mineral media supplemented with Drews elements (in μg/L: Na-Fe-EDTA 800, MnCl 2 10, CoCl 2 2, CuSO 4 1, ZnCl 2 2, LiCl 0.5, SnCl 2 *2H 2 O 0.5, H 3 BO 3 1, KBr 2, KI 2, BaCl 2 0.5, pH 6.0) in a defined atmosphere of 45% CO, 5% CO 2 and 50% air. S. thermoautotrophicus strain H1 was isolated from soil taken from a charcoal pile in Hasselfelde, Germany. Soil particles were immersed in mineral media 20 and incubated in the presence of H 2 and CO 2 at 60 °C. Pellicles emerged after several days and were transferred and enriched on plates of mineral media plus 0.5% gellan gum (Sigma) and 0.075% MgCl 2 *6H 2 O in a defined atmosphere of 40% H 2 , 10% CO 2 and 50% air, or else in 45% CO, 5% CO 2 and 50% air 51 . S. thermoautotrophicus strain P1-2 was isolated from soil located above an underground fire where a coal seam has been burning for several decades (Centralia, PA) 52 . The soil temperature in situ at the time of collection was 60 °C, and the collection site was immediately adjacent to steam issuing from a fissure. Isolation proceeded as for the H1 strain, except that isolation was conducted at an independent facility, liquid enrichment was in DSMZ261 mineral media 53 and plates were solidified with 1% gellan gum and 0.15% MgCl 2 . Pseudonocardia dioxanivorans CB1190 was grown on ATCC medium 196 35 for routine maintenance and was grown on NFIX mineral media plus glucose and gellan gum for labeled nitrogen assays. Azotobacter vinelandii strains DJ and DJ100 (Δ nifD ) 54 were grown on Burk's medium 55 with or without 30 mM NH 4 Cl for maintenance and labeled nitrogen assays. P. dioxanivorans and A. vinelandii were grown in air at 30 °C. Hydrogenibacillus schlegelii DSM2000 was grown on DSMZ Medium 260 or 261 56 , for heterotrophic or chemolithotrophic growth, respectively. To test for diazotrophy in this strain, variants of these media omitting NH 4 Cl were also prepared. H.
schlegelii was grown at 55 °C in either air or an atmosphere of 45% H 2 , 10% CO 2 and 45% air.

Labeled nitrogen assay

In all experiments, separate canisters of 15 N 2 gas were purchased from Aldrich Chemical (St. Louis, MO). The gas was acid-washed prior to introduction to the culture vessels: 15 N 2 gas was retrieved from the canister and injected into a Schott bottle filled with water and sealed with a rubber septum. Water was withdrawn upon introduction of the gas to equalize pressure and allow continued introduction of the gas until approximately half the volume of the bottle was occupied by the gas. Then concentrated H 2 SO 4 was introduced to bring the concentration of H 2 SO 4 in the water within the bottle to 5%. The gas was incubated for the indicated period of time with agitation before being extracted and introduced to the culture chamber.

At Harvard Medical School: 15 N 2 gas was acid-washed for 30 minutes. Plates of NFIX mineral media possessing or lacking 1.5 g/L (28 mM) NH 4 Cl and solidified with 0.5% gellan gum were inoculated with S. thermoautotrophicus H1 or UBT1 and incubated in the presence or absence of 2.5% 15 N 2 for 3 days. Then biomass was collected, dried at 80 °C for 12 hours and analyzed at the MBL Stable Isotope Laboratory (Woods Hole, MA) on a Europa 20-20 CF isotope ratio mass spectrometer equipped with a Europa ANCA-SL element analyzer. A. vinelandii was grown in suspension in Burk's medium possessing or lacking 1.5 g/L NH 4 Cl, with stirring, at 30 °C for 3 days.

At the RWTH Aachen: 15 N 2 gas was acid-washed for one hour. Plates of NFIX mineral media lacking 1.5 g/L (28 mM) NH 4 Cl and solidified with 1.0% gellan gum were inoculated with S. thermoautotrophicus H1 or UBT1 and incubated for five days in the presence of 10% 15 N 2 gas. Then biomass was collected, dried at 80 °C overnight and sent for analysis to ISO-analytical, Crewe, UK.

At Universidad Nacional de Rosario: 15 N 2 gas was acid-washed for 30 minutes.
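The acid-addition step in the washing procedure above (introducing concentrated H 2 SO 4 to bring the water in the bottle to 5%) is a simple dilution balance. A sketch, assuming a ~96% stock acid and treating both strengths as simple volume fractions (an approximation; the bottle volume is a hypothetical example):

```python
# Dilution balance for the acid-washing step: stock*V = target*(water + V).
def acid_volume_mL(water_mL: float, target_pct: float = 5.0,
                   stock_pct: float = 96.0) -> float:
    """Volume of stock acid to add so the final mixture is target_pct
    acid, treating concentrations as simple volume fractions."""
    return water_mL * target_pct / (stock_pct - target_pct)

# e.g. a 1 L Schott bottle half-occupied by gas leaves ~500 mL of water:
v = acid_volume_mL(500.0)
print(f"add ~{v:.1f} mL concentrated H2SO4")  # ~27.5 mL
```

This is only the mixing arithmetic; in practice one would also account for the exothermic mixing and always add acid to water, not the reverse.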
Plates of NFIX mineral media possessing or lacking 1.5 g/L (28 mM) NH 4 Cl, fortified with 0.5% activated charcoal and solidified with 1.5% agar were inoculated with S. thermoautotrophicus H1 or UBT1 and incubated at 60 °C for 5 days in desiccators in the presence or absence of 1.5% 15 N 2 . A. vinelandii DJ and DJ33 57 ( nifD − nifK − ) were grown in Burk's medium with (DJ33) or without (DJ) 1 g/L NH 4 Cl for 3 days at 28 °C in desiccators in the presence or absence of 0.75% 15 N 2 . Then biomass was collected, samples were dried at 60 °C for 24 hours and analyzed at the MBL Stable Isotope Laboratory (Woods Hole, MA) on a Europa 20-20 CF isotope ratio mass spectrometer equipped with a Europa ANCA-SL element analyzer.

At Michigan State University: 15 N 2 gas was acid-washed for 24 hours. Plates of NFIX mineral media possessing or lacking 1.5 g/L (28 mM) NH 4 Cl were solidified with 1% gellan gum; plates with NH 4 Cl and 6 plates without NH 4 Cl were each inoculated with UBT1, H1 and P1-2 strains. These plates were incubated for 3 days in 40% H 2 , 10% CO 2 and 50% air atmospheres with 2% of the atmosphere replaced with acid-washed 15 N 2 . We also incubated plates of Azotobacter vinelandii (wild type) and A. vinelandii ( nifD − mutant), which cannot fix nitrogen in Mo-containing media, as positive and negative controls, respectively. Controls were incubated on mineral media with 1% glucose either with ( A. vinelandii nifD − ) or without ( A. vinelandii WT) NH 4 Cl in the presence of an air atmosphere with ~2% of the atmosphere replaced with acid-washed 15 N 2 . After incubation, growth was scraped from plates and washed in phosphate-buffered saline to remove potential contaminating nitrogen. Two NH 4 -free plates' worth of biomass were pooled into single samples at this point to compensate for poor growth of H1, UBT1 and P1-2 strains on these media.
Biomass was dried at 60 °C for 24 hours, then analyzed for 15 N content on a GV Instruments Isoprime mass spectrometer interfaced with a EuroVector Elemental Analyzer 3000 Series.

Sequencing and bioinformatic analyses

At Universidad Nacional de Rosario: genomic DNA from S. thermoautotrophicus UBT1 (Rosario) was extracted with cetyltrimethylammonium bromide 58 . DNA was fragmented by nebulization, ends repaired, adaptors ligated and small fragments removed following GS-FLX Titanium library preparation. Libraries were sequenced with single-end FLX 454. The reads were assembled with Newbler v.2.5.3 59 and the genome annotated by RAST.

At the RWTH Aachen: genomic DNA from S. thermoautotrophicus UBT1 (Aachen) was prepared with a NucleoSpin Tissue kit (Macherey-Nagel, Germany). DNA was fragmented using a Bioruptor Pico (Diagenode, Belgium) and libraries were prepared with the TruSeq kit. Libraries were sequenced with the 250PE reagent kit on a MiSeq (Illumina). PacBio reads were generated by GATC (Germany). Illumina reads were trimmed for quality with Trimmomatic (V0.32) 60 and Illumina and PacBio reads were assembled with SPAdes version 2.5.1 61 . The resulting assembly was annotated with RAST.

At Harvard Medical School: genomic DNA from S. thermoautotrophicus strain H1 was isolated with the DNeasy blood and tissue kit (Qiagen). Quality was verified with Bioanalyzer chips (Agilent). DNA was fragmented by adaptive focused acoustics (Covaris) to 400 bp or 3 kb and libraries prepared with the TruSeq kit. Libraries were sequenced with 250 PE reagent kits on a MiSeq (Illumina). PacBio reads were generated with the SMRTbell Template Prep Kit. Reads were trimmed for quality with Trimmomatic 61 and assembled with SPAdes 3.1 61 . The resulting assembly was annotated with RAST 28 , augmented with a bidirectional BLAST 62 search against all bacterial protein sequences from the UniProt database 63 to increase the number of genes with a putative function.
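The quality-trimming step performed with Trimmomatic at several sites above can be illustrated with a sliding-window sketch. This is an illustration of the general sliding-window approach, not the tool's actual implementation or its default parameters; the read and quality values are invented:

```python
# Sliding-window quality trim: cut the read at the first window whose
# mean Phred quality drops below a threshold (Trimmomatic-style idea).
def sliding_window_trim(seq: str, quals: list,
                        window: int = 4, min_mean_q: float = 15.0) -> str:
    for i in range(len(seq) - window + 1):
        if sum(quals[i:i + window]) / window < min_mean_q:
            return seq[:i]
    return seq

# Toy read with qualities decaying toward the 3' end:
read  = "ACGTACGTAC"
quals = [38, 37, 36, 35, 30, 20, 8, 6, 5, 2]
print(sliding_window_trim(read, quals))  # ACGTA
```

Trimming low-quality tails before assembly matters because assemblers like SPAdes build graphs from exact k-mers, and error-rich read ends inflate the graph with spurious branches.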
At Michigan State University: genomic DNA from H. schlegelii DSM2000 was obtained directly from DSMZ and was also extracted from H. schlegelii DSM2000 grown in Na-pyruvate mineral media (DSM 260: 4.5 g/L Na 2 HPO 4 , 1.5 g/L KH 2 PO 4 , 0.01 g/L MnSO 4 *7H 2 O, 0.2 g/L MgSO 4 *7H 2 O, 0.01 g/L CaCl 2 *2H 2 O, 0.005 g/L ferric citrate, 1.0 g/L NH 4 Cl, 1.50 g/L Na-pyruvate and 3.0 mL DSM trace element solution 6) without shaking at 60 °C. Cells were pelleted, incubated for 30 minutes at 55 °C with 280 μl Tissue Cell Lysis solution (Epicentre) and with 20 μl Proteinase K (Roche). 200 μl MPC protein precipitation solution was then added; the supernatant was transferred to isopropanol and washed with 70% ethanol. Libraries were prepared for sequencing from both sources of DNA. Genomic DNA was sheared on a Covaris S2 ultrasonicator using the manufacturer's recommended parameters for shearing to 500 bp. Sheared DNA was end repaired, A-tailed and Illumina adapters were ligated using enzymatic kits from New England Biolabs 64 . Ligation products were amplified for 15 PCR cycles with Phusion polymerase (NEB) and size-selected on a 2% agarose gel. Finished libraries were pooled and sequenced on an Illumina MiSeq 150 bp paired sequencing run. Quality-filtered reads were assembled with the A5 pipeline 65 and the resulting assembly was annotated with RAST. The CoxMSL operon sequence from H. schlegelii DSM2000 was completed using the polymerase chain reaction (PCR). A DreamTaq kit (Thermo) was used to amplify for 35 cycles and the amplicon was gel-purified and sequenced on an ABI 3730XL with BigDye chemistry. A quality-trimmed Sanger read was used to merge the two scaffolds in Phrap 64 , 66 .

Phylogenetic analyses were performed on 16S rRNA genes by downloading sequences from the NCBI RefSeq Targeted Loci Project (revised Nov. 2010 at ; date of access 23/08/2015). 352 sequences representing Actinobacteria with sequenced genomes available were retrieved, as well as B.
subtilis as an outgroup. These were aligned to the 16S sequences from the S. thermoautotrophicus genomes using Clustal Omega version 1.2.1 67 with default parameters and 30 iterations of sequence input order. The resulting alignment was submitted to the Gblocks server 68 and processed with the default settings. The consolidated alignment was used to form a distance matrix and a neighbor-joining tree with 100 bootstrap replicates, which were then used to form a consensus tree; these operations were carried out with PHYLIP version 3.695 69 . The final tree was edited in MEGA6 70 : for the condensed tree, branches that were monophyletic according to their current classification in NCBI Taxonomy were collapsed up to the family level. Phylogenetic analyses were performed on protein sequences by identifying highly conserved sequences as previously described 28 . The subset of these represented in both the H1 and UBT1 genomes, including 14 ribosomal proteins and a phosphatidate cytidylyltransferase, was identified. Subsequently, all Actinobacterial sequences of the corresponding protein families, as well as those of B. subtilis as an outgroup, were downloaded from the PFAM server 71 . These sequences were combined with the S. thermoautotrophicus proteins, and the proteins from each family were concatenated by species to form a single extended protein sequence representing a single isolate. The resulting table contained 431 strains, including H1 and UBT1. These were aligned in Clustal Omega with 30 iterations of input order, the resulting alignment was trimmed with Gblocks, and the trimmed alignment was used to form a distance matrix, a neighbor-joining tree with 100 bootstrap replicates and a consensus tree, all generated with PHYLIP as described above. The final tree was edited in MEGA6 to collapse monophyletic branches to the family level, excepting the immediate neighbors of H1 and UBT1.

Additional Information

How to cite this article : MacKellar, D. et al.
Streptomyces thermoautotrophicus does not fix nitrogen. Sci. Rep. 6 , 20086; doi: 10.1038/srep20086 (2016). | Scientists chase unicorns because if they could prove the existence of the magical beasts, the world would be a better place. Take Maren Friesen, Michigan State University plant biologist, for example. Her quest was to find near-mythical bacteria that could fix their own nitrogen. Her search for such magical beasties was based on results from Germany published in the 1990s that seemed to confirm their existence. The end result, published in the current issue of Nature's Scientific Reports, proved that the elusive bacteria, Streptomyces thermoautotrophicus, did in fact exist but didn't have any mythical qualities. Most nitrogen-fixing bacteria use an enzyme that does not work when oxygen is present. The heat and toxic gas-loving strain that Friesen studied appeared to have exceptional properties, including harboring a special enzyme that was insensitive to oxygen. So why go on such a quest? "If they actually existed, it would mean we could have plants that could fix their own nitrogen, a compound used in critical biological functions, with no need for nitrogen fertilizers," said Friesen. "In this dream world, there would be less pollution, less nitrogen runoff into rivers and streams, less greenhouse gas emissions, less fuel being used to transport and apply fertilizer." That is a unicorn worth chasing, she added. So why is it worth proving that it's a myth, that it doesn't exist? While Friesen and an international team of scientists remained highly skeptical of the bacteria's existence, the positive result in the literature had long tantalized researchers. However, there were no other papers from independent labs to confirm the original findings. "This outlying result was always there, always lingering in published papers," Friesen said. "Now we've been able to bury this once and for all." 
The myth began in Germany, where the bacteria were discovered, and their mythical properties were suggested. They thrived in the hot, toxic fumes over traditional charcoal fires where large quantities of wood were buried and burnt down. Friesen's collaborators traveled to Germany and gathered samples while she went to Centralia, Pa., where underground coal fires have been burning for decades. She was somewhat surprised that she was able to find the bacteria, lending a bit of credence to the myth. The tale grew even more when they produced a positive result in the laboratory, demonstrating that the bacteria did indeed fix their own nitrogen. This, however, turned out to be a tainted result. "We learned that the gas that everyone had been using for the experiments was contaminated," Friesen said. "For the next experiments, we had to introduce a number of new controls, which included washing or purifying the gas we used." Dispelling the myth turned out to be a roller coaster of results and reactions - from actually finding the missing bacteria to a positive result that bolstered the tall tale, and from conducting many, many more experiments to finally killing the bacterial unicorn. While one mythical notion died, the concept of international collaboration and open data grew. Scientists from Harvard University, Imperial College (London), Aachen University (Germany) and Universidad Nacional de Rosario, Zavalla (Argentina) contributed to key aspects of the research. Rather than focus on one experiment, the team conducted many experiments around the world. "By sharing data, you can have a lot of influence," Friesen said. "The most influential datasets are the ones that everyone is using. And as this research demonstrated, it's better to compare your results to other researchers' data than believe a singular result. Reproducibility is really key to good science." Even if it means a few unicorns must die. | 10.1038/srep20086 |
Medicine | Largest comprehensive Middle East GWAS reveals Arab genetic risk factors | et al, Whole genome sequencing in the Middle Eastern Qatari population identifies genetic associations with 45 clinically relevant traits, Nature Communications (2021). DOI: 10.1038/s41467-021-21381-3 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-021-21381-3 | https://medicalxpress.com/news/2021-02-largest-comprehensive-middle-east-gwas.html | Abstract Clinical laboratory tests play a pivotal role in medical decision making, but little is known about their genetic variability between populations. We report a genome-wide association study with 45 clinically relevant traits from the population of Qatar using a whole genome sequencing approach in a discovery set of 6218 individuals and replication in 7768 subjects. Trait heritability is more similar between Qatari and European populations ( r = 0.81) than with Africans ( r = 0.44). We identify 281 distinct variant-trait-associations at genome wide significance that replicate known associations. Allele frequencies for replicated loci show higher correlations with European ( r = 0.94) than with African ( r = 0.85) or Japanese ( r = 0.80) populations. We find differences in linkage disequilibrium patterns and in effect sizes of the replicated loci compared to previous reports. We also report 17 novel and Qatari-predominate signals providing insights into the biological pathways regulating these traits. We observe that European-derived polygenic scores (PGS) have reduced predictive performance in the Qatari population which could have implications for the translation of PGS between populations and their future application in precision medicine. Introduction Genome-wide association studies (GWAS) have provided new insights into the genetic determinants of many clinically relevant traits and identified thousands of disease- or trait-associated genetic variants 1 , 2 . 
However, most of the published GWAS studies performed to date are from European or East Asian populations 3, 4. Middle Eastern populations are under-represented. Also, all GWAS conducted so far used genotyping arrays imputed on genome sequencing data from studies in which only a few, if any, Middle Eastern genomes were present, and therefore miss all population-specific signals. Large-scale GWAS of many traits and complex diseases in Africans and Asians indicated differences in the genetic architecture between populations, but included few, if any, study participants from Arab ethnicities. In addition, many trait-associated variants show differences in allele frequencies and effect sizes across populations 5, 6, 7, which may complicate the derivation of polygenic scores. Recent studies have shown that polygenic risk scores derived from studies in European populations have lower predictive performance when applied to non-European populations 8, providing a strong argument for conducting GWAS in non-European populations that are less represented in previously published studies. Here, we report the first comprehensive GWAS of 45 clinically relevant traits in a Middle Eastern population using a whole genome sequencing approach. We unveil differences in heritability of certain lifestyle-related traits between populations, investigate differences in the genetic architecture of replicating loci, assess the performance of European-derived polygenic scores in the Qatari population, and report novel trait associations that are predominant to the Middle Eastern population of Qatar. Results The Qatar Genome Program (QGP) The QGP is a population-based study designed to perform whole genome sequencing of the Qatar Biobank (QBB) participants 9 with the aim of gaining insights into the population structure and the genetic architecture of clinically relevant phenotypes in the Middle Eastern Qatari population.
The present study is based on whole genome sequence data from 6218 participants of QBB and further replication in 7768 subjects from the second batch of QBB data. We performed a comprehensive heritability and genome-wide association study for 45 clinically relevant traits in the Middle Eastern population of Qatar. The investigated traits cover the following categories (Table 1): anthropometry (N = 3), electrolytes (N = 7), measures of enzyme activity or abundance (N = 5), blood coagulation-related traits (N = 4), blood cell composition (N = 9), lipid traits (N = 4), and other clinically-relevant biochemistry measurements (N = 13). A detailed description of the study population and phenotype assessment is provided in the methods section, Supplementary Data 1 and Supplementary Table 1. A pairwise correlation analysis of the analyzed traits (Supplementary Fig. 1) revealed correlations between related traits, such as the liver-derived enzymes ALT, AST, ALP, and GGT (abbreviations are given in Table 1), traits related to hemoglobin and red blood cells (Hb, Ht, MCV, MCH, MCHC, and Frtn), and traits related to iron metabolism, such as Fe, TIBC, Hb, and Ht. Table 1 Summary of clinically-relevant quantitative traits investigated in this study. Full size table Heritability of clinically relevant traits in the Qatari population The proportion of variation that can be attributed to genetic factors (heritability) has been investigated for many clinically-relevant traits, but mainly in populations of European descent 10, 11. A recent study in the Ugandan population of Africa showed marked differences in heritability estimates for many complex traits compared to European populations 6. For example, estimates of heritability for body height in Ugandan populations were significantly lower (49%) than those from European populations (77%), suggesting differences in genetic loci and/or the proportion of environmental contribution.
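Cross-population heritability comparisons of this kind amount to a two-sample z-test on the estimates and their standard errors. A minimal stdlib sketch; the h² values echo the height comparison discussed in this study, but the standard errors are invented for illustration and are not the study's:

```python
from math import sqrt, erfc

def h2_diff_pvalue(h2_a, se_a, h2_b, se_b):
    """Two-sided p-value for the difference between two independent h2 estimates."""
    z = (h2_a - h2_b) / sqrt(se_a ** 2 + se_b ** 2)
    return erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal

# Height: Ugandan h2 = 0.49 vs. European h2 = 0.77 (the SEs here are assumptions)
p_height = h2_diff_pvalue(0.49, 0.025, 0.77, 0.025)
```

With these illustrative standard errors the difference is highly significant; with larger SEs the same point estimates can fail to separate, which is why the paper reports a p-value for each pairwise comparison rather than eyeballing the estimates.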
The heritability of most traits in Middle Eastern populations remains undetermined. We therefore performed a comprehensive assessment of the heritability (h²) of the 45 traits in the QGP data. We found that h² estimates ranged from 13% for serum iron levels (Fe) to 59% for body height (Table 1). We compared our findings with heritability estimates from European 10, 11 and African populations 6. Overall, the correlation of h² between European and Middle Eastern populations was higher (r = 0.81) compared to the correlation between African and Middle Eastern populations (r = 0.44; Supplementary Fig. 2). For several traits, h² estimates in the Middle Eastern population were significantly different from European and African populations (Fig. 1, Supplementary Tables 2 and 3). For example, estimates of h² for height in QGP (59%) were lower than in European (77%; P = 6.0 × 10⁻⁷) but not significantly different from African populations (49%, P = 0.09). Similarly, heritability for BMI in QGP (31%) was lower than in Europeans, namely Sardinia (43%; P = 8.8 × 10⁻⁴) and Iceland (42%; P = 2.3 × 10⁻³). Interestingly, h² for the liver enzyme GGT in QGP data (35%) was similar to the European populations (34%; P = 0.78) but significantly higher than in African populations (10%, P = 5.8 × 10⁻⁷). In contrast, estimates of heritability for cholesterol in QGP data (TCH = 22%; LDL-C = 21%) were significantly lower than values from European (TCH = 37%; P = 3.2 × 10⁻⁵, LDL-C = 38%; P = 2.4 × 10⁻⁶) or African (TCH = 53%; P = 1.1 × 10⁻⁷, LDL-C = 54%; P = 1.6 × 10⁻⁸) populations. Fig. 1: Heritability estimates of 45 clinically-relevant traits. Heritability estimates of 45 clinically-relevant traits in the Qatar Genome Program cohort (QGP; red markers) compared to estimates from Sardinian (green markers) and Ugandan (blue markers) populations.
The heritability estimates in QGP were adjusted for age, gender, the first four population principal components and relatedness. The heritability estimates for several traits in QGP were significantly different from European 10 and African 6 populations (Supplementary Table 2). Refer to Table 1 for trait abbreviations. Data are presented as mean ± SEM. Full size image Genome-wide association analysis of 45 complex traits We performed genome-wide association analyses of 45 clinically-relevant quantitative traits using whole genome sequencing data for 6218 individuals from the QGP study. We focused on common and low-frequency variants (MAF > 1%; N = 7,880,618) using linear mixed models correcting for age, sex, population principal components and relatedness (see methods). The genomic inflation factor (λGC) ranged between 0.99 and 1.13 (mean ± S.D.; 1.03 ± 0.03; Supplementary Table 4). Most analyzed traits (37 out of the 45 traits) showed very little inflation (λGC ≤ 1.04). Considerable inflation was only detected for traits that are well known to have a large polygenic architecture, such as adult height (λGC = 1.13) and BMI (λGC = 1.09). Manhattan and quantile-quantile plots for the studied traits are presented in Supplementary Data 2. Figure 2 shows a Manhattan plot comprising association data for all 45 studied traits. We identified 301 distinct variant-trait associations that reached a genome-wide significance level of P < 5.0 × 10⁻⁸ (Table 1 and Supplementary Data 3). For each trait, a distinct signal was defined as the variant with the lowest P value and not in linkage disequilibrium (LD; r² < 0.1) with any other variant within a window of 10 Mb. Of the 301 identified genetic signals, 281 were located within ±500 kb of a previously reported variant for the same trait. We replicated many loci that are known to have consistent associations in studies across various population ancestries 2.
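The genomic inflation factor reported above is conventionally the median association χ² statistic divided by the median of the 1-df χ² distribution (≈0.455). A stdlib-only sketch of that computation, not the study's actual mixed-model pipeline:

```python
from statistics import median, NormalDist

CHI2_1DF_MEDIAN = 0.4549364  # median of the chi-square distribution with 1 df

def lambda_gc(pvalues):
    """Genomic inflation factor from two-sided GWAS p-values.

    For 1 df, the chi-square statistic is (Phi^-1(1 - p/2))^2.
    """
    inv = NormalDist().inv_cdf
    chi2 = [inv(1 - p / 2) ** 2 for p in pvalues]
    return median(chi2) / CHI2_1DF_MEDIAN

# A perfectly uniform (null) grid of p-values gives lambda close to 1
null_p = [(i + 0.5) / 1000 for i in range(1000)]
lam = lambda_gc(null_p)
```

Values near 1 (as for 37 of the 45 traits here) indicate little systematic inflation; an excess of small p-values, as expected for highly polygenic traits like height, pushes the median χ² and hence λGC above 1.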
Examples include the SLC2A9 locus for uric acid (rs13129697; P = 2.8 × 10⁻⁴¹), the UGT1A4 locus for total bilirubin (Tbil; rs887829; P = 3.5 × 10⁻²⁵¹) and the APOE locus for low density lipoprotein-cholesterol (LDL-C; rs7412; P = 6.3 × 10⁻²⁹). Of the 281 genome-wide significant variant-trait associations, 51 were observed for the same SNP as reported in the PhenoScanner 12. For these SNPs it was possible to assess the direction of association (for variants with available effect allele; N = 43) and all showed directionality of association consistent with previous reports. We also observed multiple distinct signals for many loci (Supplementary Data 3). For example, 17 distinct genome-wide significant signals were observed for total bilirubin (Tbil) in a 515 kb region on chromosome 2 (Fig. 3a). Notably, this Tbil locus harbors the complex UGT1A gene locus that encodes nine enzymes which differ in their N-termini as a result of splicing nine unique substrate-recognizing first exons into four shared exons. These enzymes are involved in transforming lipophilic substrates, such as bilirubin, into water-soluble metabolites. Another example was prothrombin time (PT), for which 20 distinct genome-wide significant signals were detected in a 615 kb region on chromosome 13 (Fig. 3b) which harbors two coagulation factor genes: F7 and F10. We investigated whether differences in linkage disequilibrium (LD) patterns between QGP data and other populations can account for differences in signal patterns. LD analysis of the Tbil and PT loci shows marked differences in LD patterns and allele frequencies between the European, East Asian and QGP populations (Fig. 3). For example, nine of the 17 distinct genome-wide significant signals from the Tbil locus are either monomorphic (N = 6) or very rare (MAF < 0.3%; N = 3) in the East Asian population whereas only two of the variants are very rare in Europeans. Fig.
2: Manhattan plot of GWAS results from 45 clinically-relevant traits. The chromosomal position of genetic variants (N = 7,880,618) is plotted against –log₁₀(P). Analysis was performed using linear mixed models correcting for age, sex, population principal components and relatedness. The red horizontal line represents the threshold for genome-wide significance (P < 5.0 × 10⁻⁸). Full size image Fig. 3: Comparison of allele frequency and linkage disequilibrium patterns. Example of multiple distinct signals identified at loci associated with clinically-relevant traits. a Regional association plots for the total bilirubin (Tbil) locus on chromosome 2 and (b) the prothrombin time (PT) locus on chromosome 13. The plots show chromosomal positions of SNPs plotted against –log₁₀(P). Multiple distinct signals are shown as red circles; blue lines represent the recombination rate. c–f Comparison of allele frequency and linkage disequilibrium for the distinct signals between QGP and European populations (c and d) or between QGP and East Asian populations (e and f) for the Tbil and PT loci. Linkage disequilibrium patterns between the distinct signals from QGP data are shown below the red diagonal boxes and those from European (c, d) or East Asian (e, f) populations are shown above the diagonal red boxes. Grayed areas indicate monomorphic SNPs. MAF indicates minor allele frequency from QGP, European (CEU) or East Asian (EAS) populations. Full size image To assess the degree to which we replicate the effect sizes of known signals in QGP, we compared our results to previously published work, focusing on a single large comprehensive GWAS of similar traits from the Biobank Japan project (BBJ) 7 as a reference.
We selected this study because it represents the largest and most comprehensive published GWAS of similar traits, and because the traits in the BBJ study were transformed similarly to our analysis, which allows a direct comparison of variant-level effect sizes for the identified loci (Z-score or rank-based inverse normal transformation). In the BBJ study, Kanai et al. 7 performed a GWAS of 58 clinically-relevant traits in study participants of East Asian descent. Of the 45 traits analyzed in the present study, 28 traits overlapped with those analyzed by the BBJ project. For these traits, the BBJ study identified a total of 907 trait-variant associations which include known loci from previous studies in various populations (N = 575; mainly Europeans) as well as new loci identified in the Japanese population (N = 332). Of the 907 associations, we could evaluate 898 in QGP: 659 for which the same genetic variant was available in our data set (designated as group A variants) and 239 for which at least one proxy variant within 1 Mb in our data was in strong LD (r² ≥ 0.8; N = 149; designated as group B variants) or exhibited some degree of LD (r² = 0.1 to 0.8; N = 90; designated as group C variants) with the variant reported in BBJ (Table 2 and Supplementary Data 4). For 9 variants, no suitable proxy was found in our dataset. Table 2 Summary of replicated loci compared to Biobank Japan data 7. Full size table The genetic architecture for many traits can vary between populations. Differences in allele frequency and/or effect size for many trait-associated variants are known to exist between populations 5. Since polygenic risk scores estimated from one population may not be precisely applicable to other populations, we assessed replication, allele frequency and effect size for group A variants in QGP data. We found 29 loci that replicated at a Bonferroni-corrected significance threshold of P < 5.6 × 10⁻⁵ (0.05/898). All had consistent direction of effect.
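The proxy groups above are defined by the LD statistic r², which for two biallelic variants follows directly from the haplotype frequency p_AB and the allele frequencies p_A and p_B. A sketch with invented frequencies, not values from either study:

```python
def ld_r2(p_ab, p_a, p_b):
    """LD r^2 between two biallelic variants: D^2 / (p_A(1-p_A) p_B(1-p_B))."""
    d = p_ab - p_a * p_b  # the classic disequilibrium coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Complete LD: the two minor alleles always occur on the same haplotype
r2_complete = ld_r2(p_ab=0.3, p_a=0.3, p_b=0.3)
# Linkage equilibrium: p_AB factorizes into p_A * p_B
r2_null = ld_r2(p_ab=0.12, p_a=0.3, p_b=0.4)
```

A proxy with r² ≥ 0.8 (group B) carries nearly the same association information as the reported variant, while r² between 0.1 and 0.8 (group C) gives only partial overlap, which is why the two groups are treated separately.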
Comparison of effect size for the replicated loci showed a significant trend for higher effect sizes in QGP compared to BBJ (regression slope = 1.21; 95% CI = 1.01–1.42; P = 3.2 × 10⁻¹²; Fig. 4). Of the 29 replicated loci, 17 showed an effect size that is 20% larger in our data compared to BBJ. Comparison of allele frequencies for replicated loci shows higher correlation with European (r = 0.94) compared to African (r = 0.85) or Japanese (r = 0.80) populations (Fig. 4). Further analysis using colocalization testing (see methods) showed that out of the 29 replicated loci, 22 share the same association signal, whereas 7 had distinct signals between QGP and BBJ, highlighting differences in LD patterns between the two populations (Supplementary Data 5). Fig. 4: Comparison of allele frequency and effect size for known loci. a The effect size (Beta) for loci showing replication after correction for multiple testing in QGP (blue bars) compared to the Biobank Japan project (BBJ 7, orange bars). b Correlation of effect size for replicated loci between QGP and BBJ. c Correlation of allele frequency for replicated loci between QGP and BBJ (r = 0.80), QGP and European (EUR; r = 0.94), or QGP and African (AFR; r = 0.85) populations. Dotted lines represent lines of best fit from regression analysis. Full size image When using a nominal significance threshold (P < 0.05), 180 of the 659 group A variants show nominal evidence of replication and all but seven of these have consistent direction of effect with the variant reported in BBJ. Analysis of these loci revealed significant differences in the distribution of effect size compared to BBJ data (Supplementary Fig. 3). We found a significantly larger number of loci with an effect size (Beta) between 0.05–0.1 in our data (N = 80) compared to BBJ (N = 51; P = 6 × 10⁻⁴).
Conversely, a larger number of loci with small effects (Beta < 0.05) was observed in BBJ (n = 97) compared to our data (n = 60; P = 2.7 × 10⁻⁵), but no significant difference was found for loci with an effect size > 0.1 (P > 0.05). Comparison of allele frequencies for these loci also shows higher correlation with European (r = 0.94) compared to African (r = 0.75) or Japanese (r = 0.70) populations. Colocalization analysis showed that 16 out of the 180 loci (8.9%) had distinct signals between QGP and BBJ (Supplementary Data 5). A number of group B (11 out of 149) and group C variants (2 out of 90) showed evidence of replication for the same trait after correction for multiple testing (Table 2). In addition, a number of loci that did not show nominal replication (P > 0.05) contained a signal within ±500 kb with significant P-values for the same trait (N = 115), after correction for multiple testing (Table 2). For example, rs5030081 is associated with APTT in BBJ (P = 3.97 × 10⁻⁴⁹) but not in QGP (P = 0.21); however, a SNP (rs1042445) located 63 kb upstream and not in LD with rs5030081 (r² = 0.05) is significantly associated (P = 1.30 × 10⁻⁴⁸) with the same trait in QGP. In total, of the 898 variants identified in the BBJ study, we identified 355 variants that showed evidence of replication either directly, through a proxy, or located in a region previously reported for the same trait. Analysis of polygenic scores To assess the translatability of polygenic scores (PGS) derived from other populations to the Qatari population, we assessed the predictive performance of PGS for traits with available scoring data in the Polygenic Score Catalog ( ). We focused our analysis on PGS derived from European populations since our heritability and allele frequency comparison showed higher correlation between QGP and Europeans. In addition, robust data with enough information to allow comparison with our data was mainly available for European populations.
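A weighted PGS of the kind evaluated in this analysis is, at its core, a dosage-weighted sum of effect alleles over the score's variants; predictive performance is then the correlation of that sum with the measured trait. A minimal sketch with invented weights and dosages (not entries from the Polygenic Score Catalog):

```python
def polygenic_score(dosages, weights):
    """Weighted PGS: sum over score variants of effect-allele dosage (0/1/2) x weight."""
    return sum(d * w for d, w in zip(dosages, weights))

# Invented per-variant effect weights and dosages for two individuals
weights = [0.12, -0.05, 0.30]
person_a = [2, 0, 1]  # homozygous for the effect allele at variant 1, heterozygous at variant 3
person_b = [0, 2, 0]
pgs_a = polygenic_score(person_a, weights)
pgs_b = polygenic_score(person_b, weights)
```

In practice the published score file also fixes the effect allele per variant, and allele/strand matching against the target genotypes is the error-prone part; mismatched LD between the training and target populations is what degrades the score, even when the matching is done correctly.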
The predictive performance of PGS from 11 traits was tested on QGP data, and results are presented in Fig. 5. All tested PGS showed lower performance when applied to QGP data, with an average performance of 64.7% (SD = 15.8%) of that when applied to Europeans (Supplementary Table 5). The relative performance of the PGS when applied to Qataris compared to Europeans ranged from 40.5% for Height to 98.1% for Mean Platelet Volume. Fig. 5: Performance of European-derived polygenic scores in QGP. Pearson's correlation (R) between polygenic scores (PGS) and trait values is shown for European populations (red) and the Qatari population (blue). Weighted PGS were based on those derived from European populations. Full size image Novel loci associated with traits Variants located in regions not previously reported for the trait and showing significant association (P < 5.0 × 10⁻⁸; N = 20) in the discovery set were tested for replication in an additional 7768 subjects from the second data release of QBB. Eight of the 20 novel loci replicated at a Bonferroni level (P < 0.05/20). Meta-analysis results of discovery and replication are shown in Table 3. Most signals were driven by genetic variants that were either monomorphic (N = 3) or with a minor allele frequency that is three- to seven-fold lower (N = 5) than what is observed in European population ancestries 13. These include novel loci for coagulation-related traits (PT and INR), blood cell traits (WBC) and other biochemical traits (Hcyst and Tbil). Regional association plots (RAP) for novel loci are shown in Supplementary Data 6. For example, we identified a novel association with homocysteine (Hcyst) on chromosome 21 (rs147242481) near the CSTB gene with a relatively large effect size (beta = 0.36 standard deviation units per allele; P = 1.0 × 10⁻¹³). CSTB is a member of the cystatin superfamily that encodes the Cystatin B protein, which functions as a protease inhibitor.
It has been shown that Cystatin C, another member of the cystatin superfamily, is a determinant of serum levels of Hcyst 14, 15. We also identified a novel locus associated with two related coagulation traits (PT and INR) on chromosome 13 situated near LINC01070, a gene that encodes a long noncoding RNA. Another novel association was identified with serum total bilirubin (Tbil) near the ARL4C gene. Serum level of Tbil is routinely used to assess liver function, and studies have shown that ARL4C is highly expressed in primary hepatocellular carcinoma tumors and its expression is associated with poor prognosis 16. We also identified four novel loci for the white blood cell count trait (Table 3). One of these loci is located near NHLH1, and data from the International Mouse Phenotyping Consortium ( ) shows that heterozygous Nhlh1 knockout mice have decreased basophil white blood cell numbers compared to wild type (P = 3.1 × 10⁻¹⁷; Supplementary Fig. 4). Table 3 Novel and Qatari-predominant association signals discovered from GWAS meta-analysis of QGP. Full size table Qatari-predominant loci associated with traits Population-specific signals have been identified for clinically relevant traits in previous GWAS 6, 7, but the existence of such signals in Middle Eastern populations has not been studied. We identified 12,283 autosomal variants that are common (MAF > 5%) in QGP but rare (MAF < 1%) in all other population ancestries reported by the 1000 Genomes Project 13. These variants were pruned based on LD (r² < 0.1; N = 4357; referred to as Qatari-predominant loci) and their association with the clinical traits was investigated. For these loci, we used a Bonferroni-adjusted significance threshold of P < 1.15 × 10⁻⁵, correcting for the number of tested variants. Loci showing significant association in the discovery set were tested in the replication set to confirm their association.
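The Qatari-predominant screen described above reduces to a frequency filter: common (MAF > 5%) in QGP yet rare (MAF < 1%) in every other reference ancestry. A sketch over invented frequency records:

```python
def is_qatari_predominant(maf_qgp, maf_refs, common=0.05, rare=0.01):
    """Common in QGP but rare in all reference population ancestries."""
    return maf_qgp > common and all(m < rare for m in maf_refs)

# Invented MAFs: (QGP, [reference-population MAFs])
variants = {
    "var1": (0.08, [0.002, 0.000, 0.005]),  # passes the screen
    "var2": (0.08, [0.002, 0.040, 0.005]),  # too common in one reference population
    "var3": (0.03, [0.002, 0.000, 0.005]),  # not common enough in QGP
}
hits = [name for name, (q, refs) in variants.items() if is_qatari_predominant(q, refs)]
```

Variants passing such a screen are exactly those that array-plus-imputation studies built on non-Middle-Eastern reference panels would miss, which is the argument the paper makes for sequencing-based GWAS in this population.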
Meta-analysis of discovery and replication results identified 9 Qatari-predominant variant-trait associations (Table 3). All of these signals were located near genes that had been previously associated with the same trait. Discussion We performed one of the largest GWAS using whole genome sequence data to date (n > 6200) and the first comprehensive GWAS of 45 clinically-relevant traits from a Middle Eastern population. Heritability estimates for the studied traits in QGP were generally correlated with previous estimates in other populations. However, significant differences in heritability were observed for some traits, suggesting differences in the genetic architecture resulting from population-specific past events such as genetic drift and selection. In addition, variations in environmental factors and their interaction with genetic factors could explain the observed differences in heritability. For example, the heritability of GGT, an enzyme used clinically to assess liver function, was similar to that reported in European populations but significantly higher than in African populations. This observation could be due to a larger contribution of environmental factors explaining the phenotypic variations in GGT in African populations. The high prevalence of liver diseases such as cirrhosis and hepatitis B virus infection in sub-Saharan African populations 17 indicates a substantial contribution of environmental factors, leading to lower heritability estimates. In addition, heritability estimates for cholesterol (TCH and LDL-C) in our data were significantly lower compared to values reported in European or African populations, suggesting a higher contribution of environmental influences such as diet and lifestyle. Consistent with this, heritability estimates for BMI in Qatar were also lower when compared to two European populations.
The prevalence of obesity in Qatar is among the highest in the world 18, and this obesity epidemic, driven (at least in part) by fat-rich diets and lifestyle factors, plausibly leads to lower heritability estimates for BMI and cholesterol traits. However, technical variations in measurements could also contribute to the differences in heritability between the studies. Our GWAS results replicated many loci that are known to have consistent associations in studies across various population ancestries, highlighting shared components of genetic architecture for the studied traits. Comparison of replicated loci identified differences in both effect size and allele frequency of the associated variants, emphasizing the importance of performing further, larger GWAS in Middle Eastern populations to enable accurate polygenic score determination. Indeed, European-derived PGS had substantially reduced predictive performance when applied to QGP data. For some traits, we identified multiple distinct signals due to differences in LD patterns and/or differences in allele frequencies of the variants. Colocalization analysis showed that about 9% of replicated loci across the investigated traits showed evidence of distinct signals between QGP and BBJ. Since most previously published GWAS for these traits were performed using genotyping arrays followed by imputation, it is possible that some of the multiple distinct signals observed in our data could be due to the higher coverage of whole genome sequencing as opposed to imputation, with implications for fine mapping and identification of functional variants. Our GWAS also identified novel signals providing new insights into the biological pathways regulating clinically relevant traits, such as CSTB for Hcyst and NHLH1 for white blood cell count.
We observed that most of the novel loci were driven by population-specific variants, and the existence of Qatari-predominant signals within known loci emphasizes the differences in genetic architecture of these traits between populations. These findings provide strong arguments for performing larger GWAS in the Middle Eastern region to further define the genetic architecture of clinical traits and complex diseases, with implications for future applications in precision medicine. They also underscore the potential of discovering novel signals at lower sample sizes when using understudied populations, which may be relevant to future investments when searching for new drug targets. In conclusion, we performed comprehensive heritability analyses and GWAS of 45 clinically-relevant traits for the first time in a Middle Eastern population. We replicated many previously known loci for these traits, demonstrating shared genetic components across populations. However, we identified differences in linkage disequilibrium patterns, effect size and allele frequency of associated signals. We showed that European-derived PGS have reduced predictive performance when applied to the Middle Eastern population of Qatar. We also identified 17 novel and Qatari-predominant signals across the studied traits, which were mostly driven by population-specific variants, providing an argument for further, larger genetic association studies in Middle Eastern and other non-Caucasian populations to further characterize the genetic architecture of clinical traits and complex diseases. Methods Study subjects The present study was performed on the QBB study participants. QBB is an on-going longitudinal population-based study aiming to recruit 60,000 subjects from the Qatari population with follow-up every 5 years 9. Individuals are eligible to participate in the study if they are Qatari nationals or long-term residents (≥15 years living in Qatar) aged 18 years and older.
The study covers extensive baseline sociodemographic data, clinical and behavioral phenotypic data, biological samples, as well as clinical biomarkers. All QBB participants signed an informed consent form prior to their participation, and the study was approved by the Hamad Medical Corporation Ethics Committee and the QBB institutional review board. Heritability and GWAS analyses were performed on data from the first QBB data release (6218 QBB participants). Replication analyses were based on an additional 7768 QBB participants from the second QBB data release. Phenotype All QBB participants attended an assessment session in which physical measurements were collected and each participant completed a standardized questionnaire reporting information on lifestyle, diet, and medical history. Collected physical measurements included anthropometry (sitting and standing height, weight, and waist and hip circumference), body composition, grip strength, arterial stiffness, blood pressure, electrocardiogram data, respiratory function and cardiorespiratory fitness. Additional phenotypes were also collected, such as 3D carotid ultrasound, full body dual energy X-ray absorptiometry (iDXA), “microscopic” features of the optic nerve and macula, and brain magnetic resonance imaging (MRI). During the assessment session, participants provided biological samples (blood, saliva and urine) for analysis and storage. Part of the biological samples were transferred to the diagnostic laboratories at Hamad General Hospital, where clinical diagnostic biomarkers were measured. The present study focused on 45 clinically relevant traits as listed in Table 1 ; details of their measurements are presented in Supplementary Data 1 . All traits were normalized prior to the statistical analyses using rank-based inverse normal transformation in R ver. 3.4.0.
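The rank-based inverse normal transformation maps each trait value to the normal quantile of its rank. The authors applied it in R; the following is a minimal Python sketch, assuming the common Blom offset c = 3/8 (the exact offset used in the study is not stated) and simple tie-breaking by position:

```python
from statistics import NormalDist

def inverse_normal_transform(values, c=3/8):
    """Rank-based inverse normal transform of a numeric trait.

    Each value is replaced by Phi^-1((rank - c) / (n - 2c + 1)),
    i.e. its rank mapped onto standard-normal quantiles.
    """
    n = len(values)
    # 1-based ranks; ties are broken by original position for simplicity
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    nd = NormalDist()  # standard normal
    return [nd.inv_cdf((r - c) / (n - 2 * c + 1)) for r in ranks]
```

The transformed values are rank-preserving and approximately standard normal, which is why the same normalization can be applied uniformly across all 45 traits before association testing.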
Whole genome sequencing DNA was extracted from peripheral blood using the automated QIASymphony SP instrument according to the Qiagen MIDI kit protocol recommendations (Qiagen, Germany). Genomic DNA integrity was assessed using the Genomic DNA assay on the Caliper Labchip GXII (Perkin Elmer, USA). DNA quantification was done using the Quant-iT dsDNA Assay (Invitrogen, USA) on the FlexStation 3 (Molecular Devices, USA). Whole genome libraries were prepared from 150 ng of DNA using the Illumina TruSeq DNA Nano kit. Genomic libraries were sequenced on the HiSeq X Ten (Illumina, USA) following the manufacturer’s recommended protocol to achieve a minimum average coverage of 30x. Library construction and sequencing were performed at the Sidra Clinical Genomics Laboratory Sequencing Facility. Quality control of FASTQ files was performed using FastQC (v0.11.2) ( ). Reads were then aligned to the GRCh37 (hs37d53) reference genome using bwa.kit (v0.7.12) ( ). Quality control on mapped reads was performed using Picard (v1.117) [CollectWgsMetrics] ( ). Variant calling was performed following GATK 3.4 best practices ( ): Indel realignment and base recalibration were performed on the initial bam file, then HaplotypeCaller was run on each sample to generate an intermediate genomic variant call file (gVCF). Joint variant calling was performed using all generated gVCF files at once. We first ran GenomicsDB 8 to combine the different samples by region; then, for each region, we ran GenotypeGVCFs, applied SNP/indel recalibration, and merged all regions. The combined gVCF file contained 77,867,351 variants for 6218 subjects. Quality control measures were applied to this file using PLINK ver. 2.0 19 . Indels and variants with MAF < 1%, genotype call rate < 90%, Hardy-Weinberg P value < 1 × 10 −6 , and those on chromosome X were removed, leaving a total of 7,880,618 variants.
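The per-variant filters (MAF ≥ 1%, call rate ≥ 90%, Hardy-Weinberg P ≥ 1 × 10−6) were applied with PLINK ver. 2.0; the sketch below only illustrates what those thresholds test, for a single hypothetical biallelic SNP summarized by its genotype counts:

```python
import math

def variant_passes_qc(n_aa, n_ab, n_bb, n_missing,
                      maf_min=0.01, call_rate_min=0.90, hwe_p_min=1e-6):
    """Return True if one biallelic SNP survives the MAF, call-rate and
    Hardy-Weinberg filters (thresholds as in the text)."""
    n = n_aa + n_ab + n_bb                   # successfully genotyped samples
    call_rate = n / (n + n_missing)
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of the A allele
    maf = min(p, 1 - p)
    if maf < maf_min or call_rate < call_rate_min:
        return False
    # 1-d.f. chi-square goodness of fit against Hardy-Weinberg proportions
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p)]
    chi2 = sum((o - e) ** 2 / e
               for o, e in zip([n_aa, n_ab, n_bb], expected))
    hwe_p = math.erfc(math.sqrt(chi2 / 2))   # upper tail of chi2 with 1 d.f.
    return hwe_p >= hwe_p_min
```

In PLINK these correspond to the `--maf`, `--geno` and `--hwe` filters; the function is an illustration of the criteria, not a re-implementation of PLINK's exact (mid-p/exact-test) options.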
We also removed samples with excess heterozygosity ( N = 8), duplicates ( N = 10), call rate < 95% ( N = 1), and gender ambiguity ( N = 65). To identify population ancestry outliers, we performed multidimensional scaling (mds) analysis as implemented in PLINK 19 . A pairwise identity-by-state (IBS) matrix was determined based on a pruned set of independent autosomal SNPs ( N = 62,475) using a window size of 200 SNPs and an LD threshold of r 2 = 0.05. Subjects more than four standard deviation units (±4 SD) away from the mean of the first two mds components were identified as population outliers ( N = 87) and removed before analysis (Supplementary Fig. 5 ). The final file used for genome-wide association analyses comprised 7,880,618 variants and 6047 subjects. Similar quality control measures were applied to the replication dataset, where we tested only SNPs and traits showing evidence of novel associations from the discovery set. Heritability analysis Heritability ( h 2 ) was defined as the proportion of phenotypic variance attributed to genetic factors estimated from genome-wide SNP genotype data. h 2 was calculated using the polygenic model implemented in GenABEL ver. 1.8-0 20 . The model included age, sex, and the first four principal components (PC) as covariates. A genomic kinship matrix was used to correct for relatedness and was determined using IBS analysis implemented in GenABEL. To enable comparison of heritability with previously published work, we also calculated heritability using the restricted maximum likelihood (GREML) method 21 implemented in the software package GCTA 22 . Age, sex, and the first four PCs were included as covariates in the GREML model. Linear regression analysis was used to assess correlation between heritability values across population ancestries. Genome-wide association analysis Genome-wide association testing was performed using the variance component-based method GRAMMAR-Gamma 23 implemented in the R package GenABEL 20 .
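The ±4 SD ancestry-outlier rule described above can be sketched directly. The study used PLINK's mds output; here `components` is a hypothetical list of (component-1, component-2) coordinates, one pair per sample:

```python
from statistics import mean, stdev

def ancestry_outliers(components, n_sd=4.0):
    """Indices of samples lying more than n_sd standard deviations from the
    mean on either of the first two MDS components."""
    outliers = set()
    for dim in range(2):
        vals = [c[dim] for c in components]
        m, s = mean(vals), stdev(vals)
        for i, v in enumerate(vals):
            if abs(v - m) > n_sd * s:
                outliers.add(i)
    return sorted(outliers)
```

A sample is flagged if it is extreme on either component, matching the "first two mds components" criterion; the flagged indices would then be dropped before association testing.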
This model uses a genomic kinship matrix to correct for relatedness and genetic substructure. For all tested traits, we included age, sex, and the first four PCs as covariates in the regression model. Principal component analysis was performed using PLINK. The genome-wide significance threshold was set at P < 5 × 10 −8 24 . Regional association plots were generated using the LocusZoom tool 25 with linkage disequilibrium data calculated from QGP data using PLINK. The genomic inflation factor, quantile-quantile plots and Manhattan plots were generated using R ver. 3.4.0. Assessing genome-wide significant loci We lumped all associations for a given trait based on LD ( r 2 < 0.1) within a window size of 10 Mb into distinct signals described by the SNP with the lowest P value. We annotated the SNPs representing the distinct loci using PhenoScanner 12 allowing for proxy SNPs reported in five populations (AFR, AMR, EAS, EUR, SAS) using an LD cut-off of r 2 > 0.1, a window size of ±500 kb, and we used Experimental Factor Ontology (EFO) terms ( ) to map phenotypes (Supplementary Data 3 ). Any locus that did not produce a hit in PhenoScanner was further manually checked using the GWAS catalog 2 and PubMed literature searches (accessed on 31 January 2020). Assessing replication of known loci To assess the degree of replication of known signals, we compared our results to previously published work focusing on a single large GWAS of similar traits from the Biobank Japan project (BBJ) 7 . We distinguished three groups of SNPs in our comparisons to the BBJ study. Group A: an identical SNP is present in the QGP population; Group B: a strong proxy SNP is available for replication ( r 2 ≥ 0.8); Group C: a SNP with LD (0.1 < r 2 < 0.8) is available. To identify any other signal, we also queried the region ±500 kb for association with the relevant trait. Correlation of effect size and allele frequency was assessed using linear regression analysis.
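The lumping of genome-wide-significant hits into distinct signals described above is a greedy clump-by-significance procedure. A minimal sketch follows; for illustration, physical proximity within the 10 Mb window stands in for the r 2 < 0.1 LD criterion, which in practice requires the genotype reference panel:

```python
def lump_signals(hits, window_bp=10_000_000):
    """Greedy lumping of significant hits into distinct signals.

    hits: iterable of (chrom, pos, pvalue). Hits are visited from most to
    least significant; a hit starts a new signal only if no already-chosen
    lead SNP on the same chromosome lies within window_bp of it.
    """
    lead = []
    for chrom, pos, p in sorted(hits, key=lambda h: h[2]):
        if all(c != chrom or abs(pos - q) > window_bp for c, q, _ in lead):
            lead.append((chrom, pos, p))
    return lead
```

Each returned tuple is the SNP with the lowest P value in its clump, i.e. the "distinct signal" reported for annotation with PhenoScanner.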
To assess differences in associated signals between QGP and BBJ, we performed colocalization analysis using the Coloc R-package ver. 3.2-1 26 . We tested two hypotheses: H3: the locus is associated with the trait in both BBJ and QGP but the association is driven by different variants; H4: the locus is associated with the trait in both BBJ and QGP and the association is driven by the same variants. Meta-analysis of discovery and replication results was performed using the inverse variance-weighted method implemented in METAL 27 . Analysis of polygenic scores Polygenic score (PGS) scoring files were downloaded from the Polygenic Score Catalog ( ) for traits with available data. We selected PGS derived from the largest published study in European populations. PGS were available for 11 traits from European populations and details are presented in Supplementary Table 5 . Weighted PGS were calculated for each subject in QGP based on the scoring files using PLINK ver. 2.0 19 . Pearson’s correlations ( R ) between the trait values and PGS were calculated using R. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability GWAS summary statistics generated in this study have been deposited in the NHGRI-EBI Catalog of human genome-wide association studies and can be accessed through [ ] under the accession codes GCST90013303, GCST90013304, …, GCST90013347. All other data supporting the findings of this study are available either within the article, the supplementary information and supplementary data files, or from the authors upon reasonable request. The raw whole genome sequence data are protected and are not available due to data privacy laws. Access to QBB/QGP genotype and phenotype data can be obtained through an established ISO-certified process by submitting a project request at [ ], which is subject to approval by the QBB IRB committee.
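The weighted PGS calculation described above reduces to a dot product of per-variant effect-allele dosages and scoring-file weights, and the reported predictive performance is the Pearson correlation between the score and the measured trait. The study computed this with PLINK's scoring machinery and R; a minimal Python sketch with hypothetical inputs:

```python
def polygenic_score(dosages, weights):
    """Weighted PGS for one subject: effect-allele dosage (0-2) times the
    per-variant weight from the scoring file, summed over variants."""
    return sum(d * w for d, w in zip(dosages, weights))

def pearson_r(x, y):
    """Pearson correlation between measured trait values and PGS."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
```

A reduced `pearson_r` between European-derived scores and Qatari trait values is exactly the "reduced predictive performance" reported in the discussion.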
The data management infrastructure for QBB was described in detail previously 28 . Data used in this study to assess replication of known loci are available in the NHGRI-EBI Catalog of human genome-wide association studies [ ], the Phenoscanner database [ ], and the Experimental Factor Ontology (EFO) terms database [ ]. The Nhlh1 knockout mouse data used in this study are available in the International Mouse Phenotyping Consortium database [ ]. The polygenic scoring files used in this study are available in the Polygenic Score Catalog [ ]. Code availability The study utilized previously published analysis tools as described in the Methods section. No custom tools were used in this study. | A group of researchers at Qatar Foundation have reported the first and largest genetic association study in the Middle East, which has been published online in Nature Communications—a leading peer-reviewed, open-access scientific journal published by Nature Research. The study, titled "Whole genome sequencing in the Middle Eastern Qatari population identifies genetic associations with 45 clinically relevant traits", highlights a vital piece of information: there is now a better understanding of the genetic risk factors that are specific to the Arab population, including those that are shared with other ethnicities. Qatar was among the first countries to launch its own large-scale, national genome project. Qatar Genome combines whole genome sequencing data with the comprehensive phenotypic resource collected at Qatar Biobank, and is considered the first, largest and most ambitious population-based project of its kind in the Middle East. Such studies can be considered experiments conducted by nature, where the natural variation found in the genomes of thousands of Qataris is linked to variations in their respective blood tests.
The results from this project are shared publicly, ensuring that the specificities of the Arab genomes will be taken into consideration in future research on new treatments and therapies. The study—led by researchers at Qatar Foundation's (QF's) Hamad Bin Khalifa University (HBKU) and QF's partner university Weill Cornell Medicine—Qatar (WCM-Q), along with other scientists from the Qatar Genome Research Consortium—includes over 6,000 Qatari individuals with whole genome sequence data. By performing detailed assessments of genetic variants across the whole genome in 6,218 individuals, comprising data from 45 clinically relevant traits, this study identified about 300 independent genetic signals. Some of these signals were predominantly found in the Qatari population. This observation was then confirmed in a further 7,768 subjects from QF's Qatar Biobank. Omar Albagha, Principal Investigator from HBKU's College of Health and Life Sciences, says: "The study provides new insights into the genetic architecture of clinical laboratory tests and identifies for the first time genetic variations that are specific to the population of Qatar. The study also shows that findings from genetic studies in European populations don't translate well when applied to our population in the Middle East. This argues for further studies to define the genetic architecture of diseases in our region. We are excited because the study represents a foundation for the implementation of precision medicine in the Middle East." Karsten Suhre, Director of Bioinformatics Core at Weill Cornell Medicine—Qatar and joint senior author on the paper, says: "It has been a long but successful journey from the first patient recruitments to Qatar Biobank to analyzing the resulting enormous genetic data set for associations with clinically relevant traits, and we as a consortium are proud to contribute with this paper to the international effort of obtaining an even better understanding of our human genome." 
"Qatar Genome Research Consortium gave research groups the platform to study whole genome sequencing and other omics data to empower the genetic discoveries in this part of the world, which otherwise would be under-represented," said Professor Said Ismail, Director of QF's Qatar Genome, part of QF Research, Development, and Innovation. | 10.1038/s41467-021-21381-3 |
Physics | The ductility of magnesium explained | Zhaoxuan Wu et al. The origins of high hardening and low ductility in magnesium, Nature (2015). DOI: 10.1038/nature15364 Journal information: Nature | http://dx.doi.org/10.1038/nature15364 | https://phys.org/news/2015-10-ductility-magnesium.html | Abstract Magnesium is a lightweight structural metal but it exhibits low ductility—connected with unusual, mechanistically unexplained, dislocation and plasticity phenomena—which makes it difficult to form and use in energy-saving lightweight structures. We employ long-time molecular dynamics simulations utilizing a density-functional-theory-validated interatomic potential, and reveal the fundamental origins of the previously unexplained phenomena. Here we show that the key 〈 c + a 〉 dislocation (where 〈 c + a 〉 indicates the magnitude and direction of slip) is metastable on easy-glide pyramidal II planes; we find that it undergoes a thermally activated, stress-dependent transition to one of three lower-energy, basal-dissociated immobile dislocation structures, which cannot contribute to plastic straining and that serve as strong obstacles to the motion of all other dislocations. This transition is intrinsic to magnesium, driven by reduction in dislocation energy and predicted to occur at very high frequency at room temperature, thus eliminating all major dislocation slip systems able to contribute to c -axis strain and leading to the high hardening and low ductility of magnesium. Enhanced ductility can thus be achieved by increasing the time and temperature at which the transition from the easy-glide metastable dislocation to the immobile basal-dissociated structures occurs. Our results provide the underlying insights needed to guide the design of ductile magnesium alloys. Main Developing lightweight structural metal is a crucial step on the path towards reduced energy consumption in many industries, especially automotive 1 , 2 and aerospace 3 . 
Magnesium (Mg) is a lightweight metal, with a density that is 23% that of steel and 66% that of aluminium, and so has tremendous potential to achieve energy efficiency 4 . In spite of this tantalizing property, Mg generally exhibits low ductility, insufficient for the forming and performance of structural components. The low ductility is associated with the inability of hexagonal-close-packed (hcp) Mg to deform plastically in the crystallographic 〈 c 〉 direction, which is accomplished primarily by dislocation glide on the pyramidal II plane with the 〈 c + a 〉 Burgers vector 5 (see Fig. 1 ). Experiments reveal a range of unusual, confounding, conflicting, and mechanistically unexplained phenomena connected to 〈 c + a 〉 dislocations that coincide with the inability of Mg to achieve high plastic strains 6 . Uncovering and controlling the fundamental behaviour of 〈 c + a 〉 dislocations is thus the key issue in using Mg, and any solution would catapult Mg science, technology, and applications forward. Success would enable, for instance, lightweight automobiles that would consume less energy, independent of the energy source, and thus act as a multiplier for many other energy-reduction strategies. Figure 1: Transitions of the easy-glide pyramidal II edge 〈 c + a 〉 dislocation. The 〈 c + a 〉 dislocation transforms into basal-dissociated products, as observed during long-time MD simulations, for a , zero, b , moderate, and c , high compressive stresses (indicated by the purple arrows) normal to the pyramidal II plane (green plane in rightmost images). The first four columns show MD simulations at the indicated times; the rightmost column shows the transition of dislocation Burgers vectors in the hcp unit cell. All cases start from the same dislocation (leftmost image) and show a distinct thermally activated intermediate state (second image) before undergoing a rapid intrinsic ‘climb-like’ dissociation onto the basal plane (third and fourth images). 
The final core structure (fourth image) depends on the applied load, with increasing applied load able to drive an 〈 a 〉 or a partial 〈 a 〉 dislocation away from the 〈 c + a 〉 core. t̄ is the mean transition time to reach the intermediate state shown. Dislocation cores are indicated by the symbol ‘ ⊥ ’. Atoms in the atomistic images in this figure (first four columns) and subsequent ones are coloured on the basis of common neighbour analysis 37 : blue, hcp; green, face-centred cubic (fcc); purple, body-centred cubic (bcc); yellow, all others. Dislocation core atoms thus appear predominantly as yellow. In a – c , the rightmost column shows the composition of the 〈 c + a 〉 Burgers vector before (solid purple arrow) and after (dashed blue arrows) the transition; the green and orange circles depict the two alternating layers of atoms in basal planes. In the rightmost column in b , d is the midpoint between A and B, and α is the vertical projection of A 0 onto the basal plane. Because of its critical importance and promise, 〈 c + a 〉 slip has been extensively studied over five decades 7 , 8 , 9 , 10 , and reports on this system are being published at an increasing rate 11 , 12 , 13 , 14 . In Mg single crystals, 〈 c + a 〉 slip occurs predominantly on the pyramidal II system, and measurements show that pyramidal II 〈 c + a 〉 dislocations can glide at low stresses of ∼ 20 MPa at low temperatures 15 (77–133 K). However, transmission electron microscopy (TEM) studies frequently find 〈 c + a 〉 dislocations lying, mysteriously, along basal planes and coexisting with 〈 c 〉 and 〈 a 〉 dislocations 9 , 16 , 17 , 18 , 19 . Under c -axis compression, single crystal Mg also exhibits a rapid increase in stress with increasing strain, that is, high work hardening 9 , 15 , 18 , 19 , and fractures at low strains at temperatures up to 500 K (refs 9 , 15 , 19 ).
New work shows the formation of a very high density of 〈 c + a 〉 dislocation loops also dissociated on, and lying on, the basal planes 14 , with similar observations dating back fifty years 7 , 8 , 20 . Many mechanisms have been proposed, all of which invoke extrinsic effects such as ‘heating’ by the electron beam 8 , vacancy and self-interstitial precipitation 21 , and dislocation obstacles of ‘unknown’ nature 7 . Also surprisingly, when loaded at slow strain rates ( ∼ 10 −1 –10 −5 s −1 ), and particularly under compression, the yield strengths of both single crystal Mg 8 , 15 , 16 , 17 and polycrystal Mg alloys 22 , 23 increase with increasing temperature, that is, they show an anomalous temperature dependence, within a certain temperature range. This behaviour is hypothesized as being due to decomposition of the 〈 c + a 〉 dislocation into 〈 c 〉 and 〈 a 〉 (ref. 17 ), which is supported by TEM studies 16 , 21 where junction pairs of 〈 c 〉, 〈 a 〉, and 〈 c + a 〉 dislocations were found. However, no decomposition process has been directly observed. It has also been proposed that the junctions may be formed by reactions of 〈 c 〉 and 〈 a 〉 dislocations 24 . The 〈 c 〉 dislocation is also unusual because it too lies on the basal plane 8 , 12 , 25 . Overall, how 〈 c + a 〉 and 〈 c 〉 dislocations can come to lie on the basal plane is the subject of discussion. Nor is there understanding of why the behaviour of 〈 c + a 〉 dislocations varies widely over a range of loading conditions, single-crystal orientations, polycrystalline texture, and temperature. Here, we provide an explanation for all of these unusual observed phenomena that is entirely intrinsic to hcp Mg. Mechanisms of 〈 c + a 〉 dislocation transition We first show the transitions of the pyramidal II 〈 c + a 〉 edge dislocation using molecular dynamics (MD) simulations (see Supplementary Information I and II for details). Properties of the screw dislocation ( Extended Data Figs 1 and 7 ) are not directly relevant. 
We start the simulations with an initial 〈 c + a 〉 edge dislocation dissociated into two ½〈 c + a 〉 partials on the pyramidal II glide plane, a structure in good agreement with density functional theory (DFT) calculations 26 . We then execute long-time MD at elevated temperatures (500 K, 600 K, 700 K) to accelerate any thermally activated processes, and under various applied stresses normal to the glide plane. As shown in Fig. 1 , the initial 〈 c + a 〉 core is metastable and transforms into one of three new basal-oriented dislocations. The transition into the final state is always preceded by an intermediate state where a partial 〈 a 〉 (Shockley) dislocation is nucleated on the basal plane and glides away with a trailing basal I 2 intrinsic stacking fault 5 . Since the time to achieve this intermediate state (indicated in Fig. 1 ) is much longer than the subsequent transition to any of the final states, the transition into this intermediate state is the kinetically limiting step in the overall process. An applied normal stress exerts a resolved shear stress on the basal plane that determines the glide of the partial 〈 a 〉, which in turn determines the final structure. The frequencies of occurrence of the various final structures as a function of load are shown in Extended Data Fig. 2 . Figure 1a shows the transition that predominates at zero and low stresses. The newly nucleated partial 〈 a 〉 stays close to the original nucleation site and another partial 〈 a 〉 is then nucleated from the other half 〈 c + a 〉 dislocation. The two remaining partials then ‘climb’ in opposite directions to form a new 〈 c + a 〉 core dissociated on the basal plane and separated by a basal I 1 stacking fault 5 . The ‘climb’-like transition is atomistically complicated, involving significant atomic motions within the core region, but does not involve any vacancy diffusion or interstitial diffusion from the surrounding bulk because the simulation contains no vacancies or interstitials.
The core spreading on the basal plane is driven by the repelling force between the two partials, but the spreading is kinetically difficult because further climb would presumably require some sort of vacancy/interstitial pair formation and transport (see Extended Data Figs 3 and 4 and Supplementary Information III and IV ). This final dislocation core has been observed previously 8 , 11 and has recently been resolved in high-angle annular dark field scanning TEM (HAADF STEM) 14 . Figure 2a shows the atomic structure obtained from MD quenched to 0 K. In Fig. 2b we show a superposition of the atomic images from MD (projected into the plane of view perpendicular to the dislocation line) and from HAADF STEM (which shows spots due to diffraction from aligned columns of atoms) for one of the two basal-oriented 〈 c + a 〉 partials with the I 1 stacking fault emerging from it. Good agreement can be seen outside the core region. In the core region, the MD results show complex variations in atomic position along the dislocation line; this is consistent with the absence of clear spots in the HAADF STEM image that suggests an absence of structural order. Figure 2: Comparison of the 〈 c + a 〉 dislocation core structure in MD and experiments. a , Structure of a 〈 c + a 〉 dislocation climb-dissociated on the basal plane, from MD simulations quenched to 0 K. b , Superposition of the atomistic structure of one core ( p is the Burgers vector of the Shockley partial 〈 a 〉 dislocation) as computed by MD (open circles), and the HAADF STEM image of the same core 14 , for which bright spots indicate well-defined columns of atoms through the thickness of the experimental specimen; a Burgers loop indicating the Burgers vector is shown for both images. HAADF STEM image from ref. 14 : The structure of 〈 c + a 〉 type dislocation loops in magnesium. J. Geng et al. Philosophical Magazine Letters , 3 June 2014, reprinted by permission of the publisher (Taylor & Francis Ltd, ). 
Figure 1b shows the transition that occurs mainly at intermediate loads. In this case, the first nucleated leading partial 〈 a 〉 glides away, leaving behind a wide I 2 stacking fault and a dislocation (see rightmost panel in Fig. 1b for nomenclature). The other half 〈 c + a 〉 on the pyramidal II plane then reacts with the residual dislocation to form another new ‘climb-dissociated’ dislocation with Burgers vector . If the applied stress is then reduced, this core transforms into the previous core because the partial 〈 a 〉 is pulled back to the main dislocation. Figure 1c shows the transition that predominates at higher loads. In this case, the pyramidal II 〈 c + a 〉 decomposes into 〈 c 〉 and 〈 a 〉 dislocations by nucleating a trailing partial 〈 a 〉 behind the initial leading partial 〈 a 〉. The full 〈 a 〉 then glides away, driven by the high resolved shear stress, leaving behind a 〈 c 〉 dislocation. The 〈 c 〉 then also ‘climb-dissociates’ along the basal plane into two partial 〈 c 〉 dislocations separated by a basal extrinsic stacking fault, consistent with TEM observations 25 and HAADF STEM 12 . Details of all dislocation reactions are shown in Extended Data Fig. 5 and in Supplementary Information V . Dislocation transition rate and energy barrier The 〈 c + a 〉 transitions are thermally activated processes. Figure 3a shows the measured mean transition time t̄ versus stress and temperature for this random process. t̄ depends on temperature and, weakly, on stress. Simulation cell sizes and boundary conditions have insignificant effects on the measured transition time (see Extended Data Fig. 6 and Supplementary Information VI and VII ). The mean transition rate 1/ t̄ can be related to the normal-stress-dependent transition energy barrier Δ E ( σ norm ) using the Arrhenius law 1/ t̄ = ν 0 exp(−Δ E ( σ norm )/ kT ), where ν 0 , k , and T are the attempt frequency, Boltzmann constant, and temperature, respectively.
Estimating ν 0 = 10¹³ s⁻¹, the energy barrier versus applied stress is shown in Fig. 3b . Overall, the energy barrier is ∼ 0.5 eV. This yields a fast transition rate of ∼ 10⁵ s⁻¹ at 300 K and a very slow rate of ∼ 10⁻⁴ s⁻¹ at 150 K, consistent with the observation of anomalous strengthening only above ∼ 150 K (refs 16 , 17 ). Preliminary nudged elastic band 27 , 28 calculations ( Supplementary Information VIII ) of the 0 K activation barrier yield an energy barrier of ∼ 0.6 eV, in good agreement with the estimates in Fig. 3 . The applied stress has a weak asymmetric effect on nucleation because it influences the barrier by exerting a stress that moves the first partial 〈 a 〉 dislocation away from (compressive) or towards (tensile) the original 〈 c + a 〉. Figure 3: Thermally activated mean transition time and energy barrier for the pyramidal II to basal plane transformation. Data are shown for three temperatures, see key. a , Mean transition time versus applied stress σ norm as measured in MD for the dissociation events shown in Fig. 1 . b , Energy barrier Δ E for the thermally activated transitions, showing a small dependence on temperature and applied stress. Error bars (s.e.m., n = 2) indicate the 95% confidence intervals of the mean transition time and energy barrier. Open, half-filled and quarter-filled symbols indicate (nearly identical) results obtained for larger simulation cells and different boundary conditions (see Supplementary Information VI ). Dislocation energy The observed transitions are driven by a reduction in total dislocation energy: all the new dislocations have lower energy than the easy-glide pyramidal II 〈 c + a 〉. The total dislocation energy per unit length within a cylindrical region of radius r centred at the dislocation core can be written as E tot = E struc + K ln( r / r min ) for r > r min (see Extended Data Fig. 7 and Supplementary Information IX ).
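The Arrhenius estimate quoted above is easy to reproduce numerically; a minimal sketch, assuming the attempt frequency ν0 = 10¹³ s⁻¹ from the text and the Boltzmann constant expressed in eV K⁻¹:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def transition_rate(delta_e_ev, temperature_k, nu0=1e13):
    """Mean transition rate 1/t = nu0 * exp(-dE / (k T)) for the thermally
    activated pyramidal II -> basal dissociation of the <c+a> dislocation."""
    return nu0 * math.exp(-delta_e_ev / (K_B * temperature_k))
```

With ΔE ≈ 0.5 eV this gives of order 10⁴–10⁵ s⁻¹ at 300 K and of order 10⁻⁴ s⁻¹ at 150 K, matching the orders of magnitude in the text and the onset of anomalous strengthening above ∼150 K.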
Here, E struc is the dislocation energy within a minimum radius r min around the core region that contains all of the energy associated with the specific structure, such as the core energy, stacking fault energy, interactions among these structures, and near-field elastic energy. Conversely, the second term K ln( r / r min ) is the additional elastic energy between r min and r from the core region, which captures the elastic energy outside of the core region. The constant K is completely determined by the anisotropic elastic constants, Burgers vector b , and dislocation line direction. Here, all the 〈 c + a 〉 dislocations and total products have the same K , Burgers vector b = c + a , and line direction. Therefore, the total energy difference between any two structures is computed directly by the difference in their total energies E tot measured at large distances r > r min from the complex core region 5 . Figure 4a shows E tot versus ln( r/r min ) for r min = 6 b = 6| c + a | calculated atomistically for the three localized edge 〈 c + a 〉 type dislocations found here (see Fig. 4b ). All three curves are parallel, as expected for E tot ( r > r min ), and the energy differences are precisely the differences in E struc between the different structures. The easy-glide 〈 c + a 〉 dislocation on the pyramidal II plane has the largest energy per unit length, and the 〈 c + a 〉 dissociated on the basal plane has the smallest energy, 0.3 eV Å −1 lower, and is the most stable at zero applied stress. The 〈 c 〉 + 〈 a 〉 remaining in close proximity has an intermediate energy, and is thus also stable relative to the easy-glide structure. 
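The r-independence of the energy differences in Fig. 4a follows directly from the form E tot = E struc + K ln( r / r min ): for two structures sharing the same K , Burgers vector and line direction, the logarithmic term cancels on subtraction. A numerical sketch with illustrative values only (the 0.3 eV Å⁻¹ difference is taken from the text; K , r and r min below are arbitrary):

```python
import math

def total_energy(e_struc, k_elastic, r, r_min):
    """E_tot = E_struc + K ln(r / r_min), valid for r > r_min."""
    return e_struc + k_elastic * math.log(r / r_min)

def energy_difference(e1, e2, k_elastic, r, r_min):
    """Difference in E_tot between two dislocation structures with the same
    elastic prefactor K: the K ln(r/r_min) terms cancel exactly."""
    return (total_energy(e1, k_elastic, r, r_min)
            - total_energy(e2, k_elastic, r, r_min))
```

This is why the parallel curves in Fig. 4a can be compared at any radius r > r min : the separation between them equals the difference in E struc alone.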
In fact, while not shown, a related calculation can be performed to demonstrate that even the case of 〈 c 〉 and 〈 a 〉 separated to infinity has a lower total energy than does the easy-glide core, although higher than does the 〈 c 〉 + 〈 a 〉 in close proximity, indicating that short-range interactions between the 〈 c 〉 and 〈 a 〉 dislocations reduce the overall energy relative to well-separated ones. There is thus an energy barrier for driving the 〈 a 〉 dislocation away from the 〈 c 〉. Figure 4: Dislocation energy versus dislocation structure. a , Total energy within a cylindrical region of radius r > r min = 6 b for the 〈 c + a 〉 edge dislocation on the pyramidal II plane (purple circles; case i), for edge 〈 c 〉 and 〈 a 〉 in close proximity (orange squares; case ii), and for 〈 c + a 〉 edge dislocation climb-dissociated on the basal plane (open blue squares; case iii). Here, b is the magnitude of the 〈 c + a 〉 dislocation Burgers vector. The differences in energies between the three different dislocation structures are equal to the constant differences between the energies versus r , as expected by elasticity theory. b , Dislocation core structures corresponding to the energies shown in a (cases i, ii, and iii) as computed at zero temperature. The relative order of the edge dislocation energies rationalizes the simulations and various experimental observations. Under zero or low applied normal stresses, the preferred transition is the lowest energy ‘climb-dissociated’ 〈 c + a 〉 on the basal plane. This is consistent with experimental observations in single crystal c -axis compression tests, where the resolved shear stress on basal planes is negligible and a high density of 〈 c + a 〉 loops/dislocations dissociated on basal planes 14 , 19 or aligned with basal planes 9 are observed.
When there is a resolved stress τ on the basal plane, there is energy gained by the work done by the applied field in moving the full/partial 〈 a 〉 dislocation away from the 〈 c 〉. When the full or partial 〈 a 〉 has moved a distance l , the total energy reduction per unit length is Δ E = τal for the full 〈 a 〉 and Δ E = ( τa /2 − γ I2 ) l for the partial 〈 a 〉, where γ I2 is the I 2 stacking fault energy ( ∼ 20–30 mJ m −2 ) created behind the moving partial, and a = | a |. These two cores can thus have the lowest total energy at sufficiently large τ . This is consistent with experimental observations in polycrystal Mg (ref. 21 ), where resolved shear stresses exist on basal planes in most grains and 〈 c 〉 and 〈 a 〉 dislocations/junctions are more commonly seen as compared to 〈 c + a 〉 basal loops. Critical resolved shear stresses for dislocation glide The three transformed cores shown in Fig. 1 are essentially immobile, as expected owing to their ‘climb dissociation’ onto the (non-glide) basal planes. Figure 5 shows the motion of the various dislocations under an applied resolved shear stress on the pyramidal II plane ( Fig. 5a–d ) and prism plane ( Fig. 5e ) at 300 K, as obtained via MD simulations (see Supplementary Information X ). The initial 〈 c + a 〉 on the pyramidal II plane glides at stresses of ∼ 11 MPa. All of the basal-dissociated dislocations are immobile up to stresses of more than 30 times the pyramidal II glide stress (see Extended Data Fig. 8 for the non-Schmid effect); only the 〈 a 〉 can glide away at ∼ 119 MPa, as discussed above, leaving the immobile 〈 c 〉. A stress of ∼ 430 MPa is needed to recombine the 〈 a 〉 and 〈 c 〉 into the 〈 c + a 〉, indicating a high energy barrier for this reverse reaction, so that this proposed mechanism for creation of 〈 c + a 〉 is unlikely 24 . Figure 5: Glide behaviour of the various dislocations under applied resolved shear stresses (directions indicated) at 300 K. Applied shear stress (indicated in each image) increases from top to bottom.
a , Easy-glide pyramidal II 〈 c + a 〉 , with glide starting at ∼ 11 MPa. b , Basal-dissociated 〈 c + a 〉 , with glide starting at a very high stress, ∼ 330 MPa. c , 〈 a 〉 dislocation glides away from the remaining 〈 c + a 〉 product at ∼ 119 MPa, leaving an immobile 〈 c 〉. d , Reaction of 〈 a 〉 and 〈 c 〉, forming the basal 〈 c + a 〉 dislocation for a resolved shear of ∼ 430 MPa in the reverse direction. e , Absence of glide for the basal-dissociated 〈 c 〉 up to ∼ 464 MPa, where nucleation of a partial 〈 a 〉 dislocation occurs. Origin of low ductility and high strain hardening The time-dependent thermal activation of the easy-glide pyramidal II 〈 c + a 〉 to immobile lower-energy, basal-dissociated 〈 c + a 〉 and 〈 c 〉 dislocations explains the low ductility of Mg. Generalized plasticity requires the operation of at least five independent slip systems 5 . Mg cannot sustain dislocation slip in the crystallographic 〈 c 〉 direction because the necessary 〈 c + a 〉 and 〈 c 〉 dislocations transform to immobile dislocations. Thus, only twinning remains to provide some deformation in the 〈 c 〉 direction, and the plastic strain due to twinning is very small (at most, ∼ 7%). Grains in a polycrystal can thus mainly slip only in the 〈 a 〉 direction, and grains oriented favourably for 〈 c + a 〉 slip are effectively rigid. High constraint stresses rapidly develop, leading to the early onset of fracture and low ductility. Low ductility is also driven by high hardening rates, as measured in Mg, particularly for single crystals or textured polycrystals oriented for 〈 c + a 〉 slip. The immobile dislocations do not contribute to plastic straining, and instead act as ‘forest’ dislocations that impede the motion of dislocations on all the easy-glide/slip systems. Plastic flow on slip system α (where α indicates basal, prism, twin, pyramidal) is controlled by the densities of mobile dislocations and ‘forest’ dislocations impeding slip on each slip system.
The contribution to the total plastic strain rate from slip system α is then γ̇ α = ρ α b ℓ α ν α esc (1), where ρ α is the mobile dislocation density, ℓ α is the mean spacing of the forest obstacles, and ν α esc is the rate of thermally activated escape of the α dislocations past the forest obstacles 29 , ν α esc = ν esc exp[ −(Δ G 0 / kT )(1 − ( τ α / τ̂ α ) p ) q ] (2). The parameters in equation (2) (attempt frequency ν esc , zero-stress activation energy Δ G 0 , and exponents p and q characterizing the activation energy profile) are not important for the general discussion here. The key quantities are the applied resolved shear stress τ α acting on the dislocations, and the zero-temperature strength or critical flow stress τ̂ α required to overcome the energy barrier without thermal activation. τ̂ α is due to the forest obstacles, and so is related to the dislocation densities ρ β in all slip systems including α itself as τ̂ α = μb √(Σ β A αβ ρ β ) (3) 30 , 31 , where μ is the shear modulus (ignoring elastic anisotropy) and A αβ is the matrix of interaction strengths between slip systems α and β. Around room temperature, the fast transition of easy-glide 〈 c + a 〉 dislocations into immobile basal-dissociated 〈 c + a 〉 or 〈 c 〉 dislocations acts to (1) rapidly decrease the density of mobile 〈 c + a 〉 dislocations and (2) rapidly increase the density of immobile/forest dislocations affecting all slip systems. For all slip systems, the most important effect is the exponential decrease in the escape rate (equation (2)) due to the increased critical strengths on all slip systems (equation (3)) due, in turn, to the increased forest density associated with the transformed 〈 c + a 〉 dislocations. The increased forest density also decreases the slip rate directly in equation (1). Therefore, constant-strain-rate experiments show a very rapid increase in the applied stress required to sustain the imposed loading rate, and thus a rapidly increasing hardening rate in the stress–strain curve. High stresses then drive fracture, which cannot be resisted by plastic flow 32 , and the ductility is thus limited. Strain hardening is particularly dramatically enhanced in crystals oriented preferentially for 〈 c + a 〉 slip.
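The thermally activated flow rule and Taylor forest hardening described in this paragraph (equations (1)–(3)) can be combined in a few lines to show the mechanism quantitatively: growing the forest density raises τ̂ α and exponentially suppresses the escape rate. A NumPy sketch in which the functional forms follow the text but every numerical parameter (Δ G 0 , ν esc , p , q , the single interaction strength A) is an illustrative placeholder, not a value fitted in the paper:

```python
import numpy as np

mu, b = 17e9, 3.2e-10   # shear modulus (Pa) and Burgers vector (m), Mg-like values
kT = 1.38e-23 * 300.0   # J, room temperature
dG0 = 1.6e-19           # J (~1 eV) zero-stress activation energy (illustrative)
p, q = 0.5, 1.5         # activation-profile exponents (illustrative)
nu0 = 1e11              # attempt frequency, 1/s (illustrative)
A = 0.35                # single forest interaction strength for simplicity

def tau_hat(rho_forest):
    """Equation (3): zero-temperature strength from forest density (Taylor law)."""
    return mu * b * np.sqrt(A * rho_forest)

def escape_rate(tau, rho_forest):
    """Equation (2): thermally activated escape past forest obstacles."""
    s = np.clip(tau / tau_hat(rho_forest), 0.0, 1.0)
    return nu0 * np.exp(-(dG0 / kT) * (1.0 - s**p) ** q)

def slip_rate(tau, rho_mobile, rho_forest):
    """Equation (1): mobile density x b x obstacle spacing x escape rate."""
    spacing = 1.0 / np.sqrt(rho_forest)  # mean forest obstacle spacing
    return rho_mobile * b * spacing * escape_rate(tau, rho_forest)

tau = 20e6  # 20 MPa applied resolved shear stress
low = slip_rate(tau, 1e12, 1e12)    # sparse forest
high = slip_rate(tau, 1e12, 1e14)   # 100x more forest dislocations
print(f"slip rate drops by a factor of {low / high:.0f}")
```

With these placeholder numbers, multiplying the forest density by 100 at fixed applied stress cuts the slip rate by more than two orders of magnitude: the exponential suppression that constant-strain-rate tests register as a rapidly rising flow stress. The absolute rates are illustrative only.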
In such an orientation, previously transformed 〈 c + a 〉 dislocations on basal planes block subsequent easy-glide pyramidal II 〈 c + a 〉 dislocations at their leading edge segments, and drive those dislocations to evolve into long straight segments with pure edge character. These dislocations then transform into immobile dislocations, thus forming immobile dislocation pile-ups. There is thus a feedback process: the rate of transformation increases with increasing immobile dislocation density, while the immobile dislocation density increases owing to the increasing number of transformations. This is observed in single crystal Mg c -axis compression tests 9 , 14 , 19 where the exceptionally high measured work hardening is accompanied by the observation of a rapid increase of straight 〈 c + a 〉 dislocations and also loops (density of ∼ 10 20 m −3 at 1% plastic strain 14 ). In fact, when 〈 c + a 〉 dislocations are observed in TEM 8 , 9 , 11 , 19 , 21 , they often exist as uniform arrays or pile-ups of long, straight dislocation segments. In contrast, at low temperatures, the 〈 c + a 〉 transitions cannot occur fast enough, leading to more normal evolution of dislocation densities, strengthening, and strain hardening, although now limited simply by the low temperature. Thus, the pyramidal II 〈 c + a 〉 dislocation transformations are responsible for the anomalous temperature dependence of the strengthening observed in the range ∼ 130–293 K (refs 16 , 17 ), for the high hardening rate observed at 300 K (refs 9 , 14 , 18 , 19 ), and for the associated low ductility. Discussion The transitions identified here are intrinsic to Mg and occur without additional defects or spurious experimental conditions (such as electron beam heating). To prevent the undesirable transitions, strategies could be aimed at energetically stabilizing the easy-glide 〈 c + a 〉 core so as to shift the transition to higher temperatures, longer times, or slower strain rates. 
This may be possible by solute additions that pin the easy-glide 〈 c + a 〉 core and lower its energy. Quantitative models demonstrate that glide dislocations are pinned by favourable statistical fluctuations of the solute distribution 33 , and the favourable fluctuations for the easy-glide 〈 c + a 〉 core will not simultaneously be favourable for the basal-dissociated core at the same position. New encouraging results 11 , 34 , 35 on Mg–3%Y alloys show substantially increased 〈 c + a 〉 activity and higher ductility, which may be due to increased stability of the easy-glide 〈 c + a 〉 produced by these solutes. Modelling of the interaction between alloying elements and the various dislocation cores found here, and analysis of the consequent effects on the stability of the pyramidal II 〈 c + a 〉 edge dislocation, would thus appear to be a useful direction to pursue. Our results also suggest that sufficiently small grain sizes could be favourable for ductility. If easy-glide 〈 c + a 〉 dislocations exist long enough to fully traverse grains, then the undesirable transitions are avoided. Indeed, TEM observations in large-grain materials show arrays of straight 〈 c + a 〉 dislocation segments 11 , 21 inside grains but particularly near grain boundaries 36 , suggesting that the 〈 c + a 〉 are nucleated near grain boundaries but then only travel a short distance before transforming. In contrast, high ductility is achieved in Mg with micrometre-scale grain sizes 10 . Finally, the mechanism of the 〈 c + a 〉 transition and its consequent strength anomalies may not be unique to Mg; similar observations are seen in other hcp metals 7 , 17 , 20 (such as Cd and Zn). Since the observed transition is intrinsic, results in immobile dislocations, and occurs at high rates, it may also provide the ‘unknown’ pinning process invoked 7 in one proposed mechanism aimed at rationalizing the observation of basal-oriented 〈 c + a 〉 dislocation loops. 
In summary, use of a new DFT-validated interatomic potential in long-time MD studies reveals a rich set of intrinsic structural transitions of the key 〈 c + a 〉 dislocations in Mg that explain long-standing experimental puzzles and are responsible for low ductility in Mg. The easy-glide pyramidal II 〈 c + a 〉 undergoes thermally activated, stress-dependent transitions into various lower-energy products lying on basal planes. The dislocation structures are in good agreement with experimental observations, the differences between experiments are explained, the temperature range where the transition is operative agrees with experiments, and the product dislocations are immobile and so cause high strain hardening by serving as obstacles for all other dislocations, leading to low ductility. This new overall understanding opens opportunities for design of Mg-based alloys based on the mechanistic concept of energetically stabilizing the easy-glide 〈 c + a 〉 dislocations on pyramidal II planes. | Zhaoxuan Wu and William Curtin of the Laboratory for Multiscale Mechanics Modeling (LAMMM) have solved the 40-year-old scientific riddle of the low ductility of magnesium. Magnesium is the lightest structural metal; it is four times lighter than steel and a third lighter than aluminum. It is also abundantly available, being the eighth most common element in the earth's crust. These features should make it the ideal material for all kinds of uses, particularly in the automotive industry where fuel efficiency is greatly improved by reduced vehicle weight. However, the use of this metal remains marginal. "Magnesium has low ductility, that is, it cannot be stretched very much before it breaks. It also shows unusual behavior, such as a regime of increasing strength with increasing temperature, which is the opposite of most metals", said Prof. Curtin.
The Director of the Institute of Mechanical Engineering and his collaborator Zhaoxuan Wu, a long-term visitor in LAMMM from the Institute for High Performance Computing in Singapore, were able to identify the atomistic origins of magnesium's low ductility and unusual behavior. This breakthrough, published in the journal Nature, could open the way to many applications of this lightweight metal. Simulations to explain 40 years of experiments Solving this mystery had an interesting history: long struggles, tangential advances, a new collaboration, and a "Eureka" moment. "I worked for 7 years on magnesium", says Curtin. "We had some accomplishments, but not on the most crucial issues, and I was about to give up, having exhausted what I thought we could do. But then Wu came from Singapore and brought renewed energy, an array of necessary technical skills, and exceptional motivation". Fast forward two years and the mystery is solved: they managed to establish a picture that unified decades of experimentation. The real progress started with the development of a new description of the atomic interactions in magnesium, a feature essential if atomistic models are to reveal any true material behavior. "We came across a rarely-used description of magnesium, and were able to tweak it and arrive at a new interatomic potential that predicted many well-established properties of magnesium with high accuracy." Most importantly, this new potential accurately predicted the structure of an important dislocation, the 〈c+a〉 dislocation, which is essential for magnesium to flow easily. This dislocation, observed experimentally, could not previously be predicted. Then came the "Eureka" moment. "We were performing atomic simulations of the normal 〈c+a〉 dislocation to better understand its behavior, and suddenly it changed". The atomic structure of the 〈c+a〉 dislocation became entirely different, morphing into several possible geometries that locked it in place so that it could not move.
Without 〈c+a〉 motion, magnesium cannot flow plastically and so has low ductility. The new 〈c+a〉 structures had been observed experimentally but how they occurred, whether they were real or experimental anomalies, and why there were several different structures, were all mysteries in the community. The work of Wu and Curtin showed that these new structures were intrinsic to magnesium, and they described why the different structures arise in different situations, and showed that all of these structures are more stable than the desirable 〈c+a〉 structure. Thus, they found that it was inevitable that the new 〈c+a〉 structures would form and kill the ductility of magnesium. A path to creating ductile magnesium "To make magnesium malleable, we must fight against the forces of nature," said Wu. While science does not yet have a miracle solution to prevent the phenomenon, he notes that it is still possible to slow the morphing process of the 〈c+a〉 dislocations. "At room temperature, the morphing occurs almost instantly but when immersed in liquid nitrogen (−195.79 °C), it could take several months or one year." If the process could be delayed for a few seconds at normal temperatures, this could be enough to allow industrial use. Several recent experimental studies in Germany have shown that it is possible to increase the ductility of magnesium by creating alloys with rare-earth elements, such as yttrium, erbium, or holmium. Curtin believes these alloying elements bind to the normal 〈c+a〉 and stabilize it somewhat, relative to the immovable 〈c+a〉 dislocations. So, this is very promising, but these rare materials are very expensive and therefore unlikely to be used extensively by the automotive industry. Curtin and Wu are starting new research on how the expensive alloys, and other cheaper alloys, might stabilize the normal 〈c+a〉. "We must find a compromise between the cost of the alloy and the material performance" says Prof. Curtin.
While he cannot predict when magnesium will be used heavily, he believes that the insights revealed by this new work will generate new ideas and approaches because the underlying origin of the poor behavior of magnesium is now more clearly understood. | 10.1038/nature15364 |
Computer | A model that can recognize speech in different languages from a speaker's lip movements | Pingchuan Ma et al, Visual speech recognition for multiple languages in the wild, Nature Machine Intelligence (2022). DOI: 10.1038/s42256-022-00550-z Journal information: Nature Machine Intelligence | https://dx.doi.org/10.1038/s42256-022-00550-z | https://techxplore.com/news/2022-11-speech-languages-speaker-lip-movements.html | Abstract Visual speech recognition (VSR) aims to recognize the content of speech based on lip movements, without relying on the audio stream. Advances in deep learning and the availability of large audio-visual datasets have led to the development of much more accurate and robust VSR models than ever before. However, these advances are usually due to the larger training sets rather than the model design. Here we demonstrate that designing better models is equally as important as using larger training sets. We propose the addition of prediction-based auxiliary tasks to a VSR model, and highlight the importance of hyperparameter optimization and appropriate data augmentations. We show that such a model works for different languages and outperforms all previous methods trained on publicly available datasets by a large margin. It even outperforms models that were trained on non-publicly available datasets containing up to 21 times more data. We show, furthermore, that using additional training data, even in other languages or with automatically generated transcriptions, results in further improvement. Main Visual speech recognition (VSR), also known as lipreading, is the task of automatically recognizing speech from video based only on lip movements. In the past, this field has attracted a lot of research attention within the speech recognition community 1 , 2 , but it has failed to meet the initial high expectations.
There are two main reasons why the first generation of VSR models fell short: (1) the lack of large transcribed audio-visual datasets resulted in models that could only recognize a limited vocabulary and work only in a laboratory environment and (2) the use of handcrafted visual features, which might not have been optimal for VSR applications, prevented the development of high-accuracy models. Recently, large audio-visual transcribed datasets, like LRS2 3 and LRS3 4 , have become available, and these have allowed the development of a large vocabulary and robust models. In addition, advances in deep learning have made possible the use of end-to-end models, which learn to extract VSR-related features directly from raw images. These developments have led to a new generation of deep-learning-based VSR models that achieve much higher accuracy than older models and also work in unseen real-life situations. The recent advances in VSR models are mainly fuelled by using increasingly larger transcribed datasets and the development of models that work well when trained with huge amounts of data. Some recent works 5 , 6 use tens of thousands of hours of non-publicly available training data to achieve state-of-the-art performance on standard benchmarks. In contrast to this recent trend, we demonstrate that carefully designing a model is equally as important as using larger training sets. Our approach revolves around (1) addition of prediction-based auxiliary tasks to a VSR model, (2) appropriate data augmentations and (3) hyperparameter optimization of an existing architecture. This leads to a great reduction in word error rate (WER) and results in state-of-the-art performance on almost all benchmarks. This is achieved by using only publicly available datasets, which are two orders of magnitude smaller than those used in previous works. We also show that combining multiple datasets further improves the performance (which is in line with the results reported in the literature). 
Hence, we argue that further progress in the field can be achieved not only by increasing the size of the training data but also by careful model design and optimization. The vast majority of existing works focus on improving the performance of English-only VSR models. There are also a few works that design models tailored to a specific language, like Mandarin 7 , 8 , 9 . In contrast to previous works, our approach is evaluated not only on English but also on Mandarin and Spanish (the two other widely spoken languages), Italian, French and Portuguese. State-of-the-art performance is achieved in all languages. Specifically, in this Article, we make the following contributions: We propose a novel method for VSR that outperforms state-of-the-art methods trained on publicly available data by a large margin. We do so with a VSR model with auxiliary tasks that jointly performs VSR and prediction of audio and visual representations. We demonstrate that the proposed VSR model performs well, not only in English, but also in other languages, such as Spanish, Mandarin, Italian, French and Portuguese. We show that enlarging the training sets, even by including unlabelled data with automatically generated transcriptions or videos in other languages, results in improved performance. This provides further evidence for the hypothesis that the recent improvements presented in the literature are probably the result of larger training sets and not necessarily of better models. We discuss challenges for VSR systems that need to be solved and ethical considerations that must be taken into account before this technology can be widely applied. Baseline VSR model The baseline VSR model that we extend in this work is based on ref. 10 . The model consists of a three-dimensional (3D) convolutional layer with a receptive field of five frames, followed by a 2D ResNet-18 (Fig. 1e ), a 12-layer Conformer model 11 and a transformer decoder as shown in Fig. 1b . 
The model is trained end to end using a combination of the connectionist temporal classification (CTC) loss with an attention mechanism. Data augmentation is also used during training in the form of random cropping and image flipping (applied to all frames in the same sequence). This model achieves state-of-the-art VSR performance on the LRS2 and LRS3 datasets, when only publicly available data are used for training. Fig. 1: Model architecture overview. a – c , Baseline ASR model ( a ), baseline VSR model ( b ) and proposed model ( c ) with prediction-based auxiliary tasks. The frame rate of the extracted visual features and audio features is 25. d , The architecture of the ASR encoder from a . e , The architecture of the VSR encoder from b . Baseline ASR model The baseline automatic speech recognition (ASR) model that we use is based on ref. 10 . The model consists of a 1D ResNet-18 (Fig. 1d ), a 12-layer Conformer model and a transformer decoder as shown in Fig. 1a . This model also follows the hybrid CTC/attention architecture and is trained end to end. Time-masking is also used as data augmentation during training. At the moment, this is the state-of-the-art ASR model on the LRS2 and LRS3 datasets. Our approach In contrast to previous works, which improve the VSR performance by using increasingly larger training sets, we focus on improving the performance by carefully designing a model without relying on additional data. This is achieved by revising the training strategy and architecture of the state-of-the-art model proposed in ref. 10 . First, we optimize hyperparameters and improve the language model (LM) with the aim of squeezing extra performance out of the model. Second, we introduce time-masking, which is a temporal augmentation method that is commonly used in ASR models.
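The hybrid CTC/attention objective mentioned for both baselines marginalizes, on the CTC side, over all frame-level alignments of the target label sequence; this can be computed with the standard forward (alpha) recursion over a blank-extended label sequence. A self-contained NumPy sketch, a toy illustration of the loss rather than the paper's implementation:

```python
import numpy as np

def ctc_loss(log_probs, target, blank=0):
    """Negative log-likelihood of `target` under CTC, via the forward (alpha)
    recursion over the blank-extended label sequence."""
    T, V = log_probs.shape
    ext = [blank]
    for c in target:
        ext += [c, blank]                      # e.g. [1] -> [blank, 1, blank]
    S = len(ext)
    NEG = -np.inf
    alpha = np.full(S, NEG)
    alpha[0] = log_probs[0, ext[0]]            # start with a blank ...
    if S > 1:
        alpha[1] = log_probs[0, ext[1]]        # ... or with the first label
    for t in range(1, T):
        new = np.full(S, NEG)
        for s in range(S):
            terms = [alpha[s]]                 # stay on the same symbol
            if s > 0:
                terms.append(alpha[s - 1])     # advance by one symbol
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                terms.append(alpha[s - 2])     # skip a blank between labels
            new[s] = np.logaddexp.reduce(terms) + log_probs[t, ext[s]]
        alpha = new
    # Valid endings: the last label or the trailing blank.
    end = np.logaddexp(alpha[S - 1], alpha[S - 2]) if S > 1 else alpha[S - 1]
    return -end

# Toy check: 2 frames, uniform probabilities over {blank, 'a'}; the frame paths
# collapsing to "a" are (a,a), (blank,a), (a,blank), i.e. 3/4 of the mass.
lp = np.log(np.full((2, 2), 0.5))
loss = ctc_loss(lp, [1])
print(loss)  # -log(0.75)
```

The training objective then mixes this with the attention decoder's cross-entropy, e.g. L = λ L_CTC + (1 − λ) L_att, where the weight λ is a hyperparameter; any concrete value of λ here would be an assumption, not the paper's setting.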
It substantially improves the VSR performance by forcing the model to rely more on contextual information and, as a consequence, it can better disambiguate similar lip movements that correspond to different phonemes. Finally, we use a VSR model with auxiliary tasks where the model jointly performs VSR and prediction of audio and visual representations extracted from pre-trained VSR and ASR models. This prediction task provides an additional supervisory signal and forces the model to learn better visual representations. A diagram of the architecture of our model is shown in Fig. 1c . The performance of our model is presented in Tables 1 – 4 . Owing to the random nature of training, we train ten models for each experiment and report the mean and standard deviation of the WER over the ten runs. This is in contrast to previous works, which report just a single value (most probably the best WER) and no standard deviation, and it provides a more robust estimate of the actual performance. However, to facilitate a fair comparison with other works, we also report the best WER of the ten runs. Table 1 Results on the LRS2 dataset Results on LRS2 The results on LRS2—an English audio-visual dataset—are reported in Table 1 . Our model outperforms all existing works by a large margin, even when it is trained on smaller amounts of training data. In particular, it outperforms the previous state of the art—ref. 10 —in terms of the best WER achieved, by 5%. This is despite the fact that, in ref. 10 , training is carried out on a larger training set. When we use the same training set size as in ref. 10 , our model results in a 9.2% improvement. When we use additional training data, an even larger improvement of 12.4% is observed. Similarly, our approach results in a 22.8% absolute improvement in the best WER over ref. 4 , which uses a training set with similar size to ours and also includes non-publicly available data.
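The time-masking augmentation discussed above can be sketched in a few lines: contiguous spans of frames are blanked out so the model must rely on the surrounding context to disambiguate similar lip movements. The mask lengths, mask counts, clip size and zero fill value below are illustrative choices, not the paper's settings (filling with the mean frame is a common alternative):

```python
import numpy as np

rng = np.random.default_rng(0)

def time_mask(frames, max_mask_len=10, n_masks=1):
    """Zero out up to `n_masks` random contiguous spans of frames.
    `frames` has shape (T, H, W); parameters are illustrative."""
    out = frames.copy()
    T = out.shape[0]
    for _ in range(n_masks):
        span = rng.integers(0, max_mask_len + 1)   # mask length in frames
        start = rng.integers(0, T - span + 1)      # random span position
        out[start:start + span] = 0.0              # blank the whole span
    return out

clip = np.ones((75, 88, 88))   # 3 s of 25 fps mouth crops (illustrative size)
masked = time_mask(clip, max_mask_len=15, n_masks=2)
print(int((masked == 0).all(axis=(1, 2)).sum()), "frames masked")
```

Applied per training sequence, this forces the encoder to infer the masked visemes from neighbouring frames, analogous to time-masking in SpecAugment-style ASR pipelines.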
Results on LRS3 The results on LRS3—another English audio-visual dataset—are presented in Table 2 . In this case too, our proposed approach substantially outperforms all existing works that are trained using publicly available datasets. In particular, our method leads to an 8.2% absolute improvement, in terms of the best WER, over the state of the art—ref. 10 —when the same training data are used. As expected, a smaller absolute improvement of 5.4% is reported when a smaller training set is used. In the case of additional training data being available, a larger absolute improvement of 11.8% is achieved. Table 2 Results on the LRS3 dataset There are also some works that rely on very large non-publicly available datasets for training. As a consequence, it is not clear whether the reported improvement in WER is due to a better model or simply to the large amount of training data. Our approach outperforms all works that use up to 21 times more training data. More specifically, our best model, trained on 1,453 h of video, leads to a 2.1% absolute improvement over ref. 12 , which uses 31,000 h of training data. However, it performs worse than ref. 6 , which presents a model trained on 90,000 h, which is 62 times more training data than the publicly available training data on which our model is trained. Results on CMLR The results on the CMLR dataset—a Mandarin audio-visual dataset—are shown in Table 3 . We report performance in terms of character error rate (CER) instead of WER, because Chinese characters are not separated by spaces. Our approach results in a substantial reduction in the CER over all existing works. We achieve an absolute improvement of 12.9% over the state of the art, ref. 9 . The CER can be further reduced by 1.1% by first pre-training our model on English and then fine-tuning it on the CMLR training set.
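The WER figures reported above are normalized word-level edit distances (substitutions, deletions and insertions over the reference length); CER is the identical computation over characters instead of words. A minimal sketch of the metric and of the best-versus-mean summary over multiple runs (the strings are toy examples, not the paper's data):

```python
def word_error_rate(ref, hyp):
    """WER = (substitutions + deletions + insertions) / len(ref),
    computed with word-level Levenshtein distance."""
    r, h = ref.split(), hyp.split()
    # d[i][j]: edit distance between first i ref words and first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(r)

# Three hypothetical runs against one reference transcript.
runs = [word_error_rate("the cat sat on the mat", hyp) for hyp in
        ["the cat sat on the mat",   # perfect run
         "the cat sat on a mat",     # 1 substitution
         "cat sat on the mat"]]      # 1 deletion
print(f"best {min(runs):.3f}, mean {sum(runs)/len(runs):.3f}")
```

Reporting both numbers, as the paper advocates, guards against the best single run overstating performance on small test sets.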
Table 3 Results on the CMLR dataset Results on CMU-MOSEAS-Spanish The results on the CMU-MOSEAS-Spanish dataset—an audio-visual Spanish dataset—are shown in Table 4 . Given that this is a small dataset, it is not possible to train an accurate model without using additional data. For this purpose, we first pre-trained the model on English datasets and then fine-tuned it on the training sets of CMU-MOSEAS and TEDx datasets using the Spanish videos only. Because this is a new dataset and there are no results from previous works, we trained the end-to-end model presented in ref. 10 to serve as the baseline. We observe that our proposed approach results in a 7.7% absolute reduction in the WER. A further reduction of 6.5% can be achieved by using additional training data. Table 4 Results on the CMU-MOSEAS-Spanish (CM es ) dataset Comparison between mean and best WER/CER In all results shown in Tables 1 – 4 we report both the mean and the best performance over ten runs. We observe that the mean WER, which is more representative of the actual performance, is up to 0.8% worse than the best WER. The only exception is for the CMLR dataset (Table 3 ), where the mean and best CER are practically the same, mainly as a result of the large size of the test set. This difference between the mean and best WER is something that should be taken into account when comparing different models, especially when the models are tested on relatively small test sets and the results are too close. Applications Speech is the most commonly used human communication method and consists of an audio signal and the corresponding mouth movements. Speech perception is also bimodal, as demonstrated by the McGurk effect 13 , where the perception of a sound may change depending on the lip movements shown to the observers.
In addition, it has been shown that the addition of visual speech information to a word recognition task performed by normal hearing adults is equivalent to increasing the signal-to-noise ratio (SNR) by 15 dB compared to audio-only recognition 14 . Hence, one of the main applications of VSR is to enhance the performance of ASR models in noisy environments. VSR models are not substantially affected by acoustic noise and can be integrated into an audio-visual speech recognition (AVSR) model to compensate for the performance drop of ASR models. Several AVSR architectures have been proposed 4 , 10 , 12 , 15 , 16 , 17 , 18 ; these show that the improvement over ASR models is greater as the noise level increases, that is, the SNR is lower. The same VSR architectures can also be used to improve the performance of audio-based models in a variety of applications like speech enhancement 19 , speech separation 20 , voice activity detection 21 , active speaker detection 22 and speaker diarization 23 . There are also a number of applications based exclusively on VSR. Silent speech interfaces (SSIs) 24 , which can enable speech communication to take place when an audible speech signal is not available, can be developed with the help of VSR systems. This means that a speaker would be able to mouth words instead of vocalizing them. This technology has the potential to transform the lives of speech-impaired people. Individuals who have lost the ability to speak (aphonia) or have difficulty in speaking (dysphonia) due to tracheostomy, laryngectomy, stroke or injury might find it hard to communicate with others. The use of SSI can alleviate this by providing an alternative way of communication and at the same time reduce the stress caused by the sudden loss of their voice. The use of SSI can also be useful in cases where speaking is not allowed, for example, in a meeting, and can provide privacy in public conversations. 
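The noise conditions above are stated as SNRs in decibels, SNR = 10 log10(P_speech / P_noise), and evaluating ASR/AVSR robustness typically involves mixing noise into clean speech at a prescribed SNR. A small illustrative helper for this common evaluation recipe (not the cited studies' exact protocol); note that the 15 dB gain quoted above corresponds to roughly a 10^1.5 ≈ 32-fold reduction in relative noise power:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the mixture speech + gain*noise has the requested
    SNR in dB, where SNR = 10*log10(P_speech / P_noise)."""
    p_s = np.mean(speech**2)
    p_n = np.mean(noise**2)
    gain = np.sqrt(p_s / (p_n * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # toy 220 Hz tone
noise = rng.standard_normal(16000)                           # white noise
mix = mix_at_snr(speech, noise, snr_db=0.0)  # equal speech and noise power

# Verify the achieved SNR from the residual noise component.
resid = mix - speech
snr = 10 * np.log10(np.mean(speech**2) / np.mean(resid**2))
print(round(float(snr), 6))
```

Sweeping snr_db downward with such a helper is how the growing advantage of AVSR over audio-only ASR at low SNR is typically measured.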
VSR technology also opens up opportunities to automatically transcribe video content that was recorded without audio, like silent movies, CCTV footage or video captured by older webcams, a task that would otherwise require substantial manual effort or might even have been impossible. It can also be used as a useful tool in face forgery detection 25 . Most face-manipulation approaches add inconsistencies in mouth movements, which might not always be perceptible by humans, but they can easily be detected by properly trained VSR models. Finally, there is a new form of VSR that has become popular recently and generates audio, instead of text, directly from the input video 26 , 27 . This is essentially a combination of a standard VSR model with a text-to-speech model, but it has two important advantages: (1) it does not require any transcribed dataset and can be trained with vast amounts of unlabelled audio-visual data, and (2) it is faster and can potentially be used in real-time applications as it removes the constraint of recognizing a complete word before generating the corresponding speech signal. This new approach is especially useful for audio inpainting applications, because it can automatically fill in audio gaps from video. Challenges Despite the great advances in VSR, there are still numerous challenges that need to be solved before the full potential of this technology can be achieved. First, the visual ambiguity that arises from the fact that different phonemes correspond to similar lip movements is one of the most important reasons for the substantial performance gap between ASR and VSR models. Designing VSR systems that can resolve some of these ambiguities by relying more on the context, like the time-masking augmentation proposed in this work, might close this gap. In addition, VSR systems are sensitive to visual noise like lighting changes, occlusions, motion blur and compression.
Reduced and/or mismatched resolution and frame rate between training and test conditions can also affect performance. There is some evidence that VSR systems are robust to small or moderate amounts of noise and less robust to reduced resolution 28 , 29 , but further studies are needed to establish the impact of each noise type. Another challenge is that a VSR model should be person-independent and pose-invariant. However, it is well known that deep networks rely heavily on texture 30 . This can potentially degrade the performance, because unknown test subjects and head pose can substantially affect the appearance of the mouth. This is typically addressed by training the VSR models on a large number of subjects with varying poses. Some preliminary works on pose-invariant 31 and subject-independent 32 VSR have shown that this can be addressed in a more principled way, and this is another area that deserves further attention. Similarly, multi-view VSR 33 can be beneficial, but it is not yet clear which lip views are optimal and how they should be combined. The availability of multiple cameras in meeting rooms, cars and in modern smartphones opens up a new opportunity for improving VSR systems. The vast majority of VSR systems have focused on plain English speech. However, it is known that lip movements are affected by the context where speech is produced and the type of speech. There is evidence that lip movements tend to increase in silent speech 34 and also when speech is produced in noise (Lombard effect) 35 . Beyond a few studies that show a performance drop when VSR models 36 , 37 , 38 are tested in such conditions, this area remains largely unexplored. Finally, the development of non-English VSR systems that take into account the unique characteristics and accents of each language also remains an open challenge. Ethical considerations It is important to note that VSR is a dual-use technology, which means it can have a positive impact on society as well as a negative one. 
Although our objective is to build VSR systems that will be beneficial for society, like the applications mentioned above, this technology can also be misused. One example is that it can be deployed for surveillance via CCTV or even via smartphone cameras, which raises privacy concerns 39 , 40 . A potential side effect of this is that it might discourage people from speaking in public if they believe that their conversation can be intercepted by anyone carrying a camera 40 . Sophisticated surveillance using VSR technology might not be feasible at the moment, especially via CCTV, owing to the low quality of CCTV camera images compared with the high-quality data used during training, but it should not be ignored. Cameras and VSR systems are getting better, so it might become a serious privacy concern rather soon unless automatic blurring of the faces of all people who did not provide explicit consent becomes a new standard. Commercial applications of VSR technology are still at a very early stage. One of the very few examples is a smartphone application that aims to help speech-impaired individuals communicate and is currently being trialled in UK NHS hospitals. This is being developed by Liopa 41 , which also works on keyword spotting from CCTV footage. We thus argue that appropriate government regulations for VSR systems, which address privacy concerns and potential misuse, are necessary at this early stage before the technology is fully commercialized. This will allow the proper auditing of every new application before it reaches the market, so that the risks and merits can be properly communicated to users and the public. Otherwise, VSR systems may have the same fate as face recognition technology, which was commercialized without proper regulation being in place. 
As a consequence, a ban on using face recognition was introduced in several cities 42 , 43 and some companies either stopped offering such services or put restrictions on their use 44 , 45 , 46 when the ethical concerns became widely known. It should also be pointed out that VSR technology might be biased against specific age groups, genders, cultural backgrounds or non-native speakers. Most of the publicly available datasets have been collected from TV programmes, TED talks or YouTube videos. Hence, it is very likely that some groups are underrepresented, for example, younger people when data are collected from TV programmes or older people when data are collected from YouTube. Similarly, it is likely that people from specific cultural backgrounds or non-native speakers are also underrepresented. This will lead to VSR models that are less accurate for all these groups. Because demographic information is not available for any publicly available dataset used for training VSR models, it is not easy to verify whether such biases exist. VSR models need to be trained on demographically diverse data, including non-native speakers, to ensure similar performance across different user groups. This will lead to VSR systems whose accuracy is not lower for some users because their age, gender, cultural background or accent is underrepresented in the training data. Methods Our method outperforms state-of-the-art methods by a large margin for VSR in multiple languages. In what follows we explain the details of our approach and the changes that we have made to the training strategy and architecture that led to this highly improved performance. Datasets LRS2 Ref. 3 describes a large-scale audio-visual English dataset collected from BBC programmes. It consists of 144,482 video clips with a total duration of 224.5 h. 
The videos are divided into a pre-training set with 96,318 utterances (195 h), a training set with 45,839 utterances (28 h), a validation set with 1,082 utterances (0.6 h) and a test set with 1,243 utterances (0.5 h). LRS3 Ref. 47 describes the largest publicly available audio-visual English dataset collected from TED talks. It contains 438.9 h with 151,819 utterances. Specifically, there are 118,516 utterances in the ‘pre-train’ set (408 h), 31,982 utterances in the ‘train-val’ set (30 h) and 1,321 utterances in the ‘test’ set (0.9 h). CMLR Ref. 8 describes a large-scale audio-visual Mandarin dataset collected from a Chinese national news programme. It contains 102,072 clips with transcriptions. The training, validation and test sets contain 71,448 (60.6 h), 10,206 (8.6 h) and 20,418 (17.3 h) clips, respectively. To the best of our knowledge, CMLR is the largest publicly available dataset in Mandarin. CMU-MOSEAS Ref. 48 describes a large-scale dataset that contains multiple languages and was collected from YouTube videos. It consists of 40,000 transcribed sentences and includes Spanish, Portuguese, German and French. We consider the Spanish videos (CM es ) with a total duration of 16.3 h. We divided the data into training and test sets, which contain 8,253 videos (15.7 h) and 329 videos (0.6 h), respectively. Multilingual TEDx Ref. 49 describes a multilingual corpus collected from TEDx talks. It covers eight languages with manual transcriptions and has a total duration of 765 h. For the purposes of this study, we consider the Spanish videos (MT es ) and use the data split proposed in ref. 49 . We manually cleaned the dataset to exclude videos where the speaker is not visible, resulting in a total of 44,745 videos (71.4 h) for training, 403 videos (0.7 h) for validation and 302 videos (0.5 h) for testing. It should be noted that we only use the training set in this study. AVSpeech Ref. 
20 is a large-scale audio-visual dataset consisting of 4,700 h of video in multiple languages. A pre-trained language recognition model, VoxLingua107 50 , was first used to identify the English speaking videos. Two pre-trained ASR models, Wav2Vec2-Base-960h ( ) and Wav2Vec2-large-xlsr-53-english ( ), were then used to obtain machine-generated transcriptions for these videos. We only kept the videos where the WER between the two generated transcriptions was below 60%, resulting in 350,991 videos with a total duration of 641 h. The transcriptions generated by the Wav2Vec2-Base-960h model were used for these videos. Performance metrics WER is the most common metric used in speech recognition. This measures how close the predicted word sequence is to the target word sequence. Assuming S is the number of substitutions, D is the number of deletions, I is the number of insertions needed to get from the predicted to the target sequence and N is the number of words in the target sequence, then the metric can be defined as $${\rm{WER}}={\frac{S+D+I}{N}}.$$ (1) Similarly to WER, we can define the CER, which measures how close the predicted and target character sequences are. In this case, S , D and I are computed at the character level and N is the total number of characters. Pre-processing We used the RetinaFace 51 face detector and the Face Alignment Network (FAN) 52 to detect 68 facial landmarks. The faces were then registered to a neutral reference frame using a similarity transformation to remove translation and scaling variations. A bounding box of 96 × 96, centred on the mouth centre, was used to crop the mouth region of interest. The cropped patch was further converted to grey-scale and normalized with respect to the overall mean and variance of the training set. 
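The WER of equation (1) can be computed with a standard word-level edit-distance alignment; a minimal sketch (function and variable names are ours, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate, equation (1): (S + D + I) / N via edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits turning hyp[:j] into ref[:i]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + sub,  # substitution or match
                           dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1)        # insertion
    return dp[len(ref)][len(hyp)] / len(ref)
```

The CER follows the same recurrence with the alignment performed over characters instead of words.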
Hyperparameter optimization Hyperparameter optimization aims to improve the performance of a model by fine-tuning the values of the parameters that are used to control the training process or the model architecture. Some of the most common hyperparameters that are usually optimized are the following: initial learning rate, learning rate decay parameters, number of layers, size of layers, dropout rate and the loss function weights, which are used to combine the different loss terms. Additional hyperparameters related to conformers are the number and size of the self-attention heads. We performed hyperparameter optimization on the LRS2 dataset by attempting to reduce the WER on the validation set. Our conclusion was that the parameters used in the baseline model 10 were already optimal, so no further improvement was observed. The next step was to optimize other hyperparameters that might not have been exhaustively optimized, like batch-size-related parameters. Again, the parameters were chosen based on the validation set performance. Further details and results are provided in Supplementary Section 4 and Supplementary Table 8, respectively. The results on the LRS2 and LRS3 test sets are shown in Supplementary Table 9 . Each hyperparameter was optimized independently based on the WER on the validation set of LRS2. We used the same hyperparameters for all experiments. It is clear that hyperparameter optimization results in a substantial reduction in the WER for both datasets. Improving LMs A LM determines the probability of a given sequence of characters. It is used during decoding and favours sequences that are more likely to occur. To increase the capacity of the LM we use multiple text corpora for training. We also increase the number of sequences considered during decoding (beam size is set to 40). The impact of these changes is demonstrated in Supplementary Table 9, where the WER is reduced for both English datasets. 
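During decoding, the LM score is combined with the CTC and attention-decoder scores, and the best-scoring candidate among the beam hypotheses is kept. A minimal sketch; the weight values and dictionary keys are illustrative placeholders, not the paper's settings:

```python
def decode_with_lm(candidates, ctc_weight=0.1, lm_weight=0.6, beam_size=40):
    """Rank candidate sequences by a weighted combination of CTC,
    attention-decoder and language-model scores, keep the top beam_size,
    and return the best hypothesis with its joint score."""
    def joint_score(c):
        return (ctc_weight * c["ctc"]
                + (1.0 - ctc_weight) * c["att"]
                + lm_weight * c["lm"])
    beam = sorted(candidates, key=joint_score, reverse=True)[:beam_size]
    return beam[0]["text"], joint_score(beam[0])
```

In practice the weighting is applied inside beam search rather than as a final rescoring pass, but the score combination is the same.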
The score from the LM ( S LM ) is incorporated in decoding as follows: $${S}={\lambda }{S}_{\rm{CTC}}+{(1-\lambda )}{S}_{\rm{att}}+{\beta }{S}_{\rm{LM}},$$ (2) where S CTC and S att are the scores of the CTC and decoder branch, respectively, and λ and β correspond to the CTC and LM score weights. Additional details about the corpora used for training the LM in each language, as well as training details, are presented in Supplementary Section 5. Time-masking Data augmentation works by synthesizing additional distorted training data with the goal of reducing over-fitting. In VSR, most existing works make use of image transformations such as random cropping and horizontal flipping 10 , 15 , 53 . These spatial augmentations are helpful, but they do not take into account the temporal nature of visual speech. Only a few works exist that apply temporal augmentations like deleting or duplicating frames 54 or variable length augmentation 55 . In this Article we propose the use of time-masking, which is commonly used in training ASR models 56 . It works by randomly masking n consecutive frames by replacing them with the mean sequence frame. This allows the model to more effectively use contextual information and can better disambiguate similar lip movements that correspond to different phonemes. It also makes the model more robust to short missing segments. Given that there is large variance in the video lengths, especially on the LRS2 and LRS3 datasets, the number of masks used is proportional to the length of the training sequence. Specifically, we use one mask per second and, for each mask, we randomly mask up to 40% of frames, with the masked segments chosen using a uniform distribution. Additional details about this augmentation are provided in Supplementary Section 6. The impact of time-masking is shown in the ablation study on the LRS2 and LRS3 datasets shown in Table 5 . 
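The masking step described above can be sketched in NumPy. We assume 25 fps video and read "up to 40% of frames" per mask as up to 40% of a one-second window; the exact segment-length convention is given in the paper's supplementary material, so these defaults are ours:

```python
import numpy as np

def time_mask(frames, fps=25, max_frac=0.4, rng=None):
    """Replace random consecutive frames with the mean sequence frame.

    One mask per second of video; each mask covers up to max_frac of a
    second's worth of frames, with the start chosen uniformly at random.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = frames.copy()
    mean_frame = frames.mean(axis=0)
    n_masks = max(1, len(frames) // fps)  # one mask per second
    for _ in range(n_masks):
        length = int(rng.integers(0, int(max_frac * fps) + 1))
        if length == 0 or length >= len(frames):
            continue
        start = int(rng.integers(0, len(frames) - length + 1))
        out[start:start + length] = mean_frame  # broadcast over the segment
    return out
```

Because masked frames are replaced with the mean frame rather than zeros, the input statistics seen by the network stay close to those of unmasked sequences.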
Training a model without time-masking results in a substantial increase in the mean WER when compared to the full model. Table 5 Ablation study on the LRS2 dataset and LRS3 dataset Prediction-based auxiliary tasks The standard approach to VSR relies on end-to-end training, which allows the entire model to be optimized towards the desired target. This is an attractive property and has led to impressive results, but also results in substantial challenges in training such a large model. One solution that has recently been proposed is the use of auxiliary tasks in the form of additional losses applied to intermediate layers of the model 57 , 58 , 59 . This acts as regularization, which helps the model learn better representations and leads to better generalization on test data. Based on this observation, we propose as an auxiliary task the prediction from intermediate layers of audio and visual representations learned by pre-trained ASR and VSR models (Fig. 1c ). This is inspired by the recent success of prediction tasks in self-supervised learning. In particular, good audio representations can be learned by predicting handcrafted audio features 60 or by using joint audio and visual supervision 61 . Similarly, visual speech representations can be learned by predicting audio features 62 . Hence, the proposed auxiliary task provides additional supervision to the intermediate layers of the model, which in turn results in better visual representations and improved performance. Mathematically, this is formulated as a regression problem where the goal is to minimize the L1 distance between the predicted and pre-trained visual and audio features. 
This results in the following loss term added to the loss function: $$\begin{array}{lll}{{{{\mathcal{L}}}}}_{\rm{AUX}}&=&{\beta }_{\rm{a}}{\left\Vert {h}_{\rm{a}}({f}^{l}({{{{\bf{x}}}}}_{\rm{v}}))-{g}_{\rm{a}}^{l}({{{{\bf{x}}}}}_{\rm{a}})\right\Vert }_{1}\\ &+&{\beta }_{\rm{v}}{\left\Vert {h}_{\rm{v}}({f}^{l}({{{{\bf{x}}}}}_{\rm{v}}))-{g}_{\rm{v}}^{l}({{{{\bf{x}}}}}_{\rm{v}})\right\Vert }_{1}\end{array},$$ (3) where x v and x a are the visual and audio input sequences, respectively, g v and g a are the pre-trained visual and audio encoders, respectively, f is the subnetwork up to layer l whose intermediate representation is used as input to the audio and visual predictors h a and h v , respectively, β a and β v are the coefficients for each loss term, and ∥⋅∥ 1 is the ℓ 1 -norm. The model performs VSR and at the same time attempts to predict audio and visual representations from intermediate layers. Hence, the final loss is simply the addition of the main VSR loss and the auxiliary loss: $${{{\mathcal{L}}}}={{{{\mathcal{L}}}}}_{\rm{VSR}}+{{{{\mathcal{L}}}}}_{\rm{AUX}}$$ (4) $${{{{\mathcal{L}}}}}_{\rm{VSR}}={\alpha }{{{{\mathcal{L}}}}}_{\rm{CTC}}+{(1-\alpha )}{{{{\mathcal{L}}}}}_{\rm{att}}$$ (5) where \({{{{\mathcal{L}}}}}_{\rm{VSR}}\) is the loss of the hybrid CTC/attention architecture used, \({{{{\mathcal{L}}}}}_{\rm{CTC}}\) is the CTC loss, \({{{{\mathcal{L}}}}}_{\rm{att}}\) the loss of the attention mechanism, and α controls the relative weight of each loss term. Further details about the losses are provided in Supplementary Section 7. We emphasize that the proposed method is not architecture-dependent and can also be used with other more advanced visual front ends 63 . The substantial impact of the auxiliary losses on performance can be observed from Table 5 . Removing either loss, that is, either the first or second term from equation ( 3 ), leads to an increase in the mean WER for both datasets. 
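The auxiliary loss of equation (3) can be sketched in NumPy; the predictor heads and pre-computed target features below are stand-ins for the real network modules and pre-trained encoders:

```python
import numpy as np

def aux_loss(feat_l, audio_target, visual_target, h_a, h_v,
             beta_a=1.0, beta_v=1.0):
    """L1 auxiliary loss of equation (3).

    feat_l        : intermediate representation f^l(x_v)
    audio_target  : g_a^l(x_a), features from the pre-trained ASR encoder
    visual_target : g_v^l(x_v), features from the pre-trained VSR encoder
    h_a, h_v      : audio and visual predictor heads (callables)
    beta_a, beta_v: loss weights (the values here are placeholders)
    """
    l_a = np.abs(h_a(feat_l) - audio_target).sum()   # ||.||_1 audio term
    l_v = np.abs(h_v(feat_l) - visual_target).sum()  # ||.||_1 visual term
    return beta_a * l_a + beta_v * l_v
```

During training this scalar is simply added to the main VSR loss, as in equation (4).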
In the case where both losses are removed, that is, no auxiliary loss is used, then the increase in the mean WER is even greater. Finally, the removal of the two losses and time-masking results in a substantial decrease in performance. An ablation study on the effect of layer l where the auxiliary loss (equation ( 3 )) is attached is shown in Supplementary Fig. 1 . Layer 6 was found to be the optimal level based on the performance on the validation set. All results reported in all the tables are based on this configuration. Further details are presented in Supplementary Section 9.1. Using additional training data Using larger and larger training sets with a view to reducing the WER is a recent trend in the literature. To investigate the impact of the amount of training data, we trained models on varying amounts of data. We started by training models using only the training set of each database (seventh row of Table 1 and fourth row of Table 2 ). It is not possible to train a model from scratch on the LRS2 and LRS3 datasets, so we used curriculum learning. This means that we first used only short utterances and as training progresses we kept adding longer ones. Further details on curriculum learning are provided in Supplementary Section 8. We used a model trained for recognizing 500 English words 55 on the LRW dataset for initialization, then fine-tuned it on the corresponding training sets of the LRS2 or LRS3 datasets (eighth row of Table 1 and fifth row of Table 2 ). Finally, we used the models trained on LRW + LRS3 and LRW + LRS2 as initialization and fine-tuned them further on LRS2 and LRS3, respectively (ninth row of Table 1 and sixth row of Table 2 ). It is clear that, as we use more datasets for training, the performance keeps improving. This is also the case for Spanish and Mandarin (sixth row of Table 3 and third row of Table 4 ), even when models trained on English are used for initialization. 
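The curriculum described above can be sketched as a length cap that grows with the epoch; the caps and field names below are illustrative, not the paper's actual schedule:

```python
def curriculum_subset(utterances, epoch,
                      length_caps=(2.0, 4.0, 8.0, float("inf"))):
    """Keep only utterances whose duration (seconds) is under the cap for
    this epoch, so training starts on short clips and gradually admits
    longer ones."""
    cap = length_caps[min(epoch, len(length_caps) - 1)]
    return [u for u in utterances if u["duration"] <= cap]
```

By the final stage the cap is infinite, so the full training set is used.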
However, the reduction in WER is smaller than in English, probably due to language mismatch. Finally, we used a subset of the AVspeech dataset as additional training data together with the automatically generated English transcriptions. Again, the WER is reduced in all languages (tenth row of Table 1 , seventh row of Table 2 , last row of Tables 3 and 4 ), despite using transcriptions that contain errors, with the smallest reduction observed in Mandarin. This is not surprising, because Mandarin is much less similar to English than Spanish is. These results are in line with the hypothesis that the reduction in the WER reported in recent works is mainly due to the larger datasets used for training. Implementation Our experiments were implemented using an open-source toolkit, ESPNet 64 . We trained the models with the Adam optimizer 65 with β 1 = 0.9, β 2 = 0.98 and ϵ = 10⁻⁹. The learning rate increases linearly over the first 25,000 steps to a peak of 0.0004, and thereafter decreases in proportion to the inverse square root of the step number. The network was trained for 50 epochs with a batch size of 16. We used the model averaged over the last ten checkpoints for evaluation. Details regarding the network architecture are provided in Supplementary Section 2. Conclusions In this Article we have presented our approach for VSR and demonstrated that state-of-the-art performance can be achieved not only by using larger datasets, which is the current trend in the literature, but also by carefully designing a model. We have highlighted the importance of hyperparameter optimization, which can further improve the performance of existing architectures. We have then shown the importance of time-masking, which forces the network to focus more on the context. We have also proposed a new architecture based on auxiliary tasks where the VSR model also predicts audio-visual representations learned by pre-trained ASR and VSR models. 
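The optimizer schedule described in the Implementation paragraph above (linear warm-up to a peak learning rate of 0.0004 over the first 25,000 steps, then inverse-square-root decay) can be written as:

```python
def learning_rate(step, peak_lr=4e-4, warmup_steps=25000):
    """Warm-up schedule similar to the Transformer (Noam) schedule:
    linear ramp to peak_lr, then decay proportional to 1/sqrt(step)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (warmup_steps / step) ** 0.5
```

The two branches agree at step = warmup_steps, so the schedule is continuous at the peak.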
Finally, we have provided evidence that using larger datasets improves the performance, which is in line with recent works in this field. Our approach outperforms all existing VSR works trained on publicly available datasets in English, Spanish and Mandarin, by a large margin. Data availability The datasets used in the current study are available from the original authors on the LRS2 ( ), LRS3 ( ), CMLR ( ), Multilingual ( ) and CMU-MOSEAS ( ) repositories. Qualitative results and the list of cleaned videos for the training and test sets of CMU-MOSEAS and Multilingual TEDx are available on the authors’ GitHub repository ( ). Code availability Pre-trained networks and testing code are available on a GitHub repository ( ) or at Zenodo 66 under an Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. | In recent years, deep learning techniques have achieved remarkable results in numerous language and image-processing tasks. This includes visual speech recognition (VSR), which entails identifying the content of speech solely by analyzing a speaker's lip movements. While some deep learning algorithms have achieved highly promising results on VSR tasks, they were primarily trained to detect speech in English, as most existing training datasets only include English speech. This limits their potential user base to people who live or work in English-speaking contexts. Researchers at Imperial College London have recently developed a new model that can tackle VSR tasks in multiple languages. This model, introduced in a paper published in Nature Machine Intelligence, was found to outperform some previously proposed models trained on far larger datasets. "Visual speech recognition (VSR) was one of the main topics of my Ph.D. thesis," Pingchuan Ma, a Ph.D. graduate from Imperial College who carried out the study, told TechXplore. 
"During my studies, I worked on several topics, for instance exploring how to combine visual information with audio for audio-visual speech recognition and how to recognize visual speech independently of the head pose of participants. I realized that the vast majority of existing literature only dealt with English speech." The key objective of the recent study by Ma and his colleagues was to train a deep learning model to recognize speech in languages other than English from the lip movements of speakers and then compare its performance to that of other models trained to recognize English speech. The model created by the researchers is similar to those introduced by other teams in the past, but some of its hyper-parameters were optimized, the dataset was augmented (i.e., increased in size by adding synthetic, slightly modified versions of data) and additional loss functions were used. "We showed that we can use the same models to train VSR models in other languages," Ma explained. "Our model takes raw images as input, without extracting any features, and then automatically learns what useful features to extract from these images to complete VSR tasks. The main novelty of this work is that we train a model to perform VSR and also add some additional data augmentation methods and loss functions." In initial evaluations, the model created by Ma and his colleagues performed remarkably well, outperforming other VSR models trained on much larger datasets, even though it required less original training data. As expected, however, it did not perform as well as English-speech recognition models, mainly due to smaller datasets available for training. "We achieved state-of-the-art results in multiple languages by carefully designing the model, rather than by simply using larger datasets or larger models, which is the current trend in the literature," Ma said. 
"In other words, we showed that how a model is designed is as important to its performance as increasing its size or using more training data. This can potentially lead to a shift in the way researchers try to improve VSR models." Ma and his colleagues showed that one can achieve state-of-the-art performances in VSR tasks by carefully designing deep learning models, instead of using larger versions of the same model or collecting additional training data, which is both expensive and time consuming. In the future, their work could inspire other research teams to develop alternative VSR models that can effectively recognize speech from lip movements in languages other than English. "One of the main research areas I'm interested in is how we can combine VSR models with existing (audio-only) speech recognition," Ma added. "I'm particularly interested in how these models can be dynamically weighted, i.e., how the model can learn which stream it should rely on depending on the noise. In other words, in a noisy environment an audio-visual model should rely more on the visual stream, but when the mouth region is occluded it should rely more on the audio stream. Existing models are essentially frozen once trained and they cannot adapt to changes in the environment." | 10.1038/s42256-022-00550-z |
Space | New white paper showcases the future of space robotics | www.surrey.ac.uk/sites/default … int_single_pages.pdf | https://www.surrey.ac.uk/sites/default/files/UK_RAS_wp_print_single_pages.pdf | https://phys.org/news/2016-07-white-paper-showcases-future-space.html | Abstract Long-duration observations of Neptune’s brightness at two visible wavelengths provide a disk-averaged estimate of its atmospheric aerosol. Brightness variations were previously associated with the 11-year solar cycle, through solar-modulated mechanisms linked with either ultraviolet or galactic cosmic ray (GCR) effects on atmospheric particles. Here, we use a recently extended brightness data set (1972–2014), with physically realistic modelling to show, rather than alternatives, ultraviolet and GCR are likely to be modulating Neptune’s atmosphere in combination. The importance of GCR is further supported by the response of Neptune’s atmosphere to an intermittent 1.5- to 1.9-year periodicity, which occurred preferentially in GCR (not ultraviolet) during the mid-1980s. This periodicity was detected both at Earth, and in GCR measured by Voyager 2, then near Neptune. A similar coincident variability in Neptune’s brightness suggests nucleation onto GCR ions. Both GCR and ultraviolet mechanisms may occur more rapidly than the subsequent atmospheric particle transport. Introduction Long-term observations of Neptune from a ground-based telescope show variations in the planet’s disk-averaged brightness, which are associated with changes in the reflectivity (albedo) of the planet from its atmospheric aerosol and clouds. Although seasonal variations dominate the time series, Lockwood and Thompson 1 showed, using data from 1972 to 1996, that small fluctuations in Neptune’s brightness at two visible wavelengths followed the 11-year solar cycle. They examined two quantities known to vary closely with solar activity. 
The first, solar ultraviolet radiation, is linked to photochemical variations in Neptune’s atmospheric aerosol particles, and the second, galactic cosmic rays (GCR), may create some of Neptune’s aerosol through ion-induced nucleation. It was not possible to discriminate between the ultraviolet and GCR effects, although the relationship with ultraviolet was slightly more statistically robust 1 . The Neptune magnitude-solar activity relationship broke down after 1996 (refs 1 , 2 ), but recent extension of the data 3 encourages its re-examination. Supporting evidence for a solar cycle in infrared observations from 1975 to 2007 (ref. 4 ) further motivates a fresh consideration of the origin of the short-term variability in Neptune’s albedo. Both the ultraviolet and GCR mechanisms can, in principle, account for the changes observed in the photometric observations, which originate in Neptune’s stratosphere and troposphere 1 , 5 . The ultraviolet mechanism was originally proposed 6 to explain the solar cycle signal when it was first reported in Neptune’s albedo 7 . It was suggested that ultraviolet-triggered surface chemistry on pre-existing aerosol particles varied the optical properties of Neptune’s stratospheric aerosol through a darkening in colour (‘tanning’), detectable in the photometric measurements. The GCR-driven mechanism was proposed for Neptune 8 , 9 , through direct (‘Wilson’) condensation of supersaturated gas onto atmospheric ions 10 , causing particle growth ultimately detectable at optical wavelengths. The possibility that charge-related effects could modulate the atmospheres of the outer planets, where variations in solar irradiance are proportionately less important 11 , contrasts with the small likely role of energetic particles in Earth’s atmosphere 12 and provides a further motivation for this study. 
The two proposed mechanisms for external solar forcing of Neptune are essentially heliospheric (through GCR) or photospheric (through solar ultraviolet) in origin. The analysis to investigate them here uses two approaches. First, the relationships between Neptune’s magnitude, solar ultraviolet radiation and GCRs are studied with multiple regression, allowing both the proposed mechanisms to act together. We find that, when the extra degrees of freedom are accounted for, including both ultraviolet and GCR improves prediction of the magnitude fluctuations. Second, we examine the relatively rapid fluctuations apparent during the mid-1980s in the Neptune astronomical data. This enhanced variability coincided with a known episode of quasi-periodic fluctuations present in GCR 13 , centred around 1.68 years. Investigating Neptune’s atmospheric variability in the 1.5 to 2 year range therefore presents a method by which to separate the two different suggested solar-modulated influences, an approach previously used to separate coincident terrestrial atmospheric responses 14 . Results The extended Neptune photometric data set 1972–2014 Regular photometric observations of the magnitude (brightness) of Neptune have been made since 1972, through well-characterised visible bandpass filters of width ∼ 20 nm centred at 472 nm (‘b’, blue) and 551 nm (‘y’, green) using a 21-in telescope at Lowell Observatory, Arizona 2 , Fig. 1a . Each magnitude value is typically determined from between 4 and 39 nights of data (median 9 nights), taken around the time the planet is brightest in the sky (opposition) 3 . Standard errors in the magnitude measurements, determined from the standard deviations and number of nights of observation, were typically ±0.001 (ref. 3 ). Sampling intervals varied between 0.7 and 1.2 years, with a median interval of 1.04 years. Figure 1 summarizes the data, with Fig. 1a showing the measured magnitudes, Fig. 1b showing the magnitude fluctuations, Fig. 
1c showing the ultraviolet data and Fig. 1d showing GCR measured both at Earth and in space. More information on the data is given in the ‘Methods’ section. Figure 1: Time series of Neptune’s brightness and solar modulated parameters. ( a ) Neptune brightness (astronomical magnitude, where smaller values represent a greater signal) time series at 472 nm (blue squares) and 551 nm (green circles), from ref. 3 , each smoothed with a lowess fit (blue dashed line or green solid line). ( b ) Magnitude fluctuations after detrending ( a ) with a lowess fit. The maximum s.e.m. in each data set is shown as a single error bar on the far left. ( c ) Lyman-alpha (ultraviolet) radiation at 121.5 nm. ( d ) Cosmic ray count rate at Earth’s surface and in the heliosphere, showing terrestrial neutron monitor data from Oulu, Finland, (black) and Voyager 2 LECP instrument daily mean flux of cosmic ray protons >70 MeV (grey). Data are described in full in the ‘Methods’ section. The correlation between the 29 data points and 30-day running means of ultraviolet, sunspots and GCR with a range of lags was calculated previously 1 . The correlation between Neptune’s magnitude fluctuations and ultraviolet was statistically significant, whereas the correlation with GCR was not, on which basis it was concluded that ultraviolet was the more likely mechanism 1 . This analysis 1 did not allow for the possibility that both the physically plausible mechanisms involving ultraviolet and GCR could be acting in combination in Neptune’s atmosphere to cause the observed albedo variations. If so, multiple regression may be more appropriate than treating the two proposed mechanisms independently. The earlier analysis 1 was statistically rather than physically based, whereas we have used standard ion–aerosol theory to guide our statistical approach. Finally, extra data have recently been made available 3 , which we now include. 
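The magnitude fluctuations in Fig. 1b are the residuals after removing the slow seasonal trend from each series. The paper uses a lowess fit for this; as a simple stand-in, a centred moving-average smoother illustrates the detrending step (the window size here is arbitrary, not taken from the paper):

```python
import numpy as np

def detrend(magnitudes, window=11):
    """Subtract a slowly varying trend from an annual magnitude series.

    A centred moving average (with edge padding) stands in for the lowess
    smoother used in the paper; the residuals play the role of the
    magnitude fluctuations in Fig. 1b.
    """
    mags = np.asarray(magnitudes, float)
    pad = window // 2
    padded = np.pad(mags, pad, mode="edge")
    trend = np.convolve(padded, np.ones(window) / window, mode="valid")
    return mags - trend
```

Any smoother with a bandwidth much longer than the solar cycle search window would serve the same purpose here.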
Previous analyses 1 assumed linear relationships between ultraviolet (UV) and albedo, and GCR and albedo. Although the ultraviolet-albedo relationship is generally assumed to be linear 6 , this assumption is not necessarily appropriate for the GCR mechanism. In this case, the albedo changes are likely to be proportional to the number of ion-induced particles. The number of particles formed is controlled by the ion-induced nucleation rate 9 , which is proportional to the atmospheric ion (or electron) concentration n , where the ions are singly charged, nanometre-sized clusters created by GCR ionization 11 with volumetric production rate q . In ion–aerosol theory, there are two possible limiting regimes linking n and q . First, the recombination limit of negligible aerosol, in which the ion concentration n is controlled by ion–ion or ion–electron self-recombination and n ∝ √q. Second, in the attachment regime, the ion concentration is limited by attachment to any pre-existing particles (such as aerosols, haze, clouds or dust) and n ∝ q (for example, see ref. 11 ). Assuming that the GCR count rate provides an estimate of q (for example, see ref. 15 ), a set of possible statistical relationships was investigated of the form:
f b,y = κ b,y UV + λ b,y GCR + μ b,y √GCR + x b,y (1)
where f b,y are the measured magnitude fluctuations in the b (472 nm) or y (551 nm) wavelength ranges, κ, λ and μ are coefficients for the b or y data representing the ultraviolet mechanism, ion attachment and ion recombination, respectively, and x is a constant for the b or y data. Daily Lyman-alpha and Oulu neutron counter (GCR) data, averaged for ±20 days around the observation date given in ref. 3 , were used as measurements of ultraviolet (UV) and GCR (see the ‘Methods’ section). The regressions were weighted according to the standard errors in the magnitude data described above, and the errors in ultraviolet and GCR data were assumed to be negligible in comparison with those in the magnitude data.
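As a sketch, the weighted regression of equation (1) can be reproduced by scaling each row by its inverse standard error before an ordinary least-squares fit; the series below are synthetic stand-ins (not the Lowell measurements), and the coefficient values are illustrative only.

```python
import numpy as np

def fit_magnitude_model(f, uv, gcr, sigma):
    """Weighted least-squares fit of f = x + kappa*UV + lambda*GCR + mu*sqrt(GCR).

    A sketch of the equation (1) regression; the data fed in below are
    synthetic, and the adjusted R^2 penalises the extra fitted parameters.
    """
    X = np.column_stack([np.ones_like(uv), uv, gcr, np.sqrt(gcr)])
    w = 1.0 / sigma                                   # inverse-error weights
    coef, *_ = np.linalg.lstsq(X * w[:, None], f * w, rcond=None)
    resid = f - X @ coef
    n, p = X.shape
    r2_adj = 1 - (np.sum(resid**2) / (n - p)) / (np.sum((f - f.mean())**2) / (n - 1))
    return coef, r2_adj

# synthetic demonstration: fluctuations built from known coefficients plus noise
rng = np.random.default_rng(0)
uv = rng.uniform(3.0, 6.0, 40)          # arbitrary ultraviolet index
gcr = rng.uniform(5500.0, 6800.0, 40)   # arbitrary count rate, min^-1
f = 0.5 + 0.01*uv + 2e-4*gcr - 0.04*np.sqrt(gcr) + rng.normal(0.0, 1e-3, 40)
coef, r2_adj = fit_magnitude_model(f, uv, gcr, np.full(40, 1e-3))
```

The near-collinearity of the GCR and √GCR columns mirrors the real difficulty of separating the attachment and recombination terms statistically.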
As the attachment and recombination regimes cannot act simultaneously at the same location, this set of statistical relationships represents the integrated effect of the different atmospheric layers observed at Earth through each filter. The adjusted coefficient of determination ( R 2 ), that is, fraction of the variance explained by the fit, corrected for the number of points and fitted variables, was used to evaluate each model, summarized in Table 1 . For the y filter, adjusted R 2 was improved to 0.14 (see case 7 in Table 1 ) when both the ultraviolet and GCR-related mechanisms were included rather than considering the mechanisms separately, which only explained 2–8% of the variance. Considering both GCR-created ionization and ultraviolet therefore permits rejection of the null hypothesis, which is that including both ultraviolet and GCR does not improve the fit (for example, see ref. 16 ). The improvement from including both GCR and ultraviolet is less marked for the b filter data, with the fit between ultraviolet and magnitude fluctuation slightly better than for GCR, ultraviolet and magnitude fluctuation. The measured and modelled magnitude fluctuations are compared in Fig. 2 . Table 1 Summary of multiple regression analysis. Full size table Figure 2: Physically based linear regression models to explain Neptune magnitude fluctuations. The models include ultraviolet radiation, attachment and recombination of GCR-created ions—Case 7 in Table 1 —for ( a ) f y at 551 nm and ( b ) f b at 472 nm. The coefficients determined in equation (1) for, respectively, ( a ) κ =0.010±0.004 cm 2 s, λ =(2±1) × 10 −4 min, μ =−0.04±0.02 min 0.5 , x =1.5±0.8 mag (where mag is astronomical magnitude), and ( b ) κ =0.011±0.005 cm 2 s, λ =(2±1.5) × 10 −4 min, μ =−0.03±0.02 min 0.5 and x =1.1±0.9 mag. Full size image One possible explanation for the differences between the b and y filter responses relates to the different parts of Neptune’s atmosphere accessed by each filter. 
The 472 nm b filter responds to stratospheric haze particles, whereas the 551 nm y filter is more sensitive to the optical properties of tropospheric aerosols 6 . Ultraviolet will be absorbed by, for example, haze layers 17 as it passes through the atmosphere, whereas highly energetic secondary GCR, typically muons of several GeV, can readily penetrate the deep atmosphere (for example, see ref. 11 ), with the maximum ionization, known in the terrestrial atmosphere as the Pfotzer–Regener maximum 15 , expected at about 40 hPa (ref. 9 ). It is therefore plausible that particles seen by the b filter in the haze layers would respond preferentially to ultraviolet, but that the changes in the y filter wavelength can be better explained by the inclusion of GCR in the model. Spectral analysis Particle detectors on the Voyager 2 spacecraft measured primary GCR as they passed out of the Solar System. These measurements showed variability on 1 to 2 year timescales 18 , which was at its strongest in primary GCR in the 1980s, consistent with terrestrial neutron monitor data 19 . Although similar variability is apparent in terrestrial GCR (neutron monitor) measurements, it is not in the solar 10.7-cm radio flux, a widely used index of solar radiative emissions 20 . Beyond GCR data, this variability in the 1980s has also been identified in other heliospherically modulated quantities 19 , such as terrestrial surface atmospheric electricity data 21 . In contrast, such variability is not apparent in photospheric quantities such as solar ultraviolet, which makes this periodicity a useful diagnostic for separating photospheric and heliospherically modulated effects 14 . To pursue this for Neptune’s atmosphere we consider whether the cosmic ray variability observed at Earth is also present at Neptune. After establishing that it is, we then consider when the variability on these timescales occurs in Neptune’s atmosphere, GCR and, for completeness, solar ultraviolet.
One approach to evaluating variations such as these within specified frequency ranges is to use a periodogram, or a series of periodograms selecting successive time intervals. Using successive periodograms, calculated using the Lomb–Scargle method for irregularly spaced data, the temporal variations of spectral power density (SPD) between 1.5 and 1.9 years in Neptune’s magnitude, cosmic rays from both Voyager 2 and Oulu and ultraviolet (Lyman-alpha) radiation are shown in Fig. 3 as a moving-window spectrogram. The cosmic ray data used are protons of energy >70 MeV measured by the Voyager 2 Low Energy Charged Particle (LECP) instrument 22 . Figure 3a presents a spectrogram of the data from Voyager 2, indicating strong spectral power at 1.5–1.9-year periodicities from about 1983 to 1987, when Voyager 2 was travelling from Saturn to Neptune. (Further details of the spectral data processing are given in the ‘Methods’ section). Voyager 2’s closest approach to Uranus was on 24 January 1986, and to Neptune on 25 August 1989 (ref. 23 ). Figure 3: Spectral power densities between 1.5 and 1.9 years. Moving-window spectrograms derive the SPD, normalized by the variance to be dimensionless, for periodicities between 1.5 and 1.9 years from ( a ) the Voyager 2 LECP instrument proton data, ( b ) Neptune’s magnitude fluctuations at 551 nm, ( c ) 472 nm, ( d ) Oulu neutron monitor data and ( e ) solar ultraviolet (Lyman-alpha) radiation. Contours of SPD are shown, with colour for added emphasis. The data and spectrogram calculations are described in full in the ‘Methods’ section. Full size image Figure 3a–c shows spectrograms generated from the data collected at Neptune, Fig. 3d shows terrestrial cosmic rays measured at Oulu in Finland and Fig. 3e shows the ultraviolet data. The Oulu and Voyager 2 cosmic ray data support earlier findings 17 , in that the spectral variability appears first in GCR at Earth ( Fig. 3d ), before reaching Voyager 2 ( Fig. 
3a ) consistent with outward propagation of a heliospheric feature. The Neptune and Voyager 2 spectrograms show similarities in their time evolution, with coincident increased SPD during the 1980s. To evaluate the significance of this enhanced SPD, a Monte-Carlo procedure was used. Using random shuffling of the magnitude fluctuation data, many (50,000) randomised power spectra were obtained; a spectral peak at ∼ 1.6 years has a probability of being caused by chance of about 1% ( P <0.02), see Fig. 4 . Figure 4: Estimate of statistical significance of the SPD at 1.5–1.9 years during the 1980s. Dimensionless power spectral density (solid lines) calculated for ( a ) 551 nm and ( b ) 472 nm data from 1980 to 1994, using the Lomb–Scargle method after de-trending with a loess fit. The statistical significance of the spectral peaks has been estimated using a Monte-Carlo procedure: dashed and dotted lines show the upper 95th and 99th percentiles of 50,000 realizations of the power spectra calculated in the same way as the spectral peak, but after random shuffling of the magnitude data. Full size image In contrast, the spectrogram derived from the Lyman-alpha data ( Fig. 3e ) shows minimal variability in the range of periodicities considered during the mid-1980s. Hence the Voyager 2 cosmic ray data establish that the 1.5–1.9-year periodicity was present both in Neptune’s atmosphere and in GCR near Neptune during the 1980s. The 1.5–1.9-year spectral feature in heliospheric GCR can be used as a ‘fingerprint’ of a possible GCR influence in atmospheric properties such as clouds 14 . Comparing the strength of this feature on Neptune with GCR therefore provides a method to separate ultraviolet and GCR effects on Neptune’s albedo. However, there may be a lag in Neptune’s measured albedo in response to external forcing, due to the internal timescales of particle production and movement in Neptune’s atmosphere. 
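The band-power estimate and the Monte-Carlo shuffle test described above can be sketched as follows; the irregular sampling times and the 1.68-year test signal are synthetic, and the moving-window tapering of the Methods is omitted for brevity.

```python
import numpy as np
from scipy.signal import lombscargle

def band_power(t, y, pmin=1.5, pmax=1.9, nfreq=40):
    """Mean Lomb-Scargle power for periods between pmin and pmax (years)."""
    y = y - y.mean()
    w = 2 * np.pi / np.linspace(pmax, pmin, nfreq)   # angular frequencies
    return lombscargle(t, y, w).mean()

def shuffle_percentile(t, y, n=2000, seed=1):
    """Fraction of shuffled realizations whose band power falls below the
    observed one, as in the shuffle-based significance estimate."""
    rng = np.random.default_rng(seed)
    obs = band_power(t, y)
    null = np.array([band_power(t, rng.permutation(y)) for _ in range(n)])
    return np.mean(null < obs)

# irregularly sampled synthetic series carrying a 1.68-year periodicity
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 20.0, 60))
y = np.sin(2 * np.pi * t / 1.68) + 0.3 * rng.normal(size=60)
pct = shuffle_percentile(t, y)
```

Shuffling the magnitudes while keeping the observation times fixed preserves the sampling pattern, so the null distribution reflects the irregular cadence of the real data.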
These processes were described 6 as methane being injected into the stratosphere and upper troposphere by convection, where photochemical, nucleation and sedimentation processes act on timescales of typically a few Earth years. The photochemical colour changes postulated to be the ultraviolet mechanism providing solar modulation of the albedo are thought to act on 0.2–2-year timescales 6 . Guided by these ideas, it was found that the fit statistics summarized in Table 1 could be improved by allowing Neptune’s albedo to lag ultraviolet and GCR. We have further investigated the delays in Neptune’s atmospheric response by carrying out a lag correlation analysis between the average 1.5–1.9-year SPD in GCR, as for Fig. 3 , and the same quantity in Neptune’s magnitude, Fig. 5a . The peak response is achieved at a lag of 3 years for both wavelengths, and Fig. 5b indicates that, with a 3-year lag, 32% of the variance in the Neptune 472 nm SPD can be explained by GCR at P <0.001 confidence (18% for the 551 nm SPD at P <0.05 confidence). The calculated lag is robust to errors in the SPD, as determined by recalculating the spectra many (10,000) times, including a normally distributed random error within the quoted uncertainty on the magnitude measurements 3 . Supplementary Fig. 1 is a version of Fig. 5 calculated for the 1.5–1.9-year SPD in Lyman-alpha (ultraviolet) radiation. Unlike the GCR–magnitude relationship in SPD ( Fig. 5 ), which shows an almost linear dependency in the 1980s, there is no statistically significant linear effect between the 472 nm SPD and ultraviolet (or, indeed, for the 551 nm SPD against the ultraviolet SPD, not plotted). This indicates that some of the spectral features previously identified in the GCR data during the 1980s have propagated into Neptune’s atmosphere for detection at 472 nm, which is not replicated for the ultraviolet data. Figure 5: Relationships between 1.5 and 1.9-year spectral power densities for Neptune and GCRs.
( a ) Correlation between mean normalized SPD for periodicities between 1.5 and 1.9 years in Neptune’s atmosphere against the same periodicity in GCRs at Oulu, for Neptune lagging Oulu by 0–10 years. Dotted lines mark 95% confidence limits from multiple (10,000) realizations of the SPDs calculated with the uncertainties in the magnitude fluctuations. ( b ) Average Neptune 472 nm SPD against GCR SPD data values for the 1.5–1.9-year periodicity, with Neptune lagging Oulu by 3 years. The filled circles are from 1980 to 1989, when the 1.5–1.9-year periodicity was particularly strong, and the rest of the data are open circles. A lowess fit to all the data points is also shown (solid line). Full size image Returning to the GCR effects, and restricting the analysis to data from the 1980s, when the spectral feature was particularly strong and known to be present close to Neptune, then 87% of the variance in this intermittent periodicity at 472 nm can be explained solely by GCR at the P <0.001 confidence level. For the y data at 551 nm, the R 2 remains 18% during the 1980s. Estimates using classical cloud physics theory 24 for plausible parameters of ion-induced particle growth at Neptune (see the ‘Methods’ section) indicate that newly nucleated particles could grow to optical wavelengths relatively quickly, with timescales from tens of minutes to hours. This rapid ion growth timescale implies that the lagged GCR effects observed in Neptune’s magnitude fluctuations could arise from the propagation of the 1.5–1.9-year periodicity outwards through the heliosphere, suggested by the lag between the spectral features in Voyager 2 ( Fig. 3a ) and Oulu ( Fig. 3d ). The two analyses presented in this paper, by multiple regression and spectral techniques, represent different effects. 
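The lag-correlation step above can be illustrated with a minimal sketch; the two series are synthetic stand-ins for the band-limited SPD of GCR and of Neptune's magnitude, with a known 3-step (annual) lag built in.

```python
import numpy as np

def lag_correlation(x, y, max_lag=10):
    """Pearson correlation of y lagging x by 0..max_lag steps (annual samples)."""
    r = []
    for lag in range(max_lag + 1):
        a, b = x[:len(x) - lag], y[lag:]
        r.append(np.corrcoef(a, b)[0, 1])
    return np.array(r)

# synthetic SPD-like series: y is x delayed by 3 steps plus noise
rng = np.random.default_rng(3)
x = rng.normal(size=40)
y = np.r_[np.zeros(3), x[:-3]] + 0.2 * rng.normal(size=40)
r = lag_correlation(x, y)
best = int(np.argmax(r))
```

Scanning lags this way recovers the built-in delay, the analogue of the 3-year peak response found in the real SPD series.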
The multiple regression evaluates the net response of Neptune’s magnitude to both GCR and ultraviolet forcing, whereas the SPD approach only addresses the sensitivity of Neptune’s atmosphere to one forcing, that of GCR on 1.5–1.9-year timescales. For 472 nm, variability at these periodicities during the 1980s is well explained by GCR. In terms of net response over the whole data set (1972–2014), fluctuations in the 472 nm filter data are most effectively accounted for by ultraviolet variations alone, but at 551 nm there is a combined effect of GCR and ultraviolet. Discussion Two alternative origins have previously been proposed 2 for the decadal variations observed in Neptune’s atmosphere, external forcing (solar modulation) or chance. On the basis of a statistical argument, GCR or ultraviolet presented alternatives as solar forcing agents 1 . Including the most recent data, we now find that considering GCR and ultraviolet as joint contributors to Neptune’s atmospheric variability is stronger than an either–or scenario. Our model’s explanatory power is enhanced by including realistic ion–aerosol physics, allowing for ion loss both by attachment to aerosol and by ion–ion or ion–electron recombination. Ion–aerosol theory considerations indicate that, in cloudy or hazy conditions, ions attach to aerosol particles (see equation 1). This has two consequences. First, haze particles or cloud droplets will become charged by ion attachment. The charge does not depend on the ionization rate 25 , so will not generate the photometric variability analysed here. Second, the associated ion depletion will reduce opportunities for ion-induced nucleation. As Neptune’s stratosphere and troposphere are rich in haze and cloud particles 5 , 6 , this suggests a role for transport processes 6 , 26 in moving nucleated particles to regions where they become detectable in the photometric observations. 
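The two ion-loss pathways discussed here follow from the steady-state ion balance dn/dt = q − αn² − βnZ = 0. A small sketch, using illustrative terrestrial-style coefficient magnitudes rather than Neptune values, shows n ∝ √q emerging when aerosol is negligible and n ∝ q when attachment dominates.

```python
import numpy as np

def steady_state_n(q, alpha=1.6e-12, beta=1e-6, Z=0.0):
    """Positive root of the ion balance alpha*n^2 + beta*Z*n - q = 0.

    q: ion production rate (cm^-3 s^-1); alpha: ion-ion recombination
    coefficient (cm^3 s^-1); beta: ion-aerosol attachment coefficient
    (cm^3 s^-1); Z: aerosol number concentration (cm^-3). The coefficient
    values are order-of-magnitude placeholders, not Neptune-specific inputs.
    """
    bz = beta * Z
    return (-bz + np.sqrt(bz**2 + 4.0 * alpha * q)) / (2.0 * alpha)

q = 10.0
n_clean = steady_state_n(q)          # aerosol-free: n ~ sqrt(q/alpha)
n_hazy = steady_state_n(q, Z=1e4)    # aerosol-rich: n ~ q/(beta*Z), much smaller
```

The quadratic's positive root interpolates between the two limiting regimes used in equation (1), and shows why hazy regions deplete ions and suppress ion-induced nucleation.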
Our work provides new evidence for solar forcing in Neptune’s atmosphere on sub-seasonal timescales, through both ultraviolet-driven and GCR mechanisms. The lags in Neptune’s response to external forcing present in both GCR and ultraviolet data over the entire time series, and in the GCR SPD during the 1980s, are consistent both with each other and with known particle movement timescales 6 . Further investigation is needed to understand the potentially very different timescales associated with both particle-modulating mechanisms, and the relevant transport processes within Neptune’s atmosphere. Methods Neptune magnitude data The Neptune magnitude data has been obtained by Dr W Lockwood and collaborators from the Lowell Observatory, Arizona over many years 2 , 3 . Neptune’s magnitude is measured with a 21-in reflector telescope using differential photometry, a technique based on measuring the brightness of a target object relative to comparison stars. The data are filtered in the visible with filters called ‘b’ (centred on 472 nm) and ‘y’ (centred on 551 nm). Filter-response functions and details of the long-term stability and errors are all given in ref. 2 . Detrending the Neptune magnitude data is necessary to remove the slow seasonal related increase in Neptune’s brightness (for example, see refs 1 , 2 , 27 ). Following the approach in ref. 1 , we have applied smoothing curves to the magnitude data to remove the low-frequency seasonal changes and look at fluctuations occurring on more rapid timescales. A quadratic detrend was applied in earlier analyses 1 , 2 , but rather than assume an arbitrary polynomial, we have applied robust local smoothing methods and compared them with the quadratic in Fig. 6a . The lowess 28 fit and the newer algorithm, loess 29 , are well-established non-parametric local smoothers, weighted towards points near the region to be fitted. It can be seen from Fig. 
6b that the key features in the magnitude fluctuations are preserved independently of the smoothing fit chosen. Figure 6: Comparison of smoothing approaches for Neptune magnitude time series data. ( a ) Raw data are shown as points (blue squares for 472 nm, green circles for 551 nm), with three different smoothing lines as indicated on the legend. ( b ) fluctuations calculated using the three different fits, with the same lines as indicated in the legend for a . Full size image A Kolmogorov–Smirnov (KS) statistical test has also been used to establish whether the magnitude fluctuations calculated using any of the three smoothing fits differ statistically. Importantly, the Kolmogorov–Smirnov test does not make any assumptions about the distribution of the data (for example, see ref. 30 ). Table 2 indicates that for most of the types of smoothing used, the calculated magnitude fluctuations are not significantly different. Table 2 Kolmogorov–Smirnov test results ( P values) for Neptune magnitude fluctuations de-trended with different techniques. Full size table Cosmic ray data GCRs are energetic particles generated outside the Solar System. They are mainly protons, which are most abundant, and alpha particles (helium nuclei) 31 . Cosmic rays ionize atmospheres by colliding with molecules and inducing a cascade of secondary ionizing and non-ionizing particles; they are the major source of ionization in many planetary atmospheres 11 . Neutron monitor measurements of GCR are used here as an indicator of atmospheric ionization. Neutrons are non-ionizing radiation formed by GCR decaying in Earth’s atmosphere and are measured by a network of terrestrial monitors, described below. GCR can also be measured directly by spacecraft. Voyager 2 proton data, available from 1977 onwards, is useful for comparison, since it represents the cosmic ray flux near or at Neptune for some of the time period of interest. 
However, because of the variable lag of up to 4 months between the time series of neutron monitor data and Voyager 2 data, depending on the position of Voyager 2, we have chosen to focus on data from the Oulu neutron monitor, which has been in continuous operation since the 1960s (ref. 32 ). Oulu The Oulu neutron monitor detects neutrons generated by primary GCR decaying in the atmosphere. Integrated over the atmospheric column, neutron monitor data is a reasonable proxy for cosmic ray ionization in Earth’s atmosphere (although not necessarily in the lower troposphere 15 ). As the physics of atmospheric ionization is fundamentally similar at Neptune and on Earth 11 , we use the Oulu data as a source of long-term cosmic ray data. The Oulu neutron monitor is essentially unchanged since installation in the 1960s, but there is an ‘efficiency’ factor applied to the data to include the effects of changes in hardware and software (as well as the routinely applied atmospheric pressure correction). The data are fully explained at and in ref. 32 . Our analysis uses 1-day averages, although 5-min resolution data is available from 1968. Voyager The Voyager 2 proton data (energy>70 MeV) was obtained by the LECP instrument 22 , although time series of >70 MeV ions and high-energy alphas and protons from the Cosmic Ray Subsystem instrument 33 are similar and could equally have been used. Data (available and described in detail at ) are sampled at 1 s intervals, but recorded as daily average count rates, with the standard deviations reported. The median standard deviation of these was 0.003 s −1 (with an interquartile range of 0.002–0.004 s −1 ) compared with a median count rate of 0.072 s −1 that is, a fractional standard deviation of ±4%. Lyman-alpha ultraviolet data The Lyman-alpha data is a composite time series of the solar hydrogen 121.57 nm emission line, and represents ultraviolet emissions from the entire solar disc 34 . 
It is generated from a combination of satellite measurements and models extending back to 1947, available at . For the time period relevant to this paper, the data is taken from the Atmospheric Explorer-E (1977–1980), Solar Mesospheric Explorer SME (1981–1989), the Upper Atmosphere Research Satellite (UARS) SOLSTICE instrument (1991–2000), Solar Radiation and Climate Experiment (SORCE) SOLSTICE instrument (2003–2010) and Solar Dynamics Observatory (SDO) EVE (2010–2015). The times where no satellite data is available (1972–1977 and 2001–2003) are filled in by model calculations 35 . Detailed discussion of this data set is outside the scope of this paper 35 , but the uncertainty is estimated to be ±10%. Spectral analysis The periodograms in Fig. 3 were all generated from data selected using an 8-year moving window having a 0.5 cosine bell taper, with steps of 0.5 years between successive data window evaluations. The data were first de-trended to obtain magnitude fluctuations using a loess fit. (The fluctuations were demonstrated to be insensitive to the choice of detrend used in Neptune magnitude data above). The selected data window was cosine tapered (tapering factor 0.5), to reduce the truncation effects of the short data windows. The Lomb–Scargle 36 , 37 approach for irregularly spaced data was used, with the code 38 implemented in R. In the case of irregularly spaced data, the minimum detectable period is 2s, where s is the minimum sampling period of the data set. In the Neptune data the minimum sampling interval is 0.7 years, giving a minimum detectable periodicity of 1.4 years (refs 39 , 40 ). The 1.5–1.9-year periodicity described in the GCR data in the ‘Results’ section was independently confirmed to occur in the 1980s by an additional analysis of the daily GCR data. This analysis used a phase-preserving 1.55–1.81-year Lanczos bandpass filter 41 of half-length 8 years, with missing values addressed by multiple bootstrapped realizations 14 . 
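A phase-preserving Lanczos band-pass of the kind just mentioned can be sketched as the difference of two sigma-smoothed low-pass filters; the cutoffs and two-tone test series below are illustrative, not the 1.55-1.81-year daily-data settings of the analysis.

```python
import numpy as np

def lanczos_bandpass(flow, fhigh, M):
    """Symmetric band-pass weights as the difference of two Lanczos
    (sinc-tapered) low-pass filters; flow/fhigh are cutoff frequencies in
    cycles per sample and M is the filter half-length."""
    def lowpass(fc):
        k = np.arange(1, M + 1)
        w = np.sin(2 * np.pi * fc * k) / (np.pi * k) * np.sinc(k / M)
        return np.r_[w[::-1], 2 * fc, w]
    return lowpass(fhigh) - lowpass(flow)

# two-tone test: period-533 tone inside the pass band, period-50 tone outside
t = np.arange(10000, dtype=float)
y = np.sin(2 * np.pi * t / 533.0) + np.sin(2 * np.pi * t / 50.0)
M = 2400
w = lanczos_bandpass(1 / 800.0, 1 / 400.0, M)
yf = np.convolve(y, w, mode="valid")   # symmetric weights: no phase shift
```

Because the weights are symmetric about the centre, the filtered output is phase-preserving, which matters when timing band-limited features against each other.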
Computer code is available on request. Droplet nucleation onto ions in Neptune’s atmosphere The saturation ratio S required for ions to grow into ultrafine droplets by condensation can be determined using the Thomson equation (2), which describes the equilibrium saturation ratio needed for ion-induced nucleation to become energetically favourable 11 . In equation 2 r is radius, ρ fluid density, M the mass of the condensing molecule, q charge, γ T the surface tension, k B Boltzmann’s constant, T temperature, ε 0 the permittivity of free space (all in SI units) and ε r relative permittivity:
ln S = (M/(ρ k B T)) [2γ T /r − (q 2 /(32π 2 ε 0 r 4 ))(1 − 1/ε r )] (2)
A similar approach was taken to calculate the saturation ratio at which ion-induced nucleation could occur on Neptune 9 . Here, we apply the Thomson equation for methane and diacetylene (butadiyne), two species thought likely to form droplets through ion-induced nucleation at the pressures and temperatures appropriate for Neptune 9 . Diacetylene nucleates onto singly charged ions of critical radius 1 nm at a saturation ratio of ∼ 500, whereas methane needs a saturation ratio of ∼ 15 for nucleation onto ions ( Fig. 7 ). As these large saturation ratios are expected in the cold Neptune environment 9 , condensation can occur onto freshly produced ∼ 1 nm cluster ions. Figure 7 also shows that ion-induced nucleation occurs more easily at lower saturation ratios on multiply charged large ions. For a typical charge of 2e on 10 nm particles in Neptune's atmosphere, arising from asymmetry in positive ion and free electron mobilities (ref. 25 ), estimates of the saturation ratio required for nucleation are ∼ 100 (diacetylene), and ∼ 5 (methane), consistent with the ‘relatively efficient’ ion-induced nucleation predicted in ref. 9 . Figure 7: Saturation ratios required for condensation onto ions.
For ( a ) methane at 75 K and ( b ) diacetylene (butadiyne) at 100 K. The critical saturation ratio required is reduced if the ions are multiply charged, with calculations given for 1, 2 and 5 elementary charges e . Full size image Droplet growth by condensation We now estimate the rate of droplet growth for methane condensation at 75 K and 1,100 hPa on Neptune with methane saturation ratio of 2.7 in a hydrogen atmosphere 9 . (There is not enough data available to carry out the calculation for diacetylene). Following ref. 24 , the rate of growth of a droplet of radius r is given by the standard diffusional condensation growth equation, in which S is the saturation ratio, T is the temperature, L is the latent heat of vapourization, R v is the gas constant for the condensing species, D is the diffusion coefficient for the condensing species, K is the thermal conductivity, ρ v is the vapour density and e s (T) is the saturation vapour pressure. Two normalization factors, f(α) and g(β) , are also defined in ref. 24 ; the first involves p , the pressure, and R’ , the gas constant of the background gas, with α taken as 1. For the second normalization factor, 0.02 < β < 0.04, and β is taken here to be 0.04. The terms used in the calculation are listed in Table 3 . Table 3 Quantities used to estimate growth rate of methane droplets in hydrogen at ∼ 100 K. Full size table Inserting values from Table 3 , we find that the estimated droplet growth rate is of order 4 nm s −1 , which is insensitive to both the initial radius and the radius as the droplet grows. This insensitivity to radius arises from the 1/ r term in the growth equation being compensated by the r -dependent terms in the normalization factors. Additional information How to cite this article: Aplin, K. L. & Harrison, R. G. Determining solar effects in Neptune’s atmosphere. Nat. Commun. 7:11976 doi: 10.1038/ncomms11976 (2016).
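The Thomson-equation estimate of the critical saturation ratio can be sketched numerically by locating the peak of the equilibrium curve S(r). The liquid-methane property values below are rough assumptions for illustration only, so the qualitative trend with ion charge (as in Fig. 7), rather than the paper's quoted ∼15, is the point.

```python
import numpy as np

KB, E0, QE = 1.380649e-23, 8.8541878e-12, 1.602176634e-19

def critical_saturation(M, rho, gamma, eps_r, T, n_charges=1):
    """Peak of the Thomson equilibrium curve S(r): the saturation ratio above
    which condensation onto an n-charged ion becomes energetically favourable.
    All inputs in SI units; property values in the demo are guesses."""
    r = np.linspace(0.3e-9, 5e-9, 2000)
    q = n_charges * QE
    lnS = (M / (rho * KB * T)) * (2 * gamma / r
            - q**2 / (32 * np.pi**2 * E0 * r**4) * (1 - 1 / eps_r))
    return np.exp(lnS.max())

# methane-like parameters (assumed): 16 u molecules, ~450 kg/m3, ~0.018 N/m
s1 = critical_saturation(16 * 1.66054e-27, 450.0, 0.018, 1.7, 75.0, 1)
s2 = critical_saturation(16 * 1.66054e-27, 450.0, 0.018, 1.7, 75.0, 2)
```

The charge term lowers the equilibrium curve at small radii, so a doubly charged ion requires a smaller peak saturation ratio than a singly charged one, reproducing the trend of Fig. 7.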
Autonomous robots capable of walking, swimming and climbing will replicate insects, birds, animals and even humans on future missions of space exploration within decades, according to a new UK-RAS Network white paper led by Professor Yang Gao, Head of STAR Lab at the University of Surrey. Space Robotics and Autonomous Systems: Widening the horizon of space exploration also reveals that the rapid evolution of technologies powering space RAS will have beneficial applications in sectors such as healthcare, mining and agriculture. Lead author Professor Yang Gao, Head of STAR Lab at the University of Surrey's Space Centre, explained, "Since the 1990s, a new generation of planetary exploration has travelled further into the solar system and is required to become increasingly more convincing as a human proxy in space. This will lead to the development of robotic explorers and assistants that can carry out such complex tasks that they could tangibly replace humans in space or assist astronauts on a mission." Such skills include the ability for space robotics to be equipped with new sensing techniques in order to acquire 3D perception, and to have the ability to climb, swim, dig, fly, sail, navigate and dock spacecraft without humans, as well as to interact with humans. European Space Agency (ESA) Astronaut Roberto Vittori, who launched the white paper, said: "Space robotics is central to the future of space exploration. The importance of this area of science cannot be understated, something I can personally attest to having been responsible for the space shuttle's robotic arm in the installation of a six-tonne cosmic ray detector to the International Space Station. This Space Robotics white paper will be instrumental in providing a clear vision as we continue to push new boundaries in both manned and unmanned spaceflight." The University of Surrey is at the forefront of research in this area.
Part of Surrey Space Centre (SSC), the Surrey Technology for Autonomous Systems and Robotics (STAR) Lab specialises in developing contemporary robotics solutions and autonomous systems that monitor and service spacecraft, remove space debris and explore new space frontiers and extra-terrestrial surfaces. Examples include robotic arms capable of grabbing space debris and consigning it to a recycling bin, and ideas to 'modularise' spacecraft so that individual subsystem modules can be replaced if they fail. These are all tasks which would be extremely dangerous and hugely expensive if performed by human astronauts without using robots. Bringing benefits back down to earth While space robotics and autonomous systems (RAS) are broadening what is possible in space, they are also bringing benefits closer to home. Professor Gao explained, "Increasingly we are seeing non-space industries interested in applying our expertise to their own areas, such as the nuclear sector which also has to deal with a high radiation, hazardous environment. "We're now developing robotic vision-based software for Sellafield which can help sort and segregate nuclear waste autonomously. Also, for the agricultural sector we've been asked to develop a small autonomous vehicle that can identify diseased crops, take high resolution images and deploy a robotic arm to take samples if required." 
Other industries that will benefit as a direct result from the technical advancements in space robotics include:
- The healthcare industry, through advancements in robotic surgery, diagnostics, independent living, nursing systems, prosthetic analysis and therapy opportunities
- The emergency services, through improved responsiveness, reduced risk to life and more efficient deployment
- The deep mining industry, through enhanced exploration, excavation, refinement and health condition monitoring
- The water industry, through more efficient asset inspection, maintenance and health condition monitoring
The white paper report highlights technological needs, challenges and solutions that space robotics are likely to overcome over the coming decades. At an estimated $10,000 per kilogram to launch a satellite merely into low Earth orbit, cost-saving solutions such as the ability to repair spacecraft in orbit using space RAS are becoming very attractive to the rapidly expanding space industry. | www.surrey.ac.uk/sites/default … int_single_pages.pdf |
Nano | A spoonful of sugar in silver nanoparticles to regulate their toxicity | David C Kennedy, Guillermo Orts-Gil*, Chian-Hui Lai, Larissa Müller, Andrea Haase, Andreas Luch, Peter H Seeberger. "Carbohydrate functionalization of silver nanoparticles modulates cytotoxicity and cellular uptake." Journal of Nanobiotechnology 12: 59, December 2014. DOI: 10.1186/s12951-014-0059-z | http://dx.doi.org/10.1186/s12951-014-0059-z | https://phys.org/news/2015-01-spoonful-sugar-silver-nanoparticles-toxicity.html | Abstract Background Increasing use of silver nanoparticles (Ag-NPs) in various products is resulting in a greater likelihood of human exposure to these materials. Nevertheless, little is still known about the influence of carbohydrates on the toxicity and cellular uptake of nanoparticles. Methods Ag-NPs functionalized with three different monosaccharides and ethylene glycol were synthesized and characterised. Oxidative stress and toxicity was evaluated by protein carbonylation and MTT assay, respectively. Cellular uptake was evaluated by confocal microscopy and ICP-MS. Results Ag-NPs coated with galactose and mannose were considerably less toxic to neuronal-like cells and hepatocytes compared to particles functionalized by glucose, ethylene glycol or citrate. Toxicity correlated to oxidative stress but not to cellular uptake. Conclusions Carbohydrate coating on silver nanoparticles modulates both oxidative stress and cellular uptake, but mainly the first has an impact on toxicity. These findings provide new perspectives on modulating the bioactivity of Ag-NPs by using carbohydrates. Introduction Nanoparticles are playing an increasing role in the development of novel diagnosis methods and in the advanced design of drug delivery systems [ 1 ],[ 2 ]. 
Silver nanoparticles (Ag-NPs) in particular show excellent anti-microbial properties and are therefore rapidly being incorporated into a wide array of consumer products such as textiles, cosmetics or packaging materials, increasing the likelihood of human and environmental exposure [3],[4]. Moreover, due to their optical properties, Ag-NPs are attracting more attention in the fields of biological and chemical sensors [5]. Ag-NPs exist in a variety of sizes and shapes, but also, importantly, with different coatings. Among surface coatings, there is increasing interest in using carbohydrates as biomimetic functional molecules on the surface of nanoparticles [2],[6]-[8] for the diagnosis and treatment of, for instance, brain diseases like glioma and stroke [9],[10]. Glycan-functionalised NPs offer several advantages: (i) their synthesis can be performed under biomimetic conditions, resulting in nanoparticles without traces of chemicals responsible for adverse cellular responses; (ii) the carbohydrates on the surface can serve as targeting molecules and trigger cellular uptake via specific receptors or mediate specific cellular responses [10]. Concurrently, the importance of carbohydrates in cellular signalling and in the regulation of cellular processes continues to emerge [11]. The inherently weak interactions between carbohydrates and proteins or other biomolecules make these interactions difficult to study. However, because these interactions tend to be multivalent in nature, the use of nanoparticles to mimic the multivalent presentation of carbohydrates found on biomolecular surfaces makes carbohydrate-functionalized nanoparticles important systems to study [12]. Several factors like surface charge and particle size can contribute to the selective binding and uptake of nanomaterials [13],[14].
In addition to labelling with a targeting molecule, nanoparticles can induce multivalent effects by clustering antigens on the surface of the particle, thereby enhancing the binding of relatively weak targeting agents. Nevertheless, despite the importance of carbohydrates in biology and the vast array of literature on functionalized nanomaterials, little is known about the effects of carbohydrates on the uptake and toxicity of nanoparticles by different types of cells. Although it has been reported that polysaccharides can reduce the toxicity of silver nanoparticles [15], less is known about the influence of monosaccharides [16]; thus, the different results are difficult to rationalise. Moreover, as pointed out by Johnston et al. [17], the increasing importance of Ag-NPs in the development of novel consumer materials intended for human exposure requires more in-depth studies on toxicity mechanisms, as well as on how silver particles interact with biological molecules and how different surface modifications can be used to reduce or eliminate possible toxic effects. Here, we discuss the toxicity and the cellular uptake of silver nanoparticles functionalized with citrate, three different monosaccharides and ethylene glycol in two different cell lines. It was found that toxicity correlates with oxidative stress rather than with cellular uptake. Experimental Materials Silver nitrate, sodium citrate, D-glucose, D-mannose, D-galactose and ethylene glycol with MW = 200 (EG-3), all with purity >99%, were purchased from Sigma-Aldrich. Synthesis of silver nanoparticles Citrate-capped silver nanoparticles were synthesized using a standard method [18]: a solution of AgNO3 (10^-3 M) in deionized water was heated to boiling. Then, a sodium citrate solution in water was added dropwise. The color of the solution slowly turned gray-yellow after a few minutes, indicating the formation of nanoparticles. Heating was continued for an additional 10 min.
The solution was then cooled to room temperature and stored in the dark before functionalization. Synthesis of carbohydrate ligands Thiol-functionalized carbohydrates were synthesized as follows: glucose, galactose and mannose were each reacted with thiopropionic acid to form the corresponding glycoside in approximately 50% yield. The products were isolated as DMAP-H+ salts. Solutions of these thiolate salts were then added directly to solutions of silver nanoparticles to prepare the carbohydrate-functionalized particles. Functionalization of nanoparticles To a glass vial charged with a stir bar were added 1 mL of citrate-capped silver nanoparticles (1 nM) and 60 μL of a 2 mM stock solution of the corresponding ligand. The solutions were stirred for 3 h, at which time each solution was transferred to a 1.5 mL Eppendorf tube and centrifuged for 10 min at 8000 rpm. The supernatant was discarded and the pellet was resuspended in 1 mL of H2O. These nanoparticle solutions were then used for subsequent reactions and analyses. A scheme of the synthesized nanoparticles is shown in Figure 1. Figure 1 Prepared nanoparticles. (A) Different biomolecules on the surface of the prepared silver nanoparticles (linker not shown); (B) in cell culture media the nanoparticle-water interface is composed of ligands and ions but also of proteins, the so-called protein corona. Full size image Nanoparticles characterization Dynamic light scattering (DLS) Most DLS measurements were carried out at 37°C (to simulate physiological temperature) using a Malvern Zeta Nanosizer in water and in cell culture medium (DMEM + 10% FCS). This instrument operates with a 4-mW He-Ne laser (633 nm wavelength) at a scattering angle of 173°. The intensity correlation functions were fitted by the method of cumulants and by using the Non-Negative Least Squares (NNLS) algorithm included in the Zeta Nanosizer software.
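The cumulant method mentioned above reduces the measured correlation function to a mean decay rate and a polydispersity index, from which the hydrodynamic diameter follows via the Stokes-Einstein relation. A rough illustrative sketch (not the Malvern software's exact implementation), using the stated instrument geometry (633 nm laser, 173° scattering angle, water at 37°C) and synthetic data for a 54 nm particle:

```python
import numpy as np

# Instrument constants from the text; viscosity of water at 37 C assumed.
WAVELENGTH = 633e-9            # m
ANGLE = np.deg2rad(173.0)      # scattering angle
N_WATER = 1.33                 # refractive index of water
ETA = 0.6913e-3                # viscosity of water at 37 C, Pa*s
KB, T = 1.380649e-23, 310.15   # Boltzmann constant, temperature (K)

# scattering vector magnitude q
q = 4.0 * np.pi * N_WATER * np.sin(ANGLE / 2.0) / WAVELENGTH

def cumulant_fit(tau, g1):
    """Second-order cumulant fit ln g1 = a - Gamma*tau + (mu2/2)*tau^2.
    Returns hydrodynamic diameter (m) and polydispersity index mu2/Gamma^2."""
    scale = tau.max()                       # rescale for a well-conditioned polyfit
    c2, c1, _ = np.polyfit(tau / scale, np.log(g1), 2)
    gamma = -c1 / scale                     # mean decay rate, 1/s
    mu2 = 2.0 * c2 / scale**2
    diff = gamma / q**2                     # Stokes-Einstein: D = Gamma / q^2
    d_h = KB * T / (3.0 * np.pi * ETA * diff)
    return d_h, mu2 / gamma**2

# Synthetic monodisperse correlation data for a 54 nm particle (the size
# reported from TEM in the paper)
d_true = 54e-9
D = KB * T / (3.0 * np.pi * ETA * d_true)
tau = np.linspace(1e-6, 1e-4, 50)           # lag times, s
g1 = np.exp(-D * q**2 * tau)
d_h, pdi = cumulant_fit(tau, g1)
print(round(d_h * 1e9, 1))                  # recovers the 54 nm input diameter
```

The polydispersity index returned for this ideal single-exponential decay is essentially zero; real samples yield finite values.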
The zeta potentials of the samples were obtained from laser Doppler electrophoresis, converting electrophoretic mobilities to zeta potentials. Each sample was prepared in triplicate and measured six times. Experiments consisted of 60 runs per measurement. Transmission electron microscopy (TEM) and energy dispersive X-ray spectroscopy (EDS) TEM investigations were performed on a Jeol JEM 2200-FS operating at 200 kV. At high magnification, the in-column Ω-filter was used to improve the contrast. Samples were prepared by immersion of grids of S-160-3 type (Cu coated with carbon film, Plano GmbH) in a small volume (0.5 mL) of the solutions, followed by solvent evaporation in a dust-protected atmosphere. Particle size distributions were obtained by analysing at least 200 NPs from TEM images using ImageJ software [19]. Energy dispersive X-ray analysis (EDX) was performed in STEM mode using a spot size between 0.5 and 1.5 nm. Sugar quantification Sugar densities were evaluated for Glu, Gal and Mann by a previously reported method [2],[7]. Briefly, Glu-Ag-NPs, Gal-Ag-NPs and Mann-Ag-NPs were dispersed in deionized water (0.5 mL) in an ice bath. A freshly prepared 0.5% (w/w) solution of anthrone in sulfuric acid (1 mL) was added slowly to this solution. The resulting solution was gently mixed and heated to 80°C for 10 min. The absorption of the solution was measured at 620 nm and compared with a standard curve to determine the amount of sugars on the Ag-NP surface. Cell culture Neuro-2A and HepG2 cells (American Type Culture Collection) were each grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum (PAN-Biotech), 1% penicillin-streptomycin (50 μg/mL, PAN-Biotech), and L-glutamine (2 mM, PAN-Biotech), under standard culture conditions (37°C, 5% CO2). MTT assay Cells were seeded into wells of a 96-well plate (1 × 10^5 cells/mL, 100 μL per well) to cover a 9 × 6 grid, filling 54 wells.
Remaining wells were filled with 200 μL of PBS. After 24 hours, 100 μL volumes of dilutions of nanoparticles in water spanning from 1 nM to 0.01 nM were added to the seeded wells (final concentrations spanning 5 pM to 500 pM). For each functionalized particle, eight dilutions were prepared and for each dilution six replicates were performed. To the remaining 6 wells, 100 μL of PBS was added as a control. Cells were then incubated with the complexes for 72 h. After 72 h, 50 μL of a PBS solution of MTT (2.5 mg/mL) was added to each well and incubated for 3 h. After 3 h, media was aspirated from all wells, leaving purple formazan crystals in those wells with viable cells. To each well, 150 μL of DMSO was added. Plates were then agitated for 10 s and analyzed using a plate reader (NanoQuant Infinite M200 instrument by Tecan Group Ltd.) to determine the absorbance of each well at 570 nm. Each reading, divided by the average of the readings of the six control wells, was plotted to determine the IC50 value of each complex for each cell line. Analysis of protein carbonylation as a read-out for intracellular oxidative stress development HepG2 cells were seeded in 6-well plates and treated with nanoparticles (final concentrations 2.5 pM and 5 pM) for 6 h (induction of protein carbonylation). Cells were washed with PBS three times and lysed by adding a modified RIPA buffer (50 mM Tris/HCl pH 7.4; 150 mM NaCl, 1 mM EDTA, 1% Igepal, 0.25% Na-deoxycholate). Protein concentrations were determined via Bradford assay according to manufacturer instructions (BioRad, München, Germany). For detection of protein carbonyls, the OxyBlot kit (Millipore, Schwalbach, Germany) was used according to manufacturer instructions. Briefly, protein carbonyls are labeled with 2,4-dinitrophenylhydrazine (DNPH), which becomes covalently attached as a DNP hydrazone and can be detected with the respective DNP antibody. SDS-PAGE was performed according to standard protocols.
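The MTT normalization described above (each A570 reading divided by the control-well average) can be turned into an IC50 estimate. A minimal sketch with hypothetical dose-response values, using simple log-linear interpolation between the bracketing doses rather than whatever curve-fitting procedure the authors actually applied:

```python
import numpy as np

def ic50_from_mtt(conc_pm, absorbance, control_mean):
    """Estimate IC50 from an MTT dose-response curve.

    conc_pm: tested nanoparticle concentrations (pM), ascending.
    absorbance: mean A570 per concentration.
    control_mean: mean A570 of the untreated control wells.
    """
    conc_pm = np.asarray(conc_pm, dtype=float)
    viability = np.asarray(absorbance, dtype=float) / control_mean
    below = np.where(viability <= 0.5)[0]
    if below.size == 0 or below[0] == 0:
        return None                          # IC50 outside the tested range
    i = below[0]                             # first dose below 50% viability
    # interpolate on log-concentration between the two bracketing doses
    x0, x1 = np.log10(conc_pm[i - 1]), np.log10(conc_pm[i])
    y0, y1 = viability[i - 1], viability[i]
    return 10.0 ** (x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0))

# Hypothetical dose-response spanning the paper's 5-500 pM range
conc = [5, 10, 25, 50, 100, 250, 500]
a570 = [0.95, 0.90, 0.80, 0.62, 0.41, 0.20, 0.10]
print(round(ic50_from_mtt(conc, a570, control_mean=1.0), 1))  # ~74 pM
```

With these made-up readings, viability crosses 50% between 50 and 100 pM, so the interpolated IC50 falls in that interval.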
Gels were transferred onto nitrocellulose membranes with a semidry blotting system. Tubulin antibody was obtained from Abcam (Abcam, Cambridge, UK) and used as a loading control. Images were obtained with a GelDoc system (BioRad, München, Germany) and quantified with ImageLab (BioRad, München, Germany). The assay was repeated in three independent experiments and the results were statistically evaluated. Confocal microscopy For imaging, cells were grown on cover slips seeded in 6-well plates. Sterilized cover slips were placed in each well, followed by addition of cell suspensions (1 × 10^5 cells/mL, 2 mL per well). After 24 h, cells were treated with either CuSO4 or CuHis to a final concentration of 25 μM. Cells were incubated with the copper complexes for 1, 3, 24 or 72 h, at which times the media was removed and cells were fixed by adding 1 mL of fixing solution (3.7% formaldehyde, 4% glucose in PBS). The fixing solution was then removed and 1 mL of PBS was added to the cells in each well. To each sample, 4 μL of anti-PRP antibody (Abcam EP1802Y, rabbit monoclonal antibody against prion protein) were then added to the PBS and the cells were kept at 4°C for 12 h. After 12 h, the PBS was removed and cells were rinsed 3 times with 1 mL PBS. Cells were then treated with 1 mL PBS and 4 μL of a secondary goat anti-rabbit IgG-FITC (Invitrogen) and covered with foil. Cells were left at room temperature for 3 h, after which DAPI (3 μL, Invitrogen) was added to each well; the plates were then covered with foil again and left at room temperature for 20 min. Finally, the PBS was aspirated and cells were rinsed 3 times with 1 mL PBS. Cover slips were then removed from the wells. To prepare slides, PBS (20 μL) was added to the surface of each microscope slide and the removed coverslips were inverted and placed on the PBS. Coverslips were then sealed using nail polish and dried in the dark for 10 min.
Slides were imaged using a confocal laser scanning microscope (LSM 700, Zeiss). Z-stack plots (1 micron thick layers) were taken for 6 unique cell clusters from each sample. Stacks were compressed into two-dimensional images using ImageJ software to create a single image showing the entire cell surface. This image was then analyzed using voxel analysis to determine the number of fluorescently labeled pixels and, thus, the level of prion protein at the cell surface. Changes in surface expression and localization were noted and reported. ICP-MS Cell samples were digested in 100 μL nitric acid and stored at −20°C until analysis. A 50 μL volume of the samples was diluted 1:10000 with 3.5% nitric acid for analysing the cellular uptake of NPs. Lanthanum (10 ppb) was added as an internal standard. An external calibration series from 0.5 ppb to 50 ppb was prepared using a silver standard solution. A sample volume of 3 mL was needed for analysis. For this purpose, an ICAP-Q (Thermo Fisher Scientific GmbH, Dreieich, Germany) was connected to a concentric nebulizer with a cyclone spray chamber. The working parameters are given in Additional file 1. Results All nanoparticles were characterized by TEM, DLS, ZP and EDX (Figure 2). Particle sizes computed from statistical analysis of TEM images were around 54 nm (Figure 2-A and Additional file 1). EDX confirms the absence of impurities (Figure 2-B). DLS of particles in water after 24 h shows some degree of agglomeration, while particles in cell culture media were more stable, probably due to the formation of a protein corona (Figure 2-D and Additional file 1). This is in good agreement with findings from Kittler et al. [20] and excludes, in this case, agglomeration as a factor affecting toxicity in cell culture medium [21],[22]. ZP shows a change in surface charge for functionalized nanoparticles compared with citrate silver, in good agreement with expected values for glycosylated nanoparticles [10] (Figure 2-C).
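The ICP-MS quantification described in the Methods above amounts to fitting the external calibration series (0.5-50 ppb silver standards) and back-calculating the sample concentration through the 1:10000 dilution. A sketch with made-up count rates; the lanthanum internal-standard drift correction is omitted for brevity:

```python
import numpy as np

# Calibration standards from the text; the count rates are hypothetical.
std_ppb = np.array([0.5, 1.0, 5.0, 10.0, 25.0, 50.0])
std_counts = np.array([410.0, 820.0, 4100.0, 8150.0, 20400.0, 40900.0])

# Linear calibration: counts = slope * concentration + intercept
slope, intercept = np.polyfit(std_ppb, std_counts, 1)

def counts_to_ppb(counts, dilution_factor=10000):
    """Back-calculate the Ag concentration of the original digest,
    correcting for the 1:10000 dilution stated in the Methods."""
    return (counts - intercept) / slope * dilution_factor

sample_counts = 2900.0
ag_ppb = counts_to_ppb(sample_counts)
print(round(ag_ppb / 1e3, 2))   # concentration of the undiluted digest, in ppm
```

The fitted slope is the instrument sensitivity in counts per ppb; in practice each sample signal would first be ratioed against the lanthanum internal standard before applying the calibration.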
The amount of sugar on the nanoparticles was determined using the anthrone/H2SO4 method, as in our previous contribution [23] (see Additional file 1). The values found were between 3.2 and 3.9 sugar molecules/nm^2. Figure 2 Physico-chemical characterization of nanoparticles. (A) TEM. Inset corresponds to a high-resolution image showing lattice d-spacings; (B) energy dispersive X-ray analysis showing Ag but no impurities; (C) zeta potential of the different prepared samples; (D) DLS of samples in cell culture medium showing well-dispersed nanoparticles. Full size image The toxicity of the functionalized silver nanoparticles was tested against two cell lines, a neuronal-like cell line (Neuro-2A) and a hepatocyte cell line (HepG2), using an MTT assay (Figure 3-A and B). Here, a clear influence of the coating on the toxicity of the particles was observed. While particles functionalized with EG-3, glucose or citrate showed similar toxicity, galactose- and mannose-functionalized nanoparticles were significantly less toxic towards both cell lines. Figure 3 Toxicity results in vitro. (A) EC50 values from the MTT assay using silver nanoparticles with different coatings and HepG2 cells; (B) analogous with Neuro-2A cells; (C) detection of oxidative stress from Ag-NPs (concentration 5 pM) via formation of protein carbonyls incubated with HepG2 cells. (D) Protein carbonyls were detected at different concentrations (2.5, 5, 10 pM) as DNP hydrazone adducts via immunoblots with a DNP antibody. Full size image In order to elucidate the mechanism leading to the observed coating-dependent toxicities, we analyzed the formation of protein carbonyls as an indirect read-out for the oxidative-stress-inducing activity of the nanoparticles. Proteins can become carbonylated either as a direct or an indirect consequence of reactive oxygen species (ROS) formation. Experiments were performed at particle concentrations of 2.5 pM and 5 pM (see Additional file 1).
At both concentrations, a strong correlation between carbonyl formation and EC50 values can be observed (Figure 3-C), suggesting that the toxicity is mainly caused by oxidative stress related to ROS formation. This may be related to ion release, as it has already been shown that silver ions can trigger oxidative stress. Many authors have argued that the toxicity of nanosilver is in fact caused only by the ionic form [24]. In our case, this could mean that the different toxicities are related to different release rates of ionic silver from the differently coated NPs. Dissolution of silver nanoparticles can vary from less than 10% up to nearly 100%, depending on the coating [25]. Since the production of protein carbonyls reflects intracellular oxidative stress, the release of silver ions into cell culture media was also measured by ICP-MS in order to evaluate potential extracellular oxidative stress (see Additional file 1). Nevertheless, no free silver ions were detected in the supernatant of the cell culture media, probably due to precipitation of ionic silver in the form of AgCl and protein complexes. Therefore, under the studied conditions, intracellular release of silver ions may be solely responsible for the cellular damage. On the other hand, toxicity may also be influenced by different cellular uptake rates. Here, a Trojan-horse mechanism has often been discussed in the literature as responsible for the toxicity of silver nanoparticles. According to this, nanoparticles represent carrier vehicles which penetrate into cells and then release toxic silver ions by dissolution [26]. To gain further insight into the main factors leading to toxicity, we analyzed the cellular uptake of the different functionalized silver NPs by ICP-MS and by confocal microscopy.
ICP-MS and confocal microscopy showed for both cell lines that the less toxic galactose-functionalized nanoparticles are taken up even more efficiently than mannose- or glucose-functionalized particles (Figure 4). Moreover, although mannose- and glucose-functionalized nanoparticles present similar cellular uptakes, the observed toxicities were considerably different. Thus, particles which are largely internalized into cells do not necessarily present the highest toxicity. In fact, in this study, glucose-capped nanoparticles presented the highest toxicity as well as the highest protein carbonylation, despite their moderate cellular uptake compared with the other nanoparticles. Interestingly, Vaseem et al. showed that glucose reduces the toxicity of nickel nanoparticles towards A549 cells [27]. Thus, intracellular oxidative stress depending on particle coating was the deciding factor leading to toxicity. Figure 4 Cellular uptake of silver nanoparticles with different coatings in HepG2 and Neuro-2A cells. Full size image Confocal microscopy images show that in Neuro-2A cells the galactose-coated particles are mainly clustered inside the cytoplasm. Therefore, they are most likely contained inside vesicular structures, such as endosomes or lysosomes. Nevertheless, they apparently do not enter the nucleus (Figure 5). A higher density of particle clusters was observed on one side of each cell. Interestingly, for mannose- and glucose-functionalized particles, clusters seem to be spread more evenly through the cell, and intracellular clusters tend to be smaller than for particles with other functionalities. Figure 5 Confocal microscopy images of cells incubated with the prepared nanoparticles. Cell nuclei are stained in red. Green dots represent fluorescently labelled nanoparticles. Full size image Uptake of nanoparticles depending on surface charge has been discussed by other authors. For instance, Badawy et al.
showed that negatively charged silver nanoparticles did not overcome the electrostatic repulsion barrier towards similarly charged Bacillus species [28]. As a result, highly negatively charged citrate silver nanoparticles induced less toxicity than H2-Ag nanoparticles. In our case, we also observe a similar correlation between uptake and surface charge for mannose-, glucose- or EG-3-coated Ag-NPs, which are more negatively charged and taken up less efficiently. This is consistent with the fact that the cell lines used here are also negatively charged due to various carbohydrate moieties. However, in our case, the different uptake rates are not related to the different toxicities, as the galactose-coated NPs, which are taken up most efficiently, show the least toxicity. This highly increased uptake of galactose-coated NPs may be due to a specific galactose receptor on the cell surface [23],[29]. In fact, internalization of the prepared nanoparticles may depend on various factors: the surface charge of the nanoparticles, the presence of specific receptors on the cell surface, and the composition of the protein corona [30]-[32]. We measured the zeta potential of the nanoparticles after incubation, which showed similar overall negative charges for all particles, confirming the formation of the protein corona. Although much effort has been invested in recent years to elucidate the detailed composition of the protein corona, this remains a challenging question requiring exhaustive analysis and techniques. Nevertheless, based on previous studies, the composition of the protein corona on the surface of the functionalized particles presented here is expected to differ depending on the particle coating [33]-[35]. Conclusions Functionalisation of silver nanoparticles with monosaccharides modulates their cellular uptake and toxicity.
Galactose- and mannose-coated nanoparticles were considerably less toxic to both neuronal-like Neuro-2A cells and hepatocytes, compared to particles functionalized with glucose, ethylene glycol or citrate. The observed toxicity was strongly correlated with intracellular oxidative stress, measured as protein carbonylation, but not with cellular uptake. Summarising, a clear correlation between particle coating, oxidative stress and toxicity has been shown. These results open new perspectives to modulate the bioactivity of Ag-NPs by using carbohydrates. Additional file | The use of colloidal silver to treat illnesses has become more popular in recent years, but its ingestion, prohibited in countries like the US, can be harmful to health. Scientists from the Max Planck Institute in Germany have now confirmed that silver nanoparticles are significantly toxic when they penetrate cells, although the number of toxic radicals they generate can vary by coating them with carbohydrates. Silver salts have been used externally for centuries for their antiseptic properties in the treatment of pains and as a surface disinfectant for materials. There are currently people who use silver nanoparticles to make homemade potions to combat infections and illnesses such as cancer and AIDS, although in some cases the only thing they achieve is argyria, or blue-tinged skin. Health authorities warn that there is no scientific evidence that supports the therapeutic efficiency of colloidal silver and, in fact, in some countries like the US its ingestion is prohibited. On the contrary, there are numerous studies which demonstrate the toxicity of silver nanoparticles on cells. One of these studies has just been published in the Journal of Nanobiotechnology by an international team of researchers coordinated from the Max Planck Institute of Colloids and Interfaces (Germany).
"We have observed that it is only when silver nanoparticles enter inside the cells that they produce serious harm, and that their toxicity is basically due to the oxidative stress they create," explains the Spanish chemist Guillermo Orts-Gil, project co-ordinator, to SINC. To carry out the study, the team has analysed how different carbohydrates act on the surface of silver nanoparticles (Ag-NP) of around 50 nanometres, which have been introduced into cultures of liver cells and tumour cells from the nervous system of mice. The results reveal that, for example, the toxic effects of the Ag-NP are much greater if they are covered with glucose instead of galactose or mannose. 'Trojan horse' mechanism Although not all the details on the complex toxicological mechanisms are known, it is known that the nanoparticles use a 'Trojan horse' mechanism to trick the membrane's defences and get inside the cell. "The new data shows how the different carbohydrate coatings regulate the way in which they do this, and this is hugely interesting for controlling their toxicity and designing future trials," points out Orts-Gil. The researcher highlights that there is a "clear correlation between the coating of the nanoparticles, the oxidative stress and toxicity, and thus, these results open up new perspectives on regulating the bioactivity of the Ag-NP through the use of carbohydrates". Silver nanoparticles are not only used to make homemade remedies; they are also increasingly used in drugs such as vaccines, as well as products such as clothes and cleaning cloths. | 10.1186/s12951-014-0059-z |
Biology | Smartphone data can help create global vegetation maps | Sophie Wolf, Citizen science plant observations encode global trait patterns, Nature Ecology & Evolution (2022). DOI: 10.1038/s41559-022-01904-x. www.nature.com/articles/s41559-022-01904-x Journal information: Nature Ecology & Evolution | https://dx.doi.org/10.1038/s41559-022-01904-x | https://phys.org/news/2022-10-smartphone-global-vegetation.html | Abstract Global maps of plant functional traits are essential for studying the dynamics of the terrestrial biosphere, yet the spatial distribution of trait measurements remains sparse. With the increasing popularity of species identification apps, citizen scientists contribute to growing vegetation data collections. The question emerges whether such opportunistic citizen science data can help map plant functional traits globally. Here we show that we can map global trait patterns by complementing vascular plant observations from the global citizen science project iNaturalist with measurements from the plant trait database TRY. We evaluate these maps using sPlotOpen, a global collection of vegetation plot data. Our results show high correlations between the iNaturalist- and sPlotOpen-based maps of up to 0.69 ( r ) and higher correlations than to previously published trait maps. As citizen science data collections continue to grow, we can expect them to play a significant role in further improving maps of plant functional traits. Main Global maps of plant functional traits provide an indispensable foundation for understanding interactions of the environment and the terrestrial biosphere. Information on plant traits, such as tissue properties or morphological characteristics, is urgently needed as input for dynamic global vegetation models 1 , 2 or to understand geographic patterns of plant community structure and functional diversity 3 , 4 . Yet, the spatial distribution of plant trait observations remains sparse 5 , 6 . 
Measuring traits directly often involves destructive sampling and expensive laboratory protocols 7 . Plant trait databases, such as the TRY plant trait database 8 , 9 , curate these single measurements from numerous independent studies and projects. Although TRY provides an impressive collection of trait measurements, its coverage cannot display global trait patterns 9 , 10 . Alternative approaches have upscaled trait expressions from trait databases for a few traits 5 , 11 , 12 , 13 , 14 . However, the resulting maps still feature huge uncertainties as they extrapolate into unknown areas 12 , 15 —uncertainties reflected in often-large discrepancies among the different trait maps 16 . The potential of satellite remote sensing to generate large-scale maps of trait distributions has also been explored 6 , 17 . Yet, remote sensing retrieves only a few traits 18 , 19 , 20 and primarily informs on upper canopies, while lower layers may not be well represented 21 . One potential source to complement existing trait data could be crowd-sourced observations classified via automated species classification 22 , which is increasingly accessible to a wide audience via smartphone apps (for a recent comparison of apps see Jones 23 ). Such technology now enables collecting plant observations in very large sample sizes 24 , 25 , 26 , 27 . Recent evaluations of crowd-sourced data demonstrate the potential of using crowd-sourced plant occurrence data to study species distribution and macroecological floristic gradients 28 , 29 . iNaturalist, one of the largest and increasingly popular citizen science projects, has motivated users to collect and verify a total of over 14 million vascular plant ‘research-grade’ observations 30 (Extended Data Fig. 1 ).
Considering the growth rate and the fact that iNaturalist is only one among many citizen science projects that add to the global boost in vegetation data availability, we can expect that such data will play a crucial role in our understanding of global plant functions in ecosystems 31 . Yet, the actual value of using citizen science plant occurrence data for mapping global trait patterns remains largely unknown. Many citizen science projects, including iNaturalist, have no sampling design, so the resulting datasets cannot be considered to be representative of space, time, vegetation types or taxa. Biases may depend on site popularity or accessibility, cell phone reception, users’ interest in specific plants (for example, noticeable flowers) or temporal patterns in user activity, such as during vacations, weekends or linked to plant phenology 28 , 32 , 33 , 34 , 35 . However, if we were able—despite all the potential biases—to make use of the functional information encoded in these data and extract reliable trait information from the iNaturalist citizen science observations, we would have a quickly growing data source for biodiversity monitoring and long-term observation at our disposal. Here, we explore whether opportunistic iNaturalist research-grade occurrence data of vascular plants 36 could help map trait patterns on a global scale; we assess whether these data reflect global functional community composition when complemented with trait information from the global plant trait database TRY 9 and aggregated spatially into global trait grids. As a reference for evaluating the iNaturalist observations, we use community-weighted mean (cwm) trait values from the database sPlotOpen 37 , 38 . sPlotOpen is curated from globally distributed plots with vegetation abundance measurements, balanced over global climate and soil conditions. It provides a cwm for each vegetation plot for 18 traits. These cwm are also derived from TRY measurements.
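A community-weighted mean of the kind sPlotOpen provides is the abundance-weighted average of species-level trait values within a plot. A minimal sketch with hypothetical cover fractions and leaf-area values (the species data are made up; the formula is the standard cwm definition):

```python
import numpy as np

def community_weighted_mean(abundances, trait_values):
    """cwm = sum(abundance_i * trait_i) / sum(abundance_i) over the
    species in one plot, skipping species without a trait value."""
    a = np.asarray(abundances, dtype=float)
    t = np.asarray(trait_values, dtype=float)
    mask = ~np.isnan(t)               # species unmatched in TRY are dropped
    return np.sum(a[mask] * t[mask]) / np.sum(a[mask])

# hypothetical plot: % cover and leaf area (mm^2) for four species
cover = [60.0, 25.0, 10.0, 5.0]
leaf_area = [1200.0, 300.0, np.nan, 50.0]   # one species has no TRY match
print(round(community_weighted_mean(cover, leaf_area), 1))  # ~886.1
```

Note that dropping unmatched species implicitly renormalizes the weights, which is one reason incomplete trait coverage in TRY can bias plot-level cwm values.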
We thus compare the functional trait information derived from spatially and taxonomically biased occurrence samples provided by iNaturalist citizen scientists to professionally sampled environmentally balanced plot-based abundance weighted data. Results iNaturalist and sPlotOpen observation density and cover We matched the iNaturalist species observations of vascular plants ( n = 14,019,405) with trait measurements from TRY, using the species name as the link. We were able to match 85% of the iNaturalist vascular plant observations and 32% of the species. A total of 55% of the species listed in the TRY database for the traits of interest were matched. This provided us with nearly 12 million observations ( n = 11,895,453) of 28,500 vascular plant species, for which we have information on at least one trait (see Table 1 for sample sizes of each trait). Table 1 Plant traits Full size table The density and distribution of iNaturalist observations after linking to TRY in comparison to the sPlotOpen plot data ( n = 95,104) show an almost global coverage (Fig. 1 ); for density before linking to TRY see Extended Data Fig. 2 . There is a strong imbalance in the density of observations, with some regions of the world, especially North America and Europe, being over-represented with around 100,000 observations per cell, while others are represented with only one or two observations per cell. The sPlotOpen data, while showing lower coverage, are also, by design, less biased in density towards these regions 38 . Fig. 1: Density and distribution of iNaturalist and sPlotOpen datasets. a , Density of iNaturalist vascular plant observations after linking to TRY database ( n = 11,895,453); 2° resolution (221 km grid size at the equator). Colour corresponds to the number of observations per cell. For density before linking to TRY see Extended Data Fig. 2. b , Density of the iNaturalist in mean annual temperature (°C) and precipitation (mm) climate space. 
Whittaker biomes are numbered as follows: (1) tropical seasonal forest/savanna, (2) (sub)tropical desert, (3) temperate rain forest, (4) tropical rain forest, (5) woodland/shrubland, (6) tundra, (7) boreal forest, (8) temperate grassland/desert and (9) temperate seasonal forest. c , d , Global density of sPlotOpen plots ( c ) and density of sPlotOpen plots in mean annual temperature (°C) and precipitation (mm) climate space ( d ) ( n = 95,104). Density and distribution in climate space (temperature/precipitation) paint a similar picture (Fig. 1 ): while iNaturalist observations cover a larger portion of this space, they are also more biased in the number of observations, specifically toward the temperate regions with moderate rainfall, that is, the temperate deciduous forest and temperate shrubland Whittaker biomes 39 . The density of observations varies greatly across terrestrial biomes 40 . Among the 14 biomes, ‘temperate forests’ and ‘Mediterranean forests, woodlands and shrubs’ are best represented by iNaturalist observations, with a density of about 0.4 observations per km 2 . Temperate grasslands have a medium density of 0.13 per km 2 , while all other biomes are represented by much fewer observations, with densities ranging from 0.02 in tropical and boreal forests and deserts down to only 0.004 observations per km 2 in the tundra (Supplementary Table 1 ). Global trait maps From the merged iNaturalist–TRY data we then created global trait distribution maps based on these citizen science observations. We generated a global spatial grid and calculated the mean trait value of each grid cell for a set of 18 traits (Table 1 ) linked to iNaturalist observations. An example of such a map is shown in Fig. 2 for the trait leaf area (see Extended Data Fig. 3 for all 18 traits). Fig. 2: Example of a global trait map. a , b , ln-transformed leaf area (mm 2 ), using iNaturalist observations ( a ) and sPlotOpen ( b ), respectively.
For all iNaturalist trait maps see Extended Data Fig. 3 . Spatial correlation of iNaturalist and sPlotOpen We then investigated the feasibility of using iNaturalist observations, annotated with TRY trait measurements, to produce global trait maps by comparing them to sPlotOpen vegetation plot data. We generated trait maps using the sPlotOpen data in the same manner as described above for the iNaturalist observations. We correlated (correlation coefficient r weighted by area of grid cell) the two trait maps at different resolutions, ranging from 0.06° to 4° (7–440 km at the equator) (Fig. 3a ; Methods ). Fig. 3: Pixel-by-pixel correlation of iNaturalist and sPlotOpen global trait maps. The correlation is quantified using a weighted correlation coefficient ( r weighted by grid cell area). a , Relationship of r and spatial resolution (0.06°–4° resolution or about 7–440 km grid size at the equator). The lines connecting the points solely enhance readability. b , Scatter plots of sPlotOpen map pixel values plotted against the respective iNaturalist map pixel values (at a 2° spatial resolution); displayed here for traits with the highest iNaturalist–sPlotOpen agreement (that is, weighted r > 0.5 and slope >0.5 and <2); see Extended Data Fig. 4 for correlation plots of all traits. All trait values are ln-transformed, 1:1 line in dotted grey and SMA regression slope in red. For clarity, the secondary y axis on the right shows the raw trait values marked on a log scale, which correspond to the ln-transformed values on the left. Plot extents are the 0.01 and 0.99 quantiles of the data. The 95% CIs for the slope are: leaf area (0.98,1.08), SLA (1.34,1.49), leaf fresh mass (1.06,1.19), leaf N per area (1.32,1.46), plant height (0.72,0.8) and stem conduit density (1.37,1.51). At a 2° resolution we observed the highest correlations for several traits, with a median r of 0.46 over all traits.
The following traits exhibited the strongest correlations ( r ): stem conduit density (0.68), stem specific density (SSD) (0.63), leaf fresh mass (0.59), leaf area (0.59), leaf nitrogen (N) per area (0.59), plant height (0.58) and specific leaf area (SLA) (0.56) (Fig. 3b ; see Extended Data Fig. 4 for correlation plots for all traits). At the even coarser 4° resolution, some traits exhibited a higher correlation, such as stem conduit density ( r = 0.69) or leaf fresh mass ( r = 0.66), while others, such as plant height and leaf N per area, peaked at the 2° resolution. We calculated a standard major axis (SMA) regression for each trait (displayed in red lines in Fig. 3b ), which showed slopes significantly different from the 1:1 line for some traits ( P ≪ 0.01 for H 0 : slope = 1). This indicates biases in the structure of the datasets: for larger values, iNaturalist observations tend to estimate higher SLA, leaf fresh mass, leaf N per area and stem conduit density in comparison to sPlotOpen. For SSD, this effect was particularly high (slope = 2.57, 95% confidence interval (CI) (2.45,2.7)). For larger values, iNaturalist observations estimate lower plant height than sPlotOpen. For leaf area, the SMA regression revealed no bias ( P = 0.30 for H 0 : slope = 1). In a second approach, instead of aggregating sPlotOpen plots in a grid, we compared every plot cwm to the average trait measurement of all iNaturalist observations within a certain radius around each sPlotOpen plot (Extended Data Fig. 5 ). In this plot-wise comparison to iNaturalist, the cwm of the sPlotOpen vegetation plots were not regionally aggregated (compared to the grid-based analysis above). We observed similar trends but a larger scattering in the correlation plots, which is reflected in lower r values (Extended Data Fig. 6 ). 
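The SMA slope discussed above has a simple closed form: its magnitude is the ratio of the standard deviations, sd(y)/sd(x), signed by the correlation. The paper used the R package 'smatr'; the following is a minimal stand-alone Python sketch of that calculation, not the authors' code:

```python
from math import sqrt, copysign

def sma_fit(x, y):
    """Standardized major axis (model II) regression.

    Unlike ordinary least squares, SMA treats x and y symmetrically:
    |slope| = sd(y) / sd(x), with the sign of the covariance.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = copysign(sqrt(syy / sxx), sxy)
    intercept = my - slope * mx
    return slope, intercept

# On a perfect linear relation the SMA slope recovers it exactly:
slope, intercept = sma_fit([0, 1, 2, 3], [1, 3, 5, 7])  # slope 2, intercept 1
```

A slope significantly different from 1 on ln-transformed trait values, as tested in the paper with H 0 : slope = 1, indicates that one dataset systematically stretches or compresses the trait range relative to the other.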
Differences within biomes To assess how well iNaturalist observations reflect sPlotOpen plant communities in different terrestrial biomes 40 , the previously mentioned grids were generated for each biome separately and normalized. Figure 4 shows the pixel differences (iNaturalist − sPlotOpen) per biome and top trait (Supplementary Tables 1 and 2 give detailed statistics). In general, iNaturalist better resembles sPlotOpen in predominantly non-forest biomes. In tropical, temperate and boreal forests, SLA is overestimated by iNaturalist observations, with a median of 30%, 26% and 38%, respectively, while leaf N per area is underestimated by 15%, 6% and 20%. In all tropical and subtropical forest biomes and temperate coniferous forest biomes, we additionally observe an underestimation of plant height (ranging from a 55% underestimation in tropical and subtropical coniferous forests to 10% in temperate coniferous forests; see Extended Data Fig. 7). Tropical, subtropical, temperate and montane grasslands, Mediterranean vegetation and deserts, on the whole, exhibit smaller deviations. In most biomes, leaf area and leaf fresh mass are overestimated by 11% and 15% on average. Fig. 4: Difference between sPlotOpen cwm grid cell averages and iNaturalist observation averages per grid cell for each terrestrial biome. All traits were scaled by range (−1,1) using the 0.05 and 0.95 quantiles. In this figure, all types of tropical and temperate forests were aggregated (for a detailed boxplot over all 14 WWF biomes see Extended Data Fig. 7 ). The bounds of the box are defined by the first and third quartiles, the centre lines are the medians, the whiskers mark the 1.5 interquartile range and outliers are not shown. The red step-graph shows the sample size n = the number of iNaturalist map pixels that overlap the respective sPlotOpen map per biome and trait. The blue step-graph marks the mean density of iNaturalist vascular plant observations per km 2 in each biome. 
For exact sample sizes per biome and trait, see Supplementary Table 1 . Comparison to previously published trait products We also correlated the sPlotOpen maps to published trait maps, which are derived from extrapolation methods 5 , 6 , 11 , 12 , 13 at 0.5° and 2° resolution. For all available published trait maps—leaf N per mass, leaf N per area and SLA—the weighted correlation ( r ) was higher for the iNaturalist maps (0.37, 0.59 and 0.56) than for previously published maps based on extrapolation (highest r = 0.25, 0.50 and 0.51, respectively). Some of these trait maps showed no correlation with sPlotOpen (Table 2 ). Table 2 Correlation of published maps with sPlotOpen Discussion A correlation of up to 0.69 ( r ) between these two fundamentally different datasets is astounding and unexpectedly high. We did not expect the iNaturalist maps to correlate more strongly with sPlotOpen than previously published trait maps (Table 2 ), which are all based on extrapolations of TRY data and climate variables. The higher correlation between iNaturalist and sPlotOpen could stem from similar biases in the two datasets. However, we do not consider this explanation to be the most likely: iNaturalist occurrence data are noisy and heterogeneous, shared by users who sample the species they encounter and find interesting; sPlotOpen is a collection of vegetation plots that were each measured and recorded within a framework of very specific research questions and comes with its own array of biases. And yet, occurrence samples provided by iNaturalist users resemble professionally sampled plot-based abundance data at the level of functional traits. We found higher correlations between sPlotOpen- and iNaturalist-based trait maps at larger grid cell sizes. We assume that larger grid cell sizes generally compensate for the sparsity present in both datasets, which eventually increases the correspondence across macroecological gradients.
When data are spatially aggregated, the finer spatial details are smoothed out, preserving the more broad climatological gradients (an effect described, for instance, in Joswig et al. 41 ). In this way, abundance differences of species—which are captured by the cwm trait values in sPlotOpen data but not by the iNaturalist occurrences—become less important, the broader the spatial scale on which they are aggregated. For several traits (for example, SSD, seed mass, leaf N per area and stem conduit density) the global trait maps presented here are the first published and evaluated global trait maps, to our knowledge. These traits have been previously identified as relevant in the study of functional diversity but no global maps of them existed to date 3 , 42 . Other authors have published global trait distribution maps for a subset of traits we present here but they commonly follow no general or independent approach to evaluate the maps 5 , 11 , 12 , 13 . This is critical since most of them do extrapolate into unknown space 43 , 44 . Here, we evaluate the global trait distribution maps using the sPlotOpen data. This approach may serve as an indicator to judge the reliability of the maps presented here for further use (Fig. 3a ). All data and code used for this study are freely available so that functional gradient maps can be updated as the number of iNaturalist observations and trait data in TRY continue to grow. The mismatches we do see between sPlotOpen and iNaturalist may result from biases and limits within either dataset. Data gaps found in the iNaturalist observations may be ascribed to low population density or accessibility. We can see a greater agreement among iNaturalist and sPlotOpen in biomes with a higher iNaturalist observation density (Fig. 4 ), indicating that more observations indeed lead to a more complete representation of a given plant community. 
This is supported by the observation that grid cell differences between iNaturalist and sPlotOpen are larger at low observation densities (Extended Data Fig. 8 ). Given the exponential growth of iNaturalist observations in the past (Extended Data 1 ), we can expect this dataset to close gaps prospectively and provide enough observations to make stratified sampling possible. Naturally, biases in sPlotOpen may also contribute to differences between the two datasets. The sPlotOpen is a conglomeration of data collected in various studies with very specific research questions across various ecosystems 37 , 38 . Additionally, the different vegetation plots are very heterogeneous in terms of plot size, sampling density or sampling protocol. As a result, sPlotOpen is unable to reflect a balanced or stratified sample representative for a given region. Nonetheless, we use sPlotOpen plant community data as a baseline reference, as it provides the most representative and extensive information on the global distribution of plant communities. For a limited set of traits, other global maps have been published 5 , 6 , 11 , 12 , 13 , 14 ; however, since these products usually do not show a strong agreement, they cannot serve as a trustworthy reference (compare Schiller et al. 16 ). Future studies, especially those using extrapolation (or upscaling) methods, may also compare their maps against sPlotOpen data. We see in Fig. 3a that for some traits, iNaturalist and sPlotOpen show no or only very low correlation. One explanation for this disagreement could be the true lack of macroecological patterns: if a trait exhibits no macroecological pattern, we cannot expect there to be a correlation (a monotonic relationship) between two datasets. The traits may vary greatly on a very small scale, both spatially and temporally, such as leaf N per mass or leaf C. Other traits, such as seed length might simply be less informative than seed mass, for which we see a stronger correlation (0.23). 
Wood vessel length is a trait only relevant to woody species, which eliminates many observations. When comparing differences between iNaturalist and sPlotOpen among biomes, we see that iNaturalist observations better represent sPlotOpen vegetation plots in biomes not dominated by trees (that is, grasslands, shrublands and deserts; Fig. 4 ). Herbaceous species and communities tend to have larger SLA 3 , 42 . The overestimation of SLA in forest biomes and the underestimation of plant height in tropical/subtropical and temperate coniferous forest biomes by iNaturalist observations point to iNaturalist users’ partiality towards small, herbaceous plants. Users might be inclined to capture not the characteristic dominant species or functional plant types of a plant community but rather the noticeable, yet rare, individuals. This bias could lead to an undersampling of trees in forest biomes 16 . The comparison of growth form (tree/shrub/herb) cover between iNaturalist and sPlotOpen supports this observation: tree cover seems to be underestimated and herb cover overestimated by iNaturalist observations in temperate forests (Extended Data Fig. 9 ). This effect might even be potentiated by the technical difficulty of photographing large shrubs and trees, making it hard for other users to confirm and agree on the species identification from a photo alone, creating a bottleneck as to which observations make it to the ‘research-grade’ level. Also, trees are weighted higher in the sPlotOpen cwm due to a larger cover per individual plant. The iNaturalist approach does not adjust the weight on the basis of cover but on occurrence and therefore large plants, that is trees, will be weighted less. This, in turn, may result in the observed underestimation of plant height in tropical forest biomes. For traits with a skewed distribution, such as height or SLA in forests, this effect may be amplified. 
The overestimation of leaf area by 11% on average in all biomes points to an undersampling of grasses (Fig. 4 ). This bias might again be the result of the difficulty identifying grass species in general and the added difficulty of identifying grass species from photos to confirm an observation. More knowledge of citizen scientists’ sampling behaviour could allow for correcting for these biases 27 , 29 . Both iNaturalist and sPlotOpen are annotated with trait information taken from the TRY database. For this study, average trait measurements from TRY data were simply linked via species name without considering intraspecies trait plasticity. Future studies may include intraspecific trait variation, for example, by considering the relationship with climate 6 , 45 , 46 or using gap-filled trait data or by estimating the trait variability from iNaturalist photos using a machine-learning approach 16 . An additional analysis (Extended Data Fig. 10 ) shows that the trait maps produced by Schiller et al. 16 to check the plausibility of estimating trait values from iNaturalist photos are indeed very similar to the iNaturalist maps based on species classification presented here. These findings provide a promising basis for future work, which may integrate multiple facets of citizen science contribution to global trait maps—from species identifications to photographic records. Also, this approach may be extended to other data sources, such as incorporating all 300 million vascular plant occurrences from the Global Biodiversity Information Facility (GBIF). Using (and understanding) the complete GBIF occurrence data, however, is less trivial: it contains more individual biases, mixed datasets from national or institutional inventories, citizen science and research projects. The results presented here open up a promising avenue for the use of citizen science data to help fill the spatial gaps in plant trait data. 
These data can provide invaluable insight, especially in regions where trait distributions have so far only been estimated by extrapolation. As our results showed, the iNaturalist trait maps correspond better to sPlotOpen than do previously published maps (Table 2 ) at the spatial scales considered here (0.5° and 2° resolution). So far, dynamic global vegetation models incorporate only a very crude representation of functional properties of plants, for example encoded in coarse functional types. Spatially explicit information on plausible ranges of traits, such as leaf N, leaf C, leaf area, SLA, seed mass and SSD, are fundamental to the accurate representation of photosynthesis 47 . Inaccurate data on these traits propagate biases in representing the carbon and coupled water and nutrient cycles. Today, individual-based dynamic vegetation models of forests and grasslands are increasingly capable of modelling vegetation dynamics at the biome scale. Soon they will be of global relevance 5 , 47 , 48 and require reliable global trait information. We call for considering the advent of new citizen science efforts to join forces towards generating global trait maps for scientific purposes. Already, iNaturalist observations cover a large spectrum in both geographical and climate space (Fig. 1 ) and can extend our view into unknown environmental areas. Many regions, both spatially and temporally, are still not well represented. BioBlitzes and other observation challenges within the iNaturalist and other citizen science communities have shown the readiness of volunteers to contribute to specific research questions and projects 49 . Effectively designed initiatives can be used to target gaps in vegetation data and stimulate higher environmental awareness among contributors. 
Conclusion Our results suggest that citizen science plant species occurrence data, despite being constrained by several biases and lacking an integrated sampling strategy, can be used to derive plant trait patterns consistent with those generated using plant community data from scientific projects—and even with better approximation than previously published maps that used extrapolation. We present the first maps, to our knowledge, for several traits and provide a way to evaluate such maps using the sPlotOpen data. We found surprisingly high correspondence for multiple traits and were able to identify several systematic biases. Several geographical regions are not adequately represented in citizen science data but the exponential growth of records suggests that spatial gaps may be filled within the next few years. Plant species occurrences documented by citizen scientists in concert with plant trait expressions curated from professional scientific studies, thus, can provide a promising data stream to uncover global patterns of plant form and function. Integrating this approach with other technologies, such as satellite-based remote sensing, machine learning for upscaling and combining above and below the canopy perspectives, might greatly improve our understanding of Earth’s plant functional diversity. Together, our findings may open a new argument to foster and value citizen science approaches, such as iNaturalist, as they offer growing data treasures that can be integrated seamlessly into process-based terrestrial biosphere models. Methods iNaturalist research-grade observations The iNaturalist research-grade data are openly accessible at the GBIF. The vascular plant (Tracheophyta) data were downloaded from , version ’Darwin core archive’ on 5 January 2022 36 . 
An iNaturalist observation is granted research-grade when it meets the following criteria: it must include a date, a spatial georeference and a picture or sound; and the observed individual may not be captive or cultivated. Once two-thirds of users agree on a species identification (two-thirds consensus), the observation obtains research-grade and is uploaded to . iNaturalist contributors have already collected more than 36 million research-grade observations—and more than 14 million such observations of vascular plants alone. Since 2017, iNaturalist has incorporated help identifying an observation using machine learning on both the website and smartphone app. This system of combining user-validation and machine-learning-based species suggestions makes these observations especially interesting as a potential data source in a scientific context. For this study, we used the ‘date identified’ to calculate the number of iNaturalist observations added every year. We used the species name to retrieve existing knowledge on functional traits from the TRY database—information that iNaturalist data do not include readily—and then used the coordinates to map trait distributions globally. Linking iNaturalist to TRY The TRY database contains trait measurements from individual plants and, typically, multiple individual measurements per trait and species. TRY is a heterogeneous collection of measurements, which include various life stages of plant species and a range of different growing conditions. Nonetheless, with well over 11 million trait records of nearly 280,000 plant taxa, this dataset currently provides the only plausible way we can assess plant traits on a global scale 9 . Measurements for the 18 traits of interest (Table 1 ) across all available species were downloaded from . We used 18 traits frequently considered when characterizing plant functional gradients and functional diversity 3 . 
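The research-grade criteria listed at the start of this section can be expressed as a simple predicate. The dictionary keys below are hypothetical illustrations, not the actual iNaturalist API fields:

```python
def is_research_grade(obs):
    """Check the research-grade criteria described above.

    Requires a date, a georeference, a photo or sound, a wild (not
    captive/cultivated) individual and a two-thirds identification consensus.
    All keys are hypothetical stand-ins for the real data model.
    """
    has_evidence = obs.get("photo") or obs.get("sound")
    basic = bool(obs.get("date") and obs.get("georeference") and has_evidence)
    wild = not obs.get("captive_or_cultivated", False)
    n_ids = obs.get("n_identifications", 0)
    n_agree = obs.get("n_agreements", 0)
    consensus = n_ids > 0 and n_agree / n_ids >= 2 / 3
    return basic and wild and consensus
```

Such a filter is conceptually what separates the more than 14 million research-grade vascular plant observations used here from the raw upload stream.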
We ln-transformed (natural logarithm) the trait measurements for a more normal distribution, as shown in Kattge et al. 9 . We calculated from all these individual measurements an average value per trait and species. The average trait values were then matched with the iNaturalist observations via the species name. The TRY original species names were standardized using the Taxonomic Names Resolution Service (TNRS) 9 . TNRS is based largely on ‘The Plant List’. The GBIF backbone for vascular plants, with which the iNaturalist observations were standardized, relies heavily on ‘The World Checklist of Vascular Plants’ and ‘The Leipzig Catalogue of Vascular Plants’, both of which are continuations of The Plant List. We assume—since both species lists are harmonized using essentially the same lists—that a simple name matching of the two standardized species names lists already captures the majority of possible matches. By using such an exact match, first against the standardized TRY names and then the original TRY names, we linked around 84% of the iNaturalist observations. After strict matching, we applied a conservative fuzzy match (rapidFuzz with cutoff=90). We were able to match another 1% of iNaturalist observations with trait information, about 85% of the iNaturalist observations in total. TRY data were not gap-filled. sPlotOpen vegetation plot data To test how well the iNaturalist observations resemble plant communities, we used data from sPlotOpen as a reference. The sPlotOpen 38 is an open-access and environmentally and spatially balanced subset ( n = 95,000 plots) of the global sPlot vegetation plots dataset v.2.1 (ref. 37 ). It was downloaded from the iDiv data repository . For this study, we used the ln-transformed cwm for all 18 traits listed in the sPlotOpen data (Table 1 ), a selection of traits based on all continuous traits from Bruelheide et al. 3 . 
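The two-stage name matching described above (exact match first, then a conservative fuzzy fallback; the paper used rapidFuzz with cutoff=90) can be illustrated with the standard library, using difflib as a stand-in for rapidFuzz and hypothetical species names and trait values:

```python
import difflib
from math import log

# Hypothetical species-level means of ln-transformed trait values from TRY.
try_means = {
    "Quercus robur": log(4800.0),
    "Plantago lanceolata": log(1500.0),
    "Pinus sylvestris": log(120.0),
}

def match_species(name, candidates, cutoff=0.9):
    """Exact match first; otherwise the closest candidate above `cutoff`, else None."""
    if name in candidates:
        return name
    close = difflib.get_close_matches(name, candidates, n=1, cutoff=cutoff)
    return close[0] if close else None

observations = ["Quercus robur", "Quercus robor", "Homo sapiens"]  # one typo
linked = {
    obs: try_means[m]
    for obs in observations
    if (m := match_species(obs, list(try_means))) is not None
}
```

The high cutoff keeps the fuzzy step conservative: the misspelled "Quercus robor" is still recovered, while a genuinely unmatched name is dropped rather than mapped to a wrong species.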
The trait values in sPlotOpen are not measurements recorded on site in the vegetation plots; they are taken from the gap-filled version of the TRY database 9 and matched via species name to calculate a cwm per plot and trait (see Bruelheide et al. 3 for details). Beyond that, we used the geolocation (latitude/longitude) of each vegetation plot. Distribution and density of observations The distribution and density of iNaturalist observations and sPlotOpen plots were visualized spatially in a 2° grid. The distribution and density of iNaturalist observations and sPlotOpen vegetation plots in a climate space (precipitation/temperature) were visualized using the Python package ‘hexbin’. Average annual temperature and precipitation for the geolocation of each iNaturalist observation and each sPlotOpen vegetation plot were extracted from the freely available ‘WorldClim bio variables’, downloaded from . The Whittaker biome coordinates were taken from the R package ‘plotbiomes’ data . Trait maps For each trait separately, we generated a global spatial grid (latitude/longitude), where each pixel value represents the mean trait value of all iNaturalist observations located within the grid cell. This mean was calculated by spatially aggregating all iNaturalist observations within each grid cell and averaging the linked TRY trait values x i associated with each observation i . We calculated the arithmetic mean of the ln-transformed trait values. The trait maps based on the sPlotOpen data were generated similarly, only with x i being the ln-transformed cwm associated with each plot i . We generated the grid maps for all 18 traits and both datasets. 
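The grid aggregation described above, averaging ln-transformed trait values per latitude/longitude cell, can be sketched in pure Python (cell indices via integer division of coordinates by the resolution; this is an illustration, not the authors' implementation). Exponentiating a cell's mean of ln values yields the geometric mean on the raw trait scale:

```python
from collections import defaultdict
from math import log, exp, floor

def trait_grid(records, res=2.0):
    """Aggregate (lat, lon, trait_value) records into a res-degree grid.

    Returns {cell: arithmetic mean of ln(trait_value) within that cell}.
    """
    acc = defaultdict(lambda: [0.0, 0])  # cell -> [sum of ln values, count]
    for lat, lon, value in records:
        cell = (floor(lat / res), floor(lon / res))
        acc[cell][0] += log(value)
        acc[cell][1] += 1
    return {cell: total / n for cell, (total, n) in acc.items()}

records = [(51.0, 12.0, 100.0), (51.5, 12.5, 10_000.0), (10.0, 12.0, 50.0)]
grid = trait_grid(records, res=2.0)
# The two northern records share a cell; their geometric mean is
# exp((ln 100 + ln 10000) / 2) = 1000.
```

For the sPlotOpen grids the same aggregation applies, with each record's value being the ln-transformed cwm of a plot instead of a single observation's trait value.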
These maps are available (see Data availability section) in two versions: one shows the arithmetic mean of the ln-transformed trait values x over all observations n and the other shows the geometric mean given by $$\exp \left(\frac{1}{n}\mathop{\sum }\limits_{i=1}^{n}\ln ({x}_{i})\right).$$ Note the conceptual difference between the trait maps derived from sPlotOpen and iNaturalist: while the sPlotOpen trait maps are aggregations of cwm derived from plot-based species abundances, iNaturalist trait maps are derived from mean occurrence values. These means are not weighted by abundance (for example, biomass or cover), since such information is not available for iNaturalist observations. We assume that the observations reflect the species abundance in a community, even though the iNaturalist means are averaged by species occurrences only. Correlation of iNaturalist and sPlotOpen We tested several grid sizes (0.06°–4° resolution) to assess at what spatial resolution the iNaturalist trait maps most strongly resemble the trait patterns found in the sPlotOpen trait maps. The mean value for each trait in each grid cell in the iNaturalist trait map was then correlated with the respective cell in the sPlotOpen trait map. We used a weighted Pearson correlation coefficient ( r ) to assess the correlation between the two maps. The weight of each mean trait value corresponds to the area of each grid cell ( Supplementary Information ). We calculated slopes and intercepts using a model II regression, specifically the SMA regression, since we were not predicting one dataset using the other 50 . The SMA regression slope, 95% CIs and a two-sided t -statistic ( H 0 : slope = 1) were calculated using the R package ‘smatr’ 51 .
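An area-weighted Pearson coefficient as described above can be written directly. In this sketch the weight of each pixel is approximated by the cosine of its centre latitude, which is proportional to the area of a fixed-degree grid cell; this is an illustration under that assumption, not the authors' implementation:

```python
from math import sqrt, cos, radians

def weighted_pearson(x, y, w):
    """Pearson correlation between x and y with non-negative weights w."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    var_x = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    var_y = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / sqrt(var_x * var_y)

# Area weights for grid cells centred at these latitudes:
cell_lats = [0.0, 30.0, 60.0]
weights = [cos(radians(lat)) for lat in cell_lats]
r = weighted_pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], weights)  # exactly 1.0
```

With equal weights this reduces to the ordinary Pearson coefficient; the area weighting prevents the many small high-latitude cells from dominating the comparison.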
Aggregation in buffers around sPlotOpen plots For the alternative approach of using a buffer around each sPlotOpen vegetation plot, we aggregated all iNaturalist observations within a certain radius (buffer) around each plot individually and calculated the mean trait measurement inside the buffer. To obtain more equal-area buffers, the latitudinal/longitudinal data were projected into the ‘world sinusoidal projection’ (ESRI:54008). The iNaturalist observations within each buffer were aggregated and, from these, the average trait values were calculated. The correlation coefficient (Pearson) was calculated from sPlotOpen cwm and the corresponding mean trait values of the iNaturalist observations inside the buffer. Correlating previously published maps with sPlotOpen We compared the sPlotOpen maps with trait maps published in previous studies. These maps of SLA, leaf nitrogen per mass and leaf nitrogen per area were all spatially extrapolated from sparse TRY observations. We spatially resampled the previously published trait maps 5 , 6 , 11 , 12 , 13 , 14 to intersect with the sPlotOpen-based grids and aggregated at a 0.5° and 2° resolution. We then calculated the weighted r , as described above for the iNaturalist maps. Difference of iNaturalist and sPlotOpen across biomes The correspondence of average trait expressions between iNaturalist and sPlotOpen cwm was evaluated across terrestrial biomes, as defined in the WWF terrestrial ecoregion map 40 . The WWF terrestrial-ecoregions shape files were downloaded from .
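The buffer aggregation above can be sketched without GIS libraries: the world sinusoidal projection has the closed form x = R·λ·cos(φ), y = R·φ (with λ, φ in radians), after which buffer membership reduces to a Euclidean distance test. Coordinates and the radius below are illustrative, and the brute-force search stands in for whatever spatial index the authors used:

```python
from math import cos, radians, hypot

R_EARTH = 6_371_000  # mean Earth radius in metres

def sinusoidal(lat, lon):
    """World sinusoidal (equal-area) projection: metres east/north."""
    return (R_EARTH * radians(lon) * cos(radians(lat)), R_EARTH * radians(lat))

def buffer_mean(plot_latlon, observations, radius_m):
    """Mean trait value of observations within radius_m of a plot."""
    px, py = sinusoidal(*plot_latlon)
    inside = []
    for lat, lon, value in observations:
        ox, oy = sinusoidal(lat, lon)
        if hypot(ox - px, oy - py) <= radius_m:
            inside.append(value)
    return sum(inside) / len(inside) if inside else None

obs = [(50.0, 10.0, 4.0), (50.0, 10.1, 6.0), (0.0, 10.0, 100.0)]
mean_trait = buffer_mean((50.0, 10.0), obs, radius_m=10_000)
```

Here the two observations near the plot at 50° N fall inside the 10 km buffer and are averaged, while the equatorial observation is excluded.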
The normalized trait values x ′ are given by the arithmetic means of each ln-transformed trait value per grid cell, x ln , normalized to a (−1,1) range using the 0.05 and 0.95 quantiles: $$x^{\prime} =\frac{\exp ({x}_{\ln })-{q}_{0.05}(\exp ({x}_{\ln }))}{{q}_{0.95}(\exp ({x}_{\ln }))-{q}_{0.05}(\exp ({x}_{\ln }))}.$$ On the basis of the WWF terrestrial-ecoregions shape files, a separate trait map was calculated for each biome and for the traits that had shown the highest correlation globally—stem conduit density, leaf area, leaf fresh mass, plant height, leaf nitrogen (N) per area and SLA. The iNaturalist aggregated trait value per grid cell was then subtracted from the corresponding sPlotOpen grid cell average (sPlotOpen − iNaturalist); the differences were visualized in boxplots. Comparing growth forms We extracted growth form information (tree/shrub/herb) from TRY (Trait-ID 42) to each iNaturalist observation and estimated the coverage of each growth form for all sPlotOpen plots. We chose the most commonly used classification for each species. The average tree/shrub/herb coverage was then calculated for each grid cell and these means correlated. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The trait maps in GeoTiff format for both the iNaturalist and sPlotOpen maps are openly available at (ref. 52 ). All data used to create and analyse these maps are openly accessible (consult workflow for information on how to download the data). Code availability We provide a fully reproducible workflow ( ) of all analyses presented here and a script that can be used readily and without much effort to create updated global trait maps using the latest data, as citizen science data continue to grow 52 . | Nature and climate are mutually dependent. 
Plant growth is strongly dependent on climate, but climate is, in turn, influenced by vegetation: a forest, for example, evaporates large amounts of water. In order to be able to make accurate predictions about how the living world may develop, extensive knowledge of the characteristics of the vegetation at different locations is necessary, for example, leaf surface size, tissue properties and plant height. However, such data usually have to be recorded manually by professional scientists in a painstaking, time-consuming process. Consequently, the available worldwide plant trait data are very sparse and cover only certain regions. The TRY database, managed by iDiv and the Max Planck Institute for Biogeochemistry in Jena, currently provides such data on plant traits for almost 280,000 plant species. This makes it one of the most comprehensive databases for plant trait mapping in the world. Up to now, global maps of plant traits have been created using extrapolations (estimation beyond the original observation range) from this geographically limited database. However, the resulting maps are not particularly reliable. In order to fill large data gaps, the Leipzig researchers have now taken a different approach. Instead of extrapolating existing trait data geographically from the TRY database, they have linked it to the vast dataset from the citizen science project iNaturalist. With iNaturalist, users of the associated smartphone app share their observations of nature, providing species names, photos and geolocation. In this way, more than 19 million data points have been recorded, worldwide, for terrestrial plants alone. The data also feeds the world's largest biodiversity database, the Global Biodiversity Information Facility (GBIF). This is accessible to the public and also serves as an important database for biodiversity research.
In order to test the accuracy of the maps based on the combination of iNaturalist observations and TRY plant traits, they were compared to the plant trait evaluations based on sPlotOpen; the iDiv sPlot platform is the world's largest archive of plant community data. It contains nearly 2 million datasets with complete lists of plant species which occur in the locations (plots) studied by professional researchers. The database is also enhanced with plant trait data from the TRY database. The conclusion: The new iNaturalist-based map corresponded to the sPlot data map significantly more closely than previous map products based on extrapolation. "That the new maps, based on the citizen science data, seem to be even more precise than the extrapolations was both surprising and impressive," says first author Sophie Wolf, a doctoral researcher at Leipzig University. "Particularly because iNaturalist and our reference sPlotOpen are very different in structure." "Our study convincingly demonstrates the potential for research into voluntary data," says last author, Dr. Teja Kattenborn from Leipzig University and iDiv. "It is encouraging to make increasing use of the synergies between the combined data from thousands of citizens and professional scientists." "This work is the result of an initiative of the National Research Data Infrastructure for Biodiversity Research (NFDI4Biodiversity), with which we are pushing for a change in culture towards the open provision of data," says co-author Prof Miguel Mahecha, head of the working group Modeling Approaches in Remote Sensing at Leipzig University and iDiv. "The free availability of data is an absolute prerequisite for a better understanding of our planet." The research was published in Nature Ecology & Evolution. | 10.1038/s41559-022-01904-x |
Biology | Study finds chaos is more common in ecological systems than previously thought | Tanya Rogers, Chaos is not rare in natural ecosystems, Nature Ecology & Evolution (2022). DOI: 10.1038/s41559-022-01787-y. www.nature.com/articles/s41559-022-01787-y Journal information: Nature Ecology & Evolution | https://dx.doi.org/10.1038/s41559-022-01787-y | https://phys.org/news/2022-06-chaos-common-ecological-previously-thought.html | Abstract Chaotic dynamics are thought to be rare in natural populations but this may be due to methodological and data limitations, rather than the inherent stability of ecosystems. Following extensive simulation testing, we applied multiple chaos detection methods to a global database of 172 population time series and found evidence for chaos in >30%. In contrast, fitting traditional one-dimensional models identified <10% as chaotic. Chaos was most prevalent among plankton and insects and least among birds and mammals. Lyapunov exponents declined with generation time and scaled as the −1/6 power of body mass among chaotic populations. These results demonstrate that chaos is not rare in natural populations, indicating that there may be intrinsic limits to ecological forecasting and cautioning against the use of steady-state approaches to conservation and management. Main Chaos was introduced to ecology nearly 50 years ago 1 , 2 to provide an explanation for widespread fluctuations in abundance of natural populations. The defining characteristics of chaos are bounded, deterministic, aperiodic dynamics that depend sensitively on initial conditions. If common, chaos would offer the promise of short-term predictability while setting hard limits on long-term forecasting 3 . It would also mean that the ‘stable ecosystem’ paradigm—the theoretical justification for linear statistical models of ecological dynamics 4 and steady-state management policies 5 —would need rethinking. 
Chaos has been observed in many ecological models 6 , 7 , 8 , 9 , demonstrated in laboratory experiments (for example, insects 10 , microbes 11 and plankton 12 ) and detected in a handful of well-studied field systems 13 , 14 , 15 , 16 . However, most meta-analyses assessing the prevalence of chaos in natural field populations have found chaos to be absent or rare 17 , 18 , 19 , 20 , 21 . The most recent global meta-analysis concluded that only 1 out of 634 ecological time series was chaotic 18 . The apparent rarity of chaos in free-living natural populations is a mystery for several reasons. Ecosystems involve tens to thousands of species and large complex systems are prone to chaos 9 , 22 , 23 . Nonlinear dynamics, a necessary condition for chaos, are also common in ecological time series 24 and many abiotic drivers of population dynamics are themselves chaotic 25 . In light of this, we hypothesize that the dearth of evidence for ecological chaos reflects methodological and data limitations, rather than genuine rarity. Importantly, many meta-analyses in ecology have tested for chaos by fitting one-dimensional, parametric population models to time series 17 , 18 , 19 , 20 —models in which the current state depends only on the last state and this dependency is constrained to a particular functional form. We know from theory and focused empirical studies that overcompensation is not the only mechanism that can generate chaos and that chaos often arises through ecological interactions 16 , 26 . Although seminal models of ecological chaos were one-dimensional 1 , using one-dimensional models to classify natural populations treats ecological complexity (for example, species interactions) as noise, thereby hindering chaos detection 27 , 28 . 
Non-parametric, multidimensional methods for chaos detection that make minimal assumptions about the dynamics 27 , 29 , 30 are more mathematically robust and potentially more accurate, particularly in cases where the underlying dynamics are complex and not well understood. In contrast to the one-dimensional parametric studies, the last global meta-analysis to use flexible, higher-dimensional methods, published in 1995, found evidence for chaos in 11% of 27 field time series (excluding four series on measles cases), noting that this was probably an underestimate 30 . The question of chaos prevalence using comparable methods has not been revisited and, in the interim, many more time series of sufficient length have become available—a critical factor for detecting chaos 31 . In addition, new chaos detection tools are also now available 32 , 33 , 34 , 35 but how these methods, developed outside ecology, will perform on ecological time series is unknown. Here, we revisit whether chaos is, in fact, rare in ecological systems using a suite of flexible, higher-dimensional approaches. The definitive and most widely used index of chaos is the Lyapunov exponent (LE), which measures the average rate of divergence between nearby points in phase space 36 (Supplementary Note 2 ). Positive LE values are indicative of chaotic dynamics. We selected two methods of estimating LEs (direct 37 and Jacobian 31 ) and four additional chaos detection algorithms (recurrence quantification analysis 32 , permutation entropy 33 , horizontal visibility graphs 34 and the chaos decision tree 35 ). We tested the six methods on data simulated with a variety of chaotic, periodic and stochastic models to benchmark misclassification rates under ecologically relevant time series lengths and levels of noise ( Supplementary Notes 2 – 4 ). We tested the generality of this classification accuracy using two additional suites of simulation models. 
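To make the LE concrete (my illustration, not part of the study): for a one-dimensional map x_{t+1} = f(x_t), the LE is the trajectory average of ln|f'(x_t)|, and for the logistic map at r = 4 this average is known analytically to equal ln 2.

```python
import math

def logistic_le(r=4.0, x0=0.2, burn=1000, n=20_000):
    """Trajectory average of ln|f'(x)| for the logistic map f(x) = r x (1 - x)."""
    x = x0
    for _ in range(burn):                                  # discard transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))        # ln|f'(x_t)|
        x = r * x * (1.0 - x)
    return total / n

le = logistic_le()   # analytic value for r = 4 is ln 2, roughly 0.693
```

A positive average, as here, means nearby trajectories diverge exponentially on average, which is the criterion all the detection methods below try to estimate from far noisier and shorter data.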
Three methods had error rates >0.5 and so were not pursued further (Table 1 and Extended Data Fig. 1 ). This included the direct LE method, which was unable to differentiate divergence due to chaos from divergence due to noise 38 . We applied the remaining three methods (Jacobian, recurrence quantification and permutation entropy) to time series from the Global Population Dynamics Database (GPDD) 39 . The GPDD aggregates 4,471 time series from 1,891 taxa. Previous analyses of the GPDD have concluded that many of these time series are too noisy to permit accurate modelling 40 , 41 . Therefore, we restricted our attention to the subset of the GPDD where chaos could be detected if present; that is, relatively long time series of good data quality without any major gaps ( Methods ). Applying these criteria produced a dataset of 172 time series representing 138 different taxa from 57 distinct locations with between 30 and 197 observations. To confirm the prevalence of chaos among plankton in the GPDD, which were sampled largely from one location, we also analysed an independent dataset of 34 zooplankton time series from three lakes with between 138 and 639 observations. Table 1 Error rates for six chaos detection methods on simulated datasets and rates of chaos detection in the empirical GPDD dataset using the three most reliable methods Full size table We then explored how Jacobian LE values varied among taxa and depended on intrinsic timescale (generation time), body size (mass), time series length (generations sampled) and embedding dimension ( E ) which we define here as the number of lags needed to reconstruct the dynamics (Supplementary Note 1 ). The Jacobian method estimates LEs from a local linear model fit to lags of the time series 42 , 43 . Results and discussion Across three independent classification methods, at least one-third of the GPDD time series were classified as chaotic (Table 1 ). 
The most conservative estimate (34%) was obtained with the Jacobian LE method, which in our simulations had the best performance on short time series, was the most robust to process noise and underestimated the frequency of chaos in the presence of substantial observation error (Supplementary Note 4 ). We focus most of our remaining analyses on the Jacobian LE estimates. Noise and non-stationarity can affect the classification of time series and, on the basis of tests with low-dimensional parametric models, these were found to be present in the GPDD 40 , 44 . However, the time series we selected for chaos detection only partially overlap these previous studies. Hence, we needed to address the role that noise and non-stationarity play in our specific results. If noisy time series were being incorrectly classified as chaotic, we would expect a higher frequency of chaos among series with lower prediction accuracy. However, the fraction classified as chaotic by the Jacobian method did not vary with prediction R 2 (logistic regression, n = 172, \(X_{\mathrm{d.f.} = 1}^2\) = 0.006, P = 0.9; Extended Data Fig. 2a ) and series with high prediction error did not have higher LEs (Extended Data Fig. 2b ). The frequency of chaos (34%) also did not change if only series with prediction R 2 > 0.25 were considered. So, although chaotic series were more variable than non-chaotic series (Fig. 1a ), they were actually somewhat more predictable (Fig. 1b ); hence observation error is not inflating the frequency of chaos. Fig. 1: Chaotic dynamics in relation to variability, predictability, nonlinearity and non-stationarity. 
a – d , Histograms show the number of chaotic and non-chaotic time series in relation to: variability, as measured by the coefficient of variation ( a ); predictability, as measured by the leave-one-out prediction R 2 for abundance ( b ); nonlinearity, as measured by the local weighting parameter ( θ ) 43 , where 0 indicates linear dynamics ( c ); and monotonic trend, as measured by the squared Spearman rank correlation coefficient ( d ). Horizontal axis labels give the midpoint of each bin with the exception of c which displays the discrete values that were used. Key in a applies to all panels. Full size image If non-stationarity in the mean was driving the results, we should expect chaotic series to exhibit strong monotonic trends, exponential growth or nearly linear dynamics. Only six time series that had either strong monotonic trends and/or near-linear dynamics were misclassified as chaotic (Fig. 1c,d ). Reclassifying these series (four birds, one mammal and one insect) as not chaotic reduced the frequency of chaos to 30%. The majority of chaotic series, however, were strongly nonlinear (Fig. 1c ), did not display a strong monotonic trend (Fig. 1d and Extended Data Fig. 2c ,d) and had a median growth rate near 0. Although these metrics do not capture more subtle forms of non-stationarity, they suggest that non-stationarity and exponential growth are not responsible, by and large, for the observed frequency of chaos. These observations in the GPDD time series are consistent with our simulations and previously published results 45 : the Jacobian method was less likely to find chaos as observation noise increased, was minimally affected by process noise, rarely classified long-term trends as chaotic and effectively discriminated between chaos and stochastic linear dynamics with seasonality (Supplementary Figs. 1 – 5 ). Taken together, these analyses indicate that the frequency of ecological chaos is not an artefact. 
So, why is chaos more prevalent in our study than in previous meta-analyses? Whereas the methods used here make minimal assumptions about the dynamics, most earlier analyses classified series by fitting one-dimensional population models 17 , 18 , 19 , 20 . To evaluate the effect of these constraints on chaos detection, we first restricted the Jacobian method to E = 1, essentially fitting a one-dimensional non-parametric model. This reduced the apparent frequency of chaos in the GPDD from 34% to 9.9%, with reductions seen across all taxonomic groups (Fig. 2 ). Changes in classification were most common among populations in which the optimal E was high (Supplementary Fig. 6 ), consistent with the hypothesis that reducing dimensionality inhibits chaos detection. When we fit a set of one-dimensional parametric models used in previous meta-analyses, this further reduced the apparent frequency of chaos to 6% or less (Supplementary Note 5 and Extended Data Fig. 3 ). Thus, the one-dimensional parametric assumption used in other meta-analyses probably explains the rarity of chaos detection in these studies. Data limitation might also account for differences with other analyses, many of which used much shorter time series. To address this, we re-evaluated the prevalence of chaos in the 106 time series with at least 50 data points; 42% were chaotic. Restricting further to the 57 series with at least 70 data points increased the prevalence of chaos to 58%. Overlong sampling intervals might also bias our results because time series with sampling intervals larger than the Lyapunov horizon (timesteps on the order of LE −1 ) should appear effectively stochastic, producing false negatives. Of the 30 series with at least 70 data points and less than four generations per timestep (none of which were plankton), 40% were classified as chaotic. Hence, chaos seems to be more visible in longer series, but long sampling intervals do not appear to inflate the prevalence of chaos. 
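The cost of restricting models to E = 1, discussed above, can be reproduced with a toy system (my example, not an analysis from the paper): the Hénon map is chaotic and depends on two lags, so a model using only one lag leaves variance unexplained that a two-lag model captures exactly.

```python
import numpy as np

# Henon map: x_{t+1} = 1 - 1.4 x_t^2 + 0.3 x_{t-1}  (chaotic, two lags deep)
x, y, traj = 0.1, 0.0, []
for _ in range(3100):
    x, y = 1.0 - 1.4 * x * x + y, 0.3 * x
    traj.append(x)
s = np.array(traj[100:])                                  # drop transient

def ols_r2(X, target):
    """R^2 of an ordinary least-squares fit of target on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return 1.0 - (target - X @ beta).var() / target.var()

ones, xt, xlag, xnext = np.ones(s.size - 2), s[1:-1], s[:-2], s[2:]
r2_e1 = ols_r2(np.column_stack([ones, xt, xt**2]), xnext)        # one lag
r2_e2 = ols_r2(np.column_stack([ones, xt, xt**2, xlag]), xnext)  # two lags
```

The two-lag model matches the generating equation and fits essentially perfectly, while the one-lag model must treat the x_{t-1} term as noise; this is the mechanism by which one-dimensional models hide deterministic structure.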
Approaching this from the other direction, we found that 42% of the chaotic time series were no longer classified as chaotic when truncated to 30 data points (the minimum length used in our analysis). These results are consistent with our simulation results, where data limitation increased the false negative rate much more than the false positive rate (Supplementary Note 4 and Supplementary Figs. 1 – 5 ). Thus, with longer time series we expect to see a greater fraction of populations classified as chaotic. These results probably explain why other meta-analyses using very short time series (<20 data points) found no evidence for chaos 21 . Having allayed most reasonable qualms about statistical artefacts, we further explored the biological contexts in which chaos occurs. The frequency of chaos differed among taxonomic groups; phytoplankton had the greatest proportion of chaotic series (81%), followed by zooplankton (77%), insects (43%), bony fishes (29%), birds (18%) and mammals (16%) (Fig. 2 ). The prevalence of chaos decreased in species with longer generation times (logistic regression, n = 166, \(X_{\mathrm{d.f.} = 1}^2\) = 26.7, P < 0.001; Fig. 3a ) which tended to have lower LEs as well (Fig. 3b ). E also tended to decrease with increasing generation time (Pearson r = −0.30) and was lowest among birds (Supplementary Fig. 7 ). There are several possible explanations for this pattern. Long-lived species definitionally have lower average mortality rates. Hence, on a per unit time basis (but not per generation), we might expect long-lived species to have relatively weaker interactions with other species, leading to lower LE and E compared with short-lived taxa. Long-lived species may also be better insulated from chaotic environmental drivers 19 , 46 . Data limitations might also account for lower rates of chaos in long-lived species, as chaos detection depends on the time series length relative to the intrinsic timescale for the system.
If having fewer generations sampled reduces the ability to detect chaos, we would expect data truncation to have a larger effect on species with longer generation times. There was a trend where species with longer generation times were more likely to be reclassified as non-chaotic when series were truncated to 30 data points; however, this result was not statistically significant (logistic regression, n = 49, \(X_{\mathrm{d.f.} = 1}^2\) = 0.79, P = 0.38; Supplementary Fig. 8 ). Fig. 2: Chaotic dynamics by taxonomic group and model dimensionality. Bars show the number of chaotic and non-chaotic time series by taxonomic group with unconstrained embedding dimension (free E ) and with embedding dimension fixed to 1 ( E = 1) using the Jacobian method. Full size image Fig. 3: Chaotic dynamics in relation to generation time. a , Proportion of time series classified as chaotic using the Jacobian method. Chaotic series are coded as 1 and non-chaotic series as 0. Points are vertically jittered to reduce overlap. The line is a logistic (Bernoulli) regression and associated band is the 95% confidence interval. b , Values of the LE plotted against generation time. In a and b , colour indicates the embedding dimension, E . Full size image Recent evidence indicates that LEs scale with body mass in experimental demonstrations of chaos 47 . To determine how generally this applies to natural populations, we evaluated whether variation in LEs among chaotic species exhibited analogous mass ( M ) scaling. Combining LEs from our study and existing studies 10 , 11 , 12 , 16 , 48 , 49 , 50 , 51 (compiled by ref. 47 ), we fit the model log 10 (LE) = a + b log 10 ( M ) for LE > 0 and found evidence for consistent scaling with b = −0.15 (±0.013 (s.e.)), P < 0.001, d.f. = 74; Fig. 4 ). 
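The scaling fit above is ordinary least squares on log10-transformed values. A sketch with synthetic data (slope, intercept and noise level are chosen for illustration; this is not the paper's data) shows the procedure and that OLS recovers a slope near −1/6:

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic chaotic populations: log10 body mass spanning plankton to mammals,
# log10 LE generated with slope -1/6 plus scatter (illustrative values only)
log10_mass = rng.uniform(-12.0, 4.0, size=200)
log10_le = -0.5 - log10_mass / 6.0 + rng.normal(0.0, 0.15, size=200)

b_hat, a_hat = np.polyfit(log10_mass, log10_le, deg=1)   # slope, intercept
```

On log–log axes a power law LE proportional to M^b appears as a straight line of slope b, so the fitted slope is directly the scaling exponent reported in the text.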
For LEs from the GPDD, there was no interaction between mass and taxon and a marginally significant effect of taxon (analysis of variance, log 10 ( M ): F 1,45 = 48.4, P < 0.001; taxon: F 5,45 = 2.4, P = 0.049; log 10 ( M ) × taxon: F 5,45 = 1.2, P = 0.33), suggesting that differences in mass account for most of the average differences in LE among broad taxonomic groups. To account for potential non-independence of LEs within locations, we repeated the regression of log 10 (LE) on log 10 ( M ) with general least squares and again using only location means. The resulting slopes differed by no more than 0.02. The consistency of LE scaling between laboratory and natural populations cannot readily be explained as a statistical artefact. Moreover, since the laboratory-derived LEs are not artefacts of observation noise or non-stationarity, consistent scaling with the field data provides additional evidence for chaos in natural populations. Fig. 4: Positive LEs in relation to body mass. Colour distinguishes broad taxonomic groups. Includes data from this study (GPDD and supplemental results from three lake systems) and positive LEs compiled by ref. 47 (AG2020). The log–log scale is in keeping with previous studies 47 . Note that the lakes data (squares) were not used to fit the regression line. Supplementary Fig. 9 shows the same points with lower confidence intervals. Full size image The 172 GPDD series come from 57 distinct locations, raising the potential issue of non-independence (for example, 33 of 34 plankton and 15 of 17 fish time series each came from one location; Extended Data Fig. 4a ). In a well-mixed system where all variables interact on similar timescales, there will be a unique maximum LE and the LEs estimated from each state variable should be similar, as has been observed in some laboratory studies 12 . 
However, in systems with weak coupling, modularization or separation of timescales—processes known to occur in natural ecosystems 52 , 53 —different LEs can be reconstructed from different variables, representing a range of dynamics corresponding to different subsystems. In fact, we hypothesize that this weak coupling and timescale separation contribute to the observed mass scaling of the LEs. The number of independent LEs at a given location will be somewhere between one and the number of species sampled. The exact number in our database is not resolvable, however, as only three locations had more than eight time series (Extended Data Fig. 4a ) and tools to resolve these subsystems have yet to be developed and tested (an important next step). As a coarse first pass, examining the distribution of LEs suggests bimodality in two of the three locations with more than eight series (Extended Data Fig. 5 ); however, this distribution cannot be resolved for the other locations. Thus, we have presented our results at the time series level but recognize that further resolution is required. For the time being, we can conduct a conservative test of system-level chaos by assuming that each location represents one well-mixed system and asking whether the median LE for each location is chaotic (>50% of LEs are significantly >0 with 95% confidence). By this standard, 21% (12/57) of locations were chaotic and the prevalence of chaotic locations was 41% (5/12) in insects, 25% (4/16) in birds and 9% (2/22) in mammals (plankton and fish were not sufficiently well represented to compute this frequency). We can also use the error rates from our simulation study to compute the probability that a system is chaotic, accounting for the known asymmetry between false positive and false negative rates (with the important caveat that these error rates are dependent on the particular suite of simulations used). 
By this standard, 25% of locations had a >50% probability of being chaotic and 19% of locations had a >80% probability of being chaotic (Extended Data Fig. 4b ). Both of these location-level results are consistent with our time-series level results and our finding that chaos is not rare. At the time series level, our estimate of 34% is greater than that of the last meta-analysis to use higher-dimensional models and Jacobian LEs 30 (11%) and the difference may simply reflect an absence of plankton in their database. As most plankton in the GPDD are from a relatively open marine system, it is plausible that what appears here as chaos reflects advection of patchily distributed populations. To address this, and the fact that all zooplankton series came from a single location, we evaluated the frequency of chaos and mass scaling of LEs in additional time series of zooplankton from three lakes. The prevalence of chaos for the lake zooplankton time series was 47% and all lakes had a >60% location-level probability of chaos. Among the chaotic taxa, none of these new LE estimates was significantly different from the mass scaling derived from the GPDD (deviation of observed values from regression predictions, P > 0.05 for all; Fig. 4 ). It is exceedingly unlikely that advection would result in LEs that scale with mass consistently across three datasets, although it may contribute to the frequency of chaos. Conclusions Single species models are routinely used to evaluate population status in applied fields such as fisheries 5 and conservation biology 54 . However, our results clearly show that scalar population models typically mischaracterize dynamics, treating complexity as noise and leading to the conclusion that chaos is rare 17 , 18 , 19 , 55 . As noted by Robert May, such models “do great violence to reality” 56 . More flexible methods (for example, refs. 
30 , 43 , 45 ) are better able to characterize complex dynamics and integrating these into population status assessments is an important area for future research. Reflecting on the frequency of chaos in natural populations, we note that birds and mammals, the least chaotic taxa, make up 58% of the time series we analysed but represent <1% of the species on earth 57 . Thus, chaos may be considerably more common than the one-third presented here. Diseases, genetic variants, species and statistical events are labelled ‘rare’ using thresholds ranging from 0.001% to 5%. By these standards, chaos in natural ecosystems is far from rare. This presents both challenges and opportunities for ecology as a predictive science; although short-term forecasting is feasible 58 , precise long-term prediction is likely to be impossible and management should avoid defining objectives in terms of equilibrium conditions. However, with increasing amounts of data and modern learning algorithms, new frontiers are open for characterizing the complex, non-equilibrium and high-dimensional dynamics of ecology which will advance both our understanding of natural variability and improve our ability to manage ecosystems. Methods Data We obtained abundance time series data from the GPDD 39 accessed through the R package ‘rgpdd’ 59 . Our analyses required reasonably long and continuous time series and for organisms to be detected with sufficient frequency to reconstruct their dynamics. Consequently, we selected series with a reliability score of at least 2, at least 30 non-missing time points, at least 5 unique abundance values, <60% zeros and <22% missing time points (in our dataset, this resulted in time series having no more than 11 missing values). We used only field-collected survey data (we excluded laboratory and harvest data), excluded human diseases and excluded the shorter and lower-quality of six duplicate time series that passed our filtering. 
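Collected into a single predicate, the selection criteria read as follows (a sketch: how missing values are encoded in the GPDD, and whether the zero fraction is taken over non-missing or all time points, are my assumptions):

```python
def passes_filters(values, reliability):
    """Apply the time series inclusion criteria described above.

    values: list of observations, with None marking missing time points.
    """
    n_total = len(values)
    present = [v for v in values if v is not None]
    if reliability < 2:                                       # reliability score >= 2
        return False
    if len(present) < 30:                                     # >= 30 non-missing points
        return False
    if len(set(present)) < 5:                                 # >= 5 unique abundances
        return False
    if sum(v == 0 for v in present) / len(present) >= 0.60:   # < 60% zeros
        return False
    if (n_total - len(present)) / n_total >= 0.22:            # < 22% missing
        return False
    return True
```

The unique-abundance and zero-fraction checks exclude series that are too coarsely quantized or too sparse for the dynamics to be reconstructed at all.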
Our final dataset contained 172 time series representing 138 different taxa from 57 sampling locations. Of these series, there were 109 sampled annually, 53 monthly, 8 semi-annually and 2 bimonthly. There were 62 series from birds (Aves), 38 from mammals (Mammalia), 21 from insects (Insecta), 21 from phytoplankton (Bacillariophyceae, Dinophyceae), 17 from bony fishes (Osteichthyes) and 13 from zooplankton (Bivalvia, Crustacea, Echinoidea, Gastropoda, Polychaeta, Scyphozoa, Chaetognatha). Time series lengths ranged from 30 to 197 timesteps. For sample time series, see Extended Data Fig. 6 . Before analysis, all untransformed abundance time series were rescaled to unit variance by dividing by the standard deviation. This transformation was not strictly necessary but aided in visualization and diagnostics. To allow for log transformations and calculations of population growth rate, \({{{\mathrm{ln}}}}\left( {x_t/x_{t - \tau }} \right)\) , all time series containing zeros were rescaled after adding a constant (1 if all values were integers, the minimum non-zero value if the series contained non-integers). Leaving the zeros intact and using only model forms that did not require log transformations produced similar results. As a measure of organismal intrinsic timescale, generation time was obtained from published sources for all species in our dataset. We used the age at first reproduction as a proxy for generation time, unless direct estimates of generation time or doubling time were available. Wet body mass was obtained from published sources or, if unavailable, was estimated from volume, assuming that organisms have the same density as water. Generation time and mass data were not included for seven taxa that were not finely resolved enough taxonomically to obtain this information. Sources for generation time and mass were the following: birds, mammals, fish, insects from ref. 60 ; diatoms from refs. 61 , 62 ; insect masses not included in ref. 60 from refs. 
63 , 64 , 65 , 66 , 67 , 68 ; copepods from refs. 69 , 70 , 71 ; and dinoflagellates from refs. 72 , 73 . The plankton data in the GPDD are nearly all marine and thus from relatively open systems. Hence, it is possible that their dynamics reflect water movement in addition to population growth. However, these time series display seasonal peaks and troughs that persist for months, rather than the more ephemeral fluctuations expected from water mass movement and it seems reasonable to assume that these represent population dynamics over a large spatial area as opposed to fluid dynamics. Nevertheless, to assess the robustness of these plankton results, we performed a supplemental analysis on 34 monthly zooplankton time series data from three lake systems which are arguably more ‘closed’ than the marine environment. These systems were Lake Zurich (Wasserversorgung Zürich), Lake Geneva (SOERE OLA-IS, AnaEE-France, INRA of Thonon-les-Bains, CIPEL, 19 December 2019, developed by the Eco-Informatique ORE system of the INRA 74 ) and Oneida Lake 75 . We also required these series to have <60% zeros. Mass data for these species were obtained from refs. 63 , 76 , 77 , 78 , 79 . If only dry mass was available, we assumed dry mass was 20% of wet mass for arthropods and 4% of wet mass for rotifers 76 , 80 . Analysis Our goal was to use a combination of modern and classical methods for detecting chaos to characterize ecological time series. However, most of these methods were developed in data-rich fields and tested on finely spaced time series with thousands to millions of observations. Therefore, we began by testing six chaos detection methods on simulated data from 37 stochastic, periodic and chaotic models with ecologically relevant time series lengths and levels of observation and process noise. These simulations included both a test set of models used for tuning and new sets of models used for validation and evaluation of generality. 
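The rescaling and growth-rate transformations described above for the empirical series amount to a few lines (a sketch of the description; the exact order in which the offset and scaling are applied is my reading of the text, though it does not affect the resulting variance):

```python
import numpy as np

def rescale(series):
    """Rescale to unit variance; series containing zeros get an offset first
    (1 if all values are integers, otherwise the minimum non-zero value)."""
    x = np.asarray(series, dtype=float)
    if (x == 0).any():
        is_integer = np.all(x == np.round(x))
        x = x + (1.0 if is_integer else x[x > 0].min())
    return x / x.std()

def growth_rates(series, tau=1):
    """Population growth rates ln(x_t / x_{t - tau})."""
    x = np.asarray(series, dtype=float)
    return np.log(x[tau:] / x[:-tau])
```

The offset keeps log transformations and growth-rate calculations finite for counts that include zeros, while dividing by the standard deviation leaves the dynamics unchanged and only standardizes the scale across series.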
The specific classification methods we tested were the ‘direct’ method of estimating Lyapunov exponents (DLE) 37 , the Jacobian method of estimating Lyapunov exponents (JLE) 31 , 43 , recurrence quantification analysis (RQA) 32 , 81 , permutation entropy (PE) 33 , the horizontal visibility algorithm (HVA) 34 , 82 and the chaos decision tree (CDT) 35 . Note that the more traditional DLE and JLE have been tested previously 30 , 31 , 38 , 45 , 83 and we re-test them here for comparison with the newer methods. Supplementary Note 1 provides a brief background on time-delay embedding and a comparison of methods for selecting the embedding dimension and time delay which are used in many of the detection methods. Supplementary Note 2 provides the mathematical definition for LE and full details on our implementation of each detection method. Supplementary Note 3 provides details on the simulation models and Supplementary Note 4 summarizes results of the simulation testing. Under the conditions of our simulations, DLE, HVA and CDT had either false positive or false negative rates >0.5 in both the test and validation datasets and so were not pursued further. We applied the remaining methods, which all had false positive rates <0.2, to the empirical dataset to estimate the frequency of chaos in natural populations. The JLE method derives LEs from the Jacobian matrices of a local linear time-delay embedding model (Supplementary Note 2.2 ). We explored several methods for generating confidence intervals for the LE and selected the method that produced the best classification accuracy in the simulated data (Supplementary Table 2 ). The JLE method proved to be the most accurate index of chaos in the simulated test and validation datasets with the lowest false positive rate. Note that LEs based on short time series, such as those used in this analysis, are more a reflection of the local, rather than global LE. 
While we may not be able to determine if a system, in the long run, is chaotic, these estimates characterize how the system behaved over this particular time period. Moreover, although short time series may not generate a precise estimate of the LE, our simulations indicate that, for the models tested, the sign of the LE can be accurately determined using relatively low sample sizes. Since the LE is also the most widely used index of chaos and provides a quantitative, scale-invariant measure of divergence rate, we used the numeric values of the LEs to further explore the relationship between chaotic dynamics, intrinsic timescale, body size, time series length and embedding dimension in the GPDD dataset, after converting the LE from units of timestep⁻¹ to units of month⁻¹. To calculate the mass scaling of the LE, we followed previous work 47 , 84 and used ordinary least squares regression on log10-transformed data for LE > 0. To test the effect of time series length and sampling interval on the inferred LE (and subsequent classification), we examined the proportion of time series classified as chaotic if we were to restrict our analysis to only those series with at least 50 or 70 observations and to only those with at least 70 observations and at least four generations per timestep. We also truncated all time series that had been classified as chaotic to the last 30 observations and recomputed the LE. To account for the fact that some of the time series were sampled from the same locations and thus may (but not necessarily) represent series from the same dynamical system, we also computed the proportion of locations with a median LE that was chaotic (at least 50% of time series were chaotic with 95% confidence).
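A sketch of the two computations just described: converting an LE between time units (divergence rates scale linearly with time, so this is a division), and fitting the mass scaling by ordinary least squares on log10-transformed values. The mass and LE numbers below are made up for illustration, not the paper's data:

```python
import numpy as np

def le_per_month(le_per_step, months_per_step):
    """Convert an LE from units of timestep^-1 to month^-1."""
    return le_per_step / months_per_step

# mass scaling: OLS fit of log10(LE) on log10(mass), positive LEs only
mass_g = np.array([1e-6, 1e-3, 1.0, 1e3])   # illustrative body masses
le = np.array([0.5, 0.15, 0.05, 0.015])     # illustrative positive LEs (month^-1)
slope, intercept = np.polyfit(np.log10(mass_g), np.log10(le), 1)
# a negative slope means larger-bodied taxa diverge more slowly
```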
To account for known asymmetry in the false positive and false negative rates, we also computed the probability that a location was chaotic using the error rates for the JLE method in our simulation study; however, these results should be interpreted with caution since we cannot know how representative the error rates are of ecological reality. The corrected probability of chaos was calculated as (P − FPR)/(TPR − FPR), where P is the observed proportion of time series classified as chaotic, FPR is the false positive rate and TPR is the true positive rate. To test whether non-stationarity or long-term trends affected our results, we examined whether LEs were greater in series with stronger monotonic trends. We assessed the degree of monotonic trend using the squared Spearman rank correlation between abundance and time. To test whether the restriction of dimensionality affects the inferred LE, we recomputed the LE with the embedding dimension set to 1. To test whether the restriction of model form, in addition to restricting dimensionality, affects the inferred LE, we fit a set of common one-dimensional population models with the form \(x_{t+1} = x_t \exp\left[ f\left( x_t, \mathbf{q} \right) \right]\) to each time series and used the fitted model to estimate the LE (Supplementary Note 5 ). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The GPDD data are available on KNB with identifier . Zooplankton data were obtained for Oneida Lake from KNB (identifier kgordon.17.67), for Lake Zurich from Wasserversorgung Zürich and for Lake Geneva from the Observatory on LAkes (OLA-IS, AnaEE-France, INRA of Thonon-les-Bains, CIPEL; ). The simulated datasets and generating code are available in the code repository. The specific GPDD time series used and associated metadata (including compiled generation time and mass data) are available in the code repository.
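The bias correction described in the Analysis above, (P − FPR)/(TPR − FPR), is a one-liner. A minimal sketch with illustrative rates (not the simulation study's actual error rates):

```python
def corrected_prob_chaos(p_obs, fpr, tpr):
    """Correct the observed proportion classified as chaotic for imperfect
    classification, using the false positive rate (FPR) and true positive
    rate (TPR): (P - FPR) / (TPR - FPR)."""
    return (p_obs - fpr) / (tpr - fpr)

# a perfect classifier (FPR = 0, TPR = 1) leaves the proportion unchanged;
# illustrative numbers: 34% classified chaotic, FPR = 0.10, TPR = 0.80
p_corr = corrected_prob_chaos(0.34, 0.10, 0.80)
```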
Code availability All analysis code is available at . | Chaos in natural populations appears to be much more common than previously recognized, according to a new analysis by scientists at UC Santa Cruz and NOAA Fisheries. Populations of organisms in natural ecosystems fluctuate a lot, and a key question for ecologists is whether those fluctuations are regular (varying around some theoretically "stable" equilibrium), random (completely unpredictable), or chaotic. Chaotic systems, like the weather, can be predictable in the short term but not in the long term, and they are highly sensitive to small differences in the initial conditions. "Knowing whether these fluctuations are regular, chaotic, or random has major implications for how well, and how far into the future, we can predict population sizes and how they will respond to management interventions," said Tanya Rogers, a NOAA Fisheries ecologist and research fellow at UCSC's Institute of Marine Sciences. Rogers is first author of the new study, published June 27 in Nature Ecology & Evolution. Her coauthors are Bethany Johnson, a UCSC graduate student in applied mathematics, and Stephan Munch, a NOAA Fisheries ecologist and adjunct professor at UCSC in the Departments of Applied Mathematics and Ecology and Evolutionary Biology. The researchers found evidence of chaotic dynamics in over 30 percent of the populations they analyzed in an ecological database. Previous meta-analyses assessing the prevalence of chaos in natural field populations had found chaos to be absent or rare. But that may have been due to limited amounts of data and the use of inadequate methods, rather than the inherent stability of ecosystems, the authors said. "There's a lot more data now, and how long a time series you have makes a big difference for detecting chaotic dynamics," Munch said. "We also showed that methodological assumptions made in prior meta-analyses were biased against detecting chaos." 
For the new study, the researchers used new and updated chaos detection algorithms and put them through rigorous testing on simulated data sets. Then they applied the three best methods to a dataset of 172 population time series from the Global Population Dynamics Database. Their analysis revealed interesting associations between chaotic dynamics, lifespan, and body size. Chaos was most prevalent among plankton and insects, least prevalent among birds and mammals, and intermediate among fishes. "A lot of short-lived species tend to have chaotic population dynamics, and these are also species that tend to have boom-and-bust dynamics," Rogers said. The results suggest there may be intrinsic limits to ecological forecasting and caution against the use of equilibrium-based approaches to conservation and management, particularly for short-lived species. "From the fisheries management perspective, we want to predict fish populations so we can set limits for fishery harvests," Rogers explained. "If we don't recognize the existence of chaos, we could be losing out on short-term forecasting possibilities using methods appropriate for chaotic systems, while being overconfident about our ability to make long-term predictions." | 10.1038/s41559-022-01787-y |
Medicine | The odds are against extra-sensory perception | Rouder JN & Morey RD (2011). A Bayes factor meta-analysis of Bem's ESP claim. Psychonomic Bulletin & Review. DOI 10.3758/s13423-011-0088-7 | http://dx.doi.org/10.3758/s13423-011-0088-7 | https://medicalxpress.com/news/2011-05-odds-esp.html | Abstract In recent years, statisticians and psychologists have provided the critique that p -values do not capture the evidence afforded by data and are, consequently, ill suited for analysis in scientific endeavors. The issue is particularly salient in the assessment of the recent evidence provided for ESP by Bem ( 2011 ) in the mainstream Journal of Personality and Social Psychology . Wagenmakers, Wetzels, Borsboom, and van der Maas ( Journal of Personality and Social Psychology, 100 , 426–432, 2011 ) have provided an alternative Bayes factor assessment of Bem's data, but their assessment was limited to examining each experiment in isolation. We show here that the variant of the Bayes factor employed by Wagenmakers et al. is inappropriate for making assessments across multiple experiments, and cannot be used to gain an accurate assessment of the total evidence in Bem's data. We develop a meta-analytic Bayes factor that describes how researchers should update their prior beliefs about the odds of hypotheses in light of data across several experiments. We find the evidence that people can feel the future with neutral and erotic stimuli to be slight, with Bayes factors of 3.23 and 1.57, respectively. There is some evidence, however, for the hypothesis that people can feel the future with emotionally valenced nonerotic stimuli, with a Bayes factor of about 40. Although this value is certainly noteworthy, we believe it is orders of magnitude lower than what is required to overcome appropriate skepticism of ESP.
Bem ( 2011 ) has claimed that people can feel or sense salient events in the future that could not otherwise be anticipated. For example, in his Experiment 2, Bem presented participants with two rather ordinary pictures and asked them to indicate which one would be chosen subsequently by a random number generator. If a participant correctly anticipated the random choice, he or she was rewarded with a brief display of a positively valenced picture. Conversely, if a participant incorrectly anticipated the random choice, he or she was punished with a negatively valenced picture. Bem claimed that people could indeed feel these future reward and punishment events and, consequently, were able to anticipate the random choice at a rate deemed statistically above chance. Bem presented a sequence of similar experiments and results and, on this basis, concluded that people can feel the future. This phenomenon and others like it in which people can show seemingly impossible awareness of events are termed psi phenomena , or, more colloquially, extrasensory perception (ESP). If ESP is substantiated, it would be among the most important findings in the history of psychology. The existence of ESP would force us to revise not only our theories of psychology, but also those of biology and physics. In our view, when seemingly implausible claims are made with conventional methods, it provides an ideal moment to reexamine these methods. The conventional approach used by Bem ( 2011 ) has two properties. First, as is typical in many empirical investigations, Bem presented a sequence of experiments, each targeting the same basic phenomena from a slightly different angle. Second, Bem employed null-hypothesis significance testing in which p -values are reported as evidence and evaluated against a fixed criterion to reach judgments.
In previous work, we joined a growing consensus that conventional inference by significance testing overstates the evidence for an effect (see Berger & Sellke, 1987 ; Edwards, Lindman, & Savage, 1963 ; Wagenmakers, 2007 , among several others), and proposed a Bayes factor replacement for the t -test (Rouder, Speckman, Sun, Morey & Iverson, 2009 ). This Bayes factor quantifies the evidence in data for competing hypotheses from a single experiment or, more precisely, for a single comparison. Unfortunately, while this Bayes factor is appropriate for assessing evidence for a single contrast, it is ill suited for meta-analytically combining evidence across several experiments. Herein, we develop a meta-analytic version of the Bayes factor t -test and use it to assess the evidence across Bem's experiments. We find some support for ESP; the combined data are 40 times more likely under an ESP alternative than under a no-ESP null. This evaluation differs from that of Bem, who, in our opinion, overstated the evidence. It also differs from that of Wagenmakers, Wetzels, Borsboom and van der Maas ( 2011 ), who found no support for ESP. Our interpretation of this Bayes factor is that while it is noteworthy, it is insufficient in magnitude to sway the beliefs of an appropriately skeptical reader. The evidence from p -values and Bayes factor There is a well-known asymmetry in significance testing: Researchers can reject the null hypothesis but can never accept it. This asymmetry works against the goals of scientific inquiry, because null hypotheses often correspond to theoretically useful statements of invariance and constraint (Gallistel, 2009 ; Kass, 1992 ; Rouder et al., 2009 ). For Bem's ( 2011 ) case, the null hypothesis is the theoretically attractive, reasonable, and highly interpretable constraint that ESP does not exist.
In order to fairly assess the evidence for ESP, it is necessary to be able to state the evidence for or against the null provided by the data. Yet, with significance testing, we may only accept ESP and never reject it. The above point about asymmetry is easy to grasp. Its implications, however, are subtle and consequential, because they extend beyond not being able to state evidence for the null hypothesis; they extend to assessing evidence in the data for the alternative as well. A good starting point is consideration of the distribution of p -values under two competing hypotheses (examples are shown in Fig. 1A ). If the null hypothesis is false, p -values tend to be small, and they decrease as sample size is increased. The dashed green line shows the distribution of p -values when the underlying effect size is .2 and the sample size is 50; the dashed-dotted red line shows the same when the sample size is increased to 500. The distribution of p -values under the null, however, is quite different. Under the null, all p -values are equally likely (solid blue line in Fig. 1A ). Perhaps surprisingly, this distribution holds regardless of sample size; p -values do not increase under the null as sample sizes increase. Fig. 1 Significance tests overstate the evidence against the null hypothesis. A Distribution of p -values for an alternative with effect-size of .2 (dashed and dashed-dotted lines are for sample sizes of 50 and 500, respectively) and for the null (solid lines). B Probability of observing a p -value between .04 and .05 for the alternative (effect size = .2, N = 50) and for the null. The probability favors the alternative by a ratio of about 4:1. C Probability of observing a p -value between .04 and .05 for the alternative (effect size = .2, N = 500) and for the null. The probability favors the null by a factor of 10. 
D The solid line is the probability of observing a t -value of 2.51 (p = .007) for N = 100 under the alternative, relative to that under the null, as a function of the alternative. The circle and square points highlight the ratios that favor the alternative and null, respectively. The dashed line shows the one-tailed prior distribution used throughout. The logic behind significance testing is a form of argument by contradiction. If observed data (or data more extreme) are improbable under the null, then the null is contradicted, and presumably, there is some alternative under which the data are more probable. It is reasonable to ask, then, about the factor by which the observed data are more probable under some alternative than under the null. This factor serves as a measure of evidence for the alternative, relative to the null. Suppose that a data set with sample size of 50 yields a p -value in the interval between .04 and .05. Figure 1b shows the distributions of p -values for the null and the alternative (effect size = .2) around this interval, and the probabilities are the shaded areas under the curve. The probabilities of observing a p -value in this interval under the null and the alternative are .01 and .04, respectively. Therefore, the alternative fares four times better than the null. Although such a ratio constitutes evidence for the alternative, it is not as substantial as might be inferred by such a small p -value. Figure 1c shows a similar plot for the null and alternative (effect size = .2) for a large sample size of 500. For this effect size and sample size, very small p -values are the norm. In fact, a p -value between .04 and .05 is about 10 times more likely under the null than under the alternative. Indeed, a p -value at any one point, say .05, constitutes increasing evidence for the null in the large sample size limit.
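These probabilities are easy to reproduce by Monte Carlo. The sketch below uses a z-test (known variance) as a stand-in for the t-test in the figure, so the numbers only approximate the .01/.04 and 10:1 figures quoted, but the qualitative pattern is the same: p-values are uniform under the null, and the probability of the [.04, .05] band first rises and then collapses as n grows under the alternative.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def frac_p_in_band(effect, n, lo=0.04, hi=0.05, reps=100_000):
    """Fraction of two-sided p-values landing in [lo, hi] for a z-test
    of a sample mean with known unit variance."""
    z = rng.normal(effect * math.sqrt(n), 1.0, size=reps)  # sampling dist. of z
    p = np.array([math.erfc(abs(v) / math.sqrt(2)) for v in z])  # two-sided p
    return float(np.mean((p > lo) & (p < hi)))

null_frac = frac_p_in_band(0.0, 50)   # ~ .01: p is uniform under the null
alt50 = frac_p_in_band(0.2, 50)       # band several times likelier than under the null
alt500 = frac_p_in_band(0.2, 500)     # band now *less* likely than under the null
```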
This paradoxical behavior of significance testing in which researchers reject the null even though the evidence overwhelmingly favors it is known as Lindley's paradox (Lindley, 1957 ) and is a primary critique of inference by p -values in the statistical literature. We can examine the evidence from Bem's ( 2011 ) data for various alternatives, relative to the null. In Experiment 1, for example, participants needed to anticipate which of two erotic pictures they would be shown. The average performance across 100 naive subjects was .531, and this level was significantly different from the at-chance baseline of .5, t (99) = 2.51, p = .007. Figure 1d shows the evidence for various alternatives. The probability ratios on the y -axis are the probability of the observed p -value under a specific alternative, relative to that under the null. Not surprisingly, these ratios vary greatly with the choice of alternative. Alternatives that are very near the null of .5, say .525, are preferred over the null (filled circle in Fig. 1D ). Alternatives further from .5, say .58 (filled square), are definitely not preferred over the null. Note that even though the null is rejected at p = .007, there is only a small range of alternatives where the probability ratio exceeds 10, and for no alternative does it exceed 25, much less 100 (as might naïvely be inferred from the p -value). We see that the null may be rejected by p -values even when the evidence for every specific point alternative is more modest. The probability ratio in Fig. 1D may be denoted by B and expressed as follows: $$ B = \frac{\Pr\left( \text{Data} \mid H_1 \right)}{\Pr\left( \text{Data} \mid H_0 \right)}, $$ where H 0 is the null and H 1 is the alternative that true performance is a specific value, for example, .52. In Bayesian statistics, probability ratios B are called Bayes factors , and they are well-calibrated measures of evidence from the data for one hypothesis relative to another.
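The shape of Fig. 1D can be reproduced approximately. Treating the t-statistic as normal (z ~ N(δ√n, 1), a rough stand-in for the noncentral t), the probability ratio for a point alternative has a closed form, and the two highlighted points behave as the text describes: a nearby alternative (.525) is favored over the null, while a distant one (.58) is strongly disfavored. The standard error below is backed out from the reported mean accuracy and t-value:

```python
import math

def lr_point_alt(z, ncp):
    """Likelihood ratio of observing test statistic z under a point
    alternative (z ~ N(ncp, 1)) versus the null (z ~ N(0, 1)):
    exp(z*ncp - ncp^2/2)."""
    return math.exp(z * ncp - ncp ** 2 / 2)

# Bem's Expt 1: mean accuracy .531, t(99) = 2.51  =>  SE ~ .031 / 2.51
z = 2.51
se = 0.031 / 2.51
near = lr_point_alt(z, 0.025 / se)   # point alternative: true accuracy .525
far = lr_point_alt(z, 0.080 / se)    # point alternative: true accuracy .58
# the ratio is maximized at ncp = z, where it equals exp(z^2/2) < 25
```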
One drawback of the preceding formulation, however, is that the alternative is a single point hypothesis. In Bayesian statistics, it is possible and desirable to consider composite hypotheses in which parameters range over many possible values. To consider composite hypotheses, the analyst specifies how each single value should be weighted. Figure 1d shows such weights (dashed lines), and for this alternative hypothesis, small effects are weighted more than large ones. The distribution of weights over parameters is called the prior distribution . When an alternative H 1 consists of a weighted range of parameter values, the probability of the data is $$ \Pr\left( \text{Data} \mid H_1 \right) = \int \Pr\left( \text{Data} \mid \theta \right) f\left( \theta \right) d\theta, $$ where θ are the parameters and f is the prior distribution on these parameters. The probability of the data given the hypothesis is the expected, or weighted-average, probability across the possible parameter values. The Bayes factor for a composite versus a point null is $$ B = \frac{\Pr\left( \text{data} \mid H_1 \right)}{\Pr\left( \text{data} \mid H_0 \right)} = \frac{\int \Pr\left( \text{data} \mid \theta \right) f\left( \theta \right) d\theta}{\Pr\left( \text{data} \mid \theta = \theta_0 \right)}, $$ where θ 0 is the value of θ under the null, or .5 for Fig. 1D . Figure 1d also shows an example of a prior over the parameter (dashed line), and for this prior, the Bayes factor evidence for the observed p -value is 3.23; that is, the observed level of performance is about 3 times more probable under the alternative than under the null. To compute Bayes factors, researchers must choose the prior distribution f . Fortunately, there is ample guidance in the literature about how to do so for the linear models, including the t -test (Gönen, Johnson, Lu, & Westfall, 2005 ; Liang, Paulo, Molina, Clyde, & Berger, 2008 ; Zellner, 1986 ; Zellner & Siow, 1980 ).
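The composite Bayes factor is a one-dimensional integral and is easy to approximate numerically. The sketch below again uses the normal approximation to the t-statistic, with a one-tailed half-Cauchy prior on effect size standing in for the JZS prior (the exact JZS Bayes factor also integrates over the variance, as in the paper's Appendix), so the result only roughly matches the 3.23 quoted for Bem's Experiment 1:

```python
import numpy as np

def bf_composite(z, n, scale=1.0):
    """Composite-vs-point-null Bayes factor, approximating the t-statistic
    as z ~ N(delta*sqrt(n), 1) and putting a half-Cauchy(scale) prior on
    the (one-tailed) effect size delta."""
    grid = np.linspace(0.0, 5.0, 2001)                    # effect-size grid
    lik = np.exp(-0.5 * (z - grid * np.sqrt(n)) ** 2)     # unnormalized N(., 1)
    prior = 2.0 / (np.pi * scale * (1.0 + (grid / scale) ** 2))
    marginal = np.sum(lik * prior) * (grid[1] - grid[0])  # rectangle rule
    null = np.exp(-0.5 * z ** 2)                          # likelihood at delta = 0
    return float(marginal / null)                         # constants cancel

bf = bf_composite(2.51, 100)   # roughly comparable to the quoted 3.23
```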
We advocate a prior that serves as a generic default broadly applicable for scientific use. This prior was proposed by Jeffreys ( 1961 ), was developed for linear models by Zellner and Siow, among several others, and was termed the JZS prior by Bayarri and Garcia-Donato ( 2007 ). The JZS prior, along with the resulting JZS Bayes factor , are presented in the Appendix . The JZS Bayes factor has a number of advantages: It makes intuitive sense, it has beneficial theoretical properties, Footnote 1 it is not dependent on the measurement scale of the dependent variable, and it can be conveniently computed. Footnote 2 Further details are provided in Rouder et al. ( 2009 ). The Bayes factor measure of evidence is the probability ratio of data given hypotheses. A related quantity of interest is the probability ratio of hypotheses given data, called the posterior odds . The posterior odds describe the analyst's degree of belief in the hypotheses after observing the data. The following equation describes the relationship between posterior odds and the Bayes factor: $$ \frac{\Pr\left( H_1 \mid \text{data} \right)}{\Pr\left( H_0 \mid \text{data} \right)} = B \times \frac{\Pr\left( H_1 \right)}{\Pr\left( H_0 \right)}, $$ where the terms \( \Pr\left( H_1 \mid \text{data} \right)/\Pr\left( H_0 \mid \text{data} \right) \) and \( \Pr\left( H_1 \right)/\Pr\left( H_0 \right) \) are posterior and prior odds, respectively. The prior odds describe the beliefs about the relative plausibility of hypotheses before the data are observed, and the Bayes factor describes how the evidence from the data should affect beliefs. For example, suppose the evidence from a set of ESP experiments yielded a Bayes factor of 40 in favor of ESP. Consider a skeptical reader with prior odds of 1,000,000:1 against ESP. In this case, the reader should revise their beliefs by a factor of 40, to 25,000:1 against ESP.
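The update rule is just multiplication. A sketch of the skeptic example in the text:

```python
def posterior_odds(bayes_factor, prior_odds):
    """Posterior odds of H1 over H0 = Bayes factor x prior odds."""
    return bayes_factor * prior_odds

# skeptic: prior odds of 1,000,000:1 against ESP, i.e. 1e-6 in favor;
# a Bayes factor of 40 for ESP moves this to 25,000:1 against
post = posterior_odds(40, 1 / 1_000_000)
odds_against = 1 / post   # 25,000:1 against ESP
```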
Likewise, a reader who has prior odds favoring ESP should multiply these odds by 40 in light of the data to reach an even more favorable posterior odds. Bayes factors are logically independent of prior odds and, consequently, are ideal for scientific communication (Jeffreys, 1961 ). We recommend that researchers report Bayes factors and that readers use the context of prior knowledge, such as knowledge about physical laws or plausible mechanisms, to set prior odds in interpreting these Bayes factors. Wagenmakers et al.'s ( 2011 ) analysis of ESP Table 1 shows the 10 contrasts originally reported by Bem ( 2011 ) and reanalyzed by Wagenmakers et al. ( 2011 ). Wagenmakers et al. computed two-tailed JZS Bayes factors , and some contrasts yielded modest support for the no-ESP null, while others yielded modest support for the ESP alternative. On balance, according to Wagenmakers et al., there is little systematic evidence for ESP. We have added a third column as a validity check, and it provides the direction of the effect. In several of Bem's experiments, one could be reasonably sure that if ESP held, the effect should be in one direction and not the other. For example, in Bem's Experiment 1, discussed previously, participants were instructed to indicate the curtain behind which there was an erotic picture, and, if ESP held, their performance should be greater rather than worse than chance. If there were no ESP, we would expect the observed performance to be slightly below chance for some experiments and slightly above chance for others. Table 1 shows that all 10 effects were in the direction hypothesized by Bem. This concordance serves as evidence for ESP that is not captured by Wagenmakers et al.'s analysis. In fact, the Bayes factor of getting all 10 contrasts to be in the same direction is about 100:1 in favor of ESP. Footnote 3 This inconsistency motivates our development of a meta-analytic Bayes factor. Table 1 Wagenmakers et al.
( 2011 ) assessment of Bem's evidence The meta-analysis problem Meta-analysis seems like it should be a strong point of the Bayes factor. If one has several replicate experiments, it seems reasonable that the posterior odds from the first can serve as the prior for the second, and so on. Under this framework, the combined evidence across all the replicate experiments is simply the product of the Bayes factors. This intuition that the meta-analytic Bayes factor is the product of individual Bayes factors is not correct, and Table 2 provides an example of how it fails. The first four rows show the results of four replicate experiments, each of sample size 100. The data are independently and identically normally distributed observations with a mean of .2 and a variance of 1.0. Hence, the true effect size is .2, and the observed effect sizes in the replicate experiments vary reasonably around this true value. The corresponding Bayes factors for the replicate experiments are shown, and these indicate that the evidence in each experiment is marginal, with one sample favoring the alternative and the other three favoring the null. The product of these Bayes factors is also shown (B = .092), and it indicates that the null is preferred with evidence slightly larger than 10:1. The row labeled “Data pooled” shows the results of pooling the data, rather than multiplying the Bayes factors. In this case, where the data are drawn from a common distribution, pooling is valid and preferred. The resulting Bayes factor is 54:1 in favor of an effect. Hence, multiplying JZS Bayes factors is not a valid meta-analytic approach. Table 2 JZS Bayes factor across four replicate experiments This seeming contradiction comes about because JZS Bayes factors respect the resolution of data (Rouder et al., 2009 ).
When the sample size is small, small effects may be considered evidence for the null because the null is the more parsimonious description given the resolution provided by the data. As the sample size grows, however, the resolution provided for the data is finer, and small effects are more concordant with the alternative. An appropriate analogy may be a criminal court trial in which each of several witnesses provides only partial information as to the guilt of a defendant who has committed a crime. If the jury is forced to assess the odds after hearing the testimony of any single witness, these odds may all favor innocence, since no one witness may be compelling enough in isolation to provide evidence for guilt. However, if the jury considers the totality of all testimonies, the weight will assuredly shift toward guilt. Fortunately, a meta-analytic extension of the JZS Bayes factor is tractable and convenient. One of the key properties of the JZS priors is that the full influence of the data is captured by the t -statistic. Under the JZS priors, we may think of the t -statistic as a single datum and the parameter of interest as the effect size , δ . Under the null, the effect size is constrained to zero; under the alternative, it follows a fat-tailed distribution. The resulting Bayes factor is $$ B = \frac{\Pr\left( t \mid H_1 \right)}{\Pr\left( t \mid H_0 \right)} = \frac{\int \Pr\left( t \mid \delta \right) f\left( \delta \right) d\delta}{\Pr\left( t \mid \delta = 0 \right)}, $$ where expressions for the probabilities and the prior f are provided in the Appendix . The generalization to M independent experiments, each with t -values \( t_1, t_2, \ldots, t_M \), is given by $$ B = \frac{\int \prod_{m=1}^{M} \Pr\left( t_m \mid \delta \right) f\left( \delta \right) d\delta}{\prod_{m=1}^{M} \Pr\left( t_m \mid \delta = 0 \right)}, $$ (1) where ∏ indicates the product of terms.
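Equation (1) can be sketched with the same normal approximation used above for the single-experiment case: one common effect size δ is shared across experiments under the alternative, with a Cauchy prior standing in for the JZS prior (the exact JZS computation also integrates over a per-experiment variance). The t-values below are illustrative, chosen to mimic Table 2's pattern of four individually marginal replicates; the point is that the joint Bayes factor can strongly favor an effect even when the product of per-experiment Bayes factors favors the null:

```python
import numpy as np

GRID = np.linspace(-5.0, 5.0, 4001)   # grid over the common effect size delta

def meta_bf(ts, ns, scale=1.0):
    """Sketch of the meta-analytic Bayes factor of Eq. (1): each t_m is
    approximated as N(delta*sqrt(n_m), 1) with one shared delta, which
    gets a Cauchy(scale) prior under the alternative."""
    prior = 1.0 / (np.pi * scale * (1.0 + (GRID / scale) ** 2))
    log_joint = sum(-0.5 * (t - GRID * np.sqrt(n)) ** 2 for t, n in zip(ts, ns))
    marginal = np.sum(np.exp(log_joint) * prior) * (GRID[1] - GRID[0])
    log_null = sum(-0.5 * t ** 2 for t in ts)   # all deltas fixed at 0
    return float(marginal / np.exp(log_null))

ts, ns = [1.8, 1.5, 2.2, 1.9], [100, 100, 100, 100]   # illustrative replicates
joint_bf = meta_bf(ts, ns)                            # evidence from combining
product_bf = float(np.prod([meta_bf([t], [n]) for t, n in zip(ts, ns)]))
```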
The key property of this meta-analytic approach is that the true effect size is assumed to be constant across experiments. Although the meta-analytic Bayes factor assumes a common true effect size across experiments, it does not assume a common variance. Hence, it is applicable to experiments where the unit of measure may vary, such as those that span accuracy and response time effects. An R script for computing this Bayes factor may be obtained from the authors. It is reasonable to wonder whether the constant effect size model underlying the meta-analytic Bayes factor is warranted. We chose this approach because it is tractable when researchers have access to the test statistics, rather than the raw data. Alternative models that posit variation in effect size across experiments are possible (Utts, Norris, Suess, & Johnson, 2010 ), although analysis may require access to the raw data. These variable effect-size alternatives are certainly more complex than the constant effect-size model, and if the true effects are about the same size, it may be at a competitive disadvantage. Because Bem ( 2011 ) reports near-constant effect sizes across the experiments, we believe that the constant effect size model is a convenient and appropriate alternative to the null model. To illustrate this meta-analytic Bayes factor, we applied it to the four replicate experiments in Table 2 . The value is about 49:1 in favor of an effect, which is quite close to the value of 54 from pooling the data. The reason these values differ slightly is that the meta-analytic Bayes factor posits a separate variance (σ²) for each experiment, while the JZS Bayes factor on pooled data assumes a common, single variance. The evidence in Bem's ( 2011 ) data Bem ( 2011 ) provided 10 contrasts from nine separate experiments to support the claim of ESP. The contrasts chosen, however, strike us as too opportunistic.
For example, in his Experiments 8 and 9, Bem found a positive result for ESP with neutral stimuli and entered the corresponding t -value into his final tally in his Table 7 . In Experiment 1, Bem found a positive result for ESP with erotic stimuli (accuracy of .53 vs. .50 baseline) but a null result for neutral stimuli or emotionally evocative stimuli (accuracy of .49 vs. .50 baseline). Bem entered only this positive result with erotic stimuli into his final tally. In our view, tallying the positive results without the null result is not justified. One way of improving the assessment is to evaluate the evidence for neutral, emotionally evocative, and erotic stimuli separately, such that conflicting results can be contrasted. The corresponding t -values, sample sizes, and resulting meta-analytic Bayes factors for these three classes of stimuli are shown in Table 3 . Table 3 Bayes factors for three feeling-the-future hypotheses. We have not included results from Bem's ( 2011 ) Experiments 5, 6, and 7 in our meta-analysis because we are unconvinced that these are interpretable. These three experiments are retroactive mere-exposure effect experiments in which the influence of future events purportedly affects the current preference for items. The main difficulty in interpreting these experiments is forming an expectation about the direction of an effect, and this difficulty has consequential ramifications. In the vast majority of conventional mere-exposure effect studies, participants prefer previously viewed stimuli (Bornstein, 1989 ). Bem observed this pattern for negative stimuli, but the opposite pattern, novelty preference, for positive stimuli. Bem claimed that this crossover was anticipated by the findings of Dijksterhuis and Smith ( 2002 ), who documented that participants habituate to emotional stimuli.
Accordingly, previously encountered negative stimuli are judged less negative and previously encountered positive stimuli are judged less positive. We, however, remain unconvinced that Dijksterhuis and Smith's emotional habituation is applicable here because of methodological differences. Dijksterhuis and Smith, for example, used six subliminal presentations to achieve habituation, and it is unclear if habituation will follow from a single presentation. What is sorely missing is the analogous conventional mere-exposure experiment with the same negative, positive, neutral, and erotic stimuli to firmly establish expectations. In fact, Bem took this approach with his retroactive priming experiments (Bem's Experiments 3 and 4), and the inclusion of conventional priming studies to establish firm expectations greatly increases the interpretability of those results. Without these control experiments to establish the direction of mere exposure effects with emotional and evocative stimuli, the most judicious course is to exclude Experiments 5, 6, and 7 from analysis. Table 3 reveals that there is relatively little support for the claim that people can feel the future with erotic or neutral events. The Bayes factor does offer some support for a retroactive effect of emotionally valenced, nonerotic stimuli: The evidence for an effect provided by Experiments 2, 3, and 4 outweighs the evidence against an effect provided by Experiment 1. In Experiment 2, participants were rewarded with brief presentations of positive pictures and punished with brief presentations of negative ones when they anticipated or failed to anticipate, respectively, the future state of a random-number generator. In Experiments 3 and 4, participants identified an emotionally valenced target stimulus more quickly when a subsequently presented prime matched the valence of the target.
General discussion The publication of Bem’s ( 2011 ) report on ESP provides an ideal opportunity to discuss how evidence should be assessed and reported in experimental studies. We argue here that inference by p -values not only precludes stating evidence for theoretically useful null hypotheses, but also overstates the evidence against them. A suitable alternative is the Bayes factor—the relative probability of observing the data under two competing hypotheses. To use the Bayes factor, it is necessary to specify a prior against which evidence is calibrated. We recommend the JZS prior as a suitable generic default because the resulting Bayes factor is invariant to changes in measurement scale and has beneficial theoretical properties (see note 1). One of the drawbacks of our previous development (Rouder et al., 2009 ) was that it did not provide a means of combining data across multiple experiments, making meta-analysis difficult. Herein, we extend the JZS default Bayesian t -test to multiple experiments and use this new development to analyze the data in Bem. Our Bayes factor analyses of Bem’s data, which Bem offered as evidence of ESP, show that the data support more modest claims. The data yield no substantial support for ESP effects of erotic or neutral stimuli. For emotionally valenced nonerotic stimuli, however, we found a Bayes factor of about 40, and this is the factor by which readers should increase their odds. We caution readers against interpreting this Bayes factor as the posterior odds that ESP is true. On the contrary, posterior odds should reflect the context provided by prior odds, as discussed previously. In the present case, there are two relevant sources of context for prior odds: past studies of ESP, and the plausibility of mechanisms underlying ESP. Bem ( 2011 ) follows in a line of parapsychological research that extends from the 1930s.
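The single-experiment JZS default Bayes factor discussed above has a closed integral form (Rouder et al., 2009) that is easy to evaluate numerically. Below is a minimal sketch for the one-sample case; the function name and the simple rectangle-rule integration are our own illustration, and the meta-analytic extension used in the text (which integrates a common effect-size prior across experiments) is not shown.

```python
import math

def jzs_bf01(t, n, steps=100_000):
    """Approximate the JZS default Bayes factor B01 (null over alternative)
    for a one-sample t-test with observed t-value `t` and sample size `n`
    (Rouder et al., 2009). B01 > 1 favours the null; 1/B01 is the factor
    in favour of the alternative."""
    v = n - 1  # degrees of freedom
    # Marginal likelihood under H0 (effect size delta = 0).
    null_lik = (1.0 + t * t / v) ** (-(v + 1) / 2.0)

    # Marginal likelihood under H1: integrate over g, where delta has a
    # Normal(0, g) prior and g an inverse-chi-square(1) prior,
    # pi(g) = (2*pi)^(-1/2) * g^(-3/2) * exp(-1/(2g)).
    # The substitution g = u / (1 - u) maps (0, inf) onto (0, 1).
    alt_lik = 0.0
    for i in range(1, steps):
        u = i / steps
        g = u / (1.0 - u)
        jacobian = 1.0 / (1.0 - u) ** 2  # dg/du
        prior = g ** -1.5 * math.exp(-1.0 / (2.0 * g)) / math.sqrt(2.0 * math.pi)
        lik = (1.0 + n * g) ** -0.5 * (
            1.0 + t * t / ((1.0 + n * g) * v)
        ) ** (-(v + 1) / 2.0)
        alt_lik += lik * prior * jacobian
    alt_lik /= steps

    return null_lik / alt_lik
```

With this sketch one can see why a nominally significant t-value on a moderate sample often translates into only weak evidence against the null: the Bayes factor penalizes the alternative for the prior mass it places on effect sizes the data rule out.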
In a recent meta-analysis, Storm, Tressoldi and Di Risio ( 2010 ) reported a sizable degree of statistical support for ESP for certain classes of experiments. For example, among the 63 studies that used a four-choice procedure, participants responded correctly on a total of 1,326 out of 4,442 trials, a rate of almost 30% (as compared with a 25% baseline). We worry, however, about the frequency of unreported studies. To us, the more relevant context in setting prior odds is the lack of a plausible mechanism for ESP. ESP seems contradicted by well-substantiated theories in physics and biology. Consequently, it is reasonable to have low prior odds on ESP. In our view, while the evidence provided by Bem is certainly worthy of notice, it should not be sufficient to sway an appropriately skeptical reader. We remain unconvinced of the viability of ESP. Notes The theoretical properties of the JZS Bayes factor are as follows. First, the Bayes factor is always finite for finite data. Second, the Bayes factor is consistent; as sample size is increased, B grows to infinity if the null is false and shrinks to zero if it is true. This consistency may be contrasted with p -values, which do not converge in the limit when the null is true (see Fig. 1 ). Finally, for any sample size, the Bayes factor grows to infinity as t grows to infinity. Web applets to compute Bayes factors for paired and grouped t -tests may be found at pcl.missouri.edu/bayesfactor . We assume that the direction of each experiment is distributed as a Bernoulli trial. Under the no-ESP null, the probability parameter p = .5; under the ESP alternative, p is distributed as a uniform between 0 and 1 (see Wagenmakers, 2007 , for details). | Can people truly feel the future? Researchers remain skeptical, according to a new study by Jeffrey Rouder and Richard Morey from the University of Missouri in the US, and the University of Groningen in the Netherlands, respectively.
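The Bernoulli direction model described in the notes above has a simple closed form: under the alternative, the uniform prior on p makes every success count equally likely, so the marginal probability of k positive directions out of n is 1/(n+1), while under the null it is the binomial probability at p = .5. A minimal sketch (the function name is ours):

```python
from math import comb

def direction_bf10(k, n):
    """Bayes factor for H1 (p ~ Uniform(0, 1)) over H0 (p = .5), given
    k positive-direction experiments out of n (cf. Wagenmakers, 2007)."""
    p_h0 = comb(n, k) * 0.5 ** n  # binomial likelihood at p = 1/2
    p_h1 = 1.0 / (n + 1)          # uniform prior marginalizes to 1/(n + 1)
    return p_h1 / p_h0
```

For instance, nine positive directions out of ten experiments give a Bayes factor of 1024 / (11 * 10) ≈ 9.3 in favour of an effect, whereas a 5-out-of-10 split favours the null.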
Their work appears online in the Psychonomic Bulletin & Review, published by Springer. Although extra-sensory perception (ESP) seems impossible given our current scientific knowledge, and certainly runs counter to our everyday experience, a leading psychologist, Daryl Bem of Cornell University, is claiming evidence for ESP. Rouder and Morey look at the strength of the evidence in Dr. Bem's experiments. Their application of a relatively new statistical method, which quantifies how beliefs should change in light of data, suggests that there is only modest evidence behind Dr. Bem's findings (that people can feel, or sense, salient events in the future that could not otherwise be anticipated, and cannot be explained by chance alone), certainly not enough to sway the beliefs of a skeptic. They highlight the limitations of conventional statistical significance testing (p values), and apply a new technique (meta-analytical Bayes factor) to Dr. Bem's data, which overcomes some of these limitations. According to Rouder and Morey, in order to accurately assess the total evidence in Bem's data, it is necessary to combine the evidence across several of his experiments, not look at each one in isolation, which is what researchers have done until now. They find there is some evidence for ESP – people should update their beliefs by a factor of 40. In this framework, beliefs are expressed as odds. For example, a skeptic might hold odds that ESP is a long shot at a million-to-one, while a believer might consider it as likely as not (one-to-one odds). Whatever one's beliefs, Rouder and Morey show that Bem's experiments indicate they should change by a factor of 40 in favor of ESP. The believer should now be 40-to-1 sure of ESP, while the skeptic should be 25,000-to-1 sure against it. Rouder and Morey conclude that the skeptic's odds are appropriate: "We remain unconvinced of the viability of ESP.
There is no plausible mechanism for it, and it seems contradicted by well-substantiated theories in both physics and biology. Against this background, a change in odds of 40 is negligible." | DOI 10.3758/s13423-011-0088-7 |
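The odds arithmetic in the news summary above is Bayes' rule in odds form: posterior odds equal prior odds multiplied by the Bayes factor. A two-line sketch reproducing the article's numbers (names are illustrative):

```python
def update_odds(prior_odds_for, bayes_factor):
    """Posterior odds in favour of a hypothesis = prior odds * Bayes factor."""
    return prior_odds_for * bayes_factor

# A skeptic at a million-to-one against ESP, updated by a factor of 40,
# lands at 25,000-to-1 against; a believer at even odds lands at 40-to-1 for.
skeptic_posterior = update_odds(1 / 1_000_000, 40)   # 1/25,000 in favour
believer_posterior = update_odds(1.0, 40)            # 40.0 in favour
```

The same factor of 40 thus moves both readers, but neither ends up at the other's position: the Bayes factor reports how beliefs should change, not what they should be.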
Medicine | APOE4 triggers early breakdowns in the blood-brain barrier | APOE4 leads to blood–brain barrier dysfunction predicting cognitive decline, Nature (2020). DOI: 10.1038/s41586-020-2247-3 , www.nature.com/articles/s41586-020-2247-3 Journal information: Nature | http://dx.doi.org/10.1038/s41586-020-2247-3 | https://medicalxpress.com/news/2020-04-apoe4-triggers-early-breakdowns-blood-brain.html | Abstract Vascular contributions to dementia and Alzheimer’s disease are increasingly recognized 1 , 2 , 3 , 4 , 5 , 6 . Recent studies have suggested that breakdown of the blood–brain barrier (BBB) is an early biomarker of human cognitive dysfunction 7 , including the early clinical stages of Alzheimer’s disease 5 , 8 , 9 , 10 . The E4 variant of apolipoprotein E ( APOE4 ), the main susceptibility gene for Alzheimer’s disease 11 , 12 , 13 , 14 , leads to accelerated breakdown of the BBB and degeneration of brain capillary pericytes 15 , 16 , 17 , 18 , 19 , which maintain BBB integrity 20 , 21 , 22 . It is unclear, however, whether the cerebrovascular effects of APOE4 contribute to cognitive impairment. Here we show that individuals bearing APOE4 (with the ε3/ε4 or ε4/ε4 alleles) are distinguished from those without APOE4 (ε3/ε3) by breakdown of the BBB in the hippocampus and medial temporal lobe. This finding is apparent in cognitively unimpaired APOE4 carriers and more severe in those with cognitive impairment, but is not related to amyloid-β or tau pathology measured in cerebrospinal fluid or by positron emission tomography 23 . High baseline levels of the BBB pericyte injury biomarker soluble PDGFRβ 7 , 8 in the cerebrospinal fluid predicted future cognitive decline in APOE4 carriers but not in non-carriers, even after controlling for amyloid-β and tau status, and were correlated with increased activity of the BBB-degrading cyclophilin A-matrix metalloproteinase-9 pathway 19 in cerebrospinal fluid. 
Our findings suggest that breakdown of the BBB contributes to APOE4 -associated cognitive decline independently of Alzheimer’s disease pathology, and might be a therapeutic target in APOE4 carriers. Main Analysis of BBB permeability by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) 7 , 8 (Fig. 1a ; see Methods ) in 245 participants (Extended Data Table 1 ) indicated that there was increased BBB breakdown in the hippocampus (HC) and parahippocampal gyrus (PHG) in cognitively normal APOE4 (ε3/ε4 and ε4/ε4) carriers, compared to cognitively normal APOE3 homozygotes (ε3/ε3), both with clinical dementia rating (CDR) scores of 0. The BBB breakdown in the HC and PHG in APOE4 carriers increased further with cognitive impairment at a CDR score of 0.5 (Fig. 1b–d ). This increase was independent of differences in amyloid-β (Aβ) and phosphorylated tau (pTau) in the cerebrospinal fluid (CSF) (Fig. 1e–h ); that is, whether individuals were Aβ+ or Aβ− and pTau+ or pTau− using the accepted cut-off values 7 , 24 , 25 (see Methods ), where Aβ+ and pTau+ status indicates classical Alzheimer’s disease (AD)-associated pathways 23 . By contrast, APOE3 carriers with cognitive impairment developed less pronounced BBB changes in the HC and PHG (Fig. 1b–d ). We found no significant BBB differences in other grey or white matter brain regions between APOE4 carriers and APOE3 homozygotes, except for increased BBB permeability in the caudate nucleus and minor leaks in the frontal cortex and corpus callosum in cognitively normal APOE4 carriers (Extended Data Fig. 1 ). These findings held when cognitive dysfunction was evaluated by neuropsychological performance (see Methods ) (Extended Data Figs. 2 , 3 ). Fig. 1: BBB breakdown in the HC and PHG in APOE4 carriers increases with cognitive impairment, independently of CSF Aβ and tau status. 
a , b , Maps of BBB permeability transfer constant ( K trans ) generated by DCE-MRI ( a ) in the HC of APOE3 homozygotes ( APOE3 ) and APOE4 carriers ( APOE4 ) with CDR scores of 0 or 0.5 ( b ). FA, flip angle; T1w, T1 weighted. c , d , BBB K trans in the HC ( c ) and PHG ( d ) in individuals with CDR 0 bearing APOE3 (black, n = 128) or APOE4 (red, n = 68) and with CDR 0.5 bearing APOE3 (black, n = 14) or APOE4 (red, n = 25). e , f , K trans in the HC ( e ) and PHG ( f ) in APOE4 carriers with CDR 0 who were Aβ 1–42 negative (Aβ−; n = 37) or positive (Aβ+; n = 16), or with CDR 0.5 who were Aβ− ( n = 7) or Aβ+ ( n = 10). g , h , K trans in the HC ( g ) and PHG ( h ) in APOE4 carriers with CDR 0 who were pTau− ( n = 42) or pTau+ ( n = 10), and with CDR 0.5 who were pTau− ( n = 13) or pTau+ ( n = 5). i , HC (blue) and PHG (orange) overlaid on a 3D template. j , k , Volumes of the HC ( j ) and PHG ( k ) in individuals with CDR 0 bearing APOE3 ( n = 124) or APOE4 ( n = 75) and with CDR 0.5 bearing APOE3 ( n = 13) or APOE4 ( n = 20). l , m , K trans (estimated marginal means ± s.e.m. from ANCOVA models corrected for age, sex, education, CSF Aβ 1–42 and pTau status, and HC and PHG volumes) in the HC ( l ) and PHG ( m ) in individuals with CDR 0 bearing APOE3 (black, HC n = 125; PHG n = 128) or APOE4 (red, HC and PHG n = 68) and with CDR 0.5 bearing APOE3 (black, HC n = 12; PHG n = 14) or APOE4 (red, HC n = 20; PHG n = 25). c – h , j , k , Continuous line, median; dotted line, interquartile range (IQR). Significance by ANCOVA for main effects and post hoc comparisons controlling for age, sex, and education. All ANCOVA omnibus tests remained significant at false discovery rate (FDR) threshold of 0.05. Source Data Full size image The volumes of the HC and PHG decreased with cognitive impairment in APOE4 but not APOE3 carriers (Fig. 1i–k ). 
The breakdown of the BBB in the HC and PHG in APOE4 carriers, but not APOE3 homozygotes, remained a highly significant predictor of cognitive impairment after we statistically controlled for age, sex, education, CSF Aβ and pTau status, and HC and PHG volumes, as shown by the estimated marginal means from the ANCOVA models (Fig. 1l, m ), and confirmed by logistic regression models (Supplementary Table 1 ). The BBB dysfunction (Fig. 1c, d, l, m ) preceded brain atrophy (Fig. 1j, k ) and was independent of systemic vascular risk factors (Extended Data Fig. 4 ). Because both Aβ and tau can lead to blood vessel abnormalities and BBB breakdown 3 , 26 , 27 , we studied whether BBB disruption in APOE4 carriers was downstream from amyloid and tau accumulation in a subset of 74 and 96 participants, respectively (Extended Data Tables 2a, b ). Voxel-based analysis by positron emission tomography (PET) indicated a substantially higher accumulation of amyloid in the orbital frontal cortex (OFC) in cognitively normal APOE4 carriers compared to APOE3 homozygotes, as reported 28 , but did not detect accumulation of tau tracer in either APOE4 or APOE3 carriers (Extended Data Fig. 5a–d ). To determine how BBB permeability relates to accumulation of amyloid and tau, we selected 5-mm-thick coronal slices in regions of interest that included the HC and PHG (where BBB disruption is seen first in APOE4 carriers compared to APOE3 homozygotes (Fig. 1b,d,e )), the OFC (where amyloid accumulation develops initially in APOE4 carriers), and the inferior temporal gyrus (ITG; a region that is affected early by tau pathology 29 ) (Extended Data Fig. 5b, d, e ). Brain uptake of amyloid and tau tracers (after correction for the choroid plexus off-target binding for tau tracer; see Methods and Extended Data Fig. 5f, g ) indicated no difference between APOE4 and APOE3 carriers in the HC, although uptake of both tracers was modestly increased compared to the background values in cerebellum (Fig. 
2a, b ). The BBB was disrupted in the HC in APOE4 carriers compared to APOE3 homozygotes (Fig. 2c ), consistent with our findings in the larger cohort (Fig. 1b, c ). There was no difference in amyloid and tau accumulation in the PHG between APOE4 carriers and APOE3 homozygotes, despite BBB disruption in APOE4 carriers (Fig. 2d–f ). Amyloid accumulation in the OFC was higher in cognitively normal APOE4 carriers than in APOE3 carriers (Fig. 2g, h ), but there was no difference in BBB integrity (Fig. 2g, i ). In the ITG, there were no differences in tau accumulation or BBB integrity between APOE4 and APOE3 carriers (Fig. 2j–l ). Together, these data suggest that BBB disruption in the HC and PHG in APOE4 carriers is independent of AD pathology, and that BBB breakdown in APOE4 carriers starts in the medial temporal lobe, which is responsible for memory encoding and other cognitive functions. Fig. 2: Blood-brain barrier breakdown in APOE4 carriers is independent of amyloid and tau accumulation in the brain. All studies were performed in individuals with CDR score 0. a , Representative superimposed left HC amyloid PET (top), tau PET (middle), and BBB K trans maps (bottom) from APOE3 (left) and APOE4 (right) carriers. b , c , Amyloid and tau tracer uptake ( b ) and BBB K trans ( c ) in HC of APOE3 ( n = 45, 60, and 65) and APOE4 ( n = 29, 37, and 31) carriers. d , Representative superimposed left PHG amyloid PET (top), tau PET (middle), and BBB K trans maps (bottom) from APOE3 (left) and APOE4 (right) carriers. e , f , Amyloid and tau tracer uptake ( e ) and BBB K trans ( f ) in PHG of APOE3 ( n = 45, 60, and 65) and APOE4 ( n = 29, 37, and 31) carriers. g , Representative superimposed left medial OFC amyloid PET (top) and BBB K trans maps (bottom) from APOE3 (left) and APOE4 (right) carriers. h , i , Amyloid tracer uptake ( h ) and BBB K trans ( i ) in OFC of APOE3 ( n = 45 and 44) and APOE4 ( n = 29 and 23) carriers. 
j , Representative superimposed left ITG tau PET (top) and BBB K trans maps (bottom) from APOE3 (left) and APOE4 (right) carriers. k , l , Tau tracer uptake ( k ) and BBB K trans ( l ) in ITG of APOE3 ( n = 60 and 59) and APOE4 ( n = 37 and 28) carriers. b , c , e , f , h , i , k , l , Continuous lines, median; dotted lines, IQR. BBB K trans was determined in all participants (see Extended Data Tables 2a, b ) who received either both amyloid and tau tracers ( n = 58), only amyloid tracer ( n = 9) or only tau tracer ( n = 29). Significance by ANCOVA for group comparisons controlling for age, sex, and education, and two-tailed t -tests for comparison of PET values to standardized uptake value ratios (SUVR) = 1. Source Data Full size image In humans with AD and animal models, elevated levels of soluble platelet-derived growth factor receptor-β (sPDGFRβ) in the CSF indicate that pericyte injury is linked to BBB breakdown 7 , 8 , 30 and cognitive dysfunction 7 , 30 . Using a median split for visual display of the CSF sPDGFRβ baseline levels from 350 participants (see Methods ), we stratified all participants into two groups, with low CSF sPDGFRβ levels (0–600 ng ml −1 ) and high sPDGFRβ levels (600–2,000 ng ml −1 ) (Fig. 3a ). In 146 APOE4 carriers and APOE3 homozygotes who were evaluated by cognitive exams at 2-year intervals up to 4.5 years from baseline, participants with higher baseline CSF sPDGFRβ exhibited accelerated cognitive decline on a global mental status exam and global cognitive composite z -scores, which remained significant after controlling for CSF Aβ and tau status (Fig. 3b, c ; Supplementary Table 2 ). When stratified by APOE status, higher baseline CSF sPDGFRβ levels in APOE4 carriers predicted cognitive decline after controlling for CSF Aβ and pTau status (Fig. 3d, e ; Supplementary Table 3 ), but did not predict decline in APOE3 homozygotes (Fig. 3f, g ; Supplementary Table 4 ). Fig.
3: Elevated baseline CSF levels of sPDGFRβ predict cognitive decline in APOE4 carriers. a , Histogram frequency distribution of CSF sPDGFRβ using median split to divide participants into two groups: high (blue, above median 600–2,000 ng ml −1 ) and low (grey, below median 0–600 ng ml −1 ) baseline CSF sPDGFRβ. All longitudinal analyses used baseline CSF sPDGFRβ as a continuous predictor of future cognitive decline. b , c , Linear mixed model analysis of all studied participants ( n = 146) followed at 2-year intervals for up to 4.5 years after baseline lumbar puncture (LP). Higher baseline CSF sPDGFRβ (blue) predicts greater decline in demographically corrected mental status exam scores over time ( P = 0.01) (this remains significant after controlling (contr.) for CSF Aβ ( P = 0.002) and pTau ( P = 0.002) status; b ), and in global cognitive composite scores ( P = 0.01) (this remains significant after controlling for CSF Aβ ( P = 0.017) and pTau ( P = 0.01) status; c ). d , e , Higher CSF sPDGFRβ (blue) in APOE4 carriers ( n = 58) predicts future decline in mental status exam scores ( P = 0.005) after controlling for CSF Aβ ( P = 0.004) and pTau ( P = 0.003) status ( d ), and in global cognitive composite scores ( P = 0.02) after controlling for CSF Aβ ( P = 0.02) and pTau ( P = 0.01) status ( e ). f , g , Baseline CSF sPDGFRβ does not predict decline ( n = 88) in either mental status ( f ) or global composite ( g ) scores in APOE3 homozygotes regardless of CSF Aβ or pTau status. b – g , Separate lines indicate median split of baseline CSF sPDGFRβ (grey, below median; blue, above median). ∆ slopes provided for median split of baseline CSF sPDGFRβ groups. t0 = −1 to 0.5 years post-LP, t1 = 0.5 to 2.5 years post-LP, t2 = 2.5 to 4.5 years post-LP. Error bars show s.e. of the estimate. Linear mixed model analysis with no multiple comparison. See Supplementary Tables 2 – 4 for detailed statistics. 
Source Data Full size image The increase in CSF sPDGFRβ with cognitive impairment was also found on cross-sectional CDR analysis in APOE4 carriers but not APOE3 homozygotes (Fig. 4a, b ; Extended Data Table 3 ; Supplementary Table 5 ). Increased levels of sPDGFRβ in the CSF of APOE4 carriers correlated with increases in BBB permeability in the HC and PHG (Fig. 4c, d ), and elevated levels of molecular biomarkers of BBB breakdown including albumin CSF/plasma quotient, and CSF fibrinogen and plasminogen (Fig. 4e–g ). Fig. 4: Elevated CSF sPDGFRβ, cyclophilin A and matrix metalloproteinase-9 in APOE4 carriers. a , CSF sPDGFRβ in individuals with CDR 0 bearing APOE3 (black, n = 152) or APOE4 (red, n = 95) and with CDR 0.5 bearing APOE3 (black, n = 42) or APOE4 (red, n = 45). b , CSF sPDGFRβ (estimated marginal means ± s.e.m. from ANCOVA models corrected for age, sex, education, CSF Aβ 1–42 and pTau status) in individuals with CDR 0 bearing APOE3 (black, n = 152) or APOE4 (red, n = 95) and with CDR 0.5 bearing APOE3 (black, n = 42) or APOE4 (red, n = 45). c , d , Correlation between CSF sPDGFRβ and BBB K trans in the HC ( n = 65; c ) and PHG ( n = 65; d ) in APOE4 carriers. e – g , Correlations between CSF sPDGFRβ and albumin quotient ( Q alb , n = 92; e ), fibrinogen ( n = 93; f ), and plasminogen ( n = 57; g ) in APOE4 carriers. h , CSF CypA in individuals with CDR 0 bearing APOE3 (black, n = 75) or APOE4 (red, n = 62) and with CDR 0.5 bearing APOE3 (black, n = 33) or APOE4 (red, n = 45). i , CSF CypA (estimated marginal means ± s.e.m. as in b ) in individuals with CDR 0 bearing APOE3 (black, n = 75) or APOE4 (red, n = 62) and with CDR 0.5 bearing APOE3 (black, n = 33) or APOE4 (red, n = 45). j , Correlation between CSF CypA and sPDGFRβ in APOE4 carriers ( n = 96). k , CSF matrix metalloproteinase-9 (MMP9) in individuals with CDR 0 bearing APOE3 (black, n = 72) or APOE4 (red, n = 68) and with CDR 0.5 bearing APOE3 (black, n = 33) or APOE4 (red, n = 45). 
l , Correlation between CSF MMP9 and CypA in APOE4 carriers ( n = 104). m , n , CypA ( m ; see Extended Data Fig. 8 ) and secreted MMP9 in the culture medium ( n ) in human iPSC-derived APOE3 (ε3/ε3) and APOE4 (ε4/ε4) pericytes. Mean + s.e.m. from four independent culture replicates. a , h , k , Continuous lines, median; dotted lines, IQR. Significance by ANCOVA for main effects and post hoc comparisons controlling for age, sex, and education. c–g , j , l , Two-tailed simple linear regression; Pearson correlation coefficient ( r ). m , n , Unpaired two-tailed Student’s t -test. Source Data Full size image Next, we focused on the proinflammatory cyclophilin A–matrix metalloproteinase-9 (CypA–MMP9) pathway. When activated by brain capillary pericytes in APOE4 (but not APOE3 ) knock-in mice, this pathway leads to MMP9-mediated breakdown of the BBB, which in turn induces neuronal stress related to leaked blood-derived neurotoxic proteins followed by neuronal dysfunction and loss of synaptic proteins 19 . Brain tissue analysis has also shown higher activation of the CypA–MMP9 pathway in degenerating brain capillary pericytes in APOE4 carriers than in APOE3 homozygotes 16 . In our cohort, APOE4 carriers, but not APOE3 homozygotes, developed an increase in CypA CSF levels with cognitive impairment (Fig. 4h, i ), which correlated with elevated CSF sPDGFRβ (Fig. 4j ). APOE4 carriers, but not APOE3 homozygotes, also developed elevated MMP9 in the CSF with cognitive impairment (Fig. 4k ), which correlated with elevated CSF CypA levels (Fig. 4l ), suggesting that activation of the CypA–MMP9 pathway in APOE4 carriers correlates with pericyte injury, as shown in animal models 19 . 
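The biomarker associations reported above (for example, CSF sPDGFRβ against BBB K trans, CypA, or MMP9) are two-tailed simple linear regressions summarized by the Pearson correlation coefficient r. As a reminder of what that statistic computes, a dependency-free sketch (function name ours):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences:
    the covariance of x and y scaled by the product of their standard
    deviations, ranging from -1 (perfect inverse) to +1 (perfect direct)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)
```

A positive r here means that participants with more pericyte injury (higher sPDGFRβ) also tend to show more BBB leakage or pathway activation, which is the pattern the figure reports for APOE4 carriers.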
There were no differences in glia or in inflammatory or endothelial cell injury CSF biomarkers between cognitively impaired and unimpaired APOE4 and APOE3 participants, but there was an increase in neuron-specific enolase (NSE) with cognitive impairment in APOE4 carriers, confirming neuronal stress (Extended Data Fig. 6 ) and consistent with atrophy of the HC and PHG (Fig. 1j, k ). Studies in APOE knock-in mice and mouse pericytes have shown that apoE3, but not apoE4, transcriptionally inhibits CypA via low-density lipoprotein receptor-related protein 1, which in turn transcriptionally inhibits MMP9 19 . Consistent with the mouse data, pericytes derived from APOE4 (ε4/ε4) human induced pluripotent stem cells (iPSCs) had substantially higher levels of CypA and secreted MMP9 than those derived from APOE3 (ε3/ε3) cells (Fig. 4m, n ), suggesting that apoE may control the CypA–MMP9 pathway in human pericytes in an isoform-specific manner, as in mouse models 19 . In APOE4 carriers, CSF Aβ 1–42 was reduced and CSF pTau levels were increased with cognitive impairment, compared to APOE3 homozygotes (Extended Data Fig. 7 ), as reported 23 ; this difference remained significant after controlling for CSF sPDGFRβ levels (Extended Data Fig. 7 ). Together, these findings support the idea that the Aβ and tau pathways operate independently of the BBB breakdown pathway during the early stages of cognitive impairment in APOE4 carriers. In summary, we have shown that BBB breakdown contributes to cognitive decline in APOE4 carriers independent of AD pathology; that high baseline CSF levels of sPDGFRβ can predict future cognitive decline in APOE4 carriers; and that APOE4 , but not APOE3 , activates the CypA–MMP9 pathway in the CSF, which may lead to accelerated BBB breakdown and thereby cause neuronal and synaptic dysfunction 19 . 
As blockade of the CypA–MMP9 pathway in APOE4 knock-in mice restores BBB integrity and subsequently normalizes neuronal and synaptic function 19 , it is possible that CypA inhibitors (some of which have been used in humans for non-neurological applications 31 ) might also suppress the CypA pathway in cerebral blood vessels in APOE4 carriers. This should improve cerebrovascular integrity, and reduce the associated neuronal and synaptic deficits, thereby slowing cognitive impairment. Methods Study participants Participants were recruited from three sites: the University of Southern California (USC), Los Angeles, CA; Washington University (WashU), St. Louis, MO; and Banner Alzheimer’s Institute Phoenix, AZ and Mayo Clinic Arizona, Scottsdale, AZ as a single site. At the USC site, participants were recruited through the USC Alzheimer’s Disease Research Center (ADRC): combined USC and the Huntington Medical Research Institutes (HMRI), Pasadena, CA. At the WashU site, participants were recruited through the Washington University Knight ADRC. At Banner Alzheimer’s Institute and Mayo Clinic Arizona site, participants were recruited through the Arizona Apolipoprotein E ( APOE ) cohort. The study and procedures were approved by the Institutional Review Boards of USC ADRC, Washington University Knight ADRC, and Banner Good Samaritan Medical Center and Mayo Clinic Scottsdale, indicating compliance with all ethical regulations. Informed consent was obtained from all participants before study enrolment. All participants ( n = 435) underwent neurological and neuropsychological evaluations performed using the Uniform Data Set (UDS) 32 and additional neuropsychological tests, as described below, and received a venipuncture for collection of blood for biomarker studies. An LP was performed in 350 participants (81%) for collection of CSF. DCE-MRI for assessment of BBB permeability was performed in 245 participants (56%) who had no contraindications for contrast injection. 
Both LP and DCE-MRI were conducted in 172 participants. Among the 245 DCE-MRI participants, 74 and 96 were additionally studied for brain uptake of amyloid and tau PET radiotracers, respectively, as described below. No statistical methods were used to predetermine sample size. All biomarker assays, MRI, and PET scans were analysed by investigators blinded to the clinical status of the participants. Participant inclusion and exclusion criteria Included participants (≥45 years of age) were confirmed by clinical and cognitive assessments to be either cognitively normal or at the earliest symptomatic stage of AD. A current or prior history of any neurological or psychiatric conditions that might confound cognitive assessment, including organ failure, brain tumours, epilepsy, hydrocephalus, schizophrenia, and major depression, was exclusionary. Participants were stratified by APOE genotype as APOE4 carriers (ε3/ε4 and ε4/ε4) or APOE4 non-carriers (ε3/ε3), also defined as APOE3 homozygotes, who were cognitively normal or had mild cognitive dysfunction, as determined by CDR scores 33 and the presence of cognitive impairment in one or more cognitive domains based on comprehensive neuropsychological evaluation, including performance on ten neuropsychological tests assessing memory, attention/executive function, language and global cognition. For all analyses individuals with ε3/ε4 and ε4/ε4 alleles were pooled together in a single APOE4 group, as we did not find in the present cohort (82–86% ε3/ε4 and 14–18% ε4/ε4 participants, depending on the outcome measure) a significant difference between individuals with two versus one ε4 allele for the studied parameters, including the BBB K trans and sPDGFRβ CSF values (see statistical section below). Individuals were additionally stratified by Aβ and pTau CSF analysis as either Aβ 1–42 + (<190 pg/ml) or Aβ 1–42 − (>190 pg/ml), and pTau+ (>78 pg/ml) or pTau− (<78 pg/ml), using accepted cutoff values 7 , 24 , 25 . 
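The biomarker stratification described above reduces to two threshold checks using the cut-off values stated in the text (Aβ 1–42 positive below 190 pg/ml; pTau positive above 78 pg/ml). A minimal sketch; the function name and return format are illustrative only:

```python
def csf_biomarker_status(abeta42_pg_ml, ptau_pg_ml):
    """Classify CSF biomarker status using the cut-offs in the Methods:
    Abeta1-42 positive if < 190 pg/ml, pTau positive if > 78 pg/ml.
    Positive status on both indicates classical AD-associated pathways."""
    return {
        "abeta_positive": abeta42_pg_ml < 190,
        "ptau_positive": ptau_pg_ml > 78,
    }
```

These two flags define the Aβ+/Aβ− and pTau+/pTau− groups used throughout the analyses above, for example when testing whether BBB breakdown in APOE4 carriers is independent of amyloid and tau status.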
Participants were excluded if they were diagnosed with vascular cognitive impairment or vascular dementia. Clinical diagnoses were made by neurologists and criteria included whether the patient had a known vascular brain injury, and whether the clinician judged that the vascular brain injury played a role in their cognitive impairment, and/or pattern and course of symptoms. In addition to clinical diagnosis, the presence of vascular lesions was confirmed by moderate-to-severe white matter changes and lacunar infarcts by fluid-attenuated inversion recovery (FLAIR) MRI and/or subcortical microbleeds by T2*-weighted MRI 1 . Participants were also excluded if they were diagnosed with Parkinson’s disease, Lewy body dementia or frontotemporal dementia. History of a single stroke or transient ischaemic attack was not an exclusion unless it was related to symptomatic onset of cognitive impairment. Participants also did not have current contraindications to MRI and were not currently using medications that might better account for any observed cognitive impairment. Clinical exam Participants underwent clinical assessments according to UDS procedures harmonized across all study sites, including clinical interview and review of any neurocognitive symptoms and health history with the participant and a knowledgeable informant. A general physical and neurologic exam was conducted. The CDR assessment was conducted in accordance with published standardization procedures, including standardized interview and assessment with the participant and a knowledgeable informant. In accordance with current diagnostic models for cognitive and biological research criteria for cognitive impairment and AD 23 , participants were separately stratified by cognitive impairment and AD biomarker abnormality using established cutoffs for CSF Aβ 1–42 and pTau 7 , 24 , 25 . 
Cognitive impairment was determined on the basis of global CDR score and neuropsychological impairment in one or more cognitive domains. Vascular risk factors The vascular risk factor (VRF) burden in each participant was evaluated through physical examination, blood tests, and clinical interviews with the participant and informant; history of cardiovascular disease (heart failure, angina, stent placement, coronary artery bypass graft, intermittent claudication), hypertension, hyperlipidaemia, type 2 diabetes, atrial fibrillation, and transient ischaemic attack or minor stroke were investigated. The total VRF burden was defined by the sum of these risk factors, as previously described 7 . We assigned an elevated VRF burden to individuals with two or more VRFs. This threshold was adopted because previous studies showed that the presence of two or more VRFs is associated with occult cerebrovascular disease at autopsy in older adults with AD, whereas a single VRF is common and not necessarily associated with increased cerebrovascular disease in this population 34 , 35 . Cognitive domain impairment evaluation Impairment in one or more cognitive domain was judged by performance on comprehensive neuropsychological testing, using previously described neuropsychological criteria for cognitive impairment 7 . All participants underwent neuropsychological testing that included the UDS battery (version 2.0 or 3.0) plus supplementary neuropsychological tests at each site. Raw test scores were converted to age-, sex- and education-corrected z scores using the National Alzheimer’s Coordinating Center (NACC) regression-based norming procedures ( ). Normalized z scores from ten neuropsychological tests were evaluated in determining domain impairment, including three tests per cognitive domain (memory, attention/executive function and language) and one test of global cognition. 
Impairment in one or more cognitive domains was determined using previously described neuropsychological criteria, and was defined as a score >1 s.d. below norm-referenced values on two or more tests within a single cognitive domain or three or more tests across cognitive domains 36 . Prior studies have established improved sensitivity and specificity of these criteria relative to those employing a single test score, as well as adaptability of this diagnostic approach to various neuropsychological batteries 36 , 37 . Participants were excluded from cognitive domain analyses if they had less than 90% complete neuropsychological test data (53, 24, and 82 participants were excluded for MRI, PET, and CSF analyses, respectively). Included participants were classified as 0, 1, or 2+ based on the number of cognitive domains for which they had two or more impaired test scores. Test battery specifics for each UDS version and recruitment site are as follows. i) Global cognition: MMSE for UDS version 2 38 and MoCA for UDS version 3 39 . ii) Memory: The Logical Memory Story A Immediate and Delayed free recall tests (modified from the original Wechsler Memory Scales, Third Edition (WMS-III)) for UDS version 2 and the Craft Stories Immediate and Delayed free recall for UDS version 3. For supplementary tests the USC participants underwent the California Verbal Learning Test, Second Edition (CVLT-II) and the Selective Reminding Test (SRT) sum of free recall trials. Norm-referenced scores for these supplementary test scores were derived from a nationally representative sample published with the test manual (CVLT-II) 40 and in studies of normally ageing adults (SRT). iii) Attention and executive function: The Trails A, Trails B, and Wechsler Adult Intelligence Scale—Revised (WAIS-R) Digit Span Backwards tests for UDS version 2 and the Trails A, Trails B and Digit Span Backwards tests for UDS version 3. 
iv) Language: The Animal Fluency, Vegetable Fluency, and Boston Naming Tests for UDS version 2 and Animal Fluency, Vegetable Fluency, and Multilingual Naming Test (MINT) for UDS version 3. Magnetic resonance imaging and analysis The MRI data sets were obtained at the Mark and Mary Stevens Neuroimaging and Informatics Institute of USC and Washington University in St. Louis. We developed a standardized high-resolution 3T MRI brain scan protocol. At the USC site, a Siemens 3T Prisma scanner was used with a product 32-channel head receive coil and body transmit coil. At the WashU site, a Siemens 3T mMR with 20-channel head coil and Siemens 3T Vida with 64-channel head coil were used. Anatomical coronal spin echo T2-weighted scans were first obtained through the hippocampi (TR/TE 8020/50 ms, NEX = 1, slice thickness 2 mm with 2 mm gap between slices, FOV = 175 × 175 mm, matrix size = 448 × 448). Baseline coronal T1-weighted maps were then acquired using a T1-weighted 3D volumetric interpolated breath-hold (VIBE) sequence and variable flip angle method using flip angles of 2°, 5°, 10°, 12°, and 15°. Coronal DCE-MRI covering the hippocampi and temporal lobes was acquired using a T1-weighted 3D VIBE sequence (FA = 15°, TR/TE = 5.14/2.18 ms, NEX = 1, slice thickness 5 mm with no gap, FOV 175 × 175 mm, matrix size 320 × 320, voxel size 0.550 × 0.550 × 5 mm 3 ). This sequence was repeated for a total of 16 min with an approximate time resolution of 15.4 s. Gadolinium-based contrast agent (GBCA), gadoterate meglumine (Dotarem, Guerbet, France) (0.05 mmol/kg), was administered intravenously into the antecubital vein using a power injector, at a rate of 3 ml/s followed by a 25-ml saline flush, 30 s into the DCE scan. The standardization and optimization of the MRI protocol required several tests performed on a phantom. Specifically, scanner characterization and calibration sequences including B 0 , T1, and variable flip-angle mapping were implemented, optimized, and applied.
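The variable-flip-angle T1 mapping above relies on the spoiled gradient-echo (SPGR) signal equation, which becomes linear when plotted as S/sin α versus S/tan α (the "linear fitting" referenced in these Methods). A minimal single-voxel sketch, with an illustrative function name and noiseless input (the in-house pipeline itself is not public):

```python
import math

def vfa_t1(signals, flip_deg, tr_ms):
    """Estimate T1 and M0 from variable-flip-angle SPGR data via the standard
    linearization  S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),  E1 = exp(-TR/T1).
    A least-squares line through (S/tan a, S/sin a) gives slope E1."""
    a = [math.radians(f) for f in flip_deg]
    y = [s / math.sin(t) for s, t in zip(signals, a)]
    x = [s / math.tan(t) for s, t in zip(signals, a)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    t1 = -tr_ms / math.log(slope)        # invert E1 = exp(-TR/T1)
    m0 = intercept / (1.0 - slope)
    return t1, m0
```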
After achieving good quality-control and reproducibility results, we standardized and employed the same pre-contrast and dynamic T1-weighted protocols at both USC and Washington University sites. Of note, all the other MR sequences were also identical on both scanners. In order to minimize inter-site variability, the entire MRI protocol, including the anatomical and DCE pulse sequences, was mirrored exactly from one site to the other, and the same contrast agent gadoterate meglumine (Dotarem) was injected into participants at the same concentration (0.05 mmol/kg). Finally, exactly the same pre- and post-processing analysis pipeline was applied for both sites, including T1 multi-FA mapping using linear fitting and Patlak-based DCE modelling using the arterial input function determined in each individual from the internal carotid artery. Together, these factors greatly limited inter-site variability. The consistency of the results from the two sites was additionally confirmed by our previous publication 7 . In brief, we performed the analysis of the combined DCE data sets from both USC and WashU sites, and additionally site-specific analysis for each of the two sites separately, which showed no statistically significant differences across sites. Recently, we invited a subset of 52 participants for an additional T1-weighted scan without contrast (using the same scanner and same MR pulse sequences) after their first DCE-MRI 41 and measured both B 0 and T1 values at a two-year interval. This study showed that the results were unchanged and consistent across the scans, supporting minimal intra-site variability. Quantification of BBB permeability See Supplementary Information , Supplementary Methods . Quantification of regional brain volumes HC and PHG morphometry were performed using the FreeSurfer (v5.3.0) software package 42 ( ), as previously described 7 .
The HC and PHG were segmented using FreeSurfer Desikan-Killiany and subcortical atlases 43 , 44 . Then, regional volumes (mm 3 ) were derived accordingly. The technical details of this procedure have been described previously 45 , 46 . Data processing and visualization were performed using the Laboratory of Neuro Imaging (LONI) pipeline system ( ) and Quantitative Imaging Toolkit 47 , 48 , 49 . Positron emission tomography and analysis PET image acquisition was performed at the Molecular Imaging Center of USC or Mallinckrodt Institute of Radiology of WashU. Amyloid and tau PET studies were conducted using 18 F-florbetaben (FBB) or 18 F-florbetapir (FBP) and 18 F-flortaucipir (AV1451), respectively. FBB (Life Molecular Imaging, Inc.) was obtained from SOPHIE, Inc. for the USC site, while FBP was provided by Eli Lilly and Company for the WashU site. For all amyloid PET analysis the FBP and FBB data sets were combined. AV1451 was provided by Avid Radiopharmaceuticals, Inc. for the USC site and was produced by the Mallinckrodt Institute of Radiology for the WashU site. A Siemens Biograph 64 PET scanner was used at the USC site. At the WashU site, FBP scans were acquired on a Siemens mMR and AV1451 scans were acquired on a Siemens Biograph mCT. The mCT session was used for attenuation correction of the mMR scans. Participants were injected with 300 MBq (±10%) of FBB or 370 MBq (±10%) of FBP. FBB and FBP images were acquired from 90 to 110 min and 50 to 70 min, respectively, after injection in accordance with the manufacturers’ recommendations. Individuals who participated in amyloid and tau PET studies also had their DCE-MRI scan within 2.2 ± 0.9 and 2.1 ± 0.6 months of their amyloid and tau PET scans, respectively. In brief, a computed tomography (CT) scan was performed first for attenuation correction before each PET imaging session. The downloaded PET images from FBB, FBP, and AV1451 tracers were processed by using standard uptake value maps (SUV in g/ml). 
All PET images were co-registered to structural high-resolution 3D T1-weighted Magnetization Prepared Rapid Acquisition Gradient Echo (MP-RAGE) MRI images using FSL-FLIRT (FMRIB’s Linear Image Registration Tool) 50 . The FreeSurfer-segmented cerebellum was used as a reference tissue to normalize for both amyloid and tau 51 . After co-registration of PET images into an anatomical reference image (MNI152 standard-space), Statistical Parametric Mapping (SPM12) was used for group comparison on a voxel-by-voxel basis. Age at time of PET imaging session, sex, and education were introduced in a multiple regression model as covariates. Level of significance was set to P < 0.001 for amyloid and P < 0.005 for tau (uncorrected P values), with a minimum cluster extent (Ke) of 50 voxels. Additionally, given the known AV1451 off-target ligand binding in the choroid plexus (CP) 52 , 53 , which can contribute to the HC regional AV1451 signal owing to the close proximity of the CP to the HC and the relatively low spatial resolution of PET scans (that is, ~6-mm voxel size), we took advantage of the CP visualization provided by DCE-MRI, also performed in these individuals, to subtract the CP contribution from the HC AV1451 signal proper. The following steps were used to correct for off-target ligand binding to the CP (see Extended Data Fig. 5 ). Step 1: HC masks were generated from the 3D T1-weighted MP-RAGE. Step 2: CP masks were generated from the T1-weighted VIBE post-GBCA (FA = 15°) image. Step 3: HC and CP masks were overlaid. Step 4: The CP mask generated from the DCE data was dilated by the 6-mm PET voxel size, and its overlap with the HC mask was subtracted to obtain a CP-corrected HC PET signal. Representative images of HC AV1451 PET signal before and after applying the CP correction are shown in Extended Data Fig. 5 .
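Steps 1–4 above amount to a mask subtraction followed by reference-region normalization. A minimal sketch on flattened arrays (real data are 3D volumes; the ~6-mm dilation of the CP mask is assumed to have been applied already, and the helper name is illustrative):

```python
def cp_corrected_hc_suvr(suv, hc_mask, cp_mask, cereb_mask):
    """Mean SUV over hippocampal voxels NOT covered by the (dilated) choroid
    plexus mask, divided by the mean SUV over cerebellar reference voxels.
    All inputs are equal-length sequences; masks are booleans."""
    kept = [s for s, h, c in zip(suv, hc_mask, cp_mask) if h and not c]
    cereb = [s for s, m in zip(suv, cereb_mask) if m]
    return (sum(kept) / len(kept)) / (sum(cereb) / len(cereb))
```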
We next quantified regional changes in amyloid and tau SUV ratio (SUVR) in relation to regional DCE-MRI K trans values in all participants stratified by APOE genotype. The regional SUVR values were taken from the FreeSurfer-segmented HC, PHG, OFC 28 , and ITG 29 . The BBB K trans constant (DCE-MRI) was determined in all participants (Extended Data Tables 2a, b ). This includes those who were analysed for both amyloid and tau ( n = 58), only amyloid ( n = 9) or only tau ( n = 29). Lumbar puncture and venipuncture Participants underwent a lumbar puncture and venipuncture in the morning after an overnight fast. The CSF was collected in polypropylene tubes, processed (centrifuged at 2,000 g , 4 °C; 10 min at the USC site, 5 min at the WashU site), aliquoted into polypropylene tubes and stored at −80 °C until assay. Blood was collected into ethylenediaminetetraacetic acid (EDTA) tubes and processed (centrifuged at 2,000 g , 4 °C; 10 min at the USC site, 5 min at the WashU site). Plasma and buffy coat were aliquoted in polypropylene tubes and stored at −80 °C; buffy coat was used for DNA extraction and APOE genotyping. APOE genotyping DNA was extracted from buffy coat using the Quick-gDNA Blood Miniprep Kit (catalogue no. D3024, Zymo Research, Irvine, CA). APOE genotyping was performed via polymerase chain reaction (PCR)-based restriction fragment length polymorphism analysis, as previously reported 7 . Molecular assays Quantitative western blotting of sPDGFRβ The quantitative western blot analysis was used to detect sPDGFRβ in human CSF (ng/ml), as previously reported 7 , 8 . BBB breakdown biomarkers Albumin quotient ( Q alb , the ratio of CSF to plasma albumin levels) and CSF levels of fibrinogen and plasminogen were determined using enzyme-linked immunosorbent assay (ELISA), as previously reported 7 , 8 . Cyclophilin A We developed a novel CypA assay on the Meso Scale Discovery (MSD) platform. Standard-bind 96-well plates (catalogue no.
L15XA-3 / L11XA-3, MSD, Rockville, MD) were spot-coated with 5 μl per well of 40 μg/ml rabbit polyclonal anti-CypA antibody (catalogue no. 10436-T52, Sino Biological, Wayne, PA) prepared in 0.03% Triton X-100 in 0.01 M PBS pH 7.4 solution. The plates were left undisturbed overnight to dry at room temperature. The next day, the plates were blocked with 150 μl per well of Blocking One (catalogue no. 03953-95, Nacalai Tesque, Japan) and incubated for exactly 1 h with shaking. Meanwhile, samples and standards were prepared in Blocking One blocking buffer. Different concentrations ranging from 3.5 to 200 ng/ml of a recombinant human CypA protein (catalogue no. 3589-CAB, R&D Systems, Minneapolis, MN) were used to generate a standard curve. All CSF samples were diluted 1:3. After blocking, the plates were manually washed three times with 200 μl per well of wash buffer (0.05% Tween-20 in 0.01 M PBS pH 7.4). The prepared samples or standards were added at 25 μl per well, and the plates were incubated overnight at 4 °C with shaking. The next day, the plates were washed three times, and 25 μl per well of 1 μg/ml sulfo-tagged mouse monoclonal CypA detection antibody (catalogue no. ab58144, Abcam, Cambridge, MA), prepared in Blocking One, was added. The plates were incubated for 90 min at room temperature with shaking. Next, the plates were washed four times, then 150 μl per well of 2× Read Buffer T with surfactant (catalogue no. R92TC-3, MSD, Rockville, MD) was added and the plates were read immediately on an MSD SECTOR Imager 6000 (MSD, Rockville, MD) with electrochemiluminescence detection. The raw readings were analysed by subtracting the average background value of the zero standard from each recombinant standard and sample reading. A standard curve was constructed by plotting the recombinant standard readings and their known concentrations and applying a nonlinear four-parameter logistic curve fit.
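Back-calculating concentrations from the fitted standard curve amounts to inverting the four-parameter logistic (4PL) equation y = d + (a − d)/(1 + (x/c)^b). A sketch with placeholder curve parameters (not the study's actual fit):

```python
def conc_from_reading(reading, blank, a, b, c, d, dilution=3.0):
    """Invert a fitted 4PL standard curve y = d + (a - d) / (1 + (x/c)**b)
    to recover concentration x from a background-subtracted reading, then
    correct for the 1:3 CSF sample dilution. Parameters a, b, c and d stand
    for the fitted asymptotes, slope and inflection point (placeholders)."""
    y = reading - blank              # subtract the zero-standard background
    x = c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)
    return x * dilution
```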
The CypA concentrations were calculated using the samples’ reading and the standard curve equation; the result was corrected for the sample dilution factor to arrive at the CypA concentration in the CSF samples. Matrix metalloproteinase-9 CSF levels of MMP9 were determined using the human MMP9 Ultra-Sensitive Kit from MSD (cat. No. K151HAC). Neuron-specific enolase CSF levels of NSE were determined using ELISA (cat. no. E-80NEN, Immunology Consultant Laboratories, Portland, OR). The company no longer sells this product; thus, this analyte was measured in the majority of participants but not in those individuals that enrolled in the study most recently. S100B CSF levels of the astrocyte-derived cytokine, S100 calcium-binding protein B (S100B), were determined using ELISA (cat. no. EZHS100B-33K, EMD Millipore, Billerica, MA). Inflammatory markers An MSD multiplex assay was used to determine CSF levels of intercellular adhesion molecule 1 (ICAM1) (cat. no. K15198D, MSD, Rockville, MD), and interleukin-6 (IL6), IL-1β, tumour necrosis factor-α (TNFα), and interferon gamma (IFNγ) (cat. no. K15049G, MSD, Rockville, MD). Aβ peptides An MSD multiplex assay (cat. no. K15200E, MSD, Rockville, MD) was used to determine CSF levels of Aβ 1–42 . Participants were stratified based on CSF analysis as either Aβ+ (<190 pg/ml) or Aβ− (>190 pg/ml) using the accepted cutoff values as previously reported for the MSD 6E10 Aβ peptide assay 24 . Tau Phosphorylated tau (pT181) was determined by ELISA (cat. no. 81581, Innotest, Fujirebio US, Inc., Malvern, PA). Participants were stratified based on CSF analysis as either pTau+ (>78 pg/ml) or pTau− (<78 pg/ml), using the accepted cutoff value as previously reported 25 . Human iPSCs iPSC lines were generated by reprogramming of skin fibroblasts from APOE ε4/ε4 and APOE ε3/ε3 donors with AD or without (control) as recently reported 54 . 
Reprogramming was performed using integration-free Sendai virus vectors and we passaged cells to passage 15 and confirmed normal karyotype. hiPSCs were maintained on Matrigel (Corning) in mTeSR1 (catalogue no. 85850, StemCell Technologies, Vancouver, BC, Canada) supplemented with 10 ng/ml FGF2 StemBeads (StemCultures) or mTeSR plus (StemCell Technologies) every other day. Differentiation of iPSCs into pericytes Differentiation of iPSCs into pericytes was carried out as described previously 55 . In brief, iPSCs were dissociated with ReLeSR (catalogue no. 05872, StemCell Technologies) and seeded at 55,000 cells/cm 2 in Essential 8 medium (catalogue no. A1517001, ThermoFisher, Waltham, MA, USA) supplemented with ROCK inhibitor Y-27632 (10 μM, catalogue no. 72304, StemCell Technologies) on Matrigel (0.5 mg/6-well plate, catalogue no. 354230, Corning, NY, USA). After 24 h incubation, the iPSCs were switched into STEMdiff Mesoderm Induction Medium (MIM, catalogue no. 05221, StemCell Technologies) for 5 days with daily medium change. On day 6 of MIM treatment, the cells were plated on Matrigel at 25,000 cells/cm 2 in pericyte medium (catalogue no. 1201, ScienCell, Carlsbad, CA, USA) for an additional 7 days. The differentiated cells were dissociated with Accutase (catalogue no. 07920, StemCell Technologies). Following incubation with human PDGFRβ biotinylated antibody (catalogue no. BAF385, R&D Systems, Minneapolis, MN, USA), the cells were incubated with anti-biotin microbeads (catalogue no. 130-090-485, Miltenyi Biotec, Bergisch Gladbach, NRW, Germany) and magnetically sorted using MACS LS columns (catalogue no. 130-042-401, Miltenyi Biotec) following the manufacturer’s instructions. Sorted pericytes were plated at a density of 25,000 cells/cm 2 on Matrigel-coated coverslips for immunocytochemistry analyses or poly- l -lysine-coated six-well culture plates for western blot analyses. 
Differentiated pericytes were positive for the pericyte markers PDGFRβ, CD13, and NG2, and negative for the endothelial marker CD31, the astrocytic marker GFAP, and the microglial marker CD11b. Statistical analyses Prior to performing statistical analyses, we screened for outliers using the Grubbs’ test, also called the ESD (extreme studentized deviate) method, applying a significance level of α = 0.01 ( ). For each of the outliers identified, a secondary index of outlier influence was applied using the degree of deviation from the mean (greater than ± 3 s.d.) 56 . Using these stringent criteria, a total of five outliers (one each in Figs. 1j, k and 2j, and one each in Extended Data Fig. 6a, b ) were removed from analyses, as indicated in the legends of these figures. Continuous variables were also evaluated for departures from normality through quantitative examination of skewness and kurtosis, in addition to visual inspection of frequency distributions. Where departures from normality were identified, log 10 transformations were applied, and distribution normalization was confirmed before parametric analyses. This was done for Fig. 4h, k and Extended Data Fig. 7a, b, d, e . As the use of log 10 transformations accounts for any non-normality, this obviated the need for outlier exclusion. DCE-MRI K trans , and CSF sPDGFRβ and CypA Regional DCE-MRI K trans values and CSF sPDGFRβ, CypA and MMP9 levels were compared across the entire sample stratified by APOE status.
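The secondary outlier-influence check described above (deviation from the mean greater than ±3 s.d.) can be sketched as follows; the primary Grubbs/ESD test additionally requires a t-distribution critical value at α = 0.01 and is omitted here for brevity:

```python
import statistics

def flag_extreme(values, k=3.0):
    """Flag values more than k sample standard deviations from the mean
    (k = 3 in these Methods). Returns one boolean per input value."""
    m = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [abs(v - m) > k * sd for v in values]
```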
Because relatively few participants in the APOE4 group were homozygous ε4/ε4 compared with heterozygous ε3/ε4 (14% for DCE-MRI analysis and 18% for sPDGFRβ analysis), and because initial comparisons between ε4/ε4 and ε3/ε4 carriers did not show any significant differences in regional HC and PHG DCE-MRI K trans values (CDR 0, P HC = 0.19 and P PHG = 0.54; CDR 0.5, P HC = 0.22 and P PHG = 0.84) or CSF sPDGFRβ levels (CDR 0, P = 0.23; CDR 0.5, P = 0.47), all subsequent analyses combined APOE4 carriers (ε3/ε4 and ε4/ε4) and compared these participants to APOE3 carriers (ε3/ε3) stratified by cognitive impairment status (CDR 0 versus 0.5, and 0 versus 1 versus 2+ cognitive domain impairment), using ANCOVA with FDR correction for multiple comparisons (see details below). For CDR analyses, model covariates included age, sex, and education. Cognitive domain impairment was determined using age-, sex-, and education-corrected values, so these covariates were not additionally included in the analyses. Additional post hoc ANCOVA analyses evaluated whether the observed differences remained significant after stratifying APOE4 carriers by CSF Aβ 1–42 and pTau status, and after statistically controlling for CSF Aβ 1–42 and pTau status and regional brain volume in APOE4 non-carriers and carriers. These findings were also confirmed by hierarchical logistic regression models using the same covariates. PET AD biomarkers In a subset of participants who underwent amyloid and tau PET imaging together with DCE-MRI studies, we used ANCOVA models controlled for age, sex and education to compare regional amyloid and tau ligand binding and DCE-MRI values in APOE4 non-carriers and carriers within a priori regions of interest, based on prior imaging studies, to determine whether distinct regional pathologies differed by APOE4 carrier status.
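The FDR correction used for these comparisons is the Benjamini–Hochberg step-up procedure (detailed under 'Multiple comparison correction and missing data' below); a minimal sketch:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg FDR: sort p values ascending, find the largest
    rank k with p_(k) <= (k/m)*q, and reject the k smallest p values.
    Returns one boolean (reject/retain) per input p value, in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```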
Baseline CSF sPDGFRβ as a continuous predictor of cognitive decline In linear mixed model analyses, baseline CSF sPDGFRβ was modelled as a continuous predictor of demographically corrected global cognitive change at 2-year follow-up intervals, controlling for CSF Aβ 1–42 and CSF pTau status. Global cognition was indexed by age-, sex-, and education-corrected z scores on the mental status exam (MMSE or MoCA) and as the global cognitive composite of all age-, sex-, and education-corrected neuropsychological test z scores (see above for the list of neuropsychological tests). Time was modelled with the date of lumbar puncture as baseline (t0) with two follow-up intervals of 2 years each (t1, t2). Additional analyses confirmed all findings when time was modelled as time since baseline, with the date of lumbar puncture as baseline (t0) and follow-up at annual intervals (t1− n ). All longitudinal mixed models treated CSF sPDGFRβ as a continuous predictor. Although we have previously established that CSF sPDGFRβ is a marker of pericyte injury 7 , 8 , 57 , the optimal cutoff value for abnormal CSF sPDGFRβ levels indicative of pericyte injury remains unknown. Autopsy studies are required to determine optimal in vivo biomarker cutoff values that predict gold-standard neuropathological measures, such as the studies conducted for CSF and PET markers of amyloid and tau. Given the lack of available autopsy data relating CSF sPDGFRβ to neuropathological markers of pericyte injury, we chose to divide participants by CSF sPDGFRβ values using a median split for the purposes of visual display only (higher CSF sPDGFRβ was above the sample median and lower CSF sPDGFRβ was below the sample median). The median split was not used in statistical analyses; it served only for the visual display (Fig. 3a ) of statistical parameters from analyses using CSF sPDGFRβ as a continuous predictor of cognitive decline.
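The correlational analyses in the next subsection use the Pearson product-moment correlation, which can be computed with the standard library alone:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two
    equal-length sequences of observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```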
Correlational analyses Pearson product moment correlations were used to evaluate relationships among CSF sPDGFRβ, CypA, MMP9, fibrinogen, plasminogen and hippocampal and parahippocampal BBB K trans levels among APOE4 carriers. Multiple comparison correction and missing data Given the large number of analyses, FDR correction was applied to P values for primary study outcomes (DCE-MRI, sPDGFRβ) evaluated in the entire sample by APOE4 carrier status and CDR status using the Benjamini–Hochberg method 58 in ANCOVA and logistic regression models controlling for age, sex, education, brain volume, and CSF Aβ 1–42 and pTau status (for DCE-MRI analyses). Post hoc confirmatory analyses in participant subsets further evaluating independence of CSF and PET markers of amyloid and tau, evaluation of mechanistic markers (that is, CypA and MMP9), and longitudinal analysis of the predictive value of CSF sPDGFRβ were not corrected for multiple comparisons. For longitudinal data with variable follow-up, we used linear mixed model analyses and accounted for missing data under the missing-at-random assumption. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All data generated and/or analysed during this study are either included in this article (and its Supplementary Information ) or are available from the corresponding author on reasonable request. Source Data for Figs. 1 – 4 are provided with the article. Code availability All software used in this study is publicly available: Rocketship v1.2 ( ), FreeSurfer (v5.3.0) ( ), FSL-FLIRT ( ), SPM12 ( ), and Quantitative Imaging Toolkit ( ). | New USC research reveals how APOE4, a genetic culprit for Alzheimer's disease, triggers leaks in the brain's plumbing system, allowing toxic substances to seep into the brain areas responsible for memory encoding and other cognitive functions.
The damage is linked to future problems in learning and memory, even when the disease's signature sticky plaques have not appeared. The findings suggest that the smallest blood vessels in the brain, which form the blood-brain barrier, might be a potential target for early treatment. The study appears today in Nature. "This study sheds light on a new way of looking at this disease and possibly treatment in people with the APOE4 gene, looking at blood vessels and improving their function to potentially slow down or arrest cognitive decline," said senior author Berislav Zlokovic, director of the Zilkha Neurogenetic Institute at the Keck School of Medicine of USC. "Severe damage to vascular cells called pericytes was linked to more severe cognitive problems in APOE4 carriers. APOE4 seems to speed up breakdown of the blood-brain barrier by activating an inflammatory pathway in blood vessels, which is associated with pericyte injury." Scientists have long known that the APOE4 gene—which occurs in up to 14 percent of the population—increases the probability of developing Alzheimer's disease. Until now, it's been unclear how different pathologies determine the course of the disease in its early stages, or what underlying mechanisms lead to cognitive decline in APOE4 carriers. Zlokovic's previous research shows that people who develop early memory problems also experience the most leakage in their brain's blood vessels—independent of amyloid plaque or tau, two common contributors to Alzheimer's. The leakage starts when cells called pericytes, which line the walls of blood vessels in the brain and maintain blood-brain barrier integrity, are damaged. These injured pericytes can be detected with a unique biomarker, developed by Zlokovic's lab in 2015, which shows up in cerebrospinal fluid. For this study, scientists used standard memory tests to check the participants' cognitive abilities and their neuropsychological performance. 
They also used advanced neuroimaging and employed the biomarker that indicates damage to the brain's blood vessels. In participants who had the APOE4 gene, researchers found damaged capillaries in the brain's memory center, the hippocampus and medial temporal lobe. The damage correlated with increased levels of a protein that causes inflammation, cyclophilin A—an early sign of the disease in people already at higher risk of developing Alzheimer's. Zlokovic, who became director of the Zilkha Neurogenetic Institute in 2012, pioneered the concept that a breakdown in the blood-brain barrier contributes to cognitive impairment and dementia. The Zilkha Neurogenetic Institute opened at Keck School of Medicine in 2003 with a $20 million donation from Los Angeles businessman Selim Zilkha, who later contributed $10 million more to the effort. | 10.1038/s41586-020-2247-3 |
Chemistry | Aryl radical formation by aryl halide bond cleavage by a N-heterocyclic carbene catalyst | Yuki Matsuki et al, Aryl radical-mediated N-heterocyclic carbene catalysis, Nature Communications (2021). DOI: 10.1038/s41467-021-24144-2 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-021-24144-2 | https://phys.org/news/2021-07-aryl-radical-formation-halide-bond.html | Abstract There have been significant advancements in radical reactions using organocatalysts in modern organic synthesis. Recently, NHC-catalyzed radical reactions initiated by single electron transfer processes have been actively studied. However, the reported examples have been limited to catalysis mediated by alkyl radicals. In this article, the NHC organocatalysis mediated by aryl radicals has been achieved. The enolate form of the Breslow intermediate derived from an aldehyde and thiazolium-type NHC in the presence of a base undergoes single electron transfer to an aryl iodide, providing an aryl radical. The catalytically generated aryl radical could be exploited as an arylating reagent for radical relay-type arylacylation of styrenes and as a hydrogen atom abstraction reagent for α-amino C(sp 3 )–H acylation of secondary amides. Introduction Aryl halides (Ar–X) have been broadly utilized as versatile reagents for chemical reactions due to their availability and bench stability. Specifically, oxidative addition of the C(sp 2 )–X bond in an aryl halide to a transition-metal complex enables the generation of an arylmetal species that induces various transformations (Fig. 1a ) 1 , 2 . The generation of an aryl radical 3 , 4 through single electron reduction of an aryl halide followed by mesolytic cleavage of the C(sp 2 )–X bond has emerged as an alternative approach for the utilization of aryl halides as chemical reagents (Fig. 1b ). 
Due to the high reduction potential of aryl halides, metal sources such as samarium salts 5 , transition-metal complexes 6 and metal-based visible light photoredox catalysts 7 have been mainly utilized as single electron donors. Recently, organic electron donors 8 , 9 , 10 , 11 including visible light organic photoredox catalysts 12 , 13 , 14 have taken the place of metals to provide simple and rapid access to aryl radicals via single electron transfer (SET). These approaches enable the generation of aryl radicals without using metal sources, but still require either a stoichiometric amount of the organic electron donor or light irradiation. Fig. 1: Activation of aryl halides and aryl radical-mediated NHC catalysis. a Oxidative addition of aryl halide to transition metal. TM, transition metal. b Organic electron donor induced aryl radical generation (recent advance). SET, single electron transfer. OED, organic electron donor. c Generation of aryl radical through NHC catalysis (working hypothesis). d Arylacylation of alkene and C(sp 3 )–H acylation through NHC catalysis (this work). 1,5-HAT, 1,5-hydrogen atom transfer. Full size image There has been significant advancement in the use of radical reactions catalyzed by N-heterocyclic carbenes (NHC) with SET from a Breslow intermediate, based on enzymatic pyruvate transformations 15 , 16 , 17 , 18 . Recently, we developed NHC-catalyzed decarboxylative radical cross-coupling between aldehydes and aliphatic carboxylic acid derived-redox active esters 19 , 20 . The reaction involves SET from the enolate form of the Breslow intermediate derived from an aldehyde and NHC in the presence of a base to the redox active ester, followed by radical–radical coupling between the resultant Breslow intermediate-derived radical and alkyl radical. Thus, the enolate form of the Breslow intermediate can serve as a single electron donor and an acyl radical equivalent. 
Since this report, a series of NHC-catalyzed radical cross-coupling reactions have been developed by us and other groups 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 . However, these reported examples have been limited to catalysis mediated by C(sp 3 )-centered radicals. To expand the scope of radical NHC catalysis, we questioned whether an aryl radical could be generated and utilized (Fig. 1c ). Considering the gap in redox potential between the enolate form of the Breslow intermediate ( E ox = −0.97 V vs. SCE) 34 and iodobenzene ( E red = −2.24 V vs. SCE) 35 , the corresponding electron transfer seems to be thermodynamically unfavored. However, two kinetic features, (1) a small reorganization energy of the enolate form of the Breslow intermediate 34 and (2) a fast mesolytic cleavage of the radical anion derived from aryl iodide 35 , encouraged us to pursue this research program. Here, we report an aryl radical-mediated NHC organocatalysis method. The aryl radical that is generated catalytically through thermodynamically unfavored SET from the enolate form of the Breslow intermediate to an aryl iodide could be exploited as an arylating reagent for radical relay-type arylacylation of styrenes (Fig. 1d , top) and as a hydrogen atom abstraction reagent for α-amino C(sp 3 )–H acylation of secondary amides (Fig. 1d , bottom). It should be noted that this protocol eliminates the requirement for metals, oxidants/reductants and light. Results and discussion Development of the reaction and screening of conditions To test the generation of an aryl radical by the radical NHC catalysis, we designed intramolecular arylacylation reactions with benzaldehyde ( 1a ) and aryl electrophiles 2a – d bearing a cinnamyl tether group that can trap the aryl radical immediately (Table 1 ) 36 . 
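The thermodynamic penalty implied by these potentials can be made explicit with a back-of-the-envelope Rehm–Weller-type estimate (neglecting electrostatic work terms):

```latex
\Delta G^{\circ}_{\mathrm{SET}}
  \approx -F\left[E_{\mathrm{red}}(\mathrm{ArI}) - E_{\mathrm{ox}}(\text{Breslow enolate})\right]
  = -F\left[(-2.24\,\mathrm{V}) - (-0.97\,\mathrm{V})\right]
  \approx +1.27\ \mathrm{eV} \approx +29\ \mathrm{kcal\,mol^{-1}}
```

The forward SET is thus endergonic by roughly 29 kcal mol −1 , consistent with the authors' appeal to kinetic factors (small reorganization energy of the enolate and fast mesolytic cleavage of the aryl iodide radical anion) rather than a thermodynamic driving force.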
Based on our previous studies on the NHC-catalyzed decarboxylative radical cross-coupling between aldehydes and redox-active esters 19 , we used a thiazolium salt ( N1 ) 37 possessing a N-2,6-diisopropylphenyl substituent and a seven-membered backbone structure as the NHC precursor and Cs 2 CO 3 as the base. The reaction with aryl iodide 2a as a substrate proceeded to afford 3aa in 39% isolated yield with recovery of 2a (entry 1). The reactions with other azolium salts resulted in no product formation (data not shown). Neither bromide nor chloride leaving groups resulted in product formation (entries 2 and 3). The reaction using the corresponding aryldiazonium tetrafluoroborate 2d did not afford the desired coupling product 3aa (entry 4). This might be due to the low reduction potential of the aryldiazonium salt ( E red = −0.2 V vs. SCE), which would cause a second electron transfer from the Breslow-derived radical intermediate 38 . Table 1 Screening of reaction conditions. Full size table Radical relay-type arylacylation of styrenes The successful example of an NHC-catalyzed reaction involving the generation of an aryl radical from aryl iodide and subsequent intramolecular radical addition to an alkene prompted us to explore intermolecular arylacylation using aryl iodides, alkenes, and aldehydes through a radical relay mechanism (see Table 1 , entry 1). Though slight modifications of the reaction conditions, e.g., addition of water, use of N -2,6-diisopropylphenyl-substituted six-membered ring fused thiazolium-type NHC catalyst and increase of the amount of NHC catalyst, were required, the desired intermolecular arylacylation was achieved (Supplementary Fig. 1 ). 
Specifically, the reaction using aldehydes 1 , styrenes 4 and aryl iodides 5 occurred in the presence of a catalytic amount of thiazolium salt N2 as the NHC precursor and stoichiometric amounts of Cs 2 CO 3 and water in DMSO solvent at 80 °C to afford the three-component coupling products, α,β-diarylated ketones 6 (Fig. 2 ). The crude materials consisted of the product 6 , unreacted substrates and trace amounts of unidentified compounds. Two-component coupling stemming from 1 and 5 was not observed. The NHC-catalyzed radical relay process would involve SET from the enolate form of the Breslow intermediate to aryl iodide 5 , radical addition of the resulting aryl radical to styrene 4 , and subsequent radical–radical coupling between the Breslow intermediate-derived ketyl radical and the secondary benzylic radical (Fig. 3a ) 20 . The role of H 2 O is unclear. It might increase the solubility of Cs 2 CO 3 and facilitate the formation of the Breslow intermediate from the aldehyde by accelerating the proton-transfer step. Fig. 2: Substrate scope of radical relay-type arylacylation of styrenes. a Reaction was carried out with 1 (0.2 mmol), 4 (2 mmol), 5 (0.3 mmol), N2 (30 mol %), Cs 2 CO 3 (0.24 mmol), and H 2 O (0.2 mmol) in DMSO (0.4 mL) at 80 °C for 4 h. b Reaction was carried out with 1 (0.1 mmol), 4 (1 mmol), 5 (0.15 mmol), N2 (30 mol %), Cs 2 CO 3 (0.12 mmol), and H 2 O (0.1 mmol) in DMSO (0.2 mL) at 80 °C for 4 h. Full size image Fig. 3: Possible pathways. a Catalytic cycle for arylacylation of alkene. b Catalytic cycle for C(sp 3 )–H acylation. Full size image As described above, the product derived from the two-component coupling between the aryl radical and the Breslow intermediate-derived ketyl radical in this system was not observed. Even under the reaction conditions without alkenes, the two-component coupling product was not formed. The dominance of radical addition to styrene might be due to the high reactivity of the aryl radical.
Additionally, competitive reaction processes, such as C–H abstraction from the formyl C–H bond of the aldehyde and radical addition to other arenes or to the enolate form of the Breslow intermediate, would be much faster than the two-component coupling. These unproductive processes would induce the decomposition of the ketyl radical, which would explain the requirement for high catalyst loading under the reaction conditions. With the optimal reaction conditions in hand, the scope of each reaction component was evaluated and the results are summarized in Fig. 2 . Various aryl iodides served as the arylating reagent in the reaction using benzaldehyde ( 1a ) and styrene ( 4a ) (Fig. 2 , left). The reaction with iodobenzene or p -iodoanisole resulted in the desired product formation albeit with low reaction efficiency ( 6aaa and 6aab ). When an aryl substrate possessing fluorine, bromine, and iodine substituents was subjected to the optimal reaction conditions, selective mesolytic cleavage of the C(sp 2 )–I bond occurred to afford the multicomponent coupling product ( 6aac ). Notably, this reaction allowed the use of π-extended molecules, such as 1-iodonaphthalene, 9-iodoanthracene and 1-iodopyrene ( 6aad – 6aaf ). 2-Iodothiophene exhibited higher reactivity than 3-iodothiophene ( 6aag vs 6aah ). Benzothiophene and benzofuran were tolerated in this reaction ( 6aai and 6aaj ). Electron-deficient thiazole-derived iodoheteroarenes participated in this protocol ( 6aak and 6aal ). Heteroarene-based aryl iodides including pyridines and quinolines also worked as substrates to afford the desired products in low yields (data not shown). Recently, arylthianthrenium salts have been treated as aryl radical precursors by visible-light photoredox catalysis 39 . This NHC organocatalysis could also generate aryl radicals from the thianthrenium salts and undergo the arylacylation reaction ( 6aab and 6aam ). Table 2 Effects and properties of directing groups.
Full size table A broad array of aromatic aldehydes was amenable to the reaction with 2-iodothiophene ( 5g ) and styrene ( 4a ) (Fig. 2 , right). Alkyl and halogen substituents at the para position on the aromatic ring of the aldehydes did not inhibit the reaction ( 6bag – 6eag ). Both electron-withdrawing and electron-donating functional groups on the aromatic aldehydes were tolerated ( 6fag – 6iag ). Various heteroaromatic rings were incorporated into the acyl unit ( 6jag – 6mag ). Aliphatic aldehydes did not participate in the reaction even though a thiazolium catalyst 22 possessing an N -neopentyl group and a seven-membered backbone structure was used instead of N1 (data not shown). This might be due to the slower formation of the Breslow intermediate from aliphatic aldehydes using a neopentyl-derived NHC than that from aromatic aldehydes using N1 22 . Competing hydrogen atom abstraction from the formyl C–H bond of the aliphatic aldehyde could also be a factor. Next, the scope of alkenes was explored (Fig. 2 , right). Though inferior to non-substituted styrene, p -methoxy-, benzyloxy-, and chloro-substituted styrenes provided the corresponding ketone products in moderate yields ( 6abg – 6adg ). A naphthalene-conjugated alkene was also a good substrate ( 6aeg ). On the other hand, when aliphatic alkenes were subjected to the reaction conditions, three-component coupling was not observed (data not shown). C(sp 3 )–H acylation of secondary amides We turned our attention to exploiting the catalytically generated aryl radical as a hydrogen atom abstraction reagent 40 , 41 . During the survey of the scope of aryl iodides in the arylacylation of alkenes shown in Fig. 2 , 2-iodothiophene was found to have better reactivity than other aryl iodides. This result inspired us to incorporate the 2-iodothiophene moiety into the directing group that serves as the intramolecular hydrogen atom abstraction reagent for the NHC-catalyzed C(sp 3 )–H acylation using aldehydes.
Specifically, the reaction between 2-pyridine carboxaldehyde ( 1 m ) and secondary amide ( 7a ) having a 2-iodothiophene moiety occurred in the presence of a catalytic amount of the N -2,6-diisopropylphenyl-substituted seven-membered ring fused thiazolium salt N1 37 as the NHC precursor and Cs 2 CO 3 in DMSO at 80 °C to afford the corresponding acylation product 8ma in 93% isolated yield (Fig. 4 ). In this NHC-catalyzed process, the resultant α-amino C(sp 3 )-centered radical, derived by SET from the enolate form of the Breslow intermediate to the 2-iodothiophene moiety in the substrate and subsequent 1,5-hydrogen atom transfer, could participate in radical–radical coupling with the Breslow intermediate-derived ketyl radical (see Fig. 1d , bottom and Fig. 3b ). N2 possessing a six-membered ring backbone exhibited comparable reactivity (Supplementary Fig. 2 ). The use of 2-iodobenzene-based directing group also afforded the corresponding acylation product 8ma-C (vide infra for mechanistic considerations for C(sp 3 )–H acylation of secondary amides). The directing group could be removed by Schwartz’s reagent (Supplementary Fig. 9 ). Fig. 4: Substrate scope of C(sp 3 )–H acylation of secondary amides. a Reaction was carried out with 1 (0.4 mmol), 2 (0.2 mmol), N1 (10 mol %), and Cs 2 CO 3 (0.22 mmol) in DMSO (0.4 mL) at 80 °C for 6 h. b Reaction was carried out with 1 (0.4 mmol), 2 (0.2 mmol), N1 (5 mol %), and Cs 2 CO 3 (0.21 mmol) in DMSO (0.4 mL) at 80 °C for 6 h. c Reaction was carried out with 1 (0.6 mmol), 2 (0.2 mmol), N1 (20 mol %), and Cs 2 CO 3 (0.24 mmol) in DMSO (0.4 mL) at 80 °C for 6 h. d Reaction was carried out with 1 (0.4 mmol), 2 (0.2 mmol), N1 (20 mol %), and Cs 2 CO 3 (0.24 mmol) in DMSO (0.4 mL) at 80 °C for 6 h. Full size image The scope of amides in this protocol was investigated with the use of 2-pyridine carboxaldehyde ( 1m ) as a coupling partner (Fig. 4 , left). 
The reactions with cyclic amides derived from pyrrolidine, piperidine and azepane proceeded efficiently to afford the desired α-acylated products in high yields ( 8ma – 8mc ). Although a piperazine derivative was also a suitable substrate ( 8md ), a morpholine scaffold was not tolerated (data not shown). The acyl group could be incorporated into amino acids such as proline with high diastereoselectivity ( 8me ). In addition to cyclic amides, acyclic amides underwent this organocatalyzed acylation ( 8mf and 8mg ). To demonstrate the power and broad functional group tolerance of this protocol, the C(sp 3 )–H acylation of pharmaceutical drugs was conducted. Secondary amine-based drugs such as Amoxapine and Paroxetine resulted in product formation without being disturbed by coexistent functional groups ( 8mh and 8mi ). In our previous report on the NHC-catalyzed decarboxylative alkylation 19 , the α-aminoalkyl radical generated from an α-amino acid-derived redox active ester could couple with a Breslow intermediate-derived radical to afford the α-aminoketone. Importantly, however, the secondary amide substrates presented here would be more abundant and structurally diverse than α-amino acids. Thus, this protocol allowed the coupling of α-aminoalkyl radicals derived from various amides such as cyclic amides, acyclic amides and pharmaceutical drugs. Subsequently, we explored the scope of aldehydes with a pyrrolidine substrate 7a (Fig. 4 , right). The reactions of p -tolualdehyde or p - tert -butylbenzaldehyde gave the corresponding α-aminoketones in moderate yields ( 8ba and 8ca ). A variety of functional groups including trifluoromethoxy, trifluoromethyl, methoxy, and thioether were tolerated well ( 8fa – 8ia ). The aryl bromide group survived the reductive reaction conditions ( 8ea ). Ketones having electron-rich heteroaryl substituents such as thiophene or furan were also constructed ( 8ja , 8na , and 8oa ).
Similar to 2-pyridinecarboxaldehyde, the reactions with 6-methyl-2-pyridinecarboxaldehyde and 2-quinolinecarboxaldehyde gave the corresponding products in exceptionally high yields ( 8pa and 8qa ). Mechanistic considerations for C(sp 3 )–H acylation of secondary amides To understand how the 2-iodothiophene-based directing group imparted the high reactivity, we evaluated four different directing groups from two viewpoints: the efficiencies of aryl radical generation and of hydrogen abstraction (Table 2 ). Screening of directing groups such as 2-iodothiophene ( 7a ), 3-iodothiophene ( 7a - A ), 2-iodofuran ( 7a - B ), and 2-iodobenzene ( 7a - C ) was conducted for the reaction of 7 with benzaldehyde ( 1a ) in the presence of N1 catalyst and Cs 2 CO 3 in DMSO at 80 °C (Table 2 , 1st line). As expected, 2-iodothiophene ( 7a ) was the most effective among those examined. Next, we examined the reduction potential of the amide substrates and their C(sp 2 )–I bond dissociation energies (BDEs). Cyclic voltammetry experiments revealed that 7a , having a 2-iodothiophene moiety, was the most reducible (Table 2 , 2nd line and Supplementary Figs. 3 – 6 ). The BDE values obtained by DFT calculations indicated that these C(sp 2 )–I bonds tended to be easily cleaved and that 7a - C would undergo more facile mesolytic cleavage of the C(sp 2 )–I bond after single electron reduction compared with the other substrates (Table 2 , 3rd line, and Supplementary Fig. 7 ). This provided a good explanation for the improved yield in the acylation reaction of 7a - C relative to 7a - A and 7a - B despite its more negative reduction potential ( 7a - C : –2.19 V vs. 7a - A : –1.99 V and 7a - B : –1.61 V). Furthermore, DFT calculations of the activation energy of the α-amino C(sp 3 )–H abstraction step were carried out (Table 2 , 4th line and Supplementary Fig. 8 ). The 2-iodothiophene derivative ( 7a ) showed the lowest activation energy for the C(sp 3 )–H abstraction.
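The reduction-potential comparison among the directing groups can be summarized with a small sketch. Only the three values explicitly quoted in the text are used; the value for 7a itself (stated to be the most reducible) is not quoted and is therefore omitted here.

```python
# Rank three directing groups by the reduction potentials quoted in the text
# (V vs. SCE); a less negative potential means easier single-electron
# reduction. 7a (2-iodothiophene) is reported as the most reducible of all,
# but its numerical value is not quoted, so it is left out of this sketch.
E_red = {
    "7a-A (3-iodothiophene)": -1.99,
    "7a-B (2-iodofuran)": -1.61,
    "7a-C (2-iodobenzene)": -2.19,
}

ranking = sorted(E_red.items(), key=lambda kv: kv[1], reverse=True)
for name, E in ranking:  # easiest to reduce first
    print(f"{name:24s} E_red = {E:+.2f} V")
# 7a-B > 7a-A > 7a-C by ease of reduction; the better yield of 7a-C despite
# its more negative potential is attributed to its weaker C(sp2)-I bond
# (faster mesolytic cleavage after single-electron reduction).
```

This makes explicit the point of Table 2: reduction potential alone does not predict reactivity, since the BDE of the C(sp2)–I bond and the hydrogen-abstraction barrier also matter.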
Based on these results, we assumed that the 2-iodothiophene-based directing group might have an advantage in terms of the facile generation of the aryl radical by the less thermodynamically unfavorable SET and the subsequent fast C(sp 3 )–H abstraction. To see whether the intermediacy of an electron donor–acceptor (EDA) complex facilitated the thermodynamically unfavored SET, we conducted UV/Vis absorption studies using a known precursor of the Breslow intermediate and iodobenzene (Supplementary Fig. 10 ). No significant shift in absorption was observed. Moreover, the reaction under light irradiation conditions intended to facilitate the SET process did not give the desired product at all (data not shown). These results did not support the intermediacy of the EDA complex. In summary, this achievement expanded the scope of radical NHC catalysis, enabling the generation and utilization of aryl radicals 42 . The enolate form of the Breslow intermediate derived from an aldehyde and a thiazolium-type NHC in the presence of a base undergoes SET to the aryl iodide, producing an aryl radical. Although the SET event is thermodynamically unfavored, the small reorganization energy of the enolate form of the Breslow intermediate and the fast mesolytic cleavage of the C(sp 2 )–I bond make the pathway kinetically feasible. The catalytically generated aryl radical could undergo addition to styrenes or intramolecular hydrogen atom abstraction to form a C(sp 3 )-centered radical, which engaged in the subsequent radical–radical coupling with the Breslow intermediate-derived ketyl radical. The NHC catalysis presented here provides a strategy for aryl radical-mediated organocatalytic reactions. Studies aimed at expanding this strategy toward the synthesis of complex molecules are currently underway in our laboratory. Methods The reaction to produce 6aag in Fig.
2 is representative Thiazolium salt N2 (23.9 mg, 0.06 mmol) was placed in a Schlenk tube containing a magnetic stirring bar. The tube was sealed with a Teflon ® -coated silicon rubber septum, and then evacuated and filled with nitrogen. Degassed DMSO (400 μL) and H 2 O (3.6 μL, 0.2 mmol) were added to the tube. Next, benzaldehyde ( 1a ) (20.3 μL, 0.2 mmol), styrene ( 4a ) (229 μL, 2.0 mmol) and 2-iodothiophene ( 5 g ) (38.2 μL, 0.3 mmol) were added, followed by Cs 2 CO 3 (78.2 mg, 0.24 mmol). After 4 h stirring at 80 °C, the reaction mixture was treated with saturated NH 4 Cl aqueous solution (400 μL), then extracted with diethyl ether (four times) and dried over sodium sulfate. After filtration, the resulting solution was evaporated under reduced pressure. After the volatiles were removed under reduced pressure, flash column chromatography on silica gel (100:0–90:10, hexane/EtOAc) gave 6aag (40.3 mg, 0.14 mmol) in 69% yield. The reaction to produce 8ma in Fig. 4 is representative Thiazolium salt N1 (8.3 mg, 0.02 mmol) and amide 7a (61.4 mg, 0.2 mmol) were placed in a Schlenk tube containing a magnetic stirring bar. The tube was sealed with a Teflon ® -coated silicon rubber septum, and then evacuated and filled with nitrogen. Cs 2 CO 3 (71.7 mg, 0.22 mmol), degassed DMSO (400 μL) and 2-pyridinecarboxaldehyde ( 1m ) (42.8 mg, 0.4 mmol) were added to the vial. After 6 h stirring at 80 °C, the reaction mixture was treated with saturated NH 4 Cl aqueous solution (400 μL), then extracted with diethyl ether (4 times) and dried over sodium sulfate. After filtration through a short plug of aluminum oxide (1 g) with diethyl ether as an eluent, the resulting solution was evaporated under reduced pressure. After the volatiles were removed under reduced pressure, flash column chromatography on silica gel (100:0–50:50, hexane/EtOAc) gave 8ma (53.0 mg, 0.19 mmol) in 93% yield. 
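The quoted isolated yield of 6aag can be reproduced from the reported mass. The molecular formula used below, C19H16OS (the three-component product of benzaldehyde, styrene, and 2-iodothiophene), is an assumption inferred from the reaction components rather than a value stated in the text.

```python
# Reproduce the reported 69% isolated yield of 6aag from the quoted mass.
# Assumption: 6aag has molecular formula C19H16OS (benzaldehyde + styrene +
# thienyl fragments); this formula is inferred, not stated in the text.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "S": 32.06}
mw_6aag = (19 * ATOMIC_MASS["C"] + 16 * ATOMIC_MASS["H"]
           + ATOMIC_MASS["O"] + ATOMIC_MASS["S"])  # ~292.4 g/mol

mass_mg = 40.3         # isolated mass of 6aag from the procedure
mmol_limiting = 0.2    # benzaldehyde (1a) is the limiting reagent

mmol_product = mass_mg / mw_6aag
yield_pct = 100 * mmol_product / mmol_limiting
print(f"{mmol_product:.2f} mmol, {yield_pct:.0f}% yield")  # 0.14 mmol, 69%
```

The result matches the reported "40.3 mg, 0.14 mmol, 69% yield", which supports the assumed formula.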
Data availability The authors declare that the data supporting the findings of this study are available within the paper or its Supplementary Information files and from the corresponding author upon reasonable request. | Aryl halides with a benzene ring directly bonded to a halogen atom are readily available and chemically stable, so they are used as a source of benzene rings in organic synthesis. For example, a chemical reaction that generates a highly reactive aryl radical from an aryl halide using a toxic tin compound has long been known as a method for supplying a benzene ring. In recent years, chemical reactions have been developed in which an aryl halide is reduced using a metal catalyst or a photocatalyst followed by cleavage of the bond between the benzene ring and the halogen atom to generate an aryl radical. However, since the methods previously reported require metal salts and/or excess amounts of an oxidizing agent or a reducing agent, chemical reactions with less environmental impact are desirable. The research group of Kanazawa University led by Prof. Ohmiya has been developing novel chemical reactions using newly developed metal-free organic catalysts to produce various useful chemicals in a much easier manner than with conventional methods. In the present study, the group succeeded in generating aryl radicals from aryl iodides, a type of aryl halide, under mild conditions without the need for light or metal salts, using an N-heterocyclic carbene catalyst and the aryl radicals thus formed were used for organic syntheses. A single electron transfer from an enolate intermediate consisting of a thiazolium-type N-heterocyclic carbene catalyst and an aldehyde to an aryl iodide and the subsequent cleavage of the bond between the benzene ring and the iodine atom generate an aryl radical in a catalytic manner. 
Considering the oxidation potential of the enolate intermediate (Eox = -0.97 V) and the reduction potential of the aryl iodide (Ered = -2.24 V), single electron transfer from the enolate intermediate to the aryl iodide, i.e. single electron reduction, is thermodynamically unfavorable. However, it is considered that the reaction took place due to kinetic factors because the two reaction steps, i.e. 1) single electron transfer from the enolate intermediate to the aryl iodide and 2) cleavage of the bond between the benzene ring and the iodine atom, proceed rapidly. The aryl radical generated acts as a source of the benzene ring, the difunctionalization of the alkene proceeds, and a benzene-ring substituted ketone is obtained. In addition, by using the aryl radical generated for the intramolecular hydrogen abstraction reaction, the dehydrogenative acylation of the amide proceeds, and an α-aminoketone compound can be obtained. Substrates with various functional groups can be used in these molecular conversion reactions. Derivatives of pharmaceuticals can also be synthesized by the dehydrogenative acylation of amides. The results of this study are the development of a chemical reaction that cleaves the bond between the benzene ring of an aryl halide and the halogen atom by using an organic catalyst that has a low impact on the environment, leading to generation of an aryl radical. Since aryl radicals can be easily generated from aryl halides that are widely used in organic synthesis, this is expected to be a powerful technology for precisely synthesizing medical and agricultural drugs as well as chemical materials. | 10.1038/s41467-021-24144-2 |
Physics | Antineutrino detection could help remotely monitor nuclear reactors | "Employing Antineutrino Detectors to Safeguard Future Nuclear Reactors from Diversions," Nature Communications (2019). DOI: 10.1038/s41467-019-11434-z Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-019-11434-z | https://phys.org/news/2019-08-antineutrino-remotely-nuclear-reactors.html | Abstract The Non-Proliferation Treaty and other non-proliferation agreements are in place worldwide to ensure that nuclear material and facilities are used only for peaceful purposes. Antineutrino detectors, sensitive to reactor power and fuel changes, can complement the tools already at the disposal of international agencies to safeguard nuclear facilities and to verify the States’ compliance with the agreements. Recent advancement in these detectors has made it possible to leverage them to reduce the likelihood of an undetected diversion of irradiated nuclear material. Here we show the sensitivity of antineutrino monitors to fuel divergence from two reactor types: a traditional light-water reactor and an advanced sodium-cooled reactor design, a likely candidate for future deployment. The analysis demonstrates that a variety of potential diversion scenarios can be detected by such a system. We outline recent developments in monitoring capabilities and discuss their potential security implications to the international community. Introduction Nuclear energy is touted as a key tool for providing high-volume, low-carbon energy 1 , 2 , with about 450 nuclear power plants currently producing around 11% of world’s energy and 61 additional plants under construction 3 . As the number of nuclear energy-producing nations grows and the nuclear fuel cycle becomes ever more complex and internationalized, stresses on the existing safeguards infrastructure will only increase. 
With new, unconventional types of reactors introduced to the market, standard approaches to nuclear proliferation safeguards also become less reliable. A strongly advocated means of containing proliferation is to establish a provider-user fuel cycle, in which front-end and back-end processes are made available by states with such infrastructure already in place. These capabilities, specifically fuel enrichment and reprocessing, provide a direct path for a state to develop a nuclear weapon. A provider-user approach removes the incentive for user states to develop these proliferation-sensitive technologies. Existing agreements (for example, between the U.S. and the United Arab Emirates) 4 accomplish this by legally binding nations not to pursue enrichment or reprocessing technology. More novel approaches rely on a hub-spoke model 5 , 6 : advanced battery-type reactors, which rely on a single but long (10–30 year) fuel loading and usually have relatively low power, are leased to the user state and returned to the provider state at the end of their lifetime for disposal or recycling of spent fuel. While these proposals reduce the proliferation risks associated with the reactor fuel cycle, they do not eliminate them. Plutonium produced as a by-product of normal operation in any reactor could still be diverted for use in weapons 7 . Traditional safeguards and inspections have worked well, deterring material diversion from current reactors for many decades. The International Atomic Energy Agency (IAEA) relies on inspector presence during reactor refueling intervals, as well as seals on fuel assemblies, to keep track of weapon-usable material. However, these checks are not infallible, since inspectors are not always present on a site and can be refused entry. Future reactor developments on the horizon are likely to strain the nonproliferation regime further.
The shift in the industry towards more distributed generation using Small Modular Reactors (SMR) will stretch out monitoring resources. Advanced so-called “Generation IV” reactor designs often do not require any refueling or, conversely, continuously refuel their cores, complicating inspections efforts and requiring continuous monitoring of the fuel state. The development of long-life reactors also raises additional questions about safeguards, notably regarding when and how inspections should be conducted. Inspectors cannot rely on refueling cycles to monitor these reactors as they typically do for Pressurized Water Reactors (PWR), which operate on an 18–24-month cycle. The issue of inspections is exacerbated by the weapon-grade quality of plutonium in such cores (<7% 240 Pu) resulting from the fast neutron spectrum that is necessary to breed fissile material in the core and subsequently burn it in situ without loss of reactor criticality. Future safeguarding technologies deployed to combat these challenges will ideally possess always-on, real-time monitoring attributes and ensure the integrity of the collected data against attempts to falsify it. One potential route to meeting such continuous safeguarding requirements can be provided by antineutrino particles, a by-product of the nuclear fission process. Indeed, advances in antineutrino detector capabilities can yield a continuous, unobtrusive, and unfalsifiable way of obtaining information on a nuclear core. In this paper, such a system is termed Reactor Evaluation Through Inspection of Near-field Antineutrinos, or RETINA (see Fig. 1 ). Fig. 1 Principle of reactor operation verification with antineutrino monitors. The process for verifying reactor inventory integrity with antineutrinos bears similarities to biometric scans such as retinal identity verification. 
In retinal scans, an infrared beam traverses a person’s retina ( a ) and the blood vessels, distinguishable by their higher light absorption ( b ) relative to other tissue, are mapped. The mapping is extracted and compared to a copy stored in a database ( c ), and if the two match, the person’s identity is verified. Similarly, a nuclear reactor ( g ) continuously emits antineutrinos which vary in flux and spectrum with the particular fuel isotopes undergoing fission ( d ). Some interact in a nearby detector ( h ) via inverse beta decay ( e ). The measured signal is compared to a reference copy stored in a database for the relevant reactor, initial fuel, and burnup ( f ); a sufficiently matching signal indicates that the core inventory has not been covertly altered. If the antineutrino flux of a perturbed reactor is sufficiently different from expected, a diversion can be concluded to have taken place ( i ) Full size image At a 2008 workshop by the IAEA’s Division of Technical Support, experts evaluated how antineutrino detectors can improve the monitoring capabilities of the organization and complement the role of inspectors 8 . The report concluded that RETINA systems could complement safeguard capabilities by providing an independent, real-time measure of the reactor power and of its fissile inventory. Many antineutrino systems have been built and operated in the past to demonstrate some of these safeguarding capabilities, notably at SONGS 9 , Daya-Bay 10 , and Nucifer 11 . The SONGS project in particular, has demonstrated how near-field monitoring of reactors can deduce information about operational status, by leveraging the proportionality of antineutrino flux to the power output of a reactor 12 . The results showed that operational status (on/off) can be detected with a greater than 99% confidence within just 5 h. Power fluctuations were measured on month-long scales with an estimated 8.3% precision. 
Obtaining information on evolution of the core content has been demonstrated using computer simulations, as well as in antineutrino physics experiments showing how modifications to the isotopic composition result in noticeable changes in antineutrino spectral yields. For example, a study on the North Korean 1994 crisis showed that diversions of 8-kg-worth of plutonium could be detected within 90 days at a 90% confidence level 13 . Another study on a mixed-oxide core (MOX) showed that modifications to the isotopic composition (including between different levels of plutonium purity) can be detected using a RETINA-based system with a high degree of confidence within a single fuel cycle (18 months) 14 . In the following, we examine the performance of antineutrino detectors when employed as diversion monitors of reactors with distinct operational characteristics. Antineutrinos interact with matter with such rarity that it would take nearly a light-year’s worth of lead to shield a detector from the antineutrinos emitted by a nuclear reactor. While the probability of interaction remains small, an operating reactor generates ample quantities of antineutrinos from beta decay of fission products in the fuel—on the order of 10 21 per GWh of electricity produced. As a result, a distinct antineutrino signature can be observed with a relatively small above-ground detector 9 , 15 . Here, we examine the efficacy of antineutrino detectors under several scenarios in which fissile material in the range of one to a few significant quantities (SQ) of plutonium has been diverted from either an established advanced pressurized water reactor (PWR) or a more exotic example of a long-life core design, the Ultralong Cycle Fast Reactor (UCFR-1000) 16 , with more details of modeling and analysis provided in Supplementary Note 2 . We calculate the reference signal which an antineutrino detector monitoring of normal burnup behavior of each core should produce under nominal operating conditions. 
We then compare these to the estimated detector data from each of the diversion scenarios. The time required for a monitoring body to conclude that the reactor state differs from its declared configuration is evaluated against proliferation breakout timeframes as indicated by IAEA timeliness criteria. For a technically sophisticated actor, the IAEA estimates a reasonable limit of three months for the weapon-conversion time after the occurrence of a material diversion 17 . We show that, for the path to a plutonium-based weapon, actionable information for some scenarios can be obtained prior to breakout periods, even for relatively small changes in reactor composition. Results Modeling the reactor antineutrino signature The most mature technology for detecting electron antineutrinos relies on the reverse of the β − reaction characteristic of fission products. In this process, aptly named inverse beta decay (IBD), an antineutrino interacts with a proton producing a positron and a neutron that are readily detectable with standard technologies. The IBD reaction has an extremely low cross-section (on the order of 10 −43 cm 2 ) and requires an antineutrino to have a threshold energy of 1.8 MeV. The threshold reduces the number of detectable antineutrinos produced per fission to approximately 1.92 for 235 U and 1.45 for 239 Pu. More details with respect to IBD interactions can be found in Supplementary Note 1 . The plot under section (b) of Fig. 1 shows the absolute distribution of emitted antineutrinos per fission event per MeV of energy. The difference in antineutrino emission among the various fissile isotopes results in a distinctive signature of the fuel composition as a function of time. For example, the plot in section (c) of Fig.
1 shows the difference in antineutrino signatures if the reactor core is composed exclusively of 235 U (as in a freshly loaded core with no irradiated fuel present) or of 239 Pu (a reactor operating on so-called mixed-oxide fuel designed to dispose of plutonium) at the beginning of operation. These two cases are extrema and are rarely present in actual reactors; however, interpolation between them provides the necessary information about the core composition for verification purposes. Because of the small antineutrino interaction probability, only a few counts are registered in the detector per day (of the order of 100–1000, depending on reactor power, detector size, efficiency, and placement relative to the core). Therefore, reaching conclusive statistical evidence of modifications to core operations requires timelines ranging from hours to months, depending on the information needed. A small scintillator-based detector (with an active volume on the order of 1 to 5 m 3 ) positioned 17 to 25 meters from the reactor core (typical distances associated with the reactor containment) can detect complete shutdowns within a few hours, but small modifications to the fuel composition of a core can take months to detect. Still, RETINA systems could provide a notable advantage over the current regime, in which inspections normally take place every 18 months and depend on tamper-evident seals (markers that identify whether an object was manipulated or moved during the absence of inspectors) to establish continuity of knowledge. A monitoring body with access to RETINA information can place bounds on the material inside the reactor without requiring a cessation of operations, radiation-hardened electronics, or even the physical presence of inspectors.
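The order of magnitude of the daily count rates quoted above can be checked with a back-of-envelope calculation. All numerical values in this sketch (reactor power, cross section, proton count, detection efficiency) are illustrative assumptions, not the detector specification used in the paper.

```python
import math

# Back-of-envelope IBD event-rate estimate for a reactor antineutrino
# monitor. All parameter values below are illustrative assumptions.
E_PER_FISSION_J = 200e6 * 1.602e-19    # ~200 MeV released per fission
P_THERMAL_W = 3.0e9                    # assumed 3 GWth reactor
NU_PER_FISSION = 1.92                  # detectable antineutrinos per 235U fission (above 1.8 MeV)
SIGMA_IBD_CM2 = 1e-43                  # order-of-magnitude IBD cross section
STANDOFF_CM = 25 * 100                 # 25 m standoff
N_PROTONS = 5 * 6e28                   # ~6e28 free protons per tonne of scintillator, 5 t detector
EFFICIENCY = 0.4                       # assumed overall detection efficiency

fissions_per_s = P_THERMAL_W / E_PER_FISSION_J
flux = fissions_per_s * NU_PER_FISSION / (4 * math.pi * STANDOFF_CM**2)  # nu / cm^2 / s
events_per_day = flux * SIGMA_IBD_CM2 * N_PROTONS * EFFICIENCY * 86400
print(f"~{events_per_day:.0f} IBD events per day")
```

With these placeholder numbers the estimate lands in the hundreds-to-thousands of events per day, consistent with the low-statistics regime described in the text.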
In fact, the ability to detect large changes in power level within hours 12 and to observe the isotopic evolution of reactor fuel as it proceeds through its burnup cycle 12 has already been demonstrated experimentally at the whole-core scale using an IBD-type detector at a nuclear reactor site 18 . The detectors selected for the analysis were based on the PROSPECT AD-I and AD-II designs 19 . The initial-phase detector (AD-I) uses a 3 ton segmented 6 Li-doped liquid scintillator detector at a distance of 7–12 m from the core. The second phase is a 10 ton detector at a distance of 15–19 m. The experiment's objective is to monitor the antineutrino signature from a highly enriched uranium (HEU) fueled reactor at the Oak Ridge National Laboratory. A total of three PROSPECT-like detectors were used in each of the PWR and UCFR analyses, each with a fiducial mass of 5 tonnes, placed at a 25 m radial distance from the core at 120° azimuthal intervals. In the case of the PWR, the detectors were all placed at the same axial location. For the UCFR, the three detectors were staggered at different heights to better account for the axial core evolution in the reactor: the first even with the bottom of the core, the second at the core midplane, and the third at the top. The detector parameters are summarized in Table 1 . Table 1 Characteristics of antineutrino detector system The energy spectrum of the detector signal was divided into 0.5 MeV bins, as finer energy discretization was observed to produce negligible gains in precision during preliminary analysis.
The detector event rate in each bin, n i , can be calculated as: $$n_i = N_{\mathrm{D}}\Sigma _k\Sigma _\alpha \frac{1}{{4\pi D_\alpha ^2}}f_\alpha ^k\sigma _i\nu _i^k\varepsilon _i$$ (1) where N D describes the number of target protons for the IBD reaction in the detector, \(D_\alpha ^2\) is the averaged square of the distance from some reactor volume element α to the detector, \(f_\alpha ^k\) is the fission rate of isotope k in volume element α , σ i is the bin-averaged IBD cross section, ε i is the detector efficiency for bin i , and \(\nu _i^k\) is the antineutrino yield per fission of isotope k into bin i . Antineutrino yields are available for the major heavy metal isotopes using the fission products from a 400 keV fission-inducing neutron; the calculated yield varies very little with incident neutron energy (thermal, 400 keV, 14.1 MeV) compared to the difference between isotopes 20 . The detector event rates are integrated over time to produce the expected count in each bin for each detector. Under standard operation of both the PWR and UCFR, the transition from primarily fissioning 235 U to 239 Pu can be tracked, in line with reactor observation experiments 12 . The higher thermal power rating of the PWR produces a more intense antineutrino field than the UCFR and features a smooth and steady evolution of the signal with the fuel composition over the course of the burnup cycle. The primary difference in the fuel cycles arises from the PWR receiving periodic refueling, in contrast to the continual production of fissile fuel in the fertile column of the UCFR. Once the initial 235 U inventory is depleted from the UCFR starter fuel, the nominal detector signal enters an approximately steady-state regime in which burn zone propagation, rather than isotopic evolution of the fuel, drives changes in magnitude. 
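Stepping back to Eq. (1), the binned event rate can be implemented as a few lines of array arithmetic. The array shapes and all numerical values below are illustrative placeholders, not the actual core discretization or detector parameters.

```python
import numpy as np

# Sketch of the binned detector event rate of Eq. (1):
# n_i = N_D * sum_k sum_a (1 / 4 pi D_a^2) f_a^k sigma_i nu_i^k eps_i
def event_rate(n_d, dist2, fission_rates, sigma, yields, eff):
    """
    n_d           : number of target protons in the detector
    dist2         : (A,) mean squared core-element-to-detector distances [cm^2]
    fission_rates : (K, A) fission rate of isotope k in volume element a [1/s]
    sigma         : (B,) bin-averaged IBD cross section [cm^2]
    yields        : (K, B) antineutrinos per fission of isotope k into bin i
    eff           : (B,) detector efficiency per energy bin
    """
    geom = fission_rates / (4 * np.pi * dist2)        # (K, A) solid-angle weighted rates
    per_isotope = geom.sum(axis=1)                    # (K,) summed over core volume
    return n_d * (per_isotope @ yields) * sigma * eff  # (B,) events/s per energy bin

# Toy example: two isotopes (235U, 239Pu), three core elements, four bins
rates = event_rate(
    n_d=3e29,
    dist2=np.full(3, 2500.0 ** 2),
    fission_rates=np.full((2, 3), 1e19),
    sigma=np.array([0.5, 1.0, 1.5, 2.0]) * 1e-43,
    yields=np.ones((2, 4)) * 0.4,
    eff=np.full(4, 0.4),
)
```

The per-bin rates would then be integrated over the counting period to produce the expected counts used in the goodness-of-fit comparison.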
During this phase, which spans the majority of the UCFR burn cycle, there is negligible change in the nominally detected antineutrino spectrum, and changes in magnitude are much slower and more subtle than during the 235 U- 239 Pu transition. The evolution of the resulting antineutrino signal from both types of reactors is highlighted in Fig. 2 . Fig. 2 Evolution of antineutrino spectrum and antineutrino detector response as a function of reactor operational time. The emission rates of antineutrino particles at different energies vary with operating lifetime as reactors shift from burning uranium to plutonium. a The PWR signal consists of a repeated 18-month operating cycle with a three-month refueling interval, while ( b ) the UCFR operates continuously (excluding maintenance interruptions). The UCFR antineutrino signature reaches an equilibrium after ~10 years, when plutonium fission dominates and the initial enriched uranium reserves are depleted. The uncertainties affecting the comparison between the antineutrino signals from the reference and perturbed reactor states arise from the following elements of the calculation: reactor physics uncertainty σ rp , antineutrino yield uncertainty σ yields , detector parameter uncertainty σ det , reactor operating power uncertainty σ power used for diversion masking, and the fitting errors for the burnup-dependent evolution of the calculated nominal reference and perturbed detector event rates σ fit,ref and σ fit,div , as detailed in Supplementary Note 3 . These are combined in quadrature to produce the parameter σ norm in the χ 2 goodness-of-fit calculation (Eq. ( 2 )). Each uncertainty element may have multiple components, and for the reactor physics uncertainty these components show significant correlations and anti-correlations. Additional information on each of these factors is provided in the Appendix.
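The quadrature combination of the uncertainty components can be sketched directly; the component magnitudes below are illustrative placeholders, not the paper's actual uncertainty budget.

```python
import math

# Combine independent uncertainty components in quadrature to form
# sigma_norm of Eq. (2). All component values below are illustrative.
components = {
    "reactor_physics": 0.006,       # sigma_rp
    "antineutrino_yields": 0.020,   # sigma_yields
    "detector_parameters": 0.010,   # sigma_det
    "operating_power": 0.005,       # sigma_power
    "fit_reference": 0.003,         # sigma_fit,ref
    "fit_diverted": 0.003,          # sigma_fit,div
}
sigma_norm = math.sqrt(sum(s ** 2 for s in components.values()))
```

Note that quadrature summation assumes the components are independent; correlated components (such as those within the reactor physics uncertainty) require a full covariance treatment.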
The detector background is estimated by scaling the measurement-validated Monte Carlo simulations by the PROSPECT team 19 up to RETINA-sized detectors. The relevant background in the IBD-based antineutrino detectors of the kind used in RETINA can be categorized and quantified:
1. Reactor-correlated background was nearly completely shielded by the PROSPECT team via a multi-layer lead, polyethylene, and borated polyethylene structure at a standoff of ~7 m from the HFIR reactor used as an antineutrino source. At a standoff more than three times greater, this contribution to the RETINA background was deemed negligible, especially considering that built-in shielding tends to be more consistent and thorough at power reactors than at small research reactors 19 .
2. Uncorrelated gamma coincidences from radioactive decay of nuclei in the surrounding walls, bedrock, etc., which were the primary background for earlier antineutrino detectors, can be largely mitigated via detector segmentation and use of the outer layer of segments as active shielding. Pulse-shape discrimination (PSD) can further improve rejection of uncorrelated gammas.
3. Cosmogenic fast neutrons originating from muon interactions can thermalize and be captured in the detector volume in a coincidence that mimics IBD. Pulse-shape discrimination, timing windows based on the characteristic delay of the coincident signal, and active rejection via a secondary muon-veto detector immediately above the primary detector volume can filter out most cosmogenic background, but it still comprises the vast majority of the IBD-like background. This background depends on the altitude of the reactor facility and must be recalculated for each site. We use the background reported at the HFIR facility (259 m above sea level).
The metric used for calculating detection probability is based on a χ 2 goodness-of-fit test of the perturbed detector signal to the reference signal (Eq.
( 2 )); the departure from nominal average reactor power is treated as a free parameter, x . The final term in the summation accounts for the increase in detection probability for prolonged average operation at power above or below declared levels. $$\chi ^2 = \left( {\mathop {\sum}\limits_{\mathrm{b}} {\frac{{\left( {n_{\mathrm{b}} - \left( {1 + x} \right)n_{\mathrm{b}}^{\prime} } \right)^2}}{{n_{\mathrm{b}}}}} } \right) + \left( {\frac{x}{{\sigma _{{\mathrm{norm}}}}}} \right)^2$$ (2) The calculated χ 2 represents the expected difference between the reference and perturbed detector signals; stochastic processes cause real differences to vary about T 0 = χ 2 as \(N\left( {T_0,2\sqrt {T_0} } \right)\) . Assuming a null hypothesis of no diverted material and an acceptable false-positive rate of α = 5%, the detection probability is: $$p\left( {{\mathrm{detection}}} \right) = 1 - \Phi \left( {\frac{{T_{{\mathrm{crit}}}^\alpha - T_0}}{{2\sqrt {T_0} }}} \right),$$ (3) where Φ( x ) is the cumulative distribution function of the standard normal distribution and \(T_{{\mathrm{crit}}}^\alpha\) is the lower bound of \({\int}_T^\infty {\chi ^2} \left( k \right)dk = \alpha\) . The calculation is repeated for the expected detector responses at one, three, and six months post-diversion in order to assess identification probabilities for use of the irradiated material at the minimum, maximum, and extended conversion timelines, respectively.
Detecting diversions via antineutrinos
Previous work on antineutrino monitoring of nuclear reactors has relied on simplified point-like core models 21 . Although reactors are generally treated as a point source of antineutrinos due to comparatively large reactor-detector standoffs relative to reactor dimensions, the reactor itself cannot be accurately simulated in low-dimension calculations because of the dynamic response of the neutron field to perturbations in local compositions.
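Equations (2) and (3) above can be sketched in a few lines. Because the statistic is quadratic in the masking offset x, the operator's optimal offset has a closed form. The bin counts and σ_norm below are illustrative placeholders, and the critical value corresponds to a four-bin spectrum.

```python
import math

# Sketch of Eqs. (2)-(3): chi-square statistic with an optimally chosen
# power-masking offset x, converted to a detection probability.
def detection_probability(n_ref, n_div, sigma_norm, t_crit=9.488):
    # chi^2(x) is quadratic in x, so setting d(chi^2)/dx = 0 gives the
    # masking operator's optimal offset in closed form.
    a = sum(d * (r - d) / r for r, d in zip(n_ref, n_div))
    b = sum(d * d / r for r, d in zip(n_ref, n_div)) + 1.0 / sigma_norm**2
    x = a / b
    t0 = sum((r - (1 + x) * d) ** 2 / r for r, d in zip(n_ref, n_div)) \
         + (x / sigma_norm) ** 2
    # Phi via the error function; t_crit defaults to the 95th percentile of
    # a chi-square distribution with 4 degrees of freedom (four bins).
    z = (t_crit - t0) / (2.0 * math.sqrt(t0))
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n_ref = [12000.0, 9000.0, 6000.0, 3000.0]                         # reference bin counts
n_div = [r * s for r, s in zip(n_ref, [0.97, 0.98, 1.01, 1.03])]  # perturbed signal
p = detection_probability(n_ref, n_div, sigma_norm=0.01)
```

Even with optimal masking, the spectral distortion across bins limits how much of the statistic the operator can cancel with a single scalar power offset.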
The analysis here is based on whole-core, 3-D, neutron transport simulations which account for the spatial effects of assembly diversions. The methodology employed has been previously demonstrated in sensitivity studies of reactor antineutrino safeguards uncertainties 22 . Diverted assemblies are explicitly modified in the simulation to provide added accuracy to the analysis. We consider diversion of fuel assemblies from inner (-I), middle (-M), and outer (-O) radial positions inside the two reactor cores. This allows us to account for differences in neutron importance across core regions. Irradiated assemblies are replaced with fresh ones and a new antineutrino emission rate is calculated. In addition, proliferators are assumed to be sufficiently sophisticated to attempt to alter the average reactor power covertly in order to minimize the change in detector signal. Care was also taken in the different diversion scenarios to ensure that the core remained critical.
PWR diversions
Regular refueling and assembly shuffling are standard practice in PWR fuel cycles to achieve a relatively flat radial neutron flux shape across the core. The modeled PWR operates under a 3-batch cycle, with each assembly residing in the core for three fuel loadings. Assemblies are shuffled within the core in a chessboard pattern (highlighted in Fig. 3 , with darker assemblies representing higher burnup) to flatten the power density profile. Fuel assemblies are therefore designated "fresh", "once-burned", or "twice-burned". PWR diversions are assumed to take place during refueling intervals at the end of an operating cycle. An attempt to proliferate during normal operations would require an unplanned shutdown, which would be detected by the regulating body. The PWR diversions are of either once-burned (PWR-O) or twice-burned (PWR-I and PWR-M) assemblies. Each type holds a different quantity and quality of plutonium.
For instance, once-burnt PWR-O assemblies yield ~3 kg of plutonium (82% fissile), while twice-burnt assemblies typically yield just under 4 kg of plutonium at 75% fissile content. These purities are not ideal for weapons production but can still be attractive to a proliferator. The locations from which assemblies are taken are shown in Fig. 3 along with the reference arrangement of fresh, once-burned, and twice-burned assemblies. In the amounts present at the time of diversion, the PWR-I1, PWR-M1, and PWR-O1 (i.e., one assembly from each region) and PWR-O2 (i.e., two assemblies from the outer region) diversions do not provide enough plutonium for a weapon, but a series of such diversions (or parallel diversions from multiple reactor facilities) would eventually aggregate to a weaponizable quantity. PWR-M2 and PWR-O4 therefore represent the minimum material removal for which a single SQ of plutonium is obtained. Fig. 3 Diversion scenarios for the PWR and UCFR. The scenario identifiers indicate the approximate radial position of the diversion and the number of assemblies removed. Multiple-assembly scenarios assume removal/replacement from symmetric positions in order to maximize separation between perturbed positions, thereby minimizing the change in detector signal. The mass of plutonium and its constituent isotopes obtained by a proliferator for each scenario ( a ) depends on whether the assembly has been irradiated for one (outer) or two (inner, middle) cycles prior to removal. The UCFR plutonium composition for all scenarios is weapons-grade, with few contaminant isotopes. The probability of detection via observation by RETINA with count durations of one, three, and six months following the resumption of operations is shown in b . The green bars assume no power manipulation by the operator to mask the signal, while the blue ones assume power manipulation is performed optimally.
Demarcations (dashed lines) indicate the minimum mass required to manufacture a plutonium-based weapon (1 SQ) and the minimum detection probability required by the IAEA for low-probability events (20%).
UCFR diversions
The UCFR-1000 core-center position contains a control assembly, so UCFR-I diversions are performed at center-adjacent locations. The central assemblies experience a substantially larger neutron flux than their more peripheral counterparts despite their initially identical fuel compositions. As a result, the inner assemblies contain as much as 1 SQ of plutonium per assembly in just over two years of operation, whereas the outer ones hold a quarter as much. A burnup of 2.17 effective full-power years (EFPY) was chosen for the UCFR-1000 diversions, as this is the earliest a single SQ of plutonium is available via removal of one assembly (any of the six center-adjacent locations). The middle-core and outer-core diversion scenarios were chosen such that a minimum of 1 SQ would be obtained by the actor. Because the fuel is irradiated with a fast neutron spectrum, the plutonium has high purity: the UCFR-I, UCFR-M, and UCFR-O diversions, respectively, yield plutonium with 93, 95, and 98% fissile content.
Diversion yields and detectability
Analysis of multiple diversion scenarios for both cores (Fig. 3 ) shows that, in the majority of instances, diversions of 1 SQ are detected by a RETINA system with sufficient probability to meet IAEA requirements 17 for low-probability events, some within one month of resumed reactor operations. While diversions from the PWR core yielding at least one SQ were flagged at rates exceeding the IAEA threshold, sufficient probability was not achieved in three UCFR diversions: UCFR-I1 (one assembly from the inner core region), as well as UCFR-O4 and UCFR-O6 (four and six assemblies, respectively, from the outer core). The cases with more than 1 SQ of plutonium diverted without detection are summarized in Table 2 .
The UCFR-I1 diversion can be detected with a 19.1% probability with three months of measurement, approaching the 20% IAEA requirement. The UCFR-O4 and UCFR-O6 diversions, on the other hand, are substantially harder to detect. This is due to the peaked power distribution in the UCFR, which results in low neutron importance at the core periphery. The phenomenon is less pronounced in the more mature PWR design, resulting in less spatial variation in fission rates and, by extension, in diversion detection probabilities. Table 2 Diversion scenarios and probability of detection A skilled reactor operator may attempt to mask the diversion by manipulating the total reactor power (highlighted by the lower, blue-colored values in the diversion bar plot of Fig. 3 ). If successful, diversions of more than one SQ may pass undetected. However, such manipulations are very difficult to achieve in practice, and an operator would risk over-correction, which can increase the likelihood of detection relative to the unmasked case. Overshooting the σ power range also runs the risk of detection by traditional safeguards monitoring of the power level. Furthermore, the operators would have no assurance of the success of their effort. The "flying blind" nature of such manipulation, combined with the potential to inadvertently reduce the chances of maintaining concealment, should deter these attempts. Alternatively, while a proliferator may attempt to obtain one SQ in aggregate over the course of several diversions, this is considered an unlikely route due to the risk associated with circumventing traditional safeguards measures on multiple occasions. Consequently, RETINA systems are envisioned to complement current safeguards measures rather than replace them entirely. The overall results demonstrate the potential for RETINA systems to safeguard reactor facilities.
These detectors can substantially reduce the risk of a weaponizable quantity of material being diverted from current and future reactors. Since the replacement assemblies in all scenarios were the original low-enriched fuel, detection probabilities are expected to increase if the proliferator uses natural uranium instead (a path more likely to avoid IAEA accounting). With regard to the undetected UCFR diversions, the findings can help inform future design decisions for these conceptual cores. To counter the spatial dependence of detection probability, there is a strong imperative for further radial power flattening across assemblies; doing so via design optimization would also bring important safety and burnup advantages. Going forward, the two main areas for further technical improvement in antineutrino-based safeguards are background reduction and narrowing the uncertainty on antineutrino emissions from fission products (the largest component of uncertainty on the signal). As new technology is developed and operational experience gained, background rejection is expected to improve. In parallel, basic nuclear measurements of the kind pursued by SoLid 23 and PROSPECT 19 will refine the estimates of the antineutrino flux produced by the aggregate fission products of each fuel species. Additional calibration of measurements using known-state reactors can also be used to enhance precision. All of these improvements can be expected to reduce diversion detection times and increase the probability of detecting the five exceptions in Table 2 .
Discussion
A wide range of cases can be envisaged with different reactors or diversion schemes. It is considered beyond the scope of the paper to quantify diversion probabilities in each instance, but some discussion of notable cases is provided in this section. In a typical PWR, an alternate diversion scheme could involve the replacement of burnt fuel with natural-uranium-bearing assemblies.
This could allow the proliferator to avoid scrutiny by manufacturing fuel from its own uranium deposits, since all fresh fuel supplies are strictly surveilled. Analysis shows that such instances should in fact increase the risk of detection by a RETINA system due to an even larger disturbance in the antineutrino signature. This scenario also runs the risk of not providing sufficient criticality margin at startup or of resulting in an early shutdown of the reactor, both of which could be detected by traditional safeguarding means. Reducing the risk of detection would require the replacement of an assembly with another plutonium-bearing one, e.g., from the spent fuel pond. This is unlikely, since the proliferator would run the risk of detection via traditional safeguards twice (once by displacing the spent fuel, and again by displacing an in-core assembly). A state actor able to divert fuel from the spent fuel ponds is much more likely to use those assemblies for a weapons program rather than rely on the ones inside the reactor, which are monitored by a RETINA system. Another potential challenge for a RETINA system arises if a nuclear plant does not operate as baseload. Load-following is increasingly likely as more intermittent renewable energy sources are added to the grid. In these instances, the reactor power will vary with time, with a correspondingly varying antineutrino source term. Because of the low-statistics regime of a RETINA system, the signal averaged over days, weeks, or sometimes months is the relevant input to the safeguards calculation. When load-following is declared, the anticipated antineutrino signature can be adjusted accordingly. However, since the overall power would be decreased (load-following seldom involves overpower conditions due to safety constraints), the detection time would increase as a function of \(\frac{1}{{\sqrt n }}\) in light of the reduced signal.
As a result, if the total reactor energy generated over a period of time is halved, the total number of antineutrino detections is halved, and the detection time needed to draw a conclusion on a possible diversion is doubled. The safeguards regime would have to apply added scrutiny to the reactor during these instances. If, on the other hand, load-following is not declared, a departure from full nominal power can be detected within a few hours via the total antineutrino flux, similarly to what was demonstrated at SONGS 12 , 14 , but at an even faster rate in light of increases in detector efficiency. One other notable case to consider is online refueling. Many current reactors offer this capability (e.g., CANDU, RBMK, AGR). Their assemblies can be removed and replaced individually while reactor power is maintained. Online refueling itself is not a challenge for RETINA at a fundamental level; the system would still follow a similar approach to detect diversions. The main potential issue arises from the fact that these systems make several one-assembly diversions a more attractive route from a logistical standpoint. An SQ of plutonium could be slowly obtained by incrementally diverting one assembly at a time, many months apart. This would result in much subtler variations in the antineutrino signal and would be more challenging to detect with current RETINA technology. Traditional safeguards would need to be relied upon more heavily in these reactors. This issue is exacerbated in the case of the pebble-bed reactor concept. In these advanced reactors, small fuel pebbles are continuously cycled in and out of the core. Individual pebbles have low neutronic importance relative to the larger core, and their diversion is therefore very difficult to detect.
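The 1/√n scaling above can be made concrete with a simple counting-statistics sketch; the event rate, fractional deviation, and significance threshold are illustrative assumptions.

```python
# Counting-statistics sketch: the time needed to resolve a fixed fractional
# deviation `delta` at significance S scales as 1/rate, so halving the
# detected event rate doubles the detection time. Values are illustrative.
def detection_time_days(rate_per_day, delta, significance=3.0):
    # significance S = delta * sqrt(rate * t)  =>  t = (S / delta)^2 / rate
    return (significance / delta) ** 2 / rate_per_day

full_power = detection_time_days(rate_per_day=500.0, delta=0.05)  # 7.2 days
half_power = detection_time_days(rate_per_day=250.0, delta=0.05)  # 14.4 days
```

The quadratic dependence on delta also illustrates why subtle one-assembly diversions, which perturb the signal by only a fraction of a percent, require far longer counting periods than a full shutdown.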
It should be noted, however, that monitoring an online-refueling reactor with a RETINA system forces a proliferator to space each diversion by several months or even years to avoid detection. This would substantially increase the breakout time to a weapon. Many factors come into play when a state decides to pursue nuclear weapons. These dynamics are likely to be significantly altered if antineutrino detectors are universally deployed. First, the continuous presence of "eyes on the ground" will be an important psychological barrier deterring states from proliferating. Second, shortening the detection timeline will provide added opportunity for the international community to intervene before a state can develop a full-fledged weapon from the diverted material. Nuclear proliferation supply-side theorists argue that states are more likely to proliferate if they have the means to do so 24 . The presence of a reactor providing a ready source of fissile material reduces the perceived barriers and costs on the path to proliferation, in theory emboldening decision-makers. In a similar way, the presence of safeguards or antineutrino detectors increases the barriers and risks of any attempt to divert material. There are no recorded diversion attempts with inspectors present, but instances are believed to have occurred between inspections (e.g., Iraq in the 1980s) 25 . The presence of an "always-on" system, while not foolproof, would substantially increase the perceived risk of getting caught attempting to proliferate. As shown in Fig. 3 , in very few instances can a potential proliferator confidently extract one SQ of plutonium and avoid detection. Consequently, RETINA-like systems are expected to deter states from acquiring nuclear weapons, strengthening the nonproliferation regime. Were a diversion to occur, it could be detected on a much shorter timeline than with traditional safeguards.
This provides additional opportunity for the international community to intervene by applying diplomatic, economic, and military pressure to dissuade a given state from weaponizing the diverted material. The deployment path for these near-field antineutrino monitoring systems remains unclear at present. Wadsworth 26 has explored this issue as it relates to the Islamic Republic of Iran, concluding that certain types of antineutrino detectors should be favored over others, and recommending further investment in R&D efforts. As the author points out, suspect countries are likely to resist the implementation of RETINA-type systems. Arguments against their deployment might range from "national sovereignty" concerns to "unnecessary economic burdens", should the inspectorate be forced to finance the added safeguards. The first issue could be alleviated by some countries setting an example and voluntarily deploying these detectors on their nuclear fleets. The second issue could be addressed by requiring that such facilities be financed by the IAEA with support from the international community. As an initial step, the next generation of reactors can be designed to include antineutrino detectors, with the Nuclear Suppliers Group enforcing this added feature. Retrofitting old reactors will prove more difficult, as it requires additional economic incentive and goodwill from states. Attempting to produce a universally binding agreement for NPT signatories is likely to be a difficult path to follow. A more promising route would be along the lines of the voluntary IAEA Additional Protocols, which were gradually adopted worldwide. From 1997 to 2015, the agreement entered into force in 126 countries 27 . Leveraging this framework, we argue for a phased enforcement of the requirement, based on reactor type and construction time. The first phase of such a protocol would focus initially on new reactor concepts that are most at risk of proliferation (such as the UCFR).
In the second phase, any newly planned reactor would need to contain such a system, regardless of type. Then, in a last stage, all research and civilian reactors would be retrofitted with monitoring technology. Deploying these detectors to older systems can prove more burdensome, but not impossible, as was demonstrated in the SONGS project 9 , 12 , 15 , 28 . The technical aspects of the detectors must also be addressed, with standards put in place early on. Detector technology has been improving at a steady rate for the past several years and is expected to keep doing so even if an agreement takes hold. An IAEA certification procedure would need to be implemented, ensuring that as new capabilities are developed, they can be tested and integrated into future deployments. Beyond the realm of traditional safeguards, RETINA-type systems have also been proposed for disarmament negotiations and plutonium disposition. A recent study has argued for their deployment in Russian fast reactors tasked with eliminating plutonium stockpiles 29 . Studies have also investigated how they can be used to monitor MOX burnup in a PWR to verify plutonium incineration 30 . While the Plutonium Management and Disposition Agreement (PMDA) has recently stalled, antineutrino monitoring can provide an exciting way of rebuilding trust between states by assuring that procedures are being followed. The monitoring systems can also come into play for more ambitious international treaties such as the Fissile Material Cutoff Treaty, which is still under negotiation at the UN Conference on Disarmament 31 . The agreement would place a cap on the amount of fissile material produced by nuclear weapons states and would need non-intrusive verification tools for military production reactors. RETINA systems, which collect no sensitive optical or acoustic information, would be ideal for such a task.
The transition of the international nuclear fuel cycle to a hub-spoke model simultaneously reduces the risk of enrichment-based and reprocessing-based proliferation vectors, but the temptation for actors to divert material from the reactor itself remains, or may even increase in the case of advanced long-lived designs. Because the RETINA monitoring system relies directly on the antineutrinos emitted from fission fragments, it is protected against spoofing of the relevant physical signature. RETINA-type antineutrino detectors can promptly detect reactor shutdowns, alerting inspectors if an off-cycle interruption occurs. When tasked with detecting compositional changes in the reactor core caused by diversion of material from current systems (PWR) and anticipated future systems (UCFR), the monitors can meet the detection threshold set by the IAEA in the majority of instances in which more than one SQ of material is diverted. Notable exceptions were observed for assemblies diverted from the UCFR outer region and for cases in which the plant operator is able to optimally manipulate the reactor power output to mask the signal. Further improvements in performance can be expected with future developments in background noise rejection, directionality capabilities, detector fiducial volume, reactor-detector standoff, and reduction of uncertainties on the detector signal (in particular, the antineutrino yield per fission of fuel isotopes). Despite their limitations, RETINA systems are expected to be an effective deterrent against attempted diversion, and therefore warrant further consideration by the IAEA and other agencies concerned with the spread of nuclear weapons.
Methods
MCNP models
The Monte Carlo neutron transport code MCNP6 32 was used for the modeling of the PWR. The code is well suited to simulating the performance of thermal cores and is considered among the most well-established in the field of nuclear engineering.
A core design based on a typical Westinghouse-type PWR was selected for the analysis. Depletion calculations were performed with the newly coupled CINDER90 code 32 . In order to reduce computation costs, the model takes advantage of PWR core symmetry and simulates 1/8th of the core volume. Reflective boundary conditions were applied on either side of the model “slice”. The simulations were all performed for one full operation cycle of 18 months. In order to ensure good statistical accuracy, a total of 100,000 particles were run over 200 cycles for each burn step, resulting in acceptable eigenvalue standard deviations of the order of 20 pcm. Diversion simulations were performed by modifying the original assembly arrangement and replacing once- or twice-burnt assemblies in specified locations with fresh ones. The fission rates of the base case were extracted and compared to those of each of the modified diversion simulations. This was used as the basis to calculate the antineutrino response deviance in each diversion scenario highlighted in Fig. 3 . Taking advantage of the core symmetry, a total of three diversion simulations were generated. Linear interpolation was used for the outer and middle core results in order to deduce the fission rates of the other scenarios. The outer core simulation consisted of an eight-assembly diversion while the middle core one consisted of four. Interpolating between each model and the base case was used to obtain the fission rates of PWR-M1, -M2, -O1, -O2, and -O4. For each diversion simulation, the core eigenvalue was ensured to be above critical throughout depletion. REBUS models REBUS is a suite of codes developed at Argonne National Laboratory for fast reactor fuel cycle evaluation. It consists of different modules with specific functionalities. MC2-3 generates cross-sections that are weighted by TWODANT, a 2-D SN code.
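The interpolation step described above can be sketched in a few lines: only the base case and one multi-assembly diversion are simulated, and intermediate scenarios are inferred linearly. A minimal sketch follows; the fission-rate values are illustrative placeholders, not data from the study.

```python
# Sketch of the linear interpolation used to infer fission rates for
# intermediate diversion scenarios (e.g. PWR-O1, -O2, -O4) from two
# MCNP runs: the base case (0 assemblies diverted) and a simulated
# eight-assembly outer-core diversion. Values are placeholders.

def interpolate_fission_rate(base_rate, diverted_rate, n_sim, n_target):
    """Linearly interpolate the fission rate for an n_target-assembly
    diversion between the base case and an n_sim-assembly simulation."""
    fraction = n_target / n_sim
    return base_rate + fraction * (diverted_rate - base_rate)

# Base-case and simulated 8-assembly fission rates (arbitrary units)
base, diverted_8 = 1.000, 1.040

# Deduce the 1-, 2-, and 4-assembly outer-core scenarios
for n in (1, 2, 4):
    rate = interpolate_fission_rate(base, diverted_8, 8, n)
    print(f"PWR-O{n}: {rate:.4f}")  # 1.0050, 1.0100, 1.0200
```

The same interpolation applies between the base case and the four-assembly middle-core simulation for PWR-M1 and -M2.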
These cross-sections are then used by DIF3D-VARIANT, a 3-D PN flux solver, to compute the flux in each assembly region. The values are then used by REBUS to calculate fission rates and deplete the isotopic compositions of the fuel. The fission rate evolution over time is then used for the antineutrino detector analysis. Due to the evolution of the active zone inside the UCFR-1000, cross-section updating is necessary. An automated script was developed to halt the simulation, re-generate the cross-section file, and then resume the depletion. Because divergence in the UCFR fuel cycle data should be most pronounced at the beginning of the cycle, metrics from the first 15 Effective Full Power Years (EFPY) were used to assess the case for periodic updates to the effective microscopic cross sections. For the reference fuel cycle, cross sections were updated every 1 EFPY from 0 to 15 EFPY, then every 3 EFPY for the remainder of the cycle. The relaxed update schedule past the first quarter of the burnup cycle is allowed since the burn zone propagates slowly upward and the change in fuel isotopes is slow compared to at BOC. To adequately model flux variations inside the reactor, a P3 flux angular order was selected. A nodal spatial order of 4 was used for the source term, 6 for the flux term within the node, and 1 for the leakage term. A whole core was modeled (as opposed to the 1/8th MCNP model). Separate, individual diverted reactor models were run to obtain the relative fission yields; no interpolation between diversion scenarios was conducted. Since the UCFR operates continuously and without any refueling interval, the diversion simulations were taken to begin after 2.17 EFPY of operation. This timeline corresponds to the earliest time a proliferator could acquire 1 SQ of plutonium in a single assembly (I1). Data availability The data that support the findings of this study are available on request from the corresponding author A.E.
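The cross-section update schedule described above (every 1 EFPY up to 15 EFPY, then every 3 EFPY) can be generated mechanically. The sketch below is illustrative only: the total cycle length passed in is a placeholder, not the UCFR-1000's actual cycle length.

```python
# Sketch of the cross-section update schedule: dense 1-EFPY updates
# through the first 15 EFPY, then relaxed 3-EFPY updates for the
# remainder of the cycle. The 30-EFPY cycle length below is a
# placeholder for illustration.

def update_points(cycle_length_efpy, dense_until=15, dense_step=1, sparse_step=3):
    """Return the EFPY time points at which cross sections are regenerated."""
    points = list(range(0, dense_until + 1, dense_step))
    t = dense_until + sparse_step
    while t <= cycle_length_efpy:
        points.append(t)
        t += sparse_step
    return points

print(update_points(30))
# 0..15 every year, then 18, 21, 24, 27, 30
```

An automation script of the kind the authors describe would loop over these points, halting the depletion, regenerating the MC2-3 cross-section file, and resuming.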
The data are not publicly available due to them being generated using export-controlled codes. Code availability MCNP6 and REBUS codes were used to generate the reactor models and to perform fuel cycle analysis. The codes are available from . | Technology to measure the flow of subatomic particles known as antineutrinos from nuclear reactors could allow continuous remote monitoring designed to detect fueling changes that might indicate the diversion of nuclear materials. The monitoring could be done from outside the reactor vessel, and the technology may be sensitive enough to detect substitution of a single fuel assembly. The technique, which could be used with existing pressurized water reactors as well as future designs expected to require less frequent refueling, could supplement other monitoring techniques, including the presence of human inspectors. The potential utility of the above-ground antineutrino monitoring technique for current and future reactors was confirmed through extensive simulations done by researchers at the Georgia Institute of Technology. "Antineutrino detectors offer a solution for continuous, real-time verification of what is going on within a nuclear reactor without actually having to be in the reactor core," said Anna Erickson, associate professor in Georgia Tech's George W. Woodruff School of Mechanical Engineering. "You cannot shield antineutrinos, so if the state running a reactor decides to use it for nefarious purposes, they can't prevent us from seeing that there was a change in reactor operations." The research, to be reported August 6 in the journal Nature Communications, was partially supported by a grant from the Nuclear Regulatory Commission (NRC). The research evaluated two types of reactors, and antineutrino detection technology based on a PROSPECT detector currently deployed at Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). 
Antineutrinos are elementary subatomic particles with an infinitesimally small mass and no electrical charge. They are capable of passing through shielding around a nuclear reactor core, where they are produced as part of the nuclear fission process. The flux of antineutrinos produced in a nuclear reactor depends on the type of fission materials and the power level at which the reactor is operated. "Traditional nuclear reactors slowly build up plutonium 239 in their cores as a consequence of uranium 238 absorption of neutrons, shifting the fission reaction from uranium 235 to plutonium 239 during the fuel cycle. We can see that in the signature of antineutrino emission changes over time," Erickson said. "If the fuel is changed by a rogue nation attempting to divert plutonium for weapons by replacing fuel assemblies, we should be able to see that with a detector capable of measuring even small changes in the signatures." The antineutrino signature of the fuel can be as unique as a retinal scan, and how the signature changes over time can be predicted using simulations, she said. "We could then verify that what we see with the antineutrino detector matches what we would expect to see." In the research, Erickson and recent Ph.D. graduates Christopher Stewart and Abdalla Abou-Jaoude used high-fidelity computer simulations to assess the capabilities of near-field antineutrino detectors that would be located near—but not inside—reactor containment vessels. Among the challenges is distinguishing between particles generated by fission and those from natural background. "We would measure the energy, position and timing to determine whether a detection was an antineutrino from the reactor or something else," she said. "Antineutrinos are difficult to detect and we cannot do that directly. These particles have a very small chance of interacting with a hydrogen nucleus, so we rely on those protons to convert the antineutrinos into positrons and neutrons." 
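The energy, position, and timing discrimination described in the quote above is commonly implemented as a prompt/delayed coincidence search: a positron-like prompt signal followed shortly by a neutron-capture-like delayed signal. The sketch below is a generic illustration of that logic; the time window and energy cuts are placeholder values, not the actual selection criteria of the detectors discussed here.

```python
# Illustrative prompt/delayed coincidence filter for inverse beta decay
# (IBD) candidates: a prompt positron-like deposit followed by a delayed
# neutron-capture-like deposit within a short window. The window and
# energy ranges are generic placeholders.

def find_ibd_candidates(events, window_us=100.0,
                        prompt_range=(1.0, 8.0), delayed_range=(1.8, 2.6)):
    """events: list of (time_us, energy_MeV) tuples sorted by time.
    Returns (prompt_time, delayed_time) pairs for IBD-like coincidences."""
    candidates = []
    for i, (t1, e1) in enumerate(events):
        if not (prompt_range[0] <= e1 <= prompt_range[1]):
            continue
        for t2, e2 in events[i + 1:]:
            if t2 - t1 > window_us:
                break  # later events are even further away in time
            if delayed_range[0] <= e2 <= delayed_range[1]:
                candidates.append((t1, t2))
                break
    return candidates

# A 3 MeV prompt hit followed 40 us later by a 2.2 MeV capture pairs up;
# the isolated 5 MeV hit does not.
evts = [(0.0, 3.0), (40.0, 2.2), (5000.0, 5.0)]
print(find_ibd_candidates(evts))  # [(0.0, 40.0)]
```

Real detectors add position correlation and pulse-shape discrimination on top of this timing logic to further reject backgrounds.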
These images compare the evolution of antineutrino spectrum and antineutrino detector response as a function of reactor operational time in a pressurized water reactor and an ultra-long cycle fast reactor. Credit: Georgia Tech Nuclear reactors now used for power generation must be refueled on a regular basis, and that operation provides an opportunity for human inspection, but future generations of nuclear reactors may operate for as long as 30 years without refueling. The simulation showed that sodium-cooled reactors could also be monitored using antineutrino detectors, though their signatures will be different from those of the current generation of pressurized water reactors. Among the challenges ahead is reducing the size of the antineutrino detectors to make them portable enough to fit into a vehicle that could be driven past a nuclear reactor. Researchers also want to improve the directionality of the detectors to keep them focused on emissions from the reactor core to boost their ability to detect even small changes. The detection principle is similar in concept to that of retinal scans used for identity verification. In retinal scans, an infrared beam traverses a person's retina and the blood vessels, which are distinguishable by their higher light absorption relative to other tissue. This mapping information is then extracted and compared to a retinal scan taken earlier and stored in a database. If the two match, the person's identity can be verified. Similarly, a nuclear reactor continuously emits antineutrinos that vary in flux and spectrum with the particular fuel isotopes undergoing fission. Some antineutrinos interact in a nearby detector via inverse beta decay. The signal measured by that detector is compared to a reference copy stored in a database for the relevant reactor, initial fuel and burnup; a signal that sufficiently matches the reference copy would indicate that the core inventory has not been covertly altered. 
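The "retinal scan" comparison described above, matching a measured antineutrino spectrum against a stored reference, can be illustrated with a simple bin-by-bin goodness-of-fit check. This is a sketch only: the bin counts and threshold are invented placeholders, not values from the study.

```python
# Sketch of comparing a measured antineutrino energy spectrum to a
# stored reference copy: a chi-square statistic over energy bins flags
# a perturbed (possibly diverted) core. All numbers are illustrative.

def chi_square(measured, reference):
    """Pearson chi-square assuming Poisson-like bin uncertainties (~reference)."""
    return sum((m - r) ** 2 / r for m, r in zip(measured, reference) if r > 0)

reference   = [1000, 800, 600, 400, 200]  # expected counts per energy bin
unperturbed = [1010, 790, 605, 398, 201]  # consistent with the reference
perturbed   = [1060, 840, 560, 360, 190]  # spectrum shifted by a fuel change

threshold = 11.07  # e.g. 95th percentile of chi-square with 5 dof
print(chi_square(unperturbed, reference) < threshold)  # True: matches
print(chi_square(perturbed, reference) > threshold)    # True: flagged
```

In practice the reference would be a simulated signature for the declared fuel loading and burnup, and the statistic would incorporate detector response and systematic uncertainties.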
However, if the antineutrino flux of a perturbed reactor is sufficiently different from what would be expected, that could indicate that a diversion has taken place. The emission rates of antineutrino particles at different energies vary with operating lifetime as reactors shift from burning uranium to plutonium. The signal from a pressurized water reactor consists of a repeated 18-month operating cycle with a three-month refueling interval, while signal from an ultra-long cycle fast reactor (UCFR) would represent continuous operation, excluding maintenance interruptions. Preventing the proliferation of special nuclear materials suitable for weapons is a long-term concern of researchers from many different agencies and organizations, Erickson said. "It goes all the way from mining of nuclear material to disposition of nuclear material, and at every step of that process, we have to be concerned about who's handling it and whether it might get into the wrong hands," she explained. "The picture is more complicated because we don't want to prevent the use of nuclear materials for power generation because nuclear is a big contributor to non-carbon energy." The paper shows the feasibility of the technique and should encourage the continued development of detector technologies, Erickson said. "One of the highlights of the research is a detailed analysis of assembly-level diversion that is critical to our understanding of the limitations on antineutrino detectors and the potential implications for policy that could be implemented," she said. "I think the paper will encourage people to look into future systems in more detail." | 10.1038/s41467-019-11434-z |
Nano | A new method to produce gold nanoparticles in cancer cells | Aaron S. Schwartz-Duval et al, Intratumoral generation of photothermal gold nanoparticles through a vectorized biomineralization of ionic gold, Nature Communications (2020). DOI: 10.1038/s41467-020-17595-6 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-17595-6 | https://phys.org/news/2020-09-method-gold-nanoparticles-cancer-cells.html | Abstract Various cancer cells have been demonstrated to have the capacity to form plasmonic gold nanoparticles when chloroauric acid is introduced to their cellular microenvironment. But their biomedical applications are limited, particularly considering the millimolar concentrations and longer incubation period of ionic gold. Here, we describe a simplistic method of intracellular biomineralization to produce plasmonic gold nanoparticles at micromolar concentrations within 30 min of application utilizing polyethylene glycol as a delivery vector for ionic gold. We have characterized this process for intracellular gold nanoparticle formation, which progressively accumulates proteins as the ionic gold clusters migrate to the nucleus. This nano-vectorized application of ionic gold emphasizes its potential biomedical opportunities while reducing the quantity of ionic gold and the required incubation time. To demonstrate its biomedical potential, we further induce in-situ biosynthesis of gold nanoparticles within MCF7 tumor mouse xenografts, which is followed by its photothermal remediation. Introduction Gold nanoparticles have been extensively utilized for biological applications due to their simplistic and tunable chemistry, which provides the ability to control and modify size, morphology, and surface functionality, combined with their morphology-dependent optical properties 1 , 2 , 3 , 4 . Typically, generating gold nanoparticles requires, at a minimum, chloroauric acid as a precursor material and a reducing agent for catalyzing the reductive nucleation.
Variations in the reducing agent molecule as well as reaction conditions (e.g. pH, temperature, incubation time) will result in different morphologies, sizes, and functionalities of the resulting nanoparticles 5 , 6 , 7 , 8 . For biomedical applications, these variations in morphology, size, and functionality are directly correlated to their cellular internalization 9 , 10 , 11 , biodistribution, biological half-life 11 , 12 , 13 , renal secretion 11 , 12 , 13 , 14 , 15 , and plasmon optical properties 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 . Because of their plasmon-optical properties, gold nanoparticles are inherently theranostic, providing imaging contrast through near infrared (NIR) 16 , 17 , 15 , 16 , fluorescence 18 , 19 , X-Ray 20 , 21 , photoacoustic 22 , 23 , 24 , and Raman enhancement 25 , 26 . This plasmonic optical property therefore allows gold nanoparticles to act as a vehicle to enhance ablative therapies both for photothermal 27 , 28 , and X-ray radiation 29 , 30 , 31 . On the other hand, the bio-inertness of gold nanoparticles combined with the diverse range of applicable reductant molecules has allowed researchers to develop biomimetic gold nanoparticles using biomolecules as scaffolds for its reduction 32 , 33 , 34 , 35 . Therefore, this approach requires no additional steps for bioconjugation since the reducing biomolecules remain attached as capping agents to the formed nanoparticles. Further to this, if cellular machinery are used to generate gold nanoparticles, the resulting plasmon-optical properties could act as a reporter of the affecting cell phenotype. This capability to generate gold nanoparticles via cellular biomolecules has already been demonstrated previously in mammalian cells 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 . 
The precise mechanism for this cellular bioreduction of gold nanoparticles is not fully understood; however, these treatments have been known to negatively impact cell health through cellular responses via shock and stress-related proteins 42 , decrease in viability 43 , 44 , and even cell fixation 44 . These previous works, wherein ionic gold was applied diffusely to generate gold nanoparticles via cellular mechanisms, could not be directly applied for cellular diagnostics or therapeutics, as the process is so deleterious to cell health and not cell-specific. However, this process of cellular bioreduction forming gold nanoparticles has been marketed as a means for generating biomimetic and biofunctional gold nanoparticles, which could potentially be applied to different physiological systems. One such recent work by A.V. Singh et al. demonstrated the photothermal effects from intracellular gold, reduced in vitro through the addition of prefabricated gold nanoparticle seeds 36 . This work demonstrated that the process of biomineralization could easily be modified through suitable tuning of the recipe of precursor materials. In this work, we hypothesize that if ionic gold is deployed to cells in discrete nanoscaled packets, this can largely reduce the concentration of gold necessary for the reduction, as well as decrease the time frame and overall impact on cell health, thereby broadening its use for diagnostic and therapeutic purposes. Accordingly, a simplistic delivery vehicle has been demonstrated herein for deploying ionic gold, which indicates intracellular nanoparticle formation within a time span of ~30 min, much shorter than previous literature reports, at concentrations lower than previous applications, in complete cell media. Additionally, we have identified proteins and their respective subcellular locations involved in the cellular bioreduction of gold nanoparticles, providing broader insight into this potential reduction pathway.
The process of sequential and progressive reduction of gold nanoparticles has been shown to occur initially via secreted proteins in the extracellular space and finally within the nucleus for terminal reduction. Because this approach has minimal impact on protein expression/integrity and some of the bound proteins are nucleotide-binding, there is a strong potential for the developed gold nanoparticles to be used in theranostic applications to provide photothermal ablation directly to the nucleotides. Thus, we demonstrate a theranostic application through an MCF-7 cancer xenograft model, wherein fluorescent and photothermally active gold nanoparticles are formed intracellularly to remediate cancer xenografts. Results Vehicle design and characterization for ionic gold delivery For intracellular gold nanoparticle formation, all of the constituents (sans chloroauric acid) are innately present in cellular systems. This intracellular gold nanoparticle formation has been demonstrated previously in mammalian cells, however, only through bulk diffusive treatments lasting more than 24 h, typically in solutions which contain no cell nutrients (such as DPBS) 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 . These treatment scenarios are non-ideal for minimizing the impact on cellular health. To minimize the caustic properties of chloroauric acid, we aim to deploy ionic gold salts in discrete nano-scaled clusters. It was hypothesized that if we were successful in deploying ionic gold in discrete nanoscaled clusters, it could potentially allow us to reduce the overall dosage required for intracellular gold nanoparticle (GNP) formation and, in turn, reduce the impact on cellular physiology.
In order to successfully enable intracellular reduction of ionic gold by the action of cellular biomolecules, it is necessary that the ionic gold delivery vehicle: (a) is made of material that will not spontaneously reduce ionic gold, (b) will contain the ionic gold in discrete nanoscale packets, (c) will have the capacity to interact with the cells, and (d) will allow cellular biomolecules access to the gold ions for nanoparticle formation. While chloroauric acid can be readily reduced by a large variety of functional groups, it is not readily reduced by ether groups, carbon–carbon single bonds, nor under acidic pH conditions. With this knowledge in hand, hydroxyl-terminated polyethylene glycol (PEG) was used as the ionic Au 3+ delivery vehicle. The rationale behind this was that PEG is known to cluster with ionic gold in acidic pH solutions 6 , and will not cause spontaneous gold nanoparticle (GNP) formation. We do not suspect that PEG would induce spontaneous GNP formation, as polyol-based Au/Ag nanoparticle synthesis, where PEG could function as a reductant, typically requires much higher temperatures (160 °C) than those utilized for cell culture (37 °C) 48 . In a typical process for the formation of the ionic gold delivery vehicle, HAuCl 4 (14 µmol) and PEG (5 µmol) were co-clustered in a sealed glass vial with 2 ml of carbon-filtered deionized water (pH = 4). We observed that PEG molecules formed clusters in acidic conditions when compared to neutral conditions. In addition, positively charged gold ions were seen to preferentially localize within these clusters (Fig. 1a, b ). This mixture of ionic gold and PEG was then incubated in a water bath kept at 37 °C for 30 min to ensure thermal equilibrium was achieved before dilution for analysis and/or cell treatment. A UV–Vis absorbance spectrum confirmed that the gold in these Au-PEG clusters was ionic and not reduced in nature (Fig. 1c ), owing to the lack of absorbance peaks characteristic of plasmon formation.
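The stated vehicle recipe (14 µmol HAuCl4 and 5 µmol PEG in 2 ml water) implies stock molarities that can be checked with simple arithmetic. The sketch below does that; the ~29-fold dilution figure is inferred here from the 0.24 mM treatment concentration mentioned later in the text, not stated by the authors.

```python
# Back-of-envelope arithmetic for the Au-PEG vehicle composition:
# 14 umol HAuCl4 and 5 umol PEG co-clustered in 2 mL of water.

def molarity_mM(amount_umol, volume_mL):
    # umol / mL is numerically equal to mmol / L, i.e. mM
    return amount_umol / volume_mL

stock_au = molarity_mM(14, 2)   # 7.0 mM Au(III) in the stock clusters
stock_peg = molarity_mM(5, 2)   # 2.5 mM PEG

# Cell treatments later use 0.24 mM Au(III): roughly a 29-fold dilution
# of the stock (inferred for illustration).
dilution = stock_au / 0.24
print(stock_au, stock_peg, round(dilution, 1))  # 7.0 2.5 29.2
```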
This spectrum revealed an absorbance profile typical of ionic gold (Supplementary Fig. 1a ), with no observable peak in the 500–700 nm range that is characteristic of the surface plasmonic resonance of reduced gold 1 , 2 , 3 , 4 . Additionally, no visible color change or precipitate formation was observed when the solution was left undisturbed on the benchtop (RT) for 30 days (all treatments and samples used for analysis were made fresh). The discrete Au-PEG clusters were 288 ± 46 nm in diameter, as quantified by transmission electron microscopy (TEM) (Fig. 1b ). From scanning electron microscopy–energy dispersive spectroscopy (SEM–EDS), we confirmed that the Au-PEG clusters contain gold (Fig. 1d, e ) that is localized within the PEG clusters rather than dispersed within the solution. The dark contrast in TEM afforded by these clusters (Fig. 1b ), as well as the light scatter from dark-field hyperspectral imaging of the clusters in solution (Fig. 1f, g ), further supports these observations that the ionic gold is localized within the PEG clusters. The light scatter afforded through enhanced dark-field hyperspectral imaging for the non-reduced ionic gold particles confirmed the lack of surface plasmon resonance peaks that would correspond to reduced gold. The light scattering peak at 420 nm is an artifact of the Vis–NIR liquid light guide, which transmits light beginning at 420 nm. Thus, using these spectral and microscopic techniques, we have successfully characterized our delivery vehicles as containing ionic gold in discrete nanoscaled packets (without auto-reduction). Fig. 1: Characterization of ionic Au-PEG delivery vehicle. Representative transmission electron microscopy (TEM) images of ( a ) PEG and ( b ) ionic Au–PEG clusters. c UV–vis absorbance spectrum of ionic Au–PEG clusters (200–800 nm). d Electron dispersion spectrum of ionic Au–PEG clusters (on copper tape) captured from ( e ) scanning electron microscopy–energy dispersive spectroscopy (SEM–EDS).
Purple spectra correspond to signal obtained from the clusters, while the blue spectra correspond to the background. f Spectral intensity of light scatter obtained through enhanced dark field hyperspectral imaging of discrete Au–PEG clusters in solution ( n = 3 particles, with 5 spectra from each). g Enhanced darkfield hyperspectral image with 2X digital magnification, where the red box highlights the particles that produced the spectra shown in ( f ). Forming gold nanoparticles via cellular mechanisms Ionic Au-PEG clusters, prepared with 0.24 mM Au 3+ , were capable of enabling a cellular reduction within 4 h of treatment in human breast cancer cell cultures (MCF-7) in complete cell media (containing 10% FBS, with 1x penicillin and streptomycin). Previous applications for generating gold nanoparticles intracellularly typically apply 1 mM of ionic gold over time periods >24 h 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 . However, the amount of formed plasmonic gold nanoparticles was negligible or considerably reduced when the ionic gold was applied diffusively in previous literature reports, either at lower concentrations (0.1 mM 43 , 44 , 0.25 mM 42 ) or for shorter time periods (12 h 42 , 15 h 45 ). From our application, we were able to confirm the transition of ionic to plasmonic gold through the spectral library obtained from the enhanced dark-field hyperspectral imaging of Au-PEG clusters reduced through their interactions with MCF-7 cells (Fig. 2a, b ). Hyperspectral imaging was used to investigate the intracellular bio-reduction process of ionic gold. It was found that although the precursor concentration applied was considerably lower than those previously reported, characteristic surface plasmon bands for Au(0) could be observed only after the bio-reduction process (Fig. 2a ), in contrast to the absence of plasmonic peaks for Au-PEG clusters (Fig. 1f ).
This further confirms that the bio-reduction process led to the formation of AuNPs with the oxidation state of Au(0) from the +3 state in the Au-PEG ionic clusters. To confirm that these plasmonic nanoparticles were internal and not simply on the cell surface, TEM imaging of MCF-7 cells treated with Au-PEG clusters was performed (Fig. 2c and Supplementary Fig. 1d ). Electron density concentrated in the formed particles, with a dark, continuous, irregularly shaped border along the periphery, was observed in this experiment (Fig. 2c ). Further, to confirm that gold nanoparticle formation occurs en route to cellular internalization and not by undesired reactions with either the culture plate coating or the culture media, we undertook several control experiments. One such possibility is that the high number of primary amines on the adherent culture plate surface (i.e. poly- l -lysine coating) may induce reduction of ionic gold within the solution of the culture plate. However, no reductive or structural changes were observed in this case, even after incubating the ionic Au-PEG clusters for 24 h at 37 °C on standard culture plates with poly- l -lysine coating (Supplementary Fig. 1b ). Bovine serum albumin (BSA), a protein nutrient source within the culture media, is capable of reducing gold into nanoparticles with the aid of mild to strong reductants, heating, or pH manipulation (outside of physiological norms) 48 , 49 . We confirmed that BSA was not the source of gold biomineralization in our system, as fresh culture media (containing BSA) was not able to reduce the ionic gold clusters (24 h, 37 °C), as confirmed through TEM images of this treatment solution (Supplementary Fig. 1c ). For BSA-based biomineralization procedures, it is necessary to include strong/mild reductants or basic pH manipulation 49 , 50 . Contrary to these, neither strong/mild reductants, pH manipulation, nor heating (beyond physiological norms) are present within the protocol of our system.
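The spectral criterion used repeatedly in this section, a surface-plasmon band in the ~500–700 nm window for reduced Au(0) versus a featureless ionic Au(III) spectrum, can be expressed as a simple programmatic check. The sketch below uses synthetic placeholder spectra and an arbitrary peak-to-baseline ratio; it is an illustration of the criterion, not the authors' analysis pipeline.

```python
# Minimal sketch of the plasmon-band check: reduced Au(0) shows a
# scattering/absorbance peak near 500-700 nm, while ionic Au(III)
# does not. Spectra and the detection ratio are placeholders.

def has_plasmon_peak(wavelengths, intensities, window=(500, 700), ratio=1.5):
    """Flag a plasmon peak if the maximum inside the window exceeds the
    median out-of-window intensity by the given ratio."""
    inside = [i for w, i in zip(wavelengths, intensities)
              if window[0] <= w <= window[1]]
    outside = sorted(i for w, i in zip(wavelengths, intensities)
                     if not (window[0] <= w <= window[1]))
    baseline = outside[len(outside) // 2]  # median of out-of-window points
    return max(inside) > ratio * baseline

wl = list(range(400, 801, 50))  # 400..800 nm sampling grid
ionic = [1.0] * len(wl)         # flat spectrum: no plasmon band
reduced = [1.0 if not 500 <= w <= 700 else 3.0 for w in wl]

print(has_plasmon_peak(wl, ionic))    # False
print(has_plasmon_peak(wl, reduced))  # True
```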
Further, the final pH of the cellular surroundings was tested when HAuCl 4 was added to the cell media at a final concentration of 1 mM, more than four times the concentration used in the current study. We observed that the change in pH was <0.1, supporting the claim of a benign environment for the biomineralization of gold nanoparticles. Thus, to confirm that our gold nanoparticle formation is enhanced directly as a result of our delivery vehicle, Raman micrographs were acquired of cells treated with Au-PEG clusters, chloroauric acid (Au 3+ ) (lacking PEG), and a negative control containing PEG and sodium salt (Na + ) at equal molar concentration to the analyte (Au 3+ ). It was observed that the Raman intensity for the cells treated with Au-PEG was markedly enhanced (8×) compared to cells treated with Na-PEG or chloroauric acid alone after four hours of incubation (Fig. 3a, b ). Additionally, this Au-PEG treatment begins to provide plasmonic enhancement as early as 30 min after treatment (Fig. 3a ). This enhancement can only arise from species within a range of <100 nm. This plasmonic enhancement, found only in the cells treated with Au-PEG, confirmed the essential role of discretization of the Au 3+ cations within a delivery vehicle for plasmonic nanoparticle formation at the applied concentrations and time frame. Fig. 2: Confirmation of in vitro reduction occurring en route. a Spectral intensity of light scatter from MCF-7 cells treated with Au–PEG clusters ( n = 44 spectra), as distinct from untreated MCF-7 cells, obtained through hyperspectral enhanced dark field imaging. b Corresponding enhanced hyperspectral darkfield image mapping of nanoparticles generated through Au–PEG treatment to MCF-7 cells, with negative filtering against untreated MCF-7 cells (Supplementary Fig. 1e) to ensure no false-positive nanoparticle mapping.
Scheme ( c , d ) with representative TEM images describing the reduction of ionic Au–PEG clusters ( c ), upon their interaction with cells ( d ). Fig. 3: Characterization of intracellular reduction of ionic gold. Raman spectra ( a ) of cells treated with either ionic Au–PEG clusters after 240 min (burgundy), 120 min (red), 60 min (orange), 30 min (yellow), Au 3+ (green), or Na–PEG (blue). Raman mapping (of 2800–3000 peak intensity) with the corresponding bright field and merged images for 240 min treatments of Au–PEG, Au 3+ and Na–PEG ( b ). Scale bars for ( b ) are 20 µm. c Raman spectra of Au–PEG-treated MCF-7 cellular fractions separated using differential centrifugation, including the nucleus and large organelles, 15,000 g (red); mitochondria, lysosomes and other medium-sized organelles, 100,000 g (green); membrane fragments, 300,000 g (blue); highly soluble/cytosolic molecules, >300,000 g (violet); and the glass slide without cell fractions (black). d SDS-PAGE stained with Coomassie Blue with intensity plots overlaid of protein from the nuclear fraction of MCF-7 cells without treatment (green), treated with Na–PEG (blue), or Au–PEG (yellow) with standard SDS-PAGE conditions (lighter-shaded colors, left three lanes), or without β-mercaptoethanol (darker-shaded colors, right three lanes). Investigation on participating biomolecules In order to utilize intracellularly formed GNPs directly for biomedical applications (either diagnostic, therapeutic, or a combination of both), knowledge of their formation process induced by the cellular biomolecules must be obtained. Elucidating the formation process and the participating biomolecules for the formation of these gold nanosystems would provide valuable information relating to the possible pathological pathways with which this process could potentially be suitable for diagnostic/therapeutic applications.
The enhanced Raman spectral maps afforded by ionic Au-PEG treatments to MCF-7 cells (Fig. 3a, b ) provided important chemical information regarding the biomolecules proximal to the intracellularly formed nanoparticles. The biomolecules, proximal to the intracellularly formed nanoparticles, may be bound to or involved in process of their formation. The enhanced Raman spectra contain characteristic peaks arising from lipid (1175–1385, 1446–1477, and 2849–2969 cm −1 ), and proteins (1590–1650 cm −1 ) 51 . Additionally, lack of enhanced characteristic phosphate peaks at 2380–2450 cm −1 in the enhanced Raman spectra allowed us to confirm that phosphates from the sodium phosphate buffer and/or the nucleic acids are not participating in this gold reduction (Fig. 3a ) 51 . However, the presence of amide, CH 3 , and CH 2 peaks do not provide enough information on their own to be considered viable as a diagnostic tool. In addition, this information does not prove precisely what physiologic processes are occurring within our system. To more precisely determine the biomolecules engaged in this intracellular reduction pathway, and which portions of the cell they originate from, cellular fractions of cells treated with the ionic gold nanoclusters were separated by density (through differential centrifugation) and the presence of plasmonic gold was subsequently investigated through acquisition of Raman spectra (SERS) (Fig. 3c ). To generate these cellular fractions, mechanical lysis by tip sonication (30 s at 1 A) of MCF-7 cells treated with the ionic Au-PEG or untreated cells was performed 52 . Then the fractions were separated by density via differential centrifugation into fractions typically associated with organelles of different sizes, i.e. 
(i) nuclei and large organelles, pellet from 15,000 × g for 5 min; (ii) mitochondria, lysosomes, peroxisomes, and medium organelles, pellet from 100,000 × g for 60 min; (iii) cellular membrane fragments, pellet from 300,000 × g for 120 min; and (iv) highly soluble peptides/cytosolic molecules, supernatant from 300,000 × g for 120 min 53 . For these fractions, corresponding Raman spectra were collected from cells either treated or not treated with Au-PEG (Fig. 3c and Supplementary Fig. 2 ). Significant enhancement of the Raman spectra was observed for the fraction containing nuclei and the fraction containing cell membrane fragments from the cells treated with Au-PEG. Both the nuclear fraction and the cellular membrane fragment fraction contained noticeable amide peaks at 1590–1650 cm −1 51 . The prominent amide peak led us to believe that proteins were playing a significant role in this intracellular reduction process. The enhanced Raman spectra found in the nuclear fraction, which imply localization of Au-PEG to the nucleus, do not necessarily imply internalization of the particles into the nucleus. Since the Raman microscope imaging is 2D, we cannot make this assertion from the cell imaging either. However, since there is a distribution of nanoparticle sizes, from sub-5 nm to larger, it could be inferred that some smaller particles may pass through the nuclear pores (~6 nm) to become internalized. The notion that proteins are involved in cellular biomineralization has been explored by Singh et al. 42 ; however, their system results from diffuse applications of Au 3+ , so we should not infer that these systems/processes are identical. To determine whether proteins are involved in our intracellular gold nanoparticle formation, a Coomassie Blue-stained sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS–PAGE) was used to separate proteins found in the fractions from cells treated with Au-PEG, Na-PEG, or untreated (Fig. 3d and Supplementary Fig.
3a–d ). We hypothesized that protein band intensity may be affected by the Au-PEG and Na-PEG treatments, and that, for the Au-PEG treatment, nanoparticle-bound proteins could be slowed in their electrophoretic migration through the gel due to the additional mass of gold. From this gel, we found no discernable difference between the specific bands present nor their relative expression between treatments (Fig. 3d and Supplementary Fig. 3a–d ). The information provided by this gel posed two possible major implications: (1) none of our treatments were affecting protein expression/integrity, and all proteins which were running through the gel in detectable quantities did not adhere to the gold nanoparticles, and/or (2) proteins were not involved in this process. Another possible explanation, which we explored, was that one of the reagents used for preparing the protein for SDS–PAGE separation may have interfered with the protein–gold interactions, effectively freeing the involved proteins from the nanoparticles. The primary candidate considered for this interference was β-mercaptoethanol (βME), as it is known to function as a highly efficient capping/quenching agent for gold nanoparticle syntheses, capable of breaking Au–S bonds between nanoparticles and protein 54 . To determine whether βME was responsible for this freeing of Au-bound proteins, we ran an additional gel comparing protein migration from the fractions of cells either treated with Au-PEG, Na-PEG, or no treatment in duplicate: half would receive βME and the other half would not (Fig. 3d and Supplementary Fig. 3a–d ). From these protein gels, we found a distinct band intensity differential in the 15,000 × g (nuclei and large organelle) protein fraction between the well with Au-PEG βME (−) and all other wells of the same gel (Fig. 3d and Supplementary Fig. 3a–d ).
The identical protein bands observed in the control, Na-PEG, and Au-PEG βME-containing lanes indicated that neither the Na-PEG nor the Au-PEG treatment affected the expression/integrity of proteins found within the nuclear fraction. Comparing this observation with the lanes of the gel lacking βME, we found that changes in protein band intensity, exclusively decreases, were observed only in the Au-PEG lane. This indicated that several proteins within that fraction are attached to the formed plasmonic gold nanoparticles and that these proteins have maintained integrity. Having observed protein impedance within the nuclear fraction of the ionic Au-PEG-treated cells, we aimed to identify these proteins through LC–MS protein fingerprinting. In order to identify proteins involved in intracellular gold nanoparticle formation through mass spectroscopy, we separated involved proteins from those not involved through gel electrophoresis. From our previous Coomassie-stained gel experiments, we found that involved proteins would remain bound to the gold nanoparticles, preventing their electrophoretic migration through the gel. To separate these gold-bound proteins from the non-involved proteins, we ran the nuclear fraction of ionic Au-PEG-treated cells without βME for an excess of time (>3 h), at which point the largest ladder protein (250 kDa) had run off the gel, and we then separated the top portion of the gel containing the wells (1.5 cm) from the bulk (Supplementary Fig. 4 ). The proteins/gold particles located within either of these gel portions were then extracted using a diffusion-mediated passive elution method 55 , 56 , 57 . These extracted proteins were then briefly trypsinized to fragment the proteins present in the extractions and centrifuged at 6000 × g to pellet any gold nanoparticles out of the solution.
The supernatant containing soluble protein fragments was then analyzed via LC–MS with MASCOT analysis (Matrix Science, Boston, MA). We found 2239 different protein fragments within the top portion of the gel, which contained the proteins most impeded in the gel. The protein fragments identified within the top portion of the gel, including only those with a protein score >21 ( α < 0.05) and uniquely found in the top of the gel (excluding large slow-moving proteins also found in the bulk portion), accounted for 16 different proteins (Supplementary Tables 1 , 2 ). These proteins, as annotated on UniProt.org through COMPARTMENTS 58 , are located in subcellular regions ranging from extracellular to the nucleus (Supplementary Table 1 ). These proteins of interest are involved in 287 different molecular processes as identified through phylogenetic-based propagation of functional gene ontology (GO) annotations (Supplementary Table 2 ) 59 . Of these molecular processes, the most conserved GO term processes are related to binding (Table 1 ). Eleven of the 16 identified proteins are involved in binding processes: eight in protein binding, five in metal/calcium/cationic binding, three in ATP/nucleic acid binding, two in antigen binding, two which directly bind each other, and two which bind actin filaments (Table 1 ). These 11 proteins, involved in molecular binding and in the intracellular reduction of gold nanoparticles, are characteristically located in regions ranging from extracellular to the nucleus 58 . Since our extraction and isolation of these gold-bound proteins was from the nuclear fraction, and many of these proteins are not typically located in the nucleus, this suggests that progressive intracellular reduction may occur and continue sequentially, accumulating a protein corona from multiple regions of the cell (Fig. 4 ).
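The selection criteria described above (MASCOT protein score >21 for α < 0.05, and presence only in the top gel portion) amount to a threshold filter plus a set difference. A minimal sketch in Python, with hypothetical protein names and scores standing in for the real MASCOT output:

```python
# Sketch of the hit-filtering logic: keep proteins whose MASCOT score exceeds
# the significance threshold (>21, alpha < 0.05, per the text) and which appear
# only in the top gel portion. All names and scores are hypothetical.

SCORE_THRESHOLD = 21  # MASCOT score corresponding to alpha < 0.05

top_hits = {"TTN": 95, "DDX31": 48, "FRG1": 30, "ACTB": 15, "ALB": 60}
bulk_hits = {"ALB": 70, "GAPDH": 40}

def proteins_of_interest(top, bulk, threshold=SCORE_THRESHOLD):
    """Proteins significant in the top portion and absent from the bulk."""
    return sorted(p for p, score in top.items()
                  if score > threshold and p not in bulk)

print(proteins_of_interest(top_hits, bulk_hits))  # ['DDX31', 'FRG1', 'TTN']
```

Here ACTB is dropped by the score threshold and ALB by the set difference against the bulk portion, mirroring the exclusion of large slow-moving proteins found in both regions.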
This proposed migratory path with sequential reduction towards the nucleus initiates as the ionic gold delivery vehicle interacts with secreted proteins (Fig. 4a ). To simulate this primary reduction process, which occurs extracellularly, we admixed ionic Au-PEG clusters with spent cell media (media incubated with cells for ~24 h). Through TEM imaging, we were able to find electron density focusing within the PEG clusters, potentially indicating nanoparticle formation (Fig. 4a ). This electron density focusing is not observed when ionic Au-PEG clusters are admixed with fresh cell media (Supplementary Fig. 1b ), indicating that it results from cellular biomolecule secretions. The proteins involved in this primary reduction include those involved in antigen binding/phagocytosis (IGLL5, IGHA1), metabolic/catalytic activity (ZG16B, AMY2B, GBE1), cation binding (AMY2B and GBE1), and steroid binding (SCGB1D2) 59 . Two of these proteins involved in primary reduction are also located in multiple cellular regions (GBE1 and IGHA1), posing as potential migration enablers to the other cellular regions 58 . In addition to their localization in the extracellular region, GBE1 and IGHA1 are also located within the cytoplasm and plasma membrane, respectively 58 . This potential for mediating migration is especially notable for IGHA1, as this protein is both located in the plasma membrane and involved in receptor-mediated phagocytosis. The proteins GBE1 and IGHA1 could act as enablers for migration of the ionic Au-PEG clusters from the primary to the secondary reduction phase, wherein the clusters interact with the plasma membrane (Fig. 4b ). Through cellular TEM images of our ionic gold PEG clusters, we were able to see additional electron density focusing, compared with that found via interactions with cell secretions, as the clusters interact with the protein-rich membrane (Fig. 4b ).
The additional electron density focusing shows nanoparticles within the clusters that are more polydisperse than those from the primary reduction, with the addition of a dark electron-dense border contiguous with the plasma membrane. Additionally, from the enhanced Raman signals found in the membrane of whole cells (Fig. 3a, b ) and in membrane fractions (Fig. 3c ), we concluded that plasmonic nanoparticles exist in the plasma membrane. This secondary reduction, catalyzed by the protein-rich membrane, may be initiated by IGHA1, as it is found extracellularly, within the plasma membrane, and is involved in receptor-mediated endocytosis. As this endocytosis is initiated, the ionic Au-PEG cluster is proximal to the plasma membrane; this proximity subjects the clusters to higher concentrations of cellular proteins/biomolecules, either secreted or located within the plasma membrane, providing an environment more conducive to nanoparticle formation. The proteins involved in this secondary reduction within the plasma membrane are involved in cation binding (IGHA1, CDH23), antigen binding (IGHA1), kinase regulation/binding (SRCN1, ITGB1BP1), protein transport (ITGB1BP1), receptor clustering (ITGB1BP1), and endocytosis (IGHA1) 59 . In addition to IGHA1, SRCN1 and ITGB1BP1 are also found in multiple cellular regions as potential migratory enablers. Beyond being located within the plasma membrane, SRCN1 and ITGB1BP1 are both located within the cytoskeleton, and ITGB1BP1 is located within the nucleus 58 . Once internalization of the ionic Au-PEG clusters is complete, we observed through TEM that the electron density at the center of these clusters diminishes (Fig. 4c ). In addition to the hollowing of the Au-PEG clusters, the electron-dense border is less defined compared to the secondary reduction phase (during interaction with the plasma membrane).
A possible explanation for this reduction in electron density is that, as the Au-PEG clusters interact with the plasma membrane, some portion of the gold may remain within the plasma membrane after the Au-PEG clusters are internalized. It is worth noting that this is a diffusion-driven process: once Au-PEG clusters reach the cell membrane, they are spontaneously reduced close to membrane landmarks and form larger clusters via aggregation, which limits their further diffusion; hence, some of the reduced gold remains distributed around the cell periphery. Evidence supporting this explanation is found in the enhanced Raman spectra: both whole cells (Fig. 3a, b ) and the extracted cell fractions (Fig. 3c ) of cells treated with ionic Au-PEG clusters showed enhanced Raman signal in the nucleus and membrane. This Raman enhancement indicated that nanoparticles resided within the plasma membrane and the nucleus after the 4-h treatment period. The concept that gold nanoparticles remain within the plasma membrane as the ionic Au-PEG clusters continue to migrate toward the nucleus is supported by the reduction of overall electron density observed as these clusters are fully internalized. After internalization through the plasma membrane, the Au-PEG clusters undergo progressive reduction, collecting proteins from the cytoskeleton (SRCN1, ITGB1BP1, CALM1), cytosol (PRDX4, GBE1), endoplasmic reticulum (PRDX4, JKAMP), and lysosome (GM2A) regions. The proteins in these non-nuclear regions are involved in metabolic processes (GM2A, CALM1), response to protein unfolding and oxidative stress (JKAMP, PRDX4), cation binding (CALM1, ITGB1BP1), intracellular signal transduction (ITGB1BP1), and titin binding (CALM1) 59 . The protein ITGB1BP1 is involved in intracellular signal transduction and cation binding, and is located within the plasma membrane, cytoskeleton, and nucleus 58 .
The protein CALM1 is involved in cation binding and in binding titin, a nuclear protein 58 . Both ITGB1BP1 and CALM1 pose as likely potential moderators for migration of ionic Au-PEG clusters to the nucleus for terminal reduction because of their involvement in cation binding, complemented by their subcellular locations and nuclear protein interactions. If ITGB1BP1 and CALM1 retain their normal protein function, wherein they transduce signals to the nucleus through binding (while they are bound to the Au-PEG protein complex), they pose as a potential route for mediating the migration of the ionic Au-PEG clusters to the nucleus for terminal reduction. We ascertain that the ionic Au-PEG clusters migrate to the nucleus for terminal reduction because of the enhancement of the Raman signal (whole-cell mapping and fractions) combined with the identification of gold-bound proteins that are exclusively found in the nucleus (DDX31, FRG1, TTN). These proteins, found exclusively within the nucleus, are involved in cation/calmodulin binding (TTN), actin filament binding (TTN, FRG1), ATP binding (TTN, DDX31), and nucleotide binding (DDX31, FRG1, TTN) 59 . The protein titin 60 , with at least 23 cation-binding sites as well as calmodulin binding, poses as a potential gateway for the ionic Au-PEG clusters’ entry into the nucleus and then into the proximity of other nucleotide-binding proteins (DDX31, FRG1). This complete intracellular progressive reduction process, involving at least 16 different proteins localized throughout the cell without apparent protein degradation (Fig. 3d and Supplementary Fig. 3a–c ), spanning 287 different molecular processes (including nucleotide binding), and culminating in terminal reduction within the nucleus, makes this system an interesting and innovative theranostic agent.
Overall, the sequential bio-reduction process can be summarized as follows: (i) the first phase starts in extracellular regions, where the proteins IGLL5, IGHA1, ZG16B, AMY2B, GBE1, and SCGB1D2 take part, and where IGHA1 has been observed to be one of the prime proteins responsible for the receptor-mediated phagocytosis of the Au-PEG clusters; (ii) the second phase takes place within the plasma membrane, where the proteins IGHA1, CDH23, SRCN1, and ITGB1BP1 take part; (iii) the Au-PEG clusters then undergo progressive reduction, collecting proteins from the cytoskeleton (SRCN1, ITGB1BP1, CALM1), cytosol (PRDX4, GBE1), endoplasmic reticulum (PRDX4, JKAMP), and lysosome (GM2A) regions; and (iv) the migration of ionic Au-PEG clusters to the nucleus for terminal reduction occurs through the involvement of ITGB1BP1 and CALM1. We have also identified DDX31, FRG1, and TTN as the gold-bound proteins that are exclusively found in the nucleus. This application, wherein nanoparticles are generated intracellularly through ionic Au-PEG clusters, would provide no plasmonic signal unless the treatment was successfully applied. From this Au-PEG application, gold nanoparticles generated intracellularly are bound to intact proteins involved in almost 300 different molecular processes (including nucleotide binding), and the resulting nanoparticles localize within the nucleus. It may be presumed that a protein corona would form as soon as the NPs were introduced into the biological milieu. Photothermal therapy can therefore potentially be applied using this process, which would disrupt a host of molecular processes simultaneously as well as directly heat the nucleus, nuclear proteins, and essentially nucleotides. Table 1 Proteins adhered to intracellularly formed gold nanoparticles identified through LC–MS of the nuclear fraction of MCF-7 cells treated with ionic Au-PEG clusters. Full size table Fig. 4: Scheme of progressive reduction process.
Pictorial diagram of the intracellular formation of gold nanoparticles via Au–PEG clusters throughout their migration to the nucleus, with TEM images of the respective reduction phases. Full size image To identify the specific amino acids preferred in ionic gold reduction from these 16 proteins, we calculated the amino acid distribution, in terms of overall percent, for on-site protein fragments and off-site protein fragments and compared it against the amino acid distribution of the native protein BLAST data from UniProt. Further, to identify which amino acids in the identified proteins of interest were preferentially interacting with and reducing gold, we made the assumption that amino acids which participate actively/directly in reduction would be bonded to the gold nanoparticles and in turn would be underrepresented in the LC–MS fragmentation analysis compared with the native AA distribution (Fig. 5 ). Comparing the change in AA representation in on-site proteins between the fragments from LC–MS and the whole native proteins (from UniProt), we were able to determine which AAs were highly represented in the identified protein fragments (unlikely to participate in gold reduction: [Lys], [Arg], [Pro]), and which were represented in fewer instances than in the native protein (likely candidates: [Leu], [Thr]). These bound amino acids would not appear with as high a frequency as the uninvolved AAs in the LC–MS analysis, due to the random nature of the fragmentation combined with the gravimetric separation of protein fragments from dense gold nanoparticles. However, this AA distribution change (Fig. 5 ) does not account for the natural bias of the fragmentation process for LC–MS analysis. To account for this bias, we subtracted the change in AA representation between fragments and whole native proteins identified off-site (bulk of the gel).
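The correction logic described here, the raw fragment-versus-native change in amino-acid representation minus the fragmentation bias measured on off-site proteins (and minus the native on-/off-site composition difference), can be sketched as follows. All sequences below are hypothetical placeholders, not data from the study:

```python
# Sketch of the amino-acid representation analysis: frequency changes between
# LC-MS fragments and native sequences for on-site proteins, corrected for the
# fragmentation bias (off-site proteins) and for the native on-/off-site
# composition difference. Negative corrected values flag residues that are
# underrepresented in fragments, i.e. candidate gold-binding amino acids.
from collections import Counter

def aa_freq(seq):
    """Per-residue frequency (fraction of total) of a pooled sequence."""
    counts = Counter(seq)
    total = sum(counts.values())
    return {aa: n / total for aa, n in counts.items()}

def delta(freq_a, freq_b):
    """Change in representation, freq_a - freq_b, over the union of residues."""
    keys = set(freq_a) | set(freq_b)
    return {aa: freq_a.get(aa, 0.0) - freq_b.get(aa, 0.0) for aa in keys}

# Hypothetical pooled sequences (single-letter amino-acid codes).
on_fragments, on_native = "LTKKRRPS", "LLTTKRPSST"
off_fragments, off_native = "RRKLPG", "RKLLPGC"

d_on = delta(aa_freq(on_fragments), aa_freq(on_native))      # raw change
d_off = delta(aa_freq(off_fragments), aa_freq(off_native))   # fragmentation bias
d_native = delta(aa_freq(on_native), aa_freq(off_native))    # composition bias
corrected = {aa: d_on.get(aa, 0.0) - d_off.get(aa, 0.0) - d_native.get(aa, 0.0)
             for aa in set(d_on) | set(d_off) | set(d_native)}
```

With real inputs, `on_fragments`/`off_fragments` would be the concatenated LC–MS fragment sequences and `on_native`/`off_native` the corresponding full UniProt sequences.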
Amino acids which have a positive change in representation between fragments and whole native proteins identified off-site are considered overrepresented by the fragmentation process ([Arg]), and those with a negative change are considered underrepresented ([Cys]). To correct for the bias of the fragmentation process, we subtracted this bias/favorability in AA representation. Comparing the distribution change in AA representation in on-site proteins after subtracting the bias from the fragmentation process, we were able to determine more reasonably [Lys], [Arg], and [Pro] as unlikely candidates for the reduction of gold and [Leu] and [Thr] as likely candidates (Fig. 5 ). However, this AA distribution change with the fragmentation bias subtracted does not account for the variability of AAs between on-site and off-site proteins (Fig. 5 ). From comparing the AA distribution between on-site and off-site proteins in their native states, we identified AAs which were more highly represented in the on-site than in the off-site proteins ([Pro], [Val], [Ser], [Thr]) and AAs which had lower representation ([Leu]). After subtracting this bias of the AA distribution between the native states of the on-site and off-site proteins, we were able to identify [Lys] and [Arg] as overrepresented (likely not involved in reduction), and [Ser] and [Thr] as underrepresented (likely candidates for gold reduction). Fig. 5: Changes in amino acid distributions from proteins identified through LC–MS.
Amino acid distributions for on-site proteins between those found in LC–MS fragments and in nature (red); for off-site proteins between those found in LC–MS fragments and in nature (blue), representing the bias of the trypsin fragmentation and the LC–MS–MASCOT system; for native proteins between those found on- and off-site (yellow), representing the preference of the amino acid distribution between on- and off-site proteins; and various correction subtractions (purple or orange are partial corrections and black includes all corrections). Error bars are standard deviations of the mean ( n ≥ 16 proteins). Full size image In vitro impact To characterize the cytotoxicity of our ionic gold delivery platform, we compared apparent metabolic impacts (through MTT) and physical morphological changes (through flow cytometry without staining) against treatments with ionic gold (without PEG), PEG with ionic sodium, and nifuroxazide with PEG (NIFU-PEG). From the MTT assay, our intracellular gold nanoparticle-generating solution (Au-PEG) appears to reduce cell viability at lower treatment concentrations compared with Au 3+ and Na-PEG, with a treatment response more dependent on concentration than that of NIFU-PEG (Fig. 6a ). However, there is an inflection point in the apparent viability of Au-PEG-treated cells which causes us to question the reliability of this assay for our treatment. This inflection point occurs in MCF-7 cells as the apparent cell viability approaches 0%. We do not believe that this inflection point in apparent cell viability reflects an actual resurgence in cell viability or a change in treatment behavior at higher concentration; rather, it reflects a reaction between unreacted treatments (ionic gold) and the dye, Thiazolyl Blue Tetrazolium bromide.
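The suspected interference can be framed quantitatively: in the absence of cells, a linear dependence of 570 nm absorbance on ionic gold concentration would indicate that the dye reacts with unreacted gold. A minimal least-squares sketch with illustrative values (not the measured data):

```python
# Sketch of the dye-interference check: fit absorbance at 570 nm against ionic
# gold concentration measured without cells. A strong linear relationship means
# MTT readings for ionic-gold treatments need background correction.
# All concentrations and absorbances below are illustrative placeholders.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc_mM = [0.0, 0.06, 0.12, 0.24, 0.48]   # hypothetical titration of ionic gold
a570 = [0.02, 0.08, 0.14, 0.26, 0.50]     # hypothetical absorbance readings

slope, intercept = linear_fit(conc_mM, a570)
# Subtract the fitted dye-gold contribution before interpreting viability.
residuals = [a - (slope * c + intercept) for c, a in zip(conc_mM, a570)]
```

Near-zero residuals, as in this illustrative data, correspond to the purely linear dye response reported in Fig. 6b.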
To support this hypothesis that the unreacted ionic gold can react with the Thiazolyl Blue dye, we incubated our cell treatments (Au 3+ , Au-PEG, and Na-PEG) at titrated concentrations with Thiazolyl Blue (at MTT concentrations) and measured absorbance at the same wavelength used for the MTT assay (570 nm). From incubating our cell treatments with Thiazolyl Blue in the absence of cells, we found a linear relationship between the concentration of ionic gold and the absorbance used to measure cell viability through MTT (Fig. 6b ). From this we can ascertain that the inflection point in the cell viability is due to unreacted ionic gold interacting with the Thiazolyl Blue. In addition, if our treatment can interact with dyes, dye-based assays may not be ideal for characterizing its impact. On the other hand, by running flow cytometry and collecting forward scatter vs. side scatter measurements, we can comment on the morphological changes undergone by the cells in the presence of various treatments without the use of dyes. From the forward scatter parameter, one can monitor the change in cellular size, whereas from the side scatter parameter, one can comment on the internal complexity of cells. As shown in Fig. 6c , we observe three distinct populations/regions, marked as I (dead cells), II and III (cellular components), for all the samples including the control. However, treatment with NIFU-PEG showed a marked increase in the population of region I (dead cells) when compared with the control, which could be directly attributed to the initiation of apoptosis by nifuroxazide resulting in cell death. Further histogram analysis via relative frequency vs. forward scatter (Fig. 6d ) supplements this finding: we obtained two populations, the first corresponding to dead cells with small diameter, and the second, larger in diameter, corresponding to healthy cells.
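The dye-free gating logic, splitting forward scatter (FSC) events into a small-diameter (dead) and a large-diameter (healthy) population, reduces to a simple threshold count. A sketch with arbitrary, hypothetical event values and threshold:

```python
# Sketch of dye-free gating on forward scatter (FSC): events below a size
# threshold are counted as dead cells, larger events as healthy cells,
# mirroring the two histogram populations described in the text.
# The FSC values and threshold are hypothetical, in arbitrary units.

def classify_by_fsc(events, threshold):
    """Split FSC readings into (dead, healthy) counts by a size threshold."""
    dead = sum(1 for v in events if v < threshold)
    return dead, len(events) - dead

fsc_events = [120, 150, 900, 950, 1020, 140, 980]
dead, healthy = classify_by_fsc(fsc_events, threshold=500)
print(dead, healthy)  # 3 4
```

In practice the threshold would be placed in the valley between the two populations of the relative-frequency histogram (as in Fig. 6d), not fixed a priori.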
For the control and Au-PEG treatments, we observed a smaller number of dead cells and a greater number of healthy cells. However, for NIFU-PEG, most of the population had a smaller diameter corresponding to dead cells, with a minimal population at the larger diameter (healthy cells), owing to the toxic nature of nifuroxazide. Fig. 6: Cell viability of MCF-7 when treated with ionic gold. a Cell viability as assessed via Thiazolyl Blue (MTT) metabolic assay for cells treated with Na–PEG (blue), Au 3+ (green), Au–PEG (red) and nifuroxazide mixed with PEG (black). For ( a ) and ( b ), error bars are standard deviations of the mean ( n = 3 biologically independent samples). b Titrated light interactions between ionic-PEG mixtures (same color scheme) as admixed with Thiazolyl Blue at concentrations used for the MTT assay. Comparison of forward scatter vs side scatter plots for different samples at 0.24 mM treatments ( c ). Representative histogram analysis of different samples ( d ) showing two populations, one corresponding to dead cells at low FSC, and the second at higher FSC corresponding to healthy cells. Full size image A microarray analysis was used to identify changes in gene expression for pathways related to apoptosis (Supplementary Fig. 5 ), with directed focus on expression changes at the end of these pathways for pro-survival or pro-apoptosis genes. From this we found upregulation of PMAIP1, MCL1, and DDIT3 (pro-apoptosis), together inhibiting the pro-survival gene BCL2 (which reacts to calcium and iron), for Au-PEG treatments when compared with untreated MCF-7 cells. Additionally, the GADD45 (pro-survival) gene, associated with DNA damage and growth arrest, was highly upregulated for Au-PEG treatments.
Investigation of internalization mechanisms The cell internalization efficiency of our nanoparticles must be determined so that we can practically deliver high enough concentrations of gold for imaging purposes (X-ray, photoacoustic, or fluorescence) and/or therapeutic photothermal ablation. To calculate the internalization efficiency, we treated MCF-7 cells in triplicate for 4 h with our ionic gold PEG clusters, collected the supernatant and cell pellet, submitted both for ICP analysis for gold, and used the total mass of gold in the cell pellet vs. the supernatant to calculate the total internalized gold based on the treatment concentration. We found that nearly 50% of the gold was internalized in the breast cancer cells (Supplementary Fig. 6 ). We then investigated the internalization pathways involved using a series of inhibitors that selectively block major endocytosis pathways (Supplementary Fig. 6 ). The small-molecule inhibitors for our study were selected such that they did not exert any considerable toxicity by themselves over the short period of incubation. Furthermore, they were chosen so that they do not affect the actin cytoskeleton post treatment 23 , 61 . The working hypothesis of this study was that a particular inhibitor would block a specific endocytic pathway and consequently not allow the nanoparticles to internalize into the cells via that pathway. A combination of 2-deoxyglucose (DOG) and sodium azide (NaN 3 ) was employed to inhibit glycolysis and cellular respiration, respectively, probing energy-dependent uptake 62 . Chlorpromazine (CPM) is known to reduce the generation of clathrin-coated pits via a reversible translocation of clathrin and its adapter proteins from the plasma membrane to intracellular vesicles; CPM was employed to probe clathrin-dependent cellular internalization 63 . Dynamin (GTPase) dependency in the endocytosis of cells was probed by using dynasore 64 .
Additionally, nystatin, a sterol-binding agent, was used to study the inhibition of clathrin-independent endocytosis. Nystatin is known to cause disassembly of caveolae and cholesterol in the membrane 65 , 66 . We repeated our ICP analysis of internalized gold with these four inhibitors of the major pathways of cellular internalization. If the addition of any of these internalization inhibitors were to reduce the percent of treated gold found in the cellular pellet, we would interpret this as a pathway through which our endocytosis was routed. From this study, however, we found that none of these inhibitory treatments were able to reduce the internalization of the gold (Supplementary Fig. 6 ). We found the lack of impact on internalization by these inhibitors perplexing, as these pathways are involved in the majority of cellular internalization and approximately half of the gold was being internalized. A microarray analysis was then used to identify changes in gene expression for pathways related to endocytosis (Supplementary Fig. 7 ). Through this pathway map, we observed no change in the gene expression for clathrin-dependent endocytosis (ARF6, PIPK1A, PLD1), clathrin-independent endocytosis (ARF6, SRC, PIP5K1A, HRAS, PLD1), or lipid raft formation (PIP2, MHC1, PIP3). This may indicate internalization through non-classical means or through mechanical forces generated as gold is reduced along the cell membrane, forming bonds with cellular proteins. However, there are changes in gene expression for processes within this mapping. Within this map, the genes most heavily downregulated are associated with cation binding (TGFBR2, ERBB4, CBLB, GIT2), and those upregulated are associated with cell stress and protein refolding (HSPA1A).
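The internalization efficiency underlying this analysis is simply the ICP-measured gold mass in the cell pellet as a fraction of the total recovered gold, compared across inhibitor conditions. A sketch with hypothetical masses:

```python
# Sketch of the ICP-based internalization calculation: percent internalized is
# the gold mass in the cell pellet over the total recovered (pellet plus
# supernatant). A marked drop under an inhibitor, relative to the uninhibited
# control, would implicate that endocytic pathway in uptake.
# All masses below are hypothetical placeholders.

def percent_internalized(pellet_ug, supernatant_ug):
    """Percent of recovered gold found in the cell pellet."""
    return 100.0 * pellet_ug / (pellet_ug + supernatant_ug)

print(percent_internalized(5.0, 5.0))  # 50.0

# Hypothetical inhibitor comparison: (pellet, supernatant) gold masses in ug.
conditions = {"control": (5.0, 5.0), "CPM": (4.8, 5.2), "dynasore": (5.0, 5.0)}
for name, (p, s) in conditions.items():
    print(name, round(percent_internalized(p, s), 1))
```

Consistent with the study's finding, none of the hypothetical inhibitor values here drops meaningfully below the control's ~50%.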
However, the participation of the immunoglobulin heavy constant alpha 1 (IGHA1) protein during the bio-reduction of Au-PEG clusters to form AuNPs (Table 1 ) supports receptor-mediated phagocytosis as one of the plausible pathways of cellular internalization for these clusters. In vivo theranostics From our application of ionic gold clustered with PEG, we have developed a unique system wherein gold nanoparticles are reduced by cellular biomolecules that are able to retain their functionality, including the capacity to guide the remaining cluster to the nucleus. The intracellular formation and nuclear migration of these gold nanoparticles present high potential for theranostic photothermal ablative application. To explore the theranostic potential of the intracellular generation of functional gold nanoparticles, we applied our ionic Au-PEG clusters to MCF-7 xenografts in J:NU mice, compared with control injections of ionic Na-PEG (Supplementary Fig. 8 ). Intratumoral injections of ionic Au-PEG clusters presented fluorescence contrast via IVIS imaging with an excitation wavelength of 430 nm and an emission wavelength of 840 nm. Although some fluorescence signal at 430 nm excitation and 840 nm emission was observed, the lack of a strong fluorescence signal in vitro was presumably due to differences in microenvironment and cell density, which could result in variations in nanoparticle sizes as well as concentration. This observed fluorescence emission is not uncommon for ultrasmall gold nanoclusters (<5 nm), wherein the emission wavelength can be tuned from 385 to 866 nm 67 . From our system, wherein gold nanoparticles were generated intratumorally, fluorescence contrast was observable above background as early as one day after injection of ionic Au-PEG clusters (Fig. 7a–c ). We observed increasing contrast between injection sites and off-site locations for up to 2 days, which would then dissipate after one week (Fig. 7a ).
The increase in fluorescence contrast between the injection site and off-site locations over this 2-day period suggests that nanoparticles may continue forming at the injection site for up to 2 days beyond the injection time point. This fluorescence contrast between the injection site and off-site (no treatment) locations was not observed for ionic Na-PEG injections at any time point. To investigate therapeutic potential, tumors injected with either Au-PEG or Na-PEG received laser treatments using a 500 mW 633 nm continuous-wave laser for 3 min (+laser) or 0 min (−laser) periods, with an iPad-mounted IR camera to monitor and quantify the temperature changes. When laser ablation was applied, tumor temperatures rose to 51.2 ± 1.7 °C for successful intratumoral Au-PEG treatments and 40.8 ± 1.0 °C for Na-PEG treatments (Fig. 7b ). Successful hyperthermia is commonly defined as heating tissue at >45 °C for several minutes (5–10 min) 68 , 69 . Lesions/scabbing and tumor remediation occurred for tumors with successful applications of Au-PEG with two or fewer laser applications in all but one mouse ( n = 5). Laser applications made to successful Au-PEG treatments reduced the apparent fluorescence contrast when compared against Au-PEG treatments without laser treatment (Fig. 7a ). This apparent reduction in fluorescence contrast from laser treatment is likely due to the resultant hyperthermic process, either through loosening cell membranes and destroying tissue (allowing nanoparticle escape) and/or through the addition of thermal energy modifying the reaction kinetics (forming non-fluorescent nanoparticles). Hyperthermia causes cell damage by loosening cell membranes and denaturing proteins 70 , 71 , and hyperthermic laser applications have been demonstrated previously to spontaneously reduce nanoparticles through laser heating 72 , 73 , 74 .
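The hyperthermia criterion cited above (tissue held above 45 °C for several minutes) can be expressed as a check on a temperature trace. A sketch with hypothetical traces and sampling interval:

```python
# Sketch of the hyperthermia criterion: treatment heating counts as successful
# when the tissue is held above 45 C for a sustained period (5-10 min per the
# cited definition). The temperature traces and 30 s sampling interval are
# hypothetical; only the plateau values echo the reported means.

def hyperthermia_success(temps_c, interval_s, threshold_c=45.0,
                         min_duration_s=300):
    """True if cumulative time above threshold_c reaches min_duration_s."""
    time_above = sum(interval_s for t in temps_c if t > threshold_c)
    return time_above >= min_duration_s

au_peg_trace = [37.0] + [51.2] * 20  # 20 samples x 30 s = 10 min above 45 C
na_peg_trace = [37.0] + [40.8] * 20  # never crosses the threshold

print(hyperthermia_success(au_peg_trace, 30))  # True
print(hyperthermia_success(na_peg_trace, 30))  # False
```

A cumulative-time criterion is the simplest choice; a stricter variant could require the time above threshold to be contiguous.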
It is likely that a combination of these effects is taking place, wherein photothermal applications simultaneously loosen tissue for the escape of fluorescent gold nanoparticles and unreacted ionic gold and initiate laser-induced gold nanoparticle formation (Supplementary Fig. 9 ). Au-PEG applications that failed to reach the tumor interior, due to the small size of the tumors (resulting in an intradermal application), produced diffuse fluorescence with contrast between tumor and off-site no greater than that of Na-PEG (Supplementary Fig. 10 ). When ablative laser applications were applied to intradermal injections of Au-PEG (as opposed to intratumoral injections), tumor temperatures rose to only 40.7 ± 2.3 °C (not significantly different from Na-PEG). The low fluorescence contrast and lack of photothermal effect from intradermal injections of Au-PEG (compared with intratumoral injections) provided near-immediate feedback on the success and utility of intratumorally generated gold nanoparticles. After a failed intratumoral injection, we could acquire fluorescence images to confirm whether the application had succeeded, and thus predict whether photothermal laser application would provide a therapeutic effect or whether we should instead wait for the background fluorescence to subside (about one week) before a second injection attempt. Accordingly, when laser therapy was applied to unsuccessful injections, we observed no hyperthermic impact (on-site or off-site). To further investigate the ionic Au-PEG cluster for theranostic applications focusing on the intratumoral generation of gold nanoparticles, organs (tumors, sentinel lymph nodes, liver, kidneys, and spleen) were harvested and imaged with either Raman mapping (Fig. 7c , Supplementary Fig. 11 ) or stained by H&E for bright field imaging (Supplementary Fig. 12 ).
From Raman mapping, we found increased Raman signal indicating plasmonic enhancement in tumors treated with Au-PEG which had received photothermal ablation compared with the other treatments (Fig. 7c ). This plasmonic enhancement, found only in the section of the tumor treated with Au-PEG clusters and laser therapy, indicates that photothermal ablation may induce or modify nanoparticle formation in a manner separate from the cellular reduction. The application of ionic Au-PEG clusters to generate fluorescent and photothermally active nanoparticles intratumorally resulted in no observable pathological effects through our short-term experiment regardless of the success of the administration. Additionally, we demonstrated that hyperthermic laser applications could be applied to modify the nanoparticle formation intratumorally. Fig. 7: In vivo efficacy of the gold nanoparticles generated through vectorized biomineralization of ionic gold. a Fluorescence contrast afforded via in vivo imaging system (IVIS)-based imaging (430 Ex 840 Em ) between on-site and off-site across one week period for treatments of Au–PEG with no photothermal treatment (yellow), Au–PEG with photothermal treatment (red), or Na–PEG (blue) with bright field and fluorescence images representative of Au–PEG or Na–PEG-treated mice (three days post injection) with arrows indicating intratumoral injection sites for Na–PEG (blue) and Au–PEG (yellow). Error bars are standard deviation of the mean ( n ≥ 3 biologically independent animals). b Photothermal effect of intratumorally generated gold nanoparticles via 3 min applications of 500 mW 633 nm laser as represented by maximum surface temperature from the tumors either treated with Na–PEG (purple) or Au–PEG (red) with thermal and bright field images representative of Na–PEG (purple) or Au–PEG (red)-treated mice after two laser applications (thresholding contrast for thermal images are automatically read via the iPad mounted FLIR software). 
Error bars are standard deviation of the mean ( n ≥ 5 laser treatments). c Bright field, Raman spectral mapped and merged images of excised tumors from mice treated with Au–PEG (± laser ablation) or Na–PEG (± laser ablation), with corresponding Raman point spectra for the positions designated with arrows. Scale bars are 20 µm. Discussion In summary, we have demonstrated a simple nanoscale delivery system that combines ionic gold and PEG and enables the progressive reduction and formation of plasmonic gold nanoparticles. These nanoparticles are both fluorescent and provide a route for ablative therapies through photothermal heating. We have characterized this process through the identification of associated proteins together with their molecular processes and subcellular locations. We have determined that our process does not destroy the integrity of the associated proteins and requires considerably lower ionic gold concentrations and shorter times than previously reported methods. For this nanotheranostic platform, we recognize that a cellular-driven synthesis may be perceived as a major drawback, as it introduces an element outside of experimental control. However, this lack of control may prove beneficial, in that theranostic nanoparticles are generated differentially, depending on the cellular machinery and microenvironment. Building on this, we envisage comparing this effect in normal control cells versus cancer cells and developing a more optimized generation of biomineralized plasmonic nanoparticles that can be used efficiently for hyperthermia applications in vivo. We envision future applications similar to the work presented here, wherein treatment focuses on optimizing the nanostructures to propel the next major advancement in nanomedicine.
These envisioned fluid-form treatments will utilize and rely on cellular machinery dependent on the local pathology of the tissue, providing highly specific opportunities for sensing, therapy, or control over local physiology. Methods Materials Chloroauric acid (SKU: 254169-5G), bi-hydroxyl-terminated polyethylene glycol (Mn = 10,000 g/mol) (SKU: 8218811000), sodium chloride (SKU: 746398-500G), βME (SKU: M6250-100ML), dimethyl sulfoxide (SKU: 276855-100ML), fetal bovine serum (SKU: TMS-013-B), thiazolyl blue tetrazolium bromide (SKU: M5655-500MG), phosphate buffered saline (SKU: P3813-10PAK), Dulbecco’s modified Eagle medium (SKU: D7777-10X1L), trypsin-EDTA (SKU: T4049-500ML), aluminum potassium sulfate dodecahydrate (SKU: A6435-100G), and sodium iodate (SKU: 71702-25G) were purchased from Sigma-Aldrich unless otherwise stated. All components for SDS–PAGE and the protein extraction kit were purchased from BioRad Laboratories. Histology grade ethanol was purchased from Thermo Scientific. Histology grade xylene was purchased from VWR. Eosin Y, Hematoxylin hydrate, and Phloxine B were purchased from Acros Organics. Au-PEG cluster synthesis In a 20 ml scintillation vial, 0.015 mmoles of HAuCl 4 and 500 mg of PEG (Mn = 10,000 g/mol) were mixed with 2 ml of carbon-filtered deionized water (0.2 µm cellulosic membrane, pH 4.0). This mixture was incubated in a 37 °C water bath for 30 min before being diluted for further use. Na-PEG cluster synthesis In a 20 ml scintillation vial, 0.015 mmoles of NaCl and 500 mg of PEG (Mn = 10,000 g/mol) were mixed with 2 ml of carbon-filtered deionized water (0.2 µm cellulosic membrane, pH 4.0). This mixture was incubated in a 37 °C water bath for 30 min before being diluted 100-fold with carbon-filtered deionized water (0.2 µm cellulosic membrane, pH = 7).
TEM measurements For TEM, Au-PEG clusters were diluted 100-fold with carbon-filtered deionized water (0.2 μm cellulosic membrane, pH = 7) and 2.5 μl of the diluted Au-PEG clusters were placed on a 300-mesh carbon film supported by a copper grid and allowed to stabilize for 2 min. A filter paper was then used to remove liquid for thin-film formation, and the grid was allowed to air dry while covered. Images were obtained using a Jeol 2010 cryo-electron microscope operating at 200 kV, using different degrees of defocus to obtain adequate bright contrast. Images were recorded on a Gatan UltraScan 2k × 2k CCD. These CCD images were processed and analyzed with ImageJ ( ) version 1.48. Scanning electron microscopy and energy-dispersive X-ray spectroscopy For SEM-EDS, Au-PEG clusters were diluted 100-fold with carbon-filtered deionized water (0.2 μm cellulosic membrane, pH = 7) and several 3 μl drops of the diluted Au-PEG clusters were placed on a strip of copper tape on an SEM sample grid and allowed to stabilize for 5 min. A filter paper was used to remove liquid for thin-film formation, and the sample was allowed to air dry while covered. Images were obtained using a Hitachi S4700 scanning electron microscope with an Oxford Instruments ISIS EDS X-ray Microanalysis System. SEM images were captured using a 10 kV accelerating voltage, a 10 μA emission current, and a 12 mm working distance, adjusting sample height for coarse focus and using low degrees of defocus to obtain secondary electron images and EDS. Images were recorded using a Centaurus BSE detector. UV–Vis absorption studies For collecting the UV–Vis spectra, 1 ml of the as-prepared Au-PEG clusters was used. For calibration, the blank consisted of carbon-filtered deionized water (0.2 µm cellulosic membrane, pH = 7). The absorption spectra were acquired in scanning mode over the range of 200–1000 nm using a GeneSys 10 S UV–Vis spectrophotometer (Thermo Scientific, Rockford, IL). Measurements were taken at 0.1 nm intervals.
Mammalian cell culture MCF-7 cells (ER(+) human breast cancer cells) sourced from ATCC (catalog no. HTB-22) were cultured in Dulbecco’s modified Eagle’s medium (DMEM; Sigma) supplemented with 10% fetal bovine serum (FBS) and 1x Penstrep in T25 culture flasks (Cellstar, Germany) and were incubated at 37 °C in a 99% humidified atmosphere containing 5% CO 2 . Cells were regularly passaged by trypsinization with 0.1% trypsin (EDTA 0.02%, dextrose 0.05%, and trypsin 0.1%) in DPBS (pH 7.4). Non-synchronized cells were used for all experiments. Cell growth inhibition studies with MTT assay MCF-7 cells were seeded at 10,000 cells/well in DMEM with 10% FBS and 1x Penstrep (200 μl/well) in a 96-well plate and incubated at 37 °C in a 99% humidified atmosphere containing 5% CO 2 for 24 h. Cell treatments were as follows: control, Na-PEG, nifuroxazide-PEG, Au-PEG, and Au 3+ , each at concentrations of 0.05–0.6 mM. After treatment, cells were incubated for 44 h, after which 20 μl MTT solution (5 mg/ml Thiazolyl blue tetrazolium bromide) was added to each well and the plate was incubated for an additional 4 h. Media was then aspirated from each well and 200 μl dimethyl sulfoxide (DMSO) was added. Plate absorbance was then read at a 570 nm wavelength using Gen5 Microplate Reader and Imager Software. For investigating the reaction between ionic gold and Thiazolyl blue tetrazolium bromide, identical concentrations were used, without cells. Cell culture treatment with Au-PEG clusters for transmission electron microscopy Samples of Au-PEG were diluted in full cell media to 0.24 mM Au 3+ by mixing 0.5 ml Au-PEG clusters with 15 ml cell media. After removing old media, this mixture was incubated on a monolayer of MCF7 cells (grown in T-25 flasks until ~80% confluency) at 37 °C with 5% CO 2 for 4 h. After this incubation period, the growth medium was discarded and the cell monolayer was washed with DPBS before trypsinizing the treated cells.
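The MTT absorbance readout described above is conventionally converted to a relative viability against the untreated control. The sketch below shows that standard calculation; the blank-correction and normalization scheme is the common convention, not something stated in the text, and the function name is hypothetical.

```python
def percent_viability(a570_treated, a570_control, a570_blank=0.0):
    """Convert MTT absorbances at 570 nm into % viability relative
    to untreated control wells, after subtracting a blank reading."""
    corrected_treated = a570_treated - a570_blank
    corrected_control = a570_control - a570_blank
    if corrected_control <= 0:
        raise ValueError("control absorbance must exceed the blank")
    return 100.0 * corrected_treated / corrected_control

# e.g. treated wells at A570 = 0.45 against a control of 0.90
half_viable = percent_viability(0.45, 0.90)  # 50.0 %
```

Note that this normalization is exactly why the authors checked the direct reaction between ionic gold and the tetrazolium reagent without cells: any cell-free color change would bias the treated-well absorbance.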
Cells were harvested in small 1.5 ml centrifuge tubes and collected in DPBS before fixing. The cell pellet was fixed in Karnovsky’s fixative (phosphate-buffered 2% glutaraldehyde and 2.5% paraformaldehyde). Microwave fixation was used with this primary fixative, and the tissue was then washed in Sorenson’s phosphate buffer with no further additives. Microwave fixation was also used with the secondary 2% osmium tetroxide fixative, followed by the addition of 3% potassium ferricyanide for 30 min. After washing with water, saturated uranyl acetate was added for en bloc staining. The tissue was dehydrated in a series of increasing concentrations of ethanol. Acetonitrile was used as the transition fluid between ethanol and the epoxy. The infiltration series was done with an epoxy mixture using the Epon substitute Lx112. The resulting blocks were polymerized at 90 °C overnight, trimmed, and ultrathin sectioned with diamond knives. Sections were stained with uranyl acetate (2%) and lead citrate (3%) and examined or photographed with a Hitachi H600 transmission electron microscope at 75 kV. Enhanced darkfield hyperspectral imaging Optical and hyperspectral images were captured at ×60 magnification under enhanced darkfield illumination using the CytoViva hyperspectral imaging system (Auburn, AL). This hyperspectral imaging system couples an enhanced dark-field illuminator with optical and hyperspectral CCD cameras. The hyperspectral images, also called datacubes, were collected using the “push broom” method. The spectral data (400–1000 nm) were acquired one pixel row at a time, facilitated by a motorized stage. The hyperspectral analysis software ENVI compiled this spectral and spatial data into a datacube, in which each pixel contained spectral data. Spectral libraries corresponding to the reduced gold nanoparticles were built from the images of exposed MCF-7 cells.
These libraries were filtered against a negative control image (cells only, Supplementary Fig. S1e ) to ensure no false-positive mapping of the nanoparticles. Using the spectral angle mapper (SAM) algorithm, the spectral libraries were compared to their respective images. Control treatments with Au-PEG clusters Both 150 µl of as-prepared Au-PEG clusters and 250 µl of media, either fresh or having been used to incubate cells for ~24 h (‘spent’ media), were mixed in a 2 ml microcentrifuge tube and incubated in a water bath for 24 h. From these samples, 2.5 µl was collected for TEM analysis as described in the “TEM measurements” section. 2 ml of as-prepared Au-PEG clusters were added to an empty poly- l -lysine-treated culture plate. This container was incubated at 37 °C in a 99% humidified atmosphere containing 5% CO 2 for 24 h. From these samples, 2.5 µl was extracted for TEM analysis as described in the “TEM measurements” section. Cell culture treatment with Au-PEG clusters for Raman spectroscopic analysis Samples of Au-PEG were diluted in full cell media to 0.24 mM Au 3+ by mixing 0.5 ml Au-PEG clusters with 15 ml cell media. After removing old media, this mixture was incubated on a monolayer of MCF7 cells (grown on glass slides until ~80% confluency) at 37 °C with 5% CO 2 for 4 h. After this incubation period, the growth medium was discarded and the cell monolayer was washed with DPBS before fixing with 37% formaldehyde solution. Raman spectra were acquired on cells incubated with the Au-PEG treatment and on control cells lacking treatment, in reflection mode (LabRAM HR 3D Horiba confocal microscope). Laser light was focused through a ×100, NA 0.8 objective into the sample plane and the scattering was collected in the reflection geometry using the spectrograph coupled with an Andor Newton back-illuminated EMCCD camera. The excitation wavelength for the measurements was set to 633 nm, and the power was set to 8 mW at the sample with a 0.2 s acquisition time.
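The spectral angle mapper (SAM) step mentioned above classifies each pixel by the angle between its spectrum and a library spectrum, treated as vectors; smaller angles mean closer matches, and the metric is insensitive to overall brightness. Below is a minimal, self-contained sketch of this idea using hypothetical spectra, not the actual ENVI workflow or the study's spectral libraries.

```python
import math

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a
    reference library spectrum; 0 means identical shape."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp against floating-point drift outside [-1, 1] before acos.
    cos_theta = max(-1.0, min(1.0, dot / (norm_p * norm_r)))
    return math.acos(cos_theta)

def sam_match(pixel, library, max_angle=0.1):
    """Return the name of the best-matching library spectrum, or None
    if no library entry falls within the angle threshold (radians)."""
    name, ref = min(library.items(),
                    key=lambda kv: spectral_angle(pixel, kv[1]))
    return name if spectral_angle(pixel, ref) <= max_angle else None
```

Filtering the libraries against a cells-only control, as the authors did, corresponds to discarding library entries that would match untreated pixels within the threshold.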
Raman shift from 1000 to 3050 cm –1 was collected at 8 cm –1 spectral resolution. Intensities of select vibrational modes were used to generate the Raman images. Cell fractionation of treated cells Samples of Au-PEG were diluted in full cell media to 0.24 mM Au 3+ by mixing 0.5 ml Au-PEG clusters with 15 ml cell media. After removing old media, this mixture was administered to a monolayer of MCF7 cells (grown in a T-25 flask to ~80% confluency) at 37 °C with 5% CO 2 for 4 h. The cells were then trypsinized and centrifuged at 20,000 × g to form a cell pellet. The trypsin was removed, and the pellet was suspended in 1.5 ml DPBS before tip sonication (5 s ON, 2 s OFF at 1 A for 2 min) was used to rupture the cells. Ruptured cells were separated into cell fractions: the homogenate was first centrifuged at 600 × g to pellet unbroken cells. The supernatant was collected and centrifuged at 15,000 × g for 5 min. The pellet was separated and the supernatant was collected and centrifuged at 100,000 × g for 60 min. The pellet was again separated and the supernatant was collected and centrifuged at 300,000 × g for 120 min. An Eppendorf (Hamburg, Germany) 5424R microcentrifuge was used for centrifugation speeds lower than 100,000 × g , and a Beckman Coulter (Indianapolis, IN) Optima MAX-XP Ultracentrifuge for speeds above 100,000 × g . Pelleted fractions were suspended in 1 ml DPBS (pH 7.4) before SDS–PAGE or Raman analysis. SDS–PAGE and Raman samples were prepared immediately after fractionation. Raman spectroscopic analysis Raman measurements were taken using a Nanophoton Raman instrument at the Frederick Seitz Materials Research Laboratory Central research facilities, UIUC (532 nm laser). For each spectrum, a grating (600 mm −1 ) scan was taken over the range of 1000–3000 cm −1 at 0.2% laser power for 1 min (with a ×20 objective). An average of 20 spectra was recorded per sample. Spectra were exported as .txt files and plotted using GraphPad Prism.
SDS–PAGE analysis of cell fractions A BioRad Mini-PROTEAN ® tetra vertical electrophoresis cell was loaded with two 5–20% gradient Mini-PROTEAN gels, and the electrophoresis cell was filled to the appropriate fill line with 1x Tris/Glycine SDS buffer. Two portions of nuclear protein fractions from cells treated with Au-PEG, Na-PEG, or untreated were mixed by pipetting with an equal volume of either 4x Laemmli sample buffer (BioRad) containing 2.5% βME or 1x Tris/glycine SDS buffer. From these prepared protein samples, 20 µl was added to wells 2–4 and 6–8, with 5 µl of Precision Plus (10–250 kDa) protein ladder in lanes 5 and 9. After allowing the samples to settle in their respective wells for 5 min, the cover of the electrophoresis cell was fitted, the power was turned on to 200 V, and the gel was allowed to run ~35 min (until the dye front was <1 cm from the edge of the gel). For LC–MS, the nuclear fraction extracted from cells treated with Au-PEG was allowed to run for >3 h (until all of the ladder proteins had run off the gel) without protein from other cell treatments. Coomassie Blue total protein stain The gel was briefly rinsed with DI H 2 O before being immersed in protein fixing solution (10% acetic acid, 10% methanol, in DI H 2 O) for 30 min with gentle rocking. After fixing for 30 min, the gel was stained with 1x Coomassie Blue staining solution (BioRad) for 2 h with gentle rocking. After staining, the gel was submerged in destaining solution (10% acetic acid in DI H 2 O) for 24 h, replacing the destaining solution as it became saturated with residual Coomassie stain. After the gel was adequately destained, a BioRad gel-dock imaging system was used to image the stained gel. The images were contrast enhanced, and band intensities were analyzed with ImageJ. The uncropped images for these Coomassie-stained SDS–PAGE gels are shown in Supplementary Fig. 3 .
Protein extraction via passive elution and LC–MS analysis In order to separate proteins bound to gold nanoparticles from unbound proteins, we took the nuclear fractions from MCF-7 cells treated with Au-PEG and ran them through gel electrophoresis for an extended time (>3 h). The top portion of the gel containing the wells (the top 1.5 cm) was excised from the bulk of the gel with a scalpel and placed into separate 15 ml centrifuge tubes. The gel pieces were ground inside their respective containers with sterile, cleaned metal weighing spatulas before mixing with minimal elution buffer (50 mM Tris–HCl, 150 mM NaCl, and 0.1 mM EDTA; at pH 7.5), ensuring that the gel pieces were completely immersed. These immersed gel fragments containing protein were incubated at room temperature on a rocker for 48 h. After incubation, samples were centrifuged at 5000 × g for 10 min before removing the supernatant for LC–MS analysis via partial trypsinization digestion, performed in the Roy J. Carver Biotechnology Center (CBC). Inhibitor study A T-175 flask of MCF7 cells at 80% confluency was split and plated (at equal cell density) into 15 T-25 culture flasks. Cells were grown for 24 h before being incubated with various endocytic inhibitors. Inhibitor formulations were made with reconstituted medium containing NaN 3 and deoxyglucose, CPM, nystatin, and dynasore at concentrations of 10, 50, 28 μM, 180 nM, and 80 μM, respectively 2 , 3 , 4 . Cells were incubated with inhibitors for 1 h at ambient conditions. Inhibitors were then replaced with Au-PEG diluted in full cell media to 0.24 mM Au 3+ by mixing 0.5 ml Au-PEG clusters with 15 ml cell media. All treatments were performed in triplicate. Cells treated with ionic Au-PEG clusters in the absence of inhibitors were used as the control. After the required incubation time of 4 h, cell pellets and supernatants were collected for each sample and subjected to ICP-MS analysis to determine the intracellular amount of gold.
Flow cytometry analysis To further characterize the cytotoxicity of our ionic gold delivery platform, we performed flow cytometry analysis. Cells (MCF-7, 10,000 cells/well) were grown in a 24-well plate for 24 h before being treated for 4 h with the different samples prepared for this study. At the end of the incubation, cells were trypsinized and collected in DPBS containing 0.2% FBS. Samples were analyzed using a Guava EasyCyte Plus flow cytometer. For each sample, data from 5000 single-cell events were collected for 3 min, in triplicate, and forward scatter vs. side scatter information was recorded. The results were analyzed and plotted using Flowpy. Animal studies To investigate potential theranostic applications pre-clinically, we generated and treated MCF-7 xenograft tumors in female J/NU immunodeficient double knockout mice. The experimental protocol was approved by the Institutional Animal Care and Use Committee (IACUC), University of Illinois at Urbana–Champaign, and satisfied all University and National Institutes of Health (NIH) rules for the humane use of laboratory animals. All animal experiments complied with the relevant ethical regulations for animal testing and research and were carried out in accordance with the approved guidelines. Female J/NU mice (2–3 weeks old) were purchased from Jackson Laboratories, USA. Upon arrival, mice were allowed 2 weeks for acclimation. Animals were group housed with free access to food and water, and individually marked by ear punching. We injected 10 6 cells in Matrigel into the hind flanks of n = 12 mice, with one injection per side. Of these injections, 16 tumors grew to usable size (~5 × 5 mm 2 ). Five mice had two tumors, six had only one, and one mouse did not have any tumors of usable size. These 16 tumors received intratumoral injections of either Na-PEG or Au-PEG.
Mice with tumors of suitable size (~5 × 5 mm 2 ) on both flanks received two different treatments, one per tumor (i.e., Au-PEG or Na-PEG, ± laser ablation). Tumors injected with either Au-PEG or Na-PEG received photothermal ablative therapy using a 500 mW 633 nm continuous wave laser for 5 min (+laser) or 0 min (−laser), with an IR camera (FLIR Technologies) to monitor and quantify the temperature changes. From these groups, we had four tumors with Na-PEG injections and 0 min of laser treatment; three tumors with Na-PEG injections and 5 min of laser treatment; four tumors with Au-PEG injections and 0 min of laser treatment; and five tumors with Au-PEG injections and 5 min of laser treatment. Once a tumor was successfully ablated, no further treatments were given to that mouse. Injections of 40 µl were administered using 32 gauge syringe needles over 2 min under isoflurane anesthesia (2.5% in oxygen) with an oxygen flow rate of 1 l/h. IVIS imaging studies Animals were serially imaged at 1, 24, 48, 72, and 168 h after treatment at 430 nm excitation and 840 nm emission wavelengths. Animals which received photothermal ablative intervention were imaged at these same time points and additionally 30 min after each laser treatment (1.5, 24.5, and 48.5 h). During imaging, animals were anesthetized with isoflurane (2.5% in oxygen) with an oxygen flow rate of 1 l/h. Photothermal ablative therapy with IR imaging Photothermal ablative therapy was applied using a 500 mW 633 nm continuous wave laser (RLTMRL-635-500-5, Roithner Lasertechnik) with a 40 mm 2 beam for 5 min (+laser) or 0 min (−laser) and monitored with an IR camera (FLIR Technologies) to visualize and quantify the temperature changes. Anesthetized mice were placed on the laser stand mount, using a machine-etched crosshair, so that the beam path would intersect directly with the tumor.
Upon completion of the laser therapy, a neutral density filter with 100% OD was placed in front of the beam, blocking the path of the laser. See Supplementary Fig. S9 for the laser stand setup. An iPad-mounted IR camera (FLIR Technologies) was used to obtain snapshot thermal images 2.5 min after each photothermal application with the laser. The temperature maxima from these images were taken as the representative surface temperatures for the photothermal treatments. During photothermal treatments and IR imaging, animals were anesthetized with isoflurane (2.5% in oxygen) with an oxygen flow rate of 1 l/h. Histology sectioning At the end of the experiment, animals were sacrificed via anesthesia overdose. Organ tissues (tumors, sentinel lymph nodes, liver, kidneys, and spleen) were immediately excised and frozen in cassettes containing optimal cutting temperature (OCT) compound. Embedded tissue blocks were clamped in a microtome and sectioned at 6 μm thickness. Tumor and sentinel lymph node slices were either left unstained (for Raman mapping) or stained with hematoxylin and eosin (H&E) for bright field imaging. H&E staining Slides were first fixed for 5 min in 4% PFA. Immediately following fixation, slides were rinsed in cold PBS. Slides were then stained with H&E using a standard protocol. Briefly, slides were immersed for 8 min in Mayer hematoxylin solution, followed by a 2 min wash in warm, slow running water. Then, slides were blued with two dips in lithium carbonate. Slides were again washed in warm, slow running water. Slides were then immersed in 80% ethanol for 2 min, followed by 5 min in eosin. Slides were dehydrated by sequential immersions: 4 × 2 min in 95% ethanol (in two separate containers), 4 × 2 min in 100% ethanol (in two separate containers), and finally 3 × 3 min in xylene (in three separate containers). Coverslips were then applied using xylene-based Permount.
Raman-mapped histology Unstained tissue sections were imaged in the reflection mode via LabRAM HR 3D Horiba confocal microscope with 532 nm wavelength excitation laser with power set at 8 mW. The laser light was focused through a LWD Visible ×20, NA 0.40, objective into the sample plane and scattering was recorded in the reflection geometry using a spectrograph coupled with an Andor Newton back-illuminated EMCCD camera. An acquisition time of 1 s was used. The Raman shift from 310 to 3410 cm –1 was recorded with a spectral resolution of 3 cm –1 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The authors declare that the data supporting all the findings of this study are available within the article and Supporting Information files. Zip files for Microarray data can be downloaded using the following link ( ); CSV files for protein MS data can be downloaded using the following links: Top gel portion ( ); Bottom gel portion ( ). All the other relevant data are available from the author upon reasonable request. | Dipanjan Pan, professor of chemical, biochemical, and environmental engineering at UMBC, and collaborators published a seminal study in Nature Communications that demonstrates for the first time a method of biosynthesizing plasmonic gold nanoparticles within cancer cells, without the need for conventional bench-top lab methods. It has the potential to notably expand biomedical applications. Conventional laboratory-based synthesis of gold nanoparticles require ionic precursors and reducing agents subjected to varying reaction conditions such as temperature, pH, and time. This leads to variation in nanoparticle size, morphology and functionalities that are directly correlated to their internalization in cells, their residence time in vivo, and clearance. 
In order to avoid these uncertainties, this work demonstrates that biosynthesis of gold nanoparticles can be achieved efficiently, directly in human cells, without requiring conventional laboratory methods. The researchers examined how various cancer cells responded to the introduction of chloroauric acid to their cellular microenvironment by forming gold nanoparticles. These nanoparticles generated within the cell can potentially be used for various biomedical applications, including X-ray imaging and therapy by destroying abnormal tissue or cells. In the paper, Pan and his team describe their new method of producing these plasmonic gold nanoparticles within cells in minutes, within a cell's nucleus, using polyethylene glycol as a delivery vector for ionic gold. "We have developed a unique system where gold nanoparticles are reduced by cellular biomolecules and those are able to retain their functionality, including the capacity to guide the remaining cluster to the nucleus," says Pan. They also worked to further demonstrate the biomedical potential of this approach by inducing in-situ biosynthesis of gold nanoparticles within a mouse tumor, followed by photothermal remediation of the tumor. Pan explains that the mouse study exemplified how "the intracellular formation and nuclear migration of these gold nanoparticles presents a highly promising approach for drug delivery application." "Gold is the quintessential noble element that has been used in biomedical applications since its first colloidal synthesis more than three centuries ago," Pan notes. "To appreciate its potential for clinical application, however, the most challenging research ahead of us will be to find new methods of producing these particles with uncompromised reproducibility with functionalities that can promote efficient cellular binding, clearance, and biocompatibility and to assess their long-term effects on human health.
This new study is a small but important step toward that overarching goal." | 10.1038/s41467-020-17595-6 |
Medicine | Foods associated with an increased risk of cardiovascular disease and death in middle-age | Associations between dietary patterns and the incidence of total and fatal cardiovascular disease and all-cause mortality in 116,806 individuals from the UK Biobank: a prospective cohort study, BMC Medicine (2021). DOI: 10.1186/s12916-021-01958-x Journal information: BMC Medicine | http://dx.doi.org/10.1186/s12916-021-01958-x | https://medicalxpress.com/news/2021-04-foods-cardiovascular-disease-death-middle-age.html | Abstract Background Traditionally, studies investigating diet and health associations have focused on single nutrients. However, key nutrients co-exist in many common foods, and studies focusing solely on individual nutrients may obscure their combined effects on cardiovascular disease (CVD) and all-cause mortality. We aimed to identify food-based dietary patterns which operate through excess energy intake and explain high variability in energy density, free sugars, saturated fat, and fiber intakes and to investigate their association with total and fatal CVD and all-cause mortality. Methods Detailed dietary data was collected using a 24-h online dietary assessment on two or more occasions ( n = 116,806). We used reduced rank regression to derive dietary patterns explaining the maximum variance. Multivariable Cox-proportional hazards models were used to investigate prospective associations with all-cause mortality and fatal and non-fatal CVD. Results Over an average of 4.9 years of follow-up, 4245 cases of total CVD, 838 cases of fatal CVD, and 3629 cases of all-cause mortality occurred. Two dietary patterns were retained that jointly explained 63% of variation in energy density, free sugars, saturated fat, and fiber intakes in total. The main dietary pattern was characterized by high intakes of chocolate and confectionery, butter and low-fiber bread, and low intakes of fresh fruit and vegetables. 
There was a positive linear association between the dietary pattern and total CVD [hazard ratio (HR) per z-score 1.07, 95% confidence interval (CI) 1.04–1.09; HR total CVD 1.40, 95% CI 1.31–1.50, and HR all-cause mortality 1.37, 95% CI 1.27–1.47 in highest quintile]. A second dietary pattern was characterized by higher intakes of sugar-sweetened beverages, fruit juice, and table sugar/preserves. There was a non-linear association with total CVD risk and all-cause mortality, with increased risk in the highest quintile [HR total CVD 1.14, 95% CI 1.07–1.22; HR all-cause mortality 1.11, 95% CI 1.03–1.19]. Conclusions We identified dietary patterns which are associated with increased risk of CVD and all-cause mortality. These results help identify specific foods and beverages which are major contributors to unhealthy dietary patterns and provide evidence to underpin food-based dietary advice to reduce health risks. Peer Review reports Background Reducing the burden of cardiovascular disease (CVD) is a top public health priority in the UK and worldwide [ 1 ]. A poor diet is a major contributor to morbidity and premature mortality, especially CVD [ 2 , 3 ], in part by promoting excess weight, but also by raising total cholesterol and low-density lipoprotein (LDL) concentrations and increasing the risk of diabetes and hypertension [ 3 ]. Traditionally, the vast majority of epidemiological studies investigating diet and health associations have focused on single nutrients and this evidence is reflected in current dietary recommendations [ 4 , 5 , 6 ]. These emphasize the importance of achieving and maintaining a healthy weight, reductions in saturated fatty acids (SFAs) and free sugars [ 7 , 8 ], and increases in dietary fiber [ 5 ]. High dietary energy density and free sugars are associated with increased risk of weight gain [ 8 ] which can further increase CVD and mortality risk [ 8 , 9 ], while SFAs increase total blood cholesterol and LDL [ 10 , 11 ].
However, other recent meta-analyses and observational studies have not found evidence for a beneficial effect of reducing SFA intake on CVD and total mortality [ 12 , 13 ], or found protective effects against stroke [ 14 ]. Dietary fiber may lower the risk of CVD, through improved glucose control and lower serum cholesterol concentration [ 15 ]. However, despite years of public health efforts, population dietary change has been slow [ 1 , 16 ]. This may reflect in part the difficulties of translating present dietary recommendations into food-based public health advice [ 17 ], and some existing recommendations are not universally echoed across countries [ 18 ]. The public have frequently been confused by apparently conflicting messages, for example about the importance of reducing saturated fat or free sugars [ 12 ], without recognizing that these nutrients frequently co-exist in foods, that the consequence may be a diet high in both saturated fats and free sugars, and that they may have synergistic effects on health. Dietary guidelines which focus on foods rather than individual nutrient recommendations could help avoid confusion and prevent inadvertent increases in one nutrient of concern at the expense of another. Despite the inclusion of some food-based recommendations in recent dietary guidelines (especially regarding fruits, vegetables, dairy), nutrient-based advice still remains the most common, often co-existing with food-based guidance, as seen in the latest release of the Dietary Guidelines for Americans 2020–2025 [ 6 ]. Increasingly, researchers have sought to characterize complex dietary patterns using either a priori (based on adherence to a specific pattern, e.g., Mediterranean diet, or a score which reflects overall dietary quality) [ 19 ] or a posteriori (based on the observed dietary intake using empirical methods such as factor analysis or principal component analysis (PCA)) [ 20 , 21 ].
Reduced rank regression (RRR) is a data-dimension reduction technique that aims to identify the combination of food groups that explain the maximum amount of variation in a set of response variables (e.g., nutrients) hypothesized to be on the causal pathway between diet and health outcomes. This approach can test a priori hypotheses of the pathophysiology of disease [ 22 ]. To our knowledge, only six longitudinal cohort studies have examined overall CVD risk and/or all-cause mortality using RRR, but all included smaller populations and none focused on the UK (Additional file 1 : Table S1). This population-specificity is important given that dietary patterns can vary substantially even when nutrient intakes are broadly similar, owing to cultural differences in food preference. Using data from the UK Biobank study, we aimed to identify food-based dietary patterns explaining the variability in known dietary risk factors which operate through excess energy intake, such as energy density, free sugars, saturated fat, and low fiber intakes, and to investigate their association with total and fatal cardiovascular disease (CVD) and all-cause mortality. Methods Study design and participants UK Biobank is a prospective study that recruited 502,536 participants aged 37 to 73 at baseline (between 2006 and 2010) with participants’ data linked to hospital and mortality records [ 23 ]. Participants completed a full baseline assessment, including self-reported measurements via touch-screen questionnaires as well as a verbal interview collecting a wide range of information on socio-demographic factors, lifestyle and behavioral factors, and a medical history. Physical measurements (i.e., height, weight), blood and urine samples were also taken. UK Biobank protocols and study details can be found elsewhere ( ).
The UK Biobank study was conducted according to the Declaration of Helsinki and ethical approval was granted by the North West Multi-Centre Research Ethics Committee (reference number 06/MRE08/65). At recruitment, all participants gave informed consent to participate and be followed-up through data-linkage. Study measures Dietary assessment All UK Biobank participants who had provided an email address at recruitment were invited to complete the 24-h online dietary assessment (Oxford WebQ), which is a web-based 24-h dietary assessment tool developed and evaluated for use in large population studies [ 24 ]. The Oxford WebQ was collected at baseline and on up to four separate occasions (cycle 1: February 2011 to April 2011; cycle 2: June 2011 to September 2011; cycle 3: October 2011 to December 2011; cycle 4: April 2012 to June 2012) [ 25 ]. Recorded food and drinks were classified into 50 groups according to their nutrient profile or culinary use, closely following the classification used in the UK National Diet and Nutrition Survey (Additional file 1 : Table S2). From the original sample of UK Biobank participants, we used data from participants that completed a dietary assessment on two or more occasions in order to better reflect usual intakes. The Oxford WebQ automatically generated total nutrient intakes as well as intakes from each food/beverage collected at each assessment. We calculated average nutrient and food group intakes across all dietary assessments after removing participants with implausible energy intakes [ 26 ]. The Oxford WebQ has been validated against biomarkers [ 27 ] and compared to interviewer-administered 24 h recalls [ 28 ] and showed acceptable reproducibility when using at least 2 dietary assessments [ 29 ]. Outcome ascertainment Outcomes were defined as primary or secondary events using hospital admission and death registry data linked to the UK Biobank.
Total CVD was defined as a hospital admission or death using ICD-10 which included coronary heart disease (CHD; I20–I25), congestive heart failure or cardiomyopathy (CHF; I50, I50.1, I50.9, I11.0, I13.0, I13.2, I42, I43.1), and total stroke (I60–I64). Fatal CVD events were measured by I00–I25, I27–I88, and I95–I99. The hospital registry-based follow-up ended on 31st March 2017, in England; 31st October 2016, in Scotland; and 29th February 2016, in Wales. Individuals were censored on these dates, the time of event in question, or the time of death, whichever occurred first. The death registry included all deaths that occurred before 30th April 2020 in England, Wales, and Scotland. Statistical analyses Identification of dietary patterns The RRR analysis was used to identify dietary patterns that could explain the maximum variation in a set of nutrient response variables hypothesized to be on the causal pathway between predictors (food groups) and outcomes (CVD and all-cause mortality events). We selected the following response variables which contribute to excess energy intake: energy density (kJ/g), SFA (% total energy), free sugars (% total energy), and fiber density (g/MJ) as there is evidence for their role in the development of CVD and mortality [ 7 , 8 , 10 , 15 ]. Energy density (kJ/g) is a proxy for high energy intake and was calculated as the amount of energy (kJ) divided by the total food weight (g) excluding beverages because of their disproportionate influence on the total volume (g) of the diet [ 30 ]. Fiber density (g/MJ) was calculated by dividing total dietary fiber intake (g) by total daily energy intake (kJ) then multiplying by 1000. Respondents were assigned a z-score for each dietary pattern which quantified how much their reported dietary intake reflected each dietary pattern relative to other respondents in the study sample.
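The two density measures defined here are simple ratios; a minimal sketch of the calculations (the values below are invented for illustration, not study data):

```python
def energy_density(energy_kj, food_weight_g):
    """Energy density (kJ/g): total energy divided by total food weight.

    Beverages are assumed to be excluded from both inputs upstream,
    as described in the methods.
    """
    return energy_kj / food_weight_g

def fiber_density(fiber_g, energy_kj):
    """Fiber density (g/MJ): fiber (g) per daily energy (kJ), scaled by 1000."""
    return fiber_g / energy_kj * 1000

# Illustrative day: 8000 kJ from 1600 g of food, with 18 g of fiber
print(energy_density(8000, 1600))  # 5.0 kJ/g
print(fiber_density(18, 8000))     # ≈2.25 g/MJ
```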
The RRR model calculates dietary pattern z-scores for each respondent as a linear, weighted combination of all of their standardized food group intakes, using weights unique to each dietary pattern. An increasing intake of food groups with positive factor loadings increases the dietary pattern z-score, while an increasing intake of food groups with negative factor loadings decreases the dietary pattern z-score. The number of extracted patterns depends on the number of response variables, and dietary patterns which individually explained more than 20% of variation in response variables were retained. To test the robustness of the derived dietary patterns, a random sample cross-validation procedure was performed in which 50 random subsets were used as test sets [ 31 ]. The association between the retained dietary patterns and response variables was assessed by correlation coefficients. Distributions of outcomes, demographic, socioeconomic status, behavioral risk factors, medical conditions, and dietary intake were compared to examine the differences across quintiles of dietary pattern scores (chi-squared tests for categorical variables and ANOVA tests for continuous variables). Associations between dietary patterns and outcomes Multivariable Cox proportional hazards models with age as the underlying timescale variable were used to estimate HRs (hazard ratios) for total (fatal and non-fatal combined) CVD risk, fatal CVD risk, and all-cause mortality with 95% CIs (confidence intervals) for each increment in the z-score of the dietary patterns as well as for quintiles, using the floated absolute risk method [ 32 ]. A quadratic term for dietary pattern z-scores was included in the model where we found evidence of non-linearity. The proportional hazards assumption was based on Schoenfeld residuals and was not violated for the variables of interest in the adjusted model ( P > 0.05).
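The z-score construction described above (standardize each food group across respondents, then take a weighted sum using the pattern's factor loadings) can be sketched as follows; the intakes and loadings are invented purely for illustration:

```python
import numpy as np

# Rows = respondents, columns = food groups (hypothetical intakes, g/day)
intakes = np.array([
    [40.0, 10.0, 120.0],   # respondent A
    [10.0, 60.0, 300.0],   # respondent B
    [25.0, 35.0, 210.0],   # respondent C
])

# Hypothetical factor loadings for one dietary pattern: positive loadings
# raise the score, negative loadings (e.g. a fresh-fruit group) lower it
loadings = np.array([0.5, 0.3, -0.4])

# Standardize each food group across respondents, then form the weighted sum
standardized = (intakes - intakes.mean(axis=0)) / intakes.std(axis=0)
pattern_scores = standardized @ loadings
print(pattern_scores)
```

A respondent whose intakes lean toward the positively loaded food groups ends up with a higher pattern z-score than the sample average, which sits at 0.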
Person-time of follow-up was calculated from the age at which the last dietary assessment was completed until the age at which the event happened (CVD or death) or the end of censoring (March 2017), whichever came first. Trend tests were performed by including the median score of each pattern quintile as a continuous variable in the models; the lowest quintile was used as the reference. Restricted cubic splines were computed with five knots to visually explore non-linear associations between dietary patterns and outcomes. All analyses were stratified by sex (men, women) and region (Scotland, Wales, England) and adjusted for ethnicity (white, others), Townsend index of deprivation (a composite measure of deprivation based on unemployment, non-car ownership, non-home ownership, and household overcrowding, categorized as quintiles 1–5, with a high index indicating more deprivation), education group (higher degree, any school degree, vocational qualifications, other), smoking status (never, current, previous), physical activity (low, moderate, high), energy intake (log transformed), and menopause status (N/A, no, yes). For details on the derivation of these covariates, see Additional file 1 : Table S3. Sensitivity, exploratory, and heterogeneity analyses Sensitivity analyses were conducted to exclude participants who had a CVD event within 2 years after completing their last 24-h online dietary assessment to account for reverse causality ( N = 115,532). Additionally, the RRR analysis was repeated among participants providing 3+ ( N = 72,912), 4+ ( N = 33,760), or 5+ ( N = 5403) 24-h online dietary assessments to test whether the derived dietary patterns might be influenced by the number of days of dietary reporting.
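The person-time rule described above (follow-up runs from the last dietary assessment until the event or the censoring date, whichever comes first) reduces to a minimum over the candidate end points; a sketch with hypothetical ages:

```python
def follow_up_years(age_last_assessment, age_event=None, age_censoring=None):
    """Person-time from the last dietary assessment until the event or
    censoring, whichever occurs first (all ages in years)."""
    end_ages = [a for a in (age_event, age_censoring) if a is not None]
    return min(end_ages) - age_last_assessment

# Event at age 61.5 precedes censoring at 63 -> 5.5 years of follow-up
print(follow_up_years(56.0, age_event=61.5, age_censoring=63.0))  # 5.5
# No event: censored at age 63 -> 7.0 years of follow-up
print(follow_up_years(56.0, age_censoring=63.0))                  # 7.0
```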
To examine the potential roles of adiposity measures and blood biomarkers in these relationships, we used multivariable linear regression to calculate geometric mean concentrations of body mass index (BMI) and biomarkers of CVD risk by quintiles of dietary patterns for baseline measurements of systolic blood pressure (SBP), diastolic blood pressure (DBP), HbA1c (glycated hemoglobin A1c), LDL, and HDL (high-density lipoprotein). This cross-sectional analysis included 26,277 participants with BMI and blood biomarkers measured at baseline, in addition to at least 2 dietary assessments (the first one at baseline and the subsequent ones during the follow-up). Additionally, BMI categories, diagnosed hypertension, diabetes, or high cholesterol were assessed as potential mediators using likelihood ratio tests. Each of these variables was added one at a time to the main model described above, and an additional model was also fitted including all potential mediators. Heterogeneity in the associations between dietary patterns and risk of total CVD was assessed for sex, age at recruitment (< 60 years or ≥ 60 years), smoking status, BMI group, and presence of any of the risk factors (hypertension, diabetes, and high cholesterol), using likelihood ratio tests. SAS (version 9.4; SAS Institute) was used to conduct RRR. Descriptive statistics and regressions were conducted using Stata (version 14; StataCorp LP). A two-sided p value of < 0.05 was considered statistically significant. Results A total of 116,806 participants were included in all analyses after exclusions.
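Geometric means, as used for the biomarker comparisons above, are computed on the log scale; a minimal sketch with made-up values (Python 3.8+ also ships `statistics.geometric_mean`, shown explicitly here for clarity):

```python
import math

def geometric_mean(values):
    """Geometric mean: exponentiate the arithmetic mean of the logs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Made-up biomarker concentrations; the geometric mean is pulled less
# by the high outlier than the arithmetic mean would be
print(geometric_mean([2.0, 8.0]))
print(geometric_mean([5.0, 5.0, 40.0]))
```

For identical inputs the geometric and arithmetic means coincide; for right-skewed biomarker distributions the geometric mean gives a more representative central value.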
Participants were excluded for the following reasons: did not provide any dietary data ( n = 291,514), provided only one dietary questionnaire ( n = 84,166), CVD occurred before baseline assessment ( n = 6422), CVD occurred before the last dietary questionnaire ( n = 1337), pregnancy ( n = 108), missing data for the key confounders ( n = 702), response variables ( n = 10), medical conditions ( n = 372), and implausible energy intake (over reporters: n = 119, under reporters: n = 980) (Additional file 1 : Figure S1). The RRR analysis identified two major dietary patterns that could consistently explain the greatest amount of shared variation in all response variables (43% for dietary pattern 1, 20% for dietary pattern 2) (Additional file 1 : Table S4). Dietary pattern 1 was characterized by positive loadings for chocolate and confectionery, butter and other animal fat spreads (primarily butter), and low-fiber bread and was strongly negatively associated with fresh fruit, vegetables, and high-fiber breakfast cereals. Dietary pattern 2 was characterized by positive loadings for sugar-sweetened beverages (SSBs), fruit juice, and table sugar and preserves and negative loadings for high fat cheese and butter (Fig. 1 ). Sensitivity analyses showed that the derived dietary patterns from RRR analyses were consistent regardless of the number of 24 h dietary assessments provided (Additional file 1 : Figures S2-S5). Fig. 1 Factor loadings for food groups in each dietary pattern. Note: %E, proportion of total energy intake. A higher proportion of men, of younger age, with higher Townsend index, current smokers, less physical activity, higher prevalence of obesity, or hypertension were found across higher quintiles of dietary pattern 1 (Table 1 , Additional file 1 : Table S5).
A lower proportion of current smokers, with less physical activity, and lower prevalence of obesity, hypertension, diabetes, and high cholesterol were found across higher quintiles of dietary pattern 2. Table 1 Baseline characteristics of participants in the two main dietary patterns ( N = 116,806). Associations between dietary patterns and outcomes There were 4245 cases of incident CVD, 838 CVD deaths, and 3629 deaths from all causes during 907,431 person-years of follow-up (7.8 years of median follow-up from baseline; 4.9 years from the last dietary assessment). We found a positive linear association for each standard deviation increase in dietary pattern 1 z-score and risk of total CVD (hazard ratio 1.07, 95% confidence interval 1.04 to 1.09), fatal CVD (1.07, 1.02 to 1.13), and all-cause mortality (1.08, 1.05 to 1.11) (Fig. 2 , Additional file 1 : Table S6). This association was also positive across dietary pattern 1 quintiles ( P for trend < 0.05). The non-linear association between dietary pattern 2 z-scores and health outcomes was described by a quadratic model: total CVD (1.02, 1.01 to 1.03), fatal CVD (1.02, 1.01 to 1.04), and all-cause mortality (1.01, 1.00 to 1.03). Fig. 2 Prospective associations between dietary patterns and the risk of total CVD events and all-cause mortality ( n = 116,806). Notes: All the models were stratified by sex and regions (England, Scotland, and Wales) and adjusted for ethnicity, socioeconomic status, behavioral risk factors, energy intake, and menopause in women. Z-scores for DP1 and DP2 were analyzed in mutually adjusted models to examine their independent associations with health outcomes. Adjusted HRs (hazard ratios) and confidence intervals (CIs) of DP score quintiles were obtained using the floated absolute risk method of Cox proportional hazards regression, which enabled comparisons across different quintiles of z-score.
Trend tests were conducted by including the median score of each pattern quintile as a continuous variable in the models. Analysis of splines also showed a linear association between dietary pattern 1 z-scores and total CVD, fatal CVD, and all-cause mortality. For dietary pattern 2, there was a non-linear association with total CVD, fatal CVD, and all-cause mortality (Additional file 1 : Figure S6). Sensitivity analysis excluding participants who had a CVD event within 2 years of completing their last 24-h online dietary assessment showed that the associations between dietary patterns and the risk of total and fatal CVD events and all-cause mortality were unchanged (Additional file 1 : Table S7). In cross-sectional analyses, people scoring high on dietary pattern 1 had increased BMI and DBP and lower HDL at baseline, while people scoring high on dietary pattern 2 had generally no clinically meaningful differences in their BMI or biomarker levels except for a lower HDL (Fig. 3 ). Fig. 3 Baseline biomarkers by quintiles of dietary pattern scores among participants with both a baseline and at least one follow-up dietary assessment ( N = 26,277). Mediation analysis found the association between the highest quintile of dietary pattern 1 and incident CVD was slightly attenuated following adjustment for BMI group (Additional file 1 : Figure S7), while the association between dietary pattern 2 and total CVD was not attenuated when adjusted for BMI group, hypertension, diabetes, and high cholesterol individually or simultaneously. In subgroup analyses, a significantly higher risk of total CVD was observed among participants with dietary pattern 1 who were aged < 60 years (1.09, 1.05 to 1.13), or living with overweight (1.07, 1.03 to 1.11) or obesity (1.05, 1.00 to 1.10).
A significantly greater association between dietary pattern 2 and total CVD risk was observed among women (1.02, 1.01 to 1.03), those aged < 60 years (1.01, 1.00 to 1.03), and those with obesity (1.02, 1.00 to 1.04) (Additional file 1 : Figure S8). Discussion In this sample of middle-aged British adults, two principal dietary patterns explained 43% and 20% of the variance in specific nutrients, namely energy density, saturated fat, free sugars, and fiber, which are hypothesized to be on the pathway between the associations of food groups and CVD and all-cause mortality through their contribution to excess energy intake. In the primary pattern, greater consumption of chocolate and confectionery, butter, refined bread, and table sugar and preserves together with low intakes of fresh fruit, vegetables, and wholegrain foods was significantly associated with increased CVD and all-cause mortality. A second pattern was related to higher intakes of free sugars, predominantly from sugar-sweetened beverages, fruit juice, chocolate and confectionery, and table sugar and preserves, but low in butter and higher fat cheese. The association of this dietary pattern with incident CVD and all-cause mortality was non-linear, with evidence of increased risk only for those with the highest dietary pattern z-scores. Exploratory analyses suggested the association observed with dietary pattern 1 was potentially mediated by excess weight. RRR has not been widely used to identify dietary patterns and their associations with CVD risks. The first dietary pattern largely confirms previous studies reporting associations with a priori “Western” dietary patterns and the benefits of “Mediterranean” diets, and is consistent with a large body of data reporting the associations between individual food groups or nutrients and disease outcomes from prospective cohort studies in the USA and Europe [ 9 , 11 , 33 , 34 ].
It is notable that people in the dietary pattern quintile with the lowest risk had mean intakes of energy from SFA of 9.7%, very close to the national and international recommendations, and free sugars accounted for 8.8% of total energy, below the World Health Organization (WHO) guidelines [ 35 ], though this level still exceeded the more stringent UK recommendations [ 36 ]. The second dietary pattern is more unusual and is characterized by higher intakes of sugar-sweetened beverages, fruit juice, and table sugar and preserves, together with lower intakes of high fat cheese and butter. This dietary pattern is striking because people in the highest quintile, with very high free sugars intake, otherwise followed healthy behaviors: they had higher physical activity and lower alcohol intake, were less likely to smoke, and their intake of SFA met the recommended levels. People in the highest quintile for this dietary pattern had increased risks for CVD and all-cause mortality and consumed, on average, 17.3% of dietary energy from free sugars, more than three times the UK dietary guideline, but only 10% SFA, which is the recommended level. While some previous research has shown that higher consumption of SSBs and other added sugars is associated with a higher risk of CVD [ 9 , 37 , 38 , 39 , 40 ] and all-cause mortality [ 41 ], recent reviews of the evidence by the WHO [ 42 ] and by the UK Scientific Advisory Committee on Nutrition [ 36 ] did not identify a specific link between sugar intakes and total mortality. Implications of this research This analysis supports dietary recommendations to limit particular food groups, specifically chocolate and confectionery, butter and other animal fat spreads (primarily butter), low-fiber bread, sugar-sweetened beverages (SSBs), fruit juice, and table sugar and preserves.
By identifying the food groups and dietary patterns which are associated with reduced risk of CVD or all-cause mortality, it adds useful detail over and above nutrient recommendations to inform public health interventions that support people in achieving meaningful dietary changes to improve health. Promoting food-based dietary guidelines can help to reduce the conflict perceived by the public between recommendations to reduce saturated fat and free sugars, and considering foods within the context of an overall dietary pattern means that the recommendations are likely to be culturally appropriate. The role of animal-based fat spreads such as butter is less clear, since butter is positively related to poor outcomes in dietary pattern 1 and negatively in dietary pattern 2. This reflects the ongoing debate about the role of saturated fatty acids from dairy products, rich in stearic acid, which appear to have less atherogenic potential than foods rich in other, longer chain, saturated fatty acids [ 11 ]. It implies that, at the present time, dietary advice to reduce saturated fatty acids should focus primarily on sources of saturated fat, such as chocolate and confectionery [ 43 ], and red and processed meat [ 44 ], which are consistently associated with adverse health outcomes. Overall, this study suggests that energy-dense diets with high intakes of free sugars and animal fats but low intakes of vegetables, fruit, and fiber (dietary pattern 1) may contribute to an increased risk of CVD and mortality. However, a dietary pattern that is very high in free sugars, even if low in animal fats and SFA (dietary pattern 2), can also potentially increase CVD and mortality risk. There was, however, no evidence to support the hypothesis that traditional CVD risk factors mediated the associations between dietary pattern 2 and CVD, and therefore further research is needed to understand this association.
The strengths of this study include a large sample size and outcome ascertainment by linkage to medical records, which minimized loss to follow-up. Our estimates of dietary intake were consistent with official statistics from the UK NDNS (National Diet and Nutrition Survey, 2014/15-2015/16) [ 16 ], suggesting that our results could have moderate generalizability to the total UK adult population. We adjusted for multiple confounders and conducted sensitivity analyses to confirm the robustness of the derived dietary patterns and their associations with CVD outcomes. Unlike other forms of dietary pattern analysis, RRR provides a link between the mechanistic evidence, which is largely based on the effects of individual nutrients on disease processes, and the natural mode of consumption of foods in the population [ 22 ]. This prospective study in a large cohort of people provides evidence to help inform food-based dietary guidelines which are sensitive to current patterns of consumption in the UK. By focusing on the food groups which have the highest or lowest factor loadings, public health planning can target the foods where changes in intake would be expected to yield the greatest improvement in health outcomes. Study limitations A limitation of this study is that dietary intakes were measured by multiple 24-h online dietary assessments, which, like all self-reported exposures, are subject to recall bias and misreporting and are dependent on the accuracy of the food composition databases. In our analyses, dietary data from a minimum of two 24-h online dietary assessments were used to derive dietary patterns, to capture estimates which are closer to usual intake than a single measure [ 29 , 45 ]. This approach was supported by our sensitivity analyses, which derived identical dietary patterns when restricting the sample to individuals that provided 2, 3, 4, or 5 dietary questionnaires.
Nevertheless, incident CVD in the few years following observations of diet reflects a lifetime’s exposure, and our measures of dietary pattern within a short follow-up period reflect only the most recent portion of this. Moreover, the CVD risk factors and biomarkers used in the cross-sectional and mediation analyses were measured only at baseline, 1 to 4 years before the dietary data collection. Finally, the dietary patterns identified may only be applicable to a UK population, since the combination of foods is likely to be culturally specific, but this method could be used to understand other cultural patterns of food consumption and their association with preventable disease. It is impossible to randomly allocate people to lifetime exposures to diet, and so most nutritional epidemiology is observational, as was the case here. Uncertainty remains as to whether these dietary patterns are stable over time, although previous evidence using data from two time points suggests patterns were generally consistent over 18 years of follow-up [ 46 ]. Previous studies have reported that participants completing more dietary questionnaires tended to be older and more educated [ 47 ], which may limit the generalizability to the wider UK adult population. Conclusions This analysis shows that diets high in chocolate and confectionery, butter, refined bread, and table sugar and preserves, together with low intakes of fresh fruit, vegetables, and wholegrain foods, are associated with an increased risk of CVD and all-cause mortality, in part through a link to excess weight (dietary pattern 1). It also suggests that diets particularly high in sugar-sweetened beverages, fruit juice, and table sugar and preserves (dietary pattern 2) may be an independent risk factor for premature mortality, and the mechanisms underpinning this association require further investigation.
The present study helps identify specific foods and beverages which are major contributors to unhealthy dietary patterns and provides evidence to underpin food-based dietary advice to reduce health risks. Availability of data and materials The datasets generated and/or analyzed in the current study will be made available for bona fide researchers who apply to use the UK Biobank data set by registering and applying at . Abbreviations RRR: Reduced rank regression CVD: Cardiovascular disease HR: Hazard ratio SBP: Systolic blood pressure DBP: Diastolic blood pressure HbA1c: Glycated hemoglobin A1c LDL: Low-density lipoprotein HDL: High-density lipoprotein | Two common dietary patterns identified in British adults, which include high intakes of chocolate and confectionery, may be associated with an increased risk of cardiovascular disease and death in middle-age, according to a study published in the open access journal BMC Medicine. Carmen Piernas, the corresponding author, said: "Cardiovascular disease is one of the main causes of death and disability in the UK and poor diet is a major contributor to this. The most common dietary guidelines are based on the nutrients found in foods rather than foods themselves and this can be confusing for the public. Our findings help identify specific foods and beverages that are commonly eaten in Britain and that may increase the risk of cardiovascular disease and mortality." Researchers from the University of Oxford, UK identified two diets that were associated with an increased risk of cardiovascular disease and death in middle-age in Britain. The first was high in chocolate, confectionery, butter and white bread and low in fresh fruit and vegetables. The second was high in sugar-sweetened beverages, fruit juice, chocolate, confectionery, table sugar and preserves and low in butter and higher-fat cheese.
The researchers found that those whose diet included higher amounts of chocolate, confectionery, butter and white bread were more likely to be male, younger, experiencing economic deprivation, current smokers, less physically active, living with obesity, or to have hypertension compared to those whose diet did not include high amounts of these foods. In this group, individuals who were younger than 60 years old or living with overweight or obesity had a higher risk of cardiovascular disease than individuals who were older than 60 years or not living with overweight or obesity. Those whose diet was high in sugar-sweetened beverages, fruit juice and preserves were found to have an increased risk for cardiovascular disease and mortality, even though they also tended to be physically active and less likely to be current smokers or living with obesity, hypertension, diabetes or high cholesterol than those who did not eat this diet. Women, individuals younger than 60 years old, or those who lived with obesity in particular had a higher risk of cardiovascular disease if they consumed a diet high in these foods. To examine the effects of diet on the risk of cardiovascular disease and mortality, the authors analysed data collected from 116,806 adults from England, Scotland and Wales who were recruited to the UK Biobank between 2006 and 2010. Participants were aged between 37 and 73 years old, with an average age of 56 years old. Participants reported the food they ate during the previous 24 hours on between two and five occasions. The researchers then identified the nutrients and food groups eaten by participants. The incidence of cardiovascular disease and mortality was calculated using hospital admission and death registry records until 2017 and 2020, respectively. The authors caution that the observational nature of the study does not allow for conclusions about a causal relationship between diet, cardiovascular disease and mortality.
Additionally, as dietary data was taken from individual 24 hour assessments rather than a continuous period of time, it may not be representative of participants' lifetime diets. Future research could investigate the potential reasons for the associations between the two diets investigated in this study and cardiovascular disease and mortality. Carmen Piernas said: "Our research suggests that eating less chocolate, confectionery, butter, low-fibre bread, sugar-sweetened beverages, fruit juice, table sugar and preserves could be associated with a lower risk of cardiovascular disease or death during middle-age. This is consistent with previous research which has suggested that eating foods that contain less sugar and fewer calories may be associated with a lower risk of cardiovascular disease. The findings of this study could be used to create food-based dietary advice that could help people eat more healthily and reduce their risk of cardiovascular disease." | 10.1186/s12916-021-01958-x |
Nano | Researchers discover new route to spin-polarized contacts on silicon | DOI 10.1038/nnano.2012.161 Journal information: Nature Nanotechnology | http://dx.doi.org/10.1038/nnano.2012.161 | https://phys.org/news/2012-10-route-spin-polarized-contacts-silicon.html | Abstract Spin manipulation in a semiconductor offers a new paradigm for device operation beyond Moore's law. Ferromagnetic metals are ideal contacts for spin injection and detection, but the intervening tunnel barrier required to accommodate the large difference in conductivity introduces defects, trapped charge and material interdiffusion, which severely compromise performance. Here, we show that single-layer graphene successfully circumvents the classic issue of conductivity mismatch between a metal and a semiconductor for electrical spin injection and detection, providing a highly uniform, chemically inert and thermally robust tunnel barrier. We demonstrate electrical generation and detection of spin accumulation in silicon above room temperature, and show that the contact resistance–area products are two to three orders of magnitude lower than those achieved with oxide tunnel barriers on silicon substrates with identical doping levels. Our results identify a new route to low resistance–area product spin-polarized contacts, a key requirement for semiconductor spintronic devices that rely on two-terminal magnetoresistance, including spin-based transistors, logic and memory. Main The spin angular momentum of an electron has been identified as a potential new state variable in semiconductor device operation for use beyond Moore's law 1 , 2 , 3 , and new paradigms for spin-based device operation have been proposed and modelled 4 , 5 , 6 , 7 , 8 . 
Ferromagnetic metals, which exhibit intrinsically spin-polarized electron populations, high Curie temperatures and low coercive fields, are seemingly ideal candidates as contacts for electrical injection and detection of spin currents in the semiconductor channel. However, the large difference in conductivity between metal and semiconductor makes this impossible 9 , 10 , 11 . A tunnel barrier between the ferromagnetic metal and the semiconductor has been identified as a potential solution to this problem 9 , 10 , 11 , and extensive effort has therefore been directed towards developing appropriate tunnel barriers for spin contacts. Most work has focused on a reverse-biased ferromagnetic Schottky barrier 12 , 13 or an insulating oxide layer such as Al 2 O 3 or MgO with a ferromagnetic metal contact 14 , 15 , 16 , 17 . For example, we have recently shown that SiO 2 , an oxide widely used in the electronics industry, serves as an excellent spin tunnel barrier in ferromagnetic metal/SiO 2 /Si structures 18 , although the oxide thickness required to achieve a good spin signal resulted in high contact resistance–area products. An ideal tunnel barrier should exhibit the following key material characteristics: a uniform and planar habit with well-controlled thickness, minimal defect/trapped charge density, a low resistance–area product for minimal power consumption, and compatibility with both the ferromagnetic metal and the semiconductor of choice, ensuring minimal diffusion to/from the surrounding materials at the temperatures required for device processing. Metal Schottky barriers and oxide layers are susceptible to interdiffusion, interface defects and trapped charge, which have been shown to compromise spin injection/transport/detection. Ferromagnetic metals readily form silicides even at room temperature 19 , and diffusion of the ferromagnetic species into the silicon creates magnetic scattering sites, limiting spin diffusion lengths and spin lifetimes in the silicon. 
Even a well developed and widely utilized oxide such as SiO 2 is known to have defects and trapped or mobile charge, which limit both charge and spin-based performance. Such approaches also result in contacts with high resistance–area products, and previous work has shown that smaller resistance–area products within a window of values are essential for efficient spin injection/detection 11 , 20 , in addition to the more obvious benefit of reduced power consumption. Graphene as a tunnel barrier Graphene, an atomically thin honeycomb lattice of carbon, offers a compelling alternative. Although it is very conductive in plane 21 , 22 , it exhibits poor conductivity perpendicular to the plane 23 . Its sp 2 bonding results in a highly uniform, defect-free layer that is chemically inert, thermally robust, and essentially impervious to diffusion 24 . We have recently demonstrated that single-layer graphene can be used as a tunnel barrier between two metals in a magnetic tunnel junction, albeit with modest tunnel magnetoresistance 25 . Here, we show that a ferromagnetic metal/monolayer graphene contact serves as a spin-polarized tunnel barrier contact that successfully circumvents the classic metal/semiconductor conductivity mismatch issue for electrical spin injection into a semiconductor, and enables one to achieve contact resistance–area products that fall within the critical window of values required for practical devices 11 , 20 . We demonstrate electrical injection and detection of spin accumulation in silicon above room temperature, and show that the corresponding spin lifetimes correlate with the silicon donor concentration, confirming that the spin accumulation measured occurs in the silicon and not in the graphene or interface trap states. The resistance–area products are three orders of magnitude lower than those achieved with oxide tunnel barrier contacts on silicon substrates with identical doping levels. 
These results enable the realization of semiconductor spintronic devices such as spin-based transistors, logic and memory that rely on local magnetoresistance 4 , 5 , 6 , 7 . Graphene was grown by low-pressure chemical vapour deposition (CVD) within copper foil ‘enclosures’ according to ref. 26 , and transferred onto hydrogen-passivated n-type silicon (001) substrates (electron density n = 6 × 10 19 and 1 × 10 19 cm −3 ). Raman spectroscopy confirmed that the graphene was of high quality with minimal defects 27 . During device fabrication, care was taken to isolate the conducting edges 28 of the graphene by burying them in a layer of sputter-deposited Si 3 N 4 ( Fig. 1 ). Ni 80 Fe 20 was then sputter-deposited onto the graphene through vias in the Si 3 N 4 , defining the spin-polarized contacts, followed by another layer of Si 3 N 4 , and then electron-beam evaporation of ohmic Ti/Au contacts and bond pads. A schematic of the devices is provided in Fig. 1 a,b. Electrical measurements were performed in a cryogen-free cryostat and electromagnet set-up using a three-terminal configuration, as depicted in Fig. 1 a and described in the following. Further details may be found in the Supplementary Information . Figure 1: Schematic and cross-section of the samples. a , Monolayer graphene serves as a tunnel barrier between the ferromagnetic metal contact and the Si substrate. Contacts 1 and 3 are ohmic Ti/Au contacts. b , The contact is designed so that the edges of the graphene are embedded in the SiN insulator, preventing conduction through the graphene edge states, which would short out the tunnel barrier. Full size image Figure 2 a shows the temperature dependence of the zero bias resistance (ZBR) for NiFe/monolayer graphene/Si (6 × 10 19 cm −3 ) contacts. The weak temperature dependence confirms that transport occurs by tunnelling through a pin-hole-free tunnel barrier, and provides a more definitive test for tunnelling than fits to a parabolic model 29 . 
The inset shows a typical nonlinear I–V curve at 300 K. Figure 2: Electrical characteristics of the NiFe/graphene/Si contacts. a , The normalized zero bias resistance (ZBR) shows a weak insulator-like temperature dependence, confirming tunnel transport. Each solid colour line is from a different contact, illustrating the reproducibility of the data. Inset: I – V curve at 300 K. The ohmic NiFe/Si contacts exhibit metallic behaviour; results for three contacts are shown by triangles, red-dashed and green-dashed lines. b , Current–voltage curves and corresponding resistance–area (RA) products for NiFe/Si, NiFe/graphene/Si and NiFe/2 nm SiO 2 /Si contacts. The Si electron density is 6 × 10 19 cm −3 . Full size image Graphene serves as a highly conductive tunnel barrier, as demonstrated in Fig. 2 b by a comparison of the room-temperature resistance–area products and current density versus voltage curves for three types of contact to the silicon (6 × 10 19 cm −3 ) substrate: NiFe/Si (ohmic), NiFe/graphene/Si (tunnelling) and NiFe/2 nm SiO 2 /Si (tunnelling) 18 contacts. NiFe deposited directly on the silicon forms an ohmic contact with a resistance–area product of 0.4 kΩ µm 2 , and the conductivity increases with decreasing temperature (metallic-like, Fig. 2 a). When a single layer of graphene is inserted between the NiFe and the silicon, tunnelling behaviour dominates, but the resistance–area product increases to only 6 kΩ µm 2 . In contrast, a NiFe/2 nm SiO 2 tunnel contact on the same substrate 18 has a much larger resistance–area product of 15 MΩ µm 2 , with a corresponding decrease in the tunnel current of over a factor of 10 3 at a given bias voltage. A similar trend is observed for contacts to the Si(1 × 10 19 cm −3 ) substrate, although the NiFe/Si contact is Schottky-like rather than ohmic. Thus, the uniform atomically thin character of graphene provides a superior tunnel barrier with a low resistance–area product. 
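As a quick numeric cross-check of the resistance–area figures quoted above (a sketch using only values from the text; the `RA_*` variable names are ours):

```python
# Room-temperature resistance-area (RA) products quoted above for the
# three contact types on Si (6e19 cm^-3), in ohm * um^2.
RA_ohmic = 0.4e3     # NiFe/Si ohmic contact: 0.4 kOhm um^2
RA_graphene = 6e3    # NiFe/graphene/Si tunnel contact: 6 kOhm um^2
RA_SiO2 = 15e6       # NiFe/2 nm SiO2/Si tunnel contact: 15 MOhm um^2

# Inserting graphene raises the RA product only ~15x over the ohmic contact...
print(RA_graphene / RA_ohmic)   # 15.0
# ...while the SiO2 barrier sits a further ~2500x higher, consistent with
# the >10^3 drop in tunnel current at a given bias mentioned in the text.
print(RA_SiO2 / RA_graphene)    # 2500.0
```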
A graphene solution to conductivity mismatch: spin injection The large difference in conductivity precludes spin injection from a typical ferromagnetic metal into a semiconductor—the higher resistivity of the semiconductor limits current flow, so that equal amounts of majority and minority spin current flow into the semiconductor, resulting in zero net spin polarization 9 , 10 , 11 . A ferromagnetic metal/tunnel barrier provides a spin-selective interface resistance and circumvents this issue; the source term is the interface spin-polarized density of states of the ferromagnetic metal, and the higher tunnel barrier resistance controls the current flow 30 . Spin accumulation and precession directly under the magnetic tunnel barrier contact interface can be observed using a single contact for both spin injection and detection, as shown in Fig. 1 a. A current is applied to contacts 1 and 2, and a voltage is measured across contacts 2 and 3. The injection of spin-polarized carriers from ferromagnetic contact 2 produces a net spin accumulation in the silicon described by the splitting of the spin-dependent electrochemical potential, Δ μ = μ up – μ down ( Fig. 3 a), which is detected as a voltage Δ V 3T = γ Δ μ /2 e , where γ is the tunnelling spin polarization of the magnetic tunnel contact. When a magnetic field B z is applied perpendicular to the electron spin direction (sample surface), spin precession at the Larmor frequency ω L = gμ B B z /ℏ results in a reduction of the net spin accumulation due to precessional dephasing (the Hanle effect) 31 . Here, g is the Lande g -factor ( g = 2 for silicon), μ B the Bohr magneton, and ℏ the reduced Planck's constant. The voltage Δ V 3T ( B z ) decreases with B z with a Lorentzian lineshape given by Δ V 3T ( B z ) = Δ V 3T (0)/[1 + ( ω L τ s ) 2 ] (ref. 31 ). Fits to this lineshape give a lower bound for the spin lifetime, τ s (ref. 17 ). Figure 3: Hanle spin precession measurements. 
a , Schematics illustrating spin injection, spin precession in an applied magnetic field, and the Lorentzian lineshape expected for an electrical measurement of the Hanle effect. N(E) refers to the energy-dependent density of states for majority spin (up arrow) and minority spin (down arrow) electrons. b , Room-temperature Hanle data for spin injection and extraction for the NiFe/graphene/Si (1 × 10 19 ) samples using 40 × 40 µm 2 contacts. c , Low-temperature (4 K) Hanle data for the NiFe/Si ohmic reference contacts, and the NiFe/graphene/Si samples for both carrier concentrations studied. The red lines are fits to the experimental data using the Lorentzian function described in the text. The plots are offset for clarity, and every fifth data point is shown. d , Spin lifetimes obtained from three-terminal Hanle measurements at 10 K as a function of the Si electron density for the tunnel barrier materials indicated and different ferromagnetic metal contacts (Fe, CoFe, NiFe). Symbol shapes distinguish the tunnel barrier materials: triangles, SiO 2 ; circles, Al 2 O 3 ; squares, MgO; stars, graphene. Solid symbols correspond to devices with Ni 0.8 Fe 0.2 contacts, half-solid symbols to Fe contacts, and open symbols to Co 0.9 Fe 0.1 contacts. The spin lifetimes show a pronounced dependence on the Si doping level, and little dependence on the choice of tunnel barrier or magnetic metal. Full size image Figure 3 presents data obtained from Hanle measurements of the NiFe/graphene/Si devices at room temperature ( Fig. 3 b) for spin injection and extraction, and at a low temperature ( Fig. 3 c) for spin injection for different silicon carrier concentrations, and for control samples. Devices incorporating a graphene tunnel barrier show a clear Hanle signal, confirming successful spin accumulation and field-induced precession. In contrast, control devices without graphene ( Fig. 
3 c) show no signal, indicating that no significant spin accumulation occurs, as expected due to the large conductivity mismatch in the absence of a tunnel barrier. Typical Hanle data are shown in Fig. 3 b for a NiFe/graphene/Si (1 × 10 19 ) sample at room temperature. When the contact is biased to inject electrons from the NiFe into the silicon, majority spin polarization builds in the silicon and a negative peak is observed in the Hanle signal at zero field. The magnitude of the spin voltage decreases with applied field B z as the spins precess and dephase, as described above. In the extraction case, a positive bias is applied to the NiFe, and majority spin electrons preferentially tunnel from the silicon into the NiFe, so that a minority spin polarization builds up in the silicon. In this case, the Hanle curve should reverse sign, and this behaviour is observed experimentally. Note that there is a significant difference between the magnitude of the Hanle signal in injection and extraction for a given bias current, as expected for an asymmetric metal/insulator/semiconductor structure (M/I/S). This also rules out spurious effects such as anisotropic magnetoresistance, which would exhibit equal amplitude upon reversing the bias. The measured spin voltage Δ V 3T ( B z = 0) is significantly larger at lower temperatures for a given bias current ( Fig. 3 c). Values for the spin lifetime, τ s , are obtained from fits to the Hanle curves using the Lorentzian described above. Typical fits are shown by the solid lines in Fig. 3 b,c, and yield values of 130 ± 10 ps at 300 K and a bias current of ±10 mA for the Si(1 × 10 19 ) sample. The spin lifetime depends strongly on contact bias and silicon donor density, and weakly on temperature due to the metallic character of the silicon, as discussed previously 18 , 32 . The spin lifetime decreases with increasing donor density, as expected from electron spin resonance (ESR) measurements on bulk silicon 32 , 33 , 34 . 
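The Lorentzian Hanle lineshape implies a characteristic half-width field at which ω L τ s = 1, that is, B 1/2 = ℏ/( gμ B τ s ). A minimal sketch (standard physical constants; the 130 ps lifetime is the 300 K value quoted above, the function name is ours) gives the field scale of the measured curves:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J s
mu_B = 9.2740100783e-24  # Bohr magneton, J/T
g = 2.0                  # Lande g-factor for silicon (given in the text)

def hanle_half_width(tau_s):
    """Field at which the Lorentzian Hanle signal falls to half its
    zero-field value: omega_L * tau_s = 1 -> B_1/2 = hbar/(g*mu_B*tau_s)."""
    return hbar / (g * mu_B * tau_s)

tau_s = 130e-12                          # 130 ps, Si(1e19) sample at 300 K
B_half = hanle_half_width(tau_s)
print(f"B_1/2 = {B_half * 1e3:.0f} mT")  # B_1/2 = 44 mT
```

Tens of millitesla is indeed the field range over which the measured spin voltage decays, which is why a modest electromagnet suffices for these measurements.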
At T = 4 K, fits to the Hanle data of Fig. 3 c yield τ s = 140 ps and 105 ps for the Si(1 × 10 19 ) and Si(6 × 10 19 ) samples, respectively. These values agree well with those reported for NiFe/SiO 2 tunnel barrier contacts 18 , and show a clear correlation with the character of the silicon. This is shown explicitly in Fig. 3 d, where we plot the spin lifetime obtained from three-terminal Hanle data on n-Si as a function of electron density for four different tunnel barrier materials (graphene, Al 2 O 3 , MgO and SiO 2 ) and three different magnetic metal contacts (Fe, CoFe and NiFe). The spin lifetime measured with the three-terminal Hanle geometry shows a clear dependence only on electron density, and the dependence is consistent with literature ESR data on bulk silicon. The spin lifetime is completely independent of the tunnel barrier material or magnetic metal used for the contact. The values for the graphene tunnel barriers fall directly on the curve. These data confirm that the spin accumulation occurs in the silicon, and not in the graphene or possible interface trap states. The measured spin lifetimes are shorter than those in bulk silicon because they reflect the environment directly beneath the contact, where the reduced symmetry and increased scattering from the interface are likely to produce additional spin scattering 18 . Bias and temperature dependence The bias dependence of τ s is summarized in Fig. 4 a at T = 4 K for the Si(1 × 10 19 ) sample. At low bias, the spin lifetime is ∼ 200 ps, and decreases to 100 ps with increasing negative bias (spin injection). For positive bias (spin extraction), τ s initially increases slightly to 220 ps, and then decreases. The bias dependence is not fully understood, and a detailed discussion is beyond the scope of this text, but some qualitative observations can be made. The bias voltage alters the interface electric field and the electron energy, and hence the relevant portion of band structure involved. 
For spin injection, a negative bias is applied to the ferromagnetic/graphene tunnel contact, and both the interface electric field and the injected electron energy are increased. Both effects have been shown to reduce the measured spin lifetime. Hot electrons injected at higher energies into the host band structure experience rapid thermalization accompanied by spin relaxation 35 , consistent with the trend we observe here. For spin extraction, the positive bias initially reduces the electric field, which exists due to carrier depletion at the silicon interface, and the modest increase in spin lifetime observed is consistent with this behaviour. Higher positive biases produce an interface electric field of opposite sign, pulling the electrons in the silicon towards the interface where the reduced symmetry and higher defect density are likely to produce more rapid spin relaxation, as observed experimentally. The electrons extracted from the silicon also sample the unfilled density of states of the NiFe where the polarization is lower 35 . This latter mechanism may change the spin extraction efficiency and hence the measured spin voltage (discussed below), but to first order should not strongly impact the spin lifetime measured in the silicon. Figure 4: Bias dependence of spin lifetime/diffusion length and spin-voltage at 4 K. a , Spin lifetime τ s and diffusion length L SD = ( Dτ s ) 1/2 are at a maximum at low bias, and decrease with increasing positive and negative bias. b , Measured three-terminal spin voltage is strongly asymmetric with bias, although the conventional current–voltage characteristic is predominantly symmetric. The Si carrier concentration is n = 1 × 10 19 cm −3 . Full size image The measured spin voltage Δ V 3T ( B z = 0) produced by spin accumulation in the silicon also exhibits a strong bias dependence, as summarized in Fig. 4 b together with the I–V plot of the NiFe/graphene/Si(1 × 10 19 ) contact. 
Although the I–V curve is approximately symmetrical, the Δ V 3T – V curve is markedly asymmetric, with much higher values achieved for spin injection than for extraction, as already noted in the Hanle data of Fig. 3 b. For spin injection, Δ V 3T increases with the bias current as one might reasonably expect, despite the fact that the spin lifetime is decreasing, indicating that the spin source process more strongly affects the spin accumulation than the spin relaxation process. However, for spin extraction, Δ V 3T quickly saturates, even though the current is exponentially increasing with bias voltage. This may be attributed in part to a decrease in the spin lifetime ( Fig. 4 a), or to the bias dependence of the detection efficiency 36 , or to less efficient spin filtering due to reduced spin polarization of the NiFe density of states above the Fermi level 35 . Further work is necessary to quantify the roles played by these various processes. Similar data were obtained for the Si(6 × 10 19 ) sample over ±0.2 V bias (limited by the maximum current allowed), and exhibited symmetric behaviour. The spin diffusion length is given by L SD = ( Dτ s ) 1/2 , where D is the diffusion constant calculated from the measured carrier concentration and mobility using the Fermi–Dirac expansion for degenerate systems, D = (2 μkT / q )( F 1/2 ( η F )/ F −1/2 ( η F )), where F 1/2 ( η F ) and F −1/2 ( η F ) are the Fermi–Dirac integrals of order 1/2 and −1/2, respectively, and η F = ( E F – E C )/ kT (ref. 37 ). Values at 4 K are shown in Fig. 4 a as a function of bias for the Si(1 × 10 19 ) sample, and for both samples as a function of temperature in Fig. 5 a. The temperature dependence mirrors that of the diffusion constant. L SD reaches a value ∼ 200 nm near room temperature for these carrier concentrations, demonstrating that practical devices based on spin transport in silicon are viable with conventional lithographic and fabrication techniques. 
Figure 5: Temperature dependence of spin diffusion length and voltage. a , Temperature dependence of L SD is dominated by that of the diffusion constant. b , Measured three-terminal spin voltage decreases monotonically with increasing temperature. Full size image The measured spin voltage Δ V 3T ( B z = 0) decreases monotonically with temperature, as shown in Fig. 5 b. The spin resistance–area product, defined as Δ V 3T (0) A / I b , where A is the contact area and I b the bias current, is a useful metric for evaluating contact performance and for comparing experimental results with theory. For the higher biases used (producing larger spin voltages), the Si(1 × 10 19 ) and Si(6 × 10 19 ) samples exhibit spin resistance–area products of 72 Ω μm 2 and 0.84 Ω μm 2 respectively at T = 4 K, and 9 Ω μm 2 and 0.04 Ω μm 2 at 300 K. The spin resistance–area product predicted by the theory of diffusive transport across a single ferromagnetic metal/semiconductor interface for the geometry used here is given by γ 2 r 1 = γ 2 ( ρL SD ) (refs 11 , 20 ), where ρ is the sample resistivity and L SD is the spin diffusion length determined from the Hanle data as described above. If we assume a typical value for the tunnelling spin polarization γ ≈ 0.4, the corresponding values for our samples are 1.2–1.8 Ω μm 2 (1 × 10 19 ) and 0.2–0.3 Ω μm 2 (6 × 10 19 ) over the temperature range 4–300 K. We thus see generally good agreement with the calculated results, although the bias dependence we observe experimentally has not been addressed theoretically. Resistance–area products The conventional resistance–area product of the magnetic contact is an important parameter in determining the practical applications of spin-based semiconductor devices such as the silicon spin-MOSFET 5 (note that this is the standard resistance–area product rather than the spin resistance–area product discussed above). 
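The diffusive-theory estimate γ 2 ρL SD above can likewise be sanity-checked numerically. In this sketch only γ = 0.4 and L SD ≈ 0.2 µm come from the text; the resistivity is an assumed order-of-magnitude value for n ≈ 1 × 10 19 cm −3 n-type silicon, not a value from the paper:

```python
gamma = 0.4    # tunnelling spin polarization assumed in the text
L_SD = 0.2     # spin diffusion length, um (~200 nm, from the text)
rho = 40.0     # assumed resistivity, ohm*um (= 4e-3 ohm*cm), a plausible
               # order of magnitude for n ~ 1e19 cm^-3 n-Si

spin_RA = gamma**2 * rho * L_SD             # gamma^2 * rho * L_SD, ohm um^2
print(f"spin RA ~ {spin_RA:.1f} Ohm um^2")  # spin RA ~ 1.3 Ohm um^2
# ...which lands inside the 1.2-1.8 Ohm um^2 range quoted for Si(1e19).
```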
The operation of such devices depends on realizing significant local magnetoresistance, that is, the magnetoresistance measured directly between two magnetic contacts. Calculations have shown that significant local magnetoresistance can be achieved only if the contact resistance–area product falls within a range that depends on the silicon channel conductivity, the spin lifetime, the contact spacing (for example, the spin transistor gate length) 11 , 20 and the contact width 38 . The resistance–area products of all tunnel barrier contacts so far have been much larger than required, making such devices unattainable. However, the low resistance–area products provided by the graphene tunnel barriers fall within this window, and enable realization of these and other important spintronic devices. Previous work to lower the resistance–area product utilized a low-workfunction metal such as gadolinium at the tunnel barrier interface, but no spin accumulation in the semiconductor was demonstrated 39 . We calculated the range of optimum resistance–area products and the corresponding local magnetoresistance as a function of the silicon electron density based on the model of ref. 11 , using the contact geometry shown as the inset to Fig. 6 . The geometric parameters are chosen to be consistent with the node anticipated for silicon device technology within the next five years. The carrier concentration dependence of the mobility, resistivity and spin lifetime are taken from refs 32–34 and 37, and we assume that the Einstein relation between the mobility and diffusion constant holds for the complete doping range. The colour code in Fig. 6 identifies the range of useful magnetoresistance and the corresponding window of contact resistance–area products required. Figure 6: Resistance–area product window for local magnetoresistance. 
Calculation of the local (two-terminal) magnetoresistance (MR) as a function of the conventional resistance–area product of the contact and the Si electron density for the device geometry shown in the inset, using the theory of ref. 11 . Data points are the resistance–area products measured for our ferromagnetic metal/tunnel barrier/Si contacts using 2 nm SiO 2 (triangles), 1.5 nm Al 2 O 3 (diamond) and monolayer graphene (circles) tunnel barriers prepared from identical Si wafers in our laboratory. The ferromagnetic metal/graphene resistance–area products fall within the window of useful magnetoresistance values. W = w = 11 nm. Full size image Tunnel barrier contacts of ferromagnetic/Al 2 O 3 and ferromagnetic/SiO 2 fabricated previously in our laboratory have been shown to produce significant spin accumulation in silicon 15 , 18 , but have resistance–area products that are too high to generate usable local magnetoresistance. In contrast, utilizing monolayer graphene as the tunnel barrier lowers the resistance–area product by orders of magnitude, and values for the NiFe/graphene contacts on bulk wafers fall well within the range required to generate high local magnetoresistance. Reducing the resistance–area product also has a positive effect on the electrical properties of the spin device, as lowering the resistance reduces noise and increases the speed of an electrical circuit 40 . Conclusions In summary, we have demonstrated that a ferromagnetic metal/monolayer graphene contact serves as a spin-polarized tunnel barrier contact, which successfully circumvents the classic metal/semiconductor conductivity mismatch issue for electrical spin injection and detection. Our results identify a route to low resistance–area product spin-polarized contacts, a crucial requirement enabling future semiconductor spintronic devices. 
Graphene provides a tunnel barrier with a uniform and planar habit, well-controlled thickness, minimal defect/trapped charge density, a low resistance–area product and compatibility with both the ferromagnetic metal and semiconductor of choice, ensuring minimal diffusion to/from the surrounding materials at the temperatures required for device processing. Utilizing multilayer graphene in such structures may provide much higher values of the tunnel spin polarization due to the band structure derived spin filtering effects that have been predicted for selected ferromagnetic metal/multilayer graphene structures 41 , 42 , 43 . Such an increase will improve the performance of semiconductor spintronic devices by providing higher signal-to-noise ratios and corresponding operating speeds, thereby advancing the technological applications of silicon spintronics 44 . | (Phys.org)—Scientists at the Naval Research Laboratory have demonstrated that graphene, a single layer of carbon atoms in a honeycomb lattice, can serve as a low resistance spin-polarized tunnel barrier contact which successfully enables spin injection/detection in silicon from a ferromagnetic metal. The graphene provides a highly uniform, chemically inert and thermally robust tunnel barrier free of defects and trap states which plague oxide barriers. This discovery clears an important hurdle to the development of future semiconductor spintronic devices, that is, devices which rely on manipulating the electron's spin rather than its charge for low-power, high-speed information processing beyond the traditional size scaling of Moore's Law. The research results are reported in a paper published in Nature Nanotechnology on September 30, 2012. Ferromagnetic metals, such as iron or permalloy, have intrinsically spin-polarized electron populations (more "spin-up" electrons than "spin-down", see figure), and are thus ideal contacts for injection and detection of spin in a semiconductor. 
An intervening tunnel barrier is required to avoid saturation of both semiconductor spin channels by the much larger metal conductivity - this would otherwise result in no net spin polarization in the semiconductor. However, the oxide barriers typically used (such as Al2O3 or MgO) introduce defects, trapped charge and interdiffusion, and have resistances that are too high - all of these factors severely impact the performance. To solve this problem, the NRL research team, led by Dr. Berend Jonker, used single layer graphene as the tunnel barrier. This novel approach utilizes a defect resistant, chemically inert and stable material with well-controlled thickness to achieve a low resistance spin contact compatible with both the ferromagnetic metal and semiconductor of choice. These qualities ensure minimal diffusion to/from the surrounding materials at temperatures required for device manufacturing. The research team used this approach to demonstrate electrical generation and detection of spin accumulation in silicon above room temperature, and showed that the contact resistance-area products are 100 to 1000 times lower than achieved with oxide tunnel barriers on silicon substrates with identical doping levels. These results identify a new route to low resistance-area product spin-polarized contacts, a key requirement for semiconductor spintronic devices that rely upon two-terminal magnetoresistance, including spin-based transistors, logic and memory, explains NRL's Dr. Berend Jonker. In looking to the future, the NRL team suggests that the use of multilayer graphene in such structures may provide much higher values of the tunnel spin polarization due to band structure derived spin filtering effects which have been predicted for selected ferromagnetic metal / multi-layer graphene structures. 
This increase would improve the performance of semiconductor spintronic devices by providing higher signal-to-noise ratios and corresponding operating speeds, advancing the technological applications of silicon spintronics. | DOI 10.1038/nnano.2012.161 |
Nano | The most sensitive torque measuring device ever built | Jonghoon Ahn et al. Ultrasensitive torque detection with an optically levitated nanorotor, Nature Nanotechnology (2020). DOI: 10.1038/s41565-019-0605-9 Journal information: Nature Nanotechnology | http://dx.doi.org/10.1038/s41565-019-0605-9 | https://phys.org/news/2020-01-sensitive-torque-device-built.html | Abstract Torque sensors such as the torsion balance enabled the first determination of the gravitational constant by Henry Cavendish 1 and the discovery of Coulomb’s law. Torque sensors are also widely used in studying small-scale magnetism 2 , 3 , the Casimir effect 4 and other applications 5 . Great effort has been made to improve the torque detection sensitivity by nanofabrication and cryogenic cooling. Until now, the most sensitive torque sensor has achieved a remarkable sensitivity of 2.9 × 10 −24 N m Hz −1/2 at millikelvin temperatures in a dilution refrigerator 6 . Here, we show a torque sensor reaching a sensitivity of (4.2 ± 1.2) × 10 −27 N m Hz −1/2 at room temperature. It is created by an optically levitated nanoparticle in vacuum. Our system does not require complex nanofabrication. Moreover, we drive a nanoparticle to rotate at a record-high speed beyond 5 GHz (300 billion r.p.m.). Our calculations show that this system will be able to detect the long-sought-after vacuum friction 7 , 8 , 9 , 10 near a surface under realistic conditions. The optically levitated nanorotor will also have applications in studying nanoscale magnetism 2 , 3 and the quantum geometric phase 11 . Main Recent developments in levitated optomechanics provide a new paradigm for sensing and precision measurements 12 , 13 , 14 . Recently, the centre-of-mass motion of an optically levitated nanoparticle in vacuum was cooled to microkelvin temperatures 15 .
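A trivial but worthwhile unit check on the rotation rate quoted in the abstract (5 GHz stated as 300 billion r.p.m.):

```python
# Convert the reported mechanical rotation frequency to revolutions per minute.
freq_hz = 5.0e9          # 5 GHz, one revolution per cycle
rpm = freq_hz * 60.0     # 60 seconds per minute
print(rpm)               # 300 billion r.p.m.
```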
Experimental control of the rotation 16 , 17 , 18 , 19 , 20 , 21 , torsional vibration 18 , 22 and precession 23 of a levitated nanoparticle in vacuum has also been demonstrated. A levitated nanoparticle has been used to study nonequilibrium thermodynamics at small scales 24 , 25 , 26 , 27 and demonstrate force sensing at the zeptonewton scale 13 . It was proposed that an optically levitated nonspherical nanoparticle in vacuum would be an ultrasensitive torque sensor 22 and could be used to study anisotropic surface interactions 28 . While optically levitated torque sensors have attracted a lot of interest 17 , 18 , 23 , an experimental demonstration of a torque sensitivity better than that of the state-of-the-art nanofabricated torque sensor (10 −24 N m Hz −1/2 ) (ref. 6 ) has not been reported. In this experiment, we optically trap a silica nanoparticle (a nanosphere or a nanodumbbell) in a vacuum chamber using a tightly focused 1,550-nm laser (Fig. 1 ). The polarization of the trapping laser is controlled with a quarter waveplate. An additional 1,020-nm laser is used to apply an external torque that will be measured. The trapping laser passes a collimation lens and is guided to balanced photodetectors that monitor the rotational, torsional and centre-of-mass motions of the levitated nanoparticle. When the nanoparticle rotates, it changes the polarization of the trapping laser slightly, which is monitored with a balanced photodetector after a polarizing beam splitter (Fig. 1a ). The signal of the balanced detector is sent to a spectrum analyser to measure the rotation frequency. Fig. 1: Experimental schematic and rotation spectra of an optically levitated nanoparticle. a , A silica nanoparticle (NP) is levitated in vacuum with a 500-mW, 1,550-nm laser tightly focused by an objective lens (OBJ) with a numerical aperture of 0.85. An additional 1,020-nm laser is used to apply an external torque on the nanoparticle.
The polarization of each laser is controlled with a quarter waveplate ( \(\lambda\) /4). After the collimation lens, the trapping laser is directed to detectors for monitoring the motion of the trapped nanoparticle. DM, dichroic mirror; \(\lambda\) /2, half waveplate; PBS, polarizing beam splitter; and DET, balanced photodetector. Inset: scanning electron microscope images of a silica nanosphere (left) and a silica nanodumbbell (right). The scale bar is 200 nm for both images. b , A measured PSD of the rotation of an optically levitated nanoparticle at \(1{0}^{-4}\) torr. The frequency of the PSD peak is twice the rotation frequency of the nanoparticle. c , A spectrogram (time trace) of the rotation PSD of an optically levitated nanoparticle recorded for 100 s. The first vertical line corresponds to the PSD shown in b . a.u., arbitrary units. Source data Full size image Once a nanoparticle is trapped in a linearly polarized 1,550-nm laser, we collect the power spectral density (PSD) signals of its motion at 10 torr to verify its geometry 18 . Figure 2a shows the PSDs of the motion of a nanosphere. The ratio of the damping rates in directions perpendicular and parallel to the electric field of the laser is measured to be \(1.02\pm 0.01\) , which is in reasonable agreement with the expected value of one for a sphere. There is no observable torsional peak for the nanosphere. On the other hand, the PSD of a nanodumbbell has a clear torsional peak as shown in Fig. 2b . The measured damping ratio is \(1.23\pm 0.02\) for this nanodumbbell, which is comparable with the expected value of 1.27 (ref. 18 ). Fig. 2: Vibration and rotation of optically levitated silica nanoparticles. a , b , PSDs of the motions of a silica nanosphere ( a ) and a nanodumbbell ( b ) trapped by a linearly polarized laser at 10 torr. 
The ratios of the damping rates in directions perpendicular ( \(y\) ) and parallel ( \(x\) ) to the electric field of the laser are \(1.02\pm 0.01\) and \(1.23\pm 0.02\) for the nanosphere ( a ) and nanodumbbell ( b ), respectively. Also, an additional torsional peak appears for the nanodumbbell that is absent for the nanosphere. The \(z\) axis is parallel to the propagating direction of the laser. c , The rotation frequencies of a nanodumbbell (blue squares) and a silica nanosphere (green circles) in a circularly polarized optical tweezer as a function of the air pressure. The solid lines show the \(1/p\) dependence of the rotation frequencies, where \(p\) is the air pressure. The diameters of the nanosphere and the nanodumbbell are about 150 nm. d , Rotation frequency as a function of pressure for two nanoparticles with large ultimate rotation frequencies. The solid line shows the \(1/p\) dependence of the rotation frequencies. Inset: measured PSDs of the rotational motions of two nanoparticles. The corresponding mechanical rotation frequencies are 5.2 GHz (red) and 5.0 GHz (blue). Source data Full size image After the geometry of a levitated nanoparticle is confirmed, we change the polarization of the trapping laser from linear to circular. The angular momentum of the circularly polarized laser induces a torque on the levitated nanoparticle and drives it to rotate 18 , 19 . The rotation speed is determined by the balance between the optical torque and the drag torque from the surrounding air. Thus, the rotation speed is inversely proportional to the air pressure, as shown in Fig. 2c . The rotation speed of a nanodumbbell is much faster than that of a nanosphere with the same diameter in the same trap at the same pressure. This is because the optical torque on the nanodumbbell is much larger than that on the nanosphere due to their different shapes. Figure 2d shows two nanoparticles with the fastest rotation frequencies observed in our experiment so far.
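The 1/p scaling in Fig. 2c follows directly from the torque balance just described: at steady state the optical torque equals the gas drag torque, M_opt = I*gamma*omega_r, with gamma proportional to pressure. A minimal numerical sketch (all parameter values are assumed round-number stand-ins, not values fitted to the experiment):

```python
import numpy as np

# Steady-state rotation frequency from the torque balance M_opt = I * gamma * omega:
# the gas damping rate gamma scales linearly with pressure, so the rotation
# frequency scales as 1/p. Parameter values are illustrative stand-ins.

M_OPT = 1.0e-23          # optical torque from the circularly polarized beam, N m
I_SPHERE = 8.8e-33       # moment of inertia of a ~150-nm-diameter silica sphere, kg m^2
GAMMA_PER_TORR = 1.0e4   # rotational damping rate per unit pressure, (s^-1)/torr

def rotation_freq_hz(p_torr):
    """Steady-state rotation frequency (Hz) at gas pressure p_torr."""
    gamma = GAMMA_PER_TORR * p_torr          # damping rate, s^-1
    omega = M_OPT / (I_SPHERE * gamma)       # balance: M_opt = I * gamma * omega
    return omega / (2 * np.pi)

pressures = np.array([1e-3, 1e-4, 1e-5])     # torr
freqs = rotation_freq_hz(pressures)
print(freqs)    # frequency grows tenfold for each tenfold pressure drop
```

Each tenfold drop in pressure raises the steady-state frequency tenfold, which is the behaviour traced by the solid 1/p lines in Fig. 2c,d.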
For the data shown in red circles, the rotation rate reaches 5.2 GHz at \(1.23\times 1{0}^{-5}\) torr before the nanoparticle is lost from the optical trap. This is about five times faster than the fastest speeds reported in refs. 18 , 19 . Furthermore, we employ the nanorotor as an ultrasensitive torque sensor. To test its performance, we use an additional 1,020-nm laser to apply an external torque. If we modulate the 1,020-nm laser sinusoidally, the net torque applied on the nanorotor is $${M}_{\mathrm{d.c.}}+{M}_{\mathrm{a.c.}}{\sin}({\mathrm{\omega }}_{\mathrm{m}}t)+{M}_{\mathrm{th}}-I {\gamma} {\mathrm{\omega }}_{\mathrm{r}}=I{\dot{{\mathrm{\omega }}_{\mathrm{r}}}}$$ (1) where \({M}_{\mathrm{d.c.}}\) is the constant component of the optical torque that mainly comes from the trapping beam, M a.c. is the external alternating torque drive from the 1,020-nm laser, \({{\omega }_{\mathrm{m}}}\) is the frequency of the modulation, M th is the thermal fluctuation torque, \(I\) is the moment of inertia of the nanorotor, \({\mathrm{\omega }}_{\mathrm {r}}\) is the angular rotation velocity of the nanoparticle and \(\gamma\) is the rotational damping rate because of residual air molecules. If we ignore the thermal noise M th , we have \({\mathrm{\omega }}_{\mathrm{r}}(t)={\mathrm{\omega}}_{\mathrm{d.c.}}+{\frac{M_{\mathrm{a.c.}}}{{\sqrt{I^{2}[{{\mathrm{\omega }}_{\mathrm{m}}^{2}}+{{\gamma }^{2}}]}}}}\sin{(\omega_m t+\phi)}\) after the modulated external torque is turned on for a long time. Here, \({\mathrm{\omega }}_{\mathrm{d.c.}}={M}_{\mathrm{d.c.}}/(I {\gamma })\) is the average rotation frequency and \({\phi} {=} {{\tan }^{-1}}(-{{\mathrm{\omega }}_{\mathrm{m}}}/{\gamma} )\) . The rotational damping rate \({\gamma}\) can be measured experimentally. We can suddenly turn on the 1,020-nm laser and measure the rotation frequency as a function of time (Fig. 3a ). 
The collected data are fitted with an exponential curve \({\mathrm{\omega}} ={\mathrm{\omega }}_{1}+({\mathrm{\omega }}_{2}-{\mathrm{\omega }}_{1})(1-{\mathrm{e}}^{-\frac{t-{t}_{1}}{\tau }})\) . Here, ω 1 is the initial rotation frequency, ω 2 is the terminal rotation frequency, \({t}_{1}\) is the time when the 1,020-nm laser is turned on and \({\tau} = 1/{\gamma}\) is the damping time. From the fitting, we determine the damping time. The measured damping times at different pressures are plotted in Fig. 3b . Fig. 3: Ultrasensitive detection of an external torque. a , A spectrogram of the rotation of a trapped silica nanosphere at \(9.4\times 1{0}^{-6}\) torr. After the 1,020-nm laser is turned on at 8 s, the rotation frequency increases until its maximum is reached. The frequency trace is fit to an exponential curve to obtain the rotational damping time. b , The damping time of the rotation as a function of air pressure. The solid line shows \(1/p\) dependence. c , The bottom subfigure shows a rotation spectrogram of the levitated nanosphere under a sinusoidally modulated external torque. Data were collected at \(1.5\times 1{0}^{-5}\) torr. The laser power is modulated with an amplitude of 39 mW (measured before the objective lens) at a frequency of 200 mHz. For comparison, a radio-frequency signal generated by a VCO controlled by the same modulation signal that is used to control the laser power is shown in the top subfigure. d , PSD of the time trace of the rotation frequency. The peak at 200 mHz is due to the modulated external torque. The red dashed line and the blue solid line correspond to a modulation amplitude of 20 mW and no modulation, respectively. e , Torque sensitivity \(\sqrt{{S}_{T}}\) obtained from the rotational PSD with no modulation (blue solid line). The yellow dashed line shows its average value 4.3 × 10 −27 N m Hz −1/2 in the frequency range shown in this figure. f , Measured torque for different modulation amplitudes of the laser power.
When the modulated laser power is 1.1 mW, the measured external torque is as small as \((4.7\pm 3.6)\times 1{0}^{-28}\) N m. Error bar represents standard deviation of measurements. Source data Full size image The external alternating torque M a.c. can be measured by observing the change of the rotational frequency \({\mathrm{\omega }}_{\mathrm{r}}(t)\) as a function of time: $${M}_{\mathrm{a.c.}}={\frac{2}{\Delta t}}{\mathop{\int }\limits_{0}^{\Delta t}}\left({\mathrm{\omega }}_{\mathrm{r}}-{\mathrm{\omega }}_{\mathrm{d.c.}}\right){\sqrt{I^{2}[{\mathrm{\omega }}_{\mathrm{m}}^{2}+{\gamma }^{2}]}}{\sin } ({\mathrm{\omega }}_{\mathrm{m}}t+{\phi} ){\mathrm{d}}t$$ (2) Here, \(\Delta t\) is the measurement time, which should be an integer times the modulation period \(2\uppi /{{\upomega }}_{\mathrm{m}}\) for this equation to hold. The sensitivity of measuring an external torque will be limited by the Brownian motion due to the thermal noise torque. Because of the thermal noise, the minimum external torque that can be measured is \({M}_{\mathrm{min}}=\sqrt{\frac{4{k}_{\mathrm{B}}TI\gamma }{\Delta t}}\) (refs. 28 , 29 ), which is defined when the signal-to-noise ratio equals one 29 , \({k}_{\mathrm{{B}}}\) is the Boltzmann constant and \(T\) is the temperature. Including the effects of the thermal noise, the single-sided PSD of the time-dependent angular velocity \(({\mathrm{\omega }}_{\mathrm{r}}-{\mathrm{\omega }}_{\mathrm{d.c.}})\) of rotation for a measurement time of \(\Delta t\) is $${S}_{\mathrm{r}}{(\mathrm{\omega})}=\frac{4{k}_{\mathrm{B}}T{\gamma} }{I[{{\mathrm{\omega}}}^{2}+{\gamma }^{2}]}+\frac{{M}_{\mathrm{a}.\mathrm{c}.}^{2}{\Delta} t\ {\rm{sinc}}^{2}[{(\mathrm{\omega} -\mathrm{\omega }_{m}}){\Delta} t/2]}{2{I}^{2}[{\mathrm{\omega }}^{2}+{\gamma }^{2}]}$$ (3) Note that \({S}_{\mathrm{{r}}}(\omega )\) can be calculated from the time-dependent rotation frequency \({{\upomega }}_{\mathrm{r}}/2\uppi\) (Fig. 3c ) measured by a spectrum analyser directly. 
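Equation (2) amounts to a lock-in demodulation of the rotation-frequency trace. As a numerical sanity check (with assumed round-number parameters, not the experimental values), one can synthesize the frequency excursion produced by a known alternating torque and verify that the integral recovers it:

```python
import numpy as np

# Demodulation check for equation (2): synthesize the steady-state frequency
# excursion driven by a known alternating torque M_ac, then recover M_ac.
# All parameter values are illustrative stand-ins.

I_rot = 8.8e-33            # moment of inertia, kg m^2 (~150-nm silica sphere)
gamma = 0.08               # rotational damping rate, s^-1
w_m = 2.0 * np.pi * 0.2    # 200-mHz modulation frequency, rad/s
M_ac_true = 1.3e-27        # applied alternating torque, N m
phi = np.arctan2(-w_m, gamma)
denom = np.sqrt(I_rot**2 * (w_m**2 + gamma**2))

dt_total = 100.0           # exactly 20 modulation periods of 5 s
t = np.linspace(0.0, dt_total, 200001)

# omega_r(t) - omega_dc from the driven steady-state solution above equation (2).
excursion = (M_ac_true / denom) * np.sin(w_m * t + phi)

# Equation (2): correlate the excursion with sin(w_m * t + phi) and rescale;
# (2 / dt) * integral over dt is approximated by 2 * time average.
M_ac_est = 2.0 * np.mean(excursion * denom * np.sin(w_m * t + phi))
print(M_ac_est)
```

Substituting the measured damping rate for gamma and the recorded frequency trace for the synthetic one turns this into the estimator behind Fig. 3f; the residual scatter of the estimate sets the minimum resolvable torque.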
This direct calculation is very different from the case of the centre-of-mass motion, where a calibration factor is required to convert a measured voltage signal to the real position 30 . We introduce \({S}_{\mathrm{{noise}}}{\mathrm{(\omega )}}\) as the measured \({S}_{\mathrm{{r}}}(\omega )\) due to noise when there is no modulation ( \({M}_{\mathrm{{a.c.}}}=0\) ). The corresponding torque noise spectral density of this system is \({S}_{T}={I}^{2}({\gamma }^{2}+{\mathrm{\omega }}^{2}){S}_{\mathrm{noise}}{\mathrm{(\omega )}}\) (ref. 6 ). Then the torque detection sensitivity obtained from the PSD without modulation is \(\sqrt{{S}_{T}}\) . The minimum detectable torque is related to the torque sensitivity as \({M}_{\mathrm{{min}}}=\sqrt{{S}_{T}}/\sqrt{\Delta t}\) . If the system is limited by thermal noise, we have \({S}_{T}=4{k}_{\mathrm{{B}}}TI\gamma\) . To measure the external torque exerted on a silica nanosphere by the circularly polarized 1,020-nm laser according to equation ( 2 ), we modulate the laser power with a sinusoidal signal at 200 mHz while we measure the rotational PSD of the nanorotor in real time (bottom subfigure in Fig. 3c ). For reference, we simultaneously monitor a radio-frequency signal generated by a sinusoidally modulated voltage-controlled oscillator (VCO) (top subfigure in Fig. 3c ). We repeat the measurement for different modulation amplitudes. Each measurement takes 100 s. The resulting PSDs of the angular velocity \(({\mathrm{\omega }}_{\mathrm{r}}-{\mathrm{\omega }}_{0})\) are shown in Fig. 3d . Figure 3e shows the torque sensitivity obtained from the PSD when there is no modulation. We can then calculate the external torque using the measured \(({{\omega }_{r}}{-}{{\omega }_{0}})\) and the damping time according to equation ( 2 ). As shown in Fig.
3f , the measured external torque is \((1.3\pm 0.5)\times 1{0}^{-27}\;{\rm{N}}\;{\rm{m}}\) when the modulation amplitude is 3.2 mW, and is as small as \((4.7\pm 3.6)\times 1{0}^{-28}\;{\rm{N}}\;{\rm{m}}\) when the modulation amplitude is 1.1 mW. The minimum resolvable torque corresponds to the standard deviation of measurements due to noise 29 . For small torques ( \(<3\times 1{0}^{-27}\ {\rm{N}}\;{\rm{m}}\) ), the average standard deviation of each measurement is \((4.2\pm 1.2)\times 1{0}^{-28}\ {\rm{N}}\;{\rm{m}}\) for a measurement time of 100 s at \(1.3\times 1{0}^{-5}\) torr. This corresponds to a measured sensitivity of (4.2 ± 1.2) × 10 −27 N m Hz −1/2 , which is comparable to the theoretical thermal-noise-limited sensitivity of 3.3 × 10 −27 N m Hz −1/2 at 300 K at that pressure. The measured sensitivity is several orders of magnitude better than that of the state-of-the-art nanofabricated torque sensor in a cryogenic environment 6 . One important application of our ultrasensitive torque sensor with an optically levitated nanorotor will be to detect the long-sought-after vacuum friction 7 , 8 , 9 , 10 . A fast-rotating neutral nanoparticle can convert quantum and thermal vacuum fluctuations to radiation emission. Because of this, the electromagnetic vacuum behaves like a complex fluid and will exert a frictional torque on a nanorotor 7 , 8 . While there have been many theoretical investigations on vacuum friction, it has not been observed experimentally yet. The vacuum friction is extremely weak in free space 8 , but can be enhanced by a nearby surface with a large local density of electromagnetic states 9 , 10 . We perform numerical calculations to find the suitable conditions to detect the vacuum friction. Our calculations show that the vacuum friction acting on a silica nanosphere rotating at 1 GHz near a flat silica surface will be large enough to be observed under realistic conditions.
Assuming that a nanosphere is located at a distance \(d\) from the surface and rotates at an angular velocity of \(\Omega\) around an axis parallel to the substrate, the vacuum frictional torque is 9 , 10 : $$\begin{array}{lll}{{M}_{\mathrm{vac}}} & {=} & -{\frac{2\hslash }{\uppi }}{\mathop{\int }\limits_{-\infty }^{\infty }}[{\rm{n}}_{1}{{({\mathrm {\omega}} -{\mathrm {\Omega}} )}}-{{\rm{n}}_{0}}{\mathrm{(\omega )}}]\\ & {\times } &{\mathrm{Im}}[{\alpha } {{({\mathrm {\omega}} -{\mathrm {\Omega}} )}}]{\mathrm{Im}}[{\bar{G}}{\mathrm{(\omega )}}]{\mathrm{d}}{\mathrm{\omega }}\end{array}$$ (4) where \({n}_{j}{\mathrm{(\omega )}}={[\exp (\hslash {\mathrm{\omega}} /{k}_{\mathrm{B}}{T}_{j})-1]}^{-1}\) is the Bose–Einstein distribution function at temperature \({T}_{j}\) . For simplicity, we assume the temperatures of the nanosphere ( \({T}_{1}\) ) and the substrate ( \({T}_{0}\) ) are the same in our calculation, \(\alpha (\omega )\) is the electrical polarizability of the nanosphere, \(\bar{G}(\omega )=[{G}_{xx}(\omega )+{G}_{yy}(\omega )]/2\) , where \({G}_{xx}\) and \({G}_{yy}\) are electromagnetic Green tensor components 9 and \(z\) is the axis of rotation (see Supplementary Fig. 1 ). From equation ( 4 ), we can find that the imaginary part of the polarizability contributes to the vacuum friction (see Supplementary Fig. 2 ). We calculate the vacuum frictional torque on a 75-nm-radius silica nanosphere near three different substrates (silica, Si 3 N 4 and SiC) at different separations and temperatures (Fig. 4 ). These materials support phonon polaritons. Their dielectric functions can be described by the Drude–Lorentz model (see Supplementary Table 1 ) 9 , 10 , 31 . For rotation frequencies much smaller than \({k}_{\mathrm{{B}}}{T}_{j}/\hslash\) , \({M}_{\mathrm{{vac}}}\) is proportional to the rotation frequency (see Supplementary Fig. 3 ). Inspired by our experimental results, we assume the rotation frequency to be 1 GHz in the calculation. As shown in Fig. 
4a , a silica surface will give the largest vacuum friction for a rotating silica nanosphere because their phonon polariton modes match. The vacuum frictional torque can be close to \(1{0}^{-27}\) N m at small separations, which is comparable to what we have measured in this experiment. Smaller torques can be measured at lower pressures and for longer times. We also calculated the air damping torque on a rotating nanosphere due to residual air molecules in the vacuum chamber at different pressures. The vacuum friction increases when the temperature increases, while the air damping torque decreases when the temperature increases if the air pressure is constant. At \(1{0}^{-9}\) torr, the vacuum frictional torque is larger than the air damping torque when the temperature of the substrate and the nanosphere is larger than 350 K at 200-nm separation, or larger than 590 K at 300-nm separation (Fig. 4b ). These temperatures should be easy to achieve. Similar pressures have also been achieved in levitation experiments 15 , 32 . With recent experimental demonstrations of optical levitation of nanospheres less than 400 nm away from surfaces in a vacuum 33 , 34 , we expect that fast rotation can also be achieved near a surface. Therefore, the detection of the vacuum friction with an optically levitated nanorotor torque sensor will be feasible under realistic conditions. Fig. 4: Calculated vacuum friction on a rotating nanosphere near a surface. a , Calculated vacuum frictional torque on a rotating silica nanosphere as a function of the separation between the nanosphere and a substrate. From the top down, the curves are for silica, Si 3 N 4 and SiC substrates. In the calculations, the radius of the silica nanosphere is 75 nm, its rotation frequency is 1 GHz and the temperature is 1,000 K. Inset: schematic of a rotating nanosphere near a substrate. 
b , Comparison of the vacuum frictional torque (solid lines) and the air damping torque (dashed lines) on a rotating silica nanosphere near a silica substrate as a function of temperature. The blue and red solid curves are vacuum frictional torques at 200- and 300-nm separations, respectively. The green and orange dashed curves are air damping torques at \(1{0}^{-9}\) and \(1{0}^{-10}\) torr, respectively. Other parameters are the same as those for a . Source data Full size image In conclusion, we have demonstrated an ultrasensitive torque sensor with an optically levitated nanorotor in vacuum. We demonstrate a torque detection sensitivity of (4.2 ± 1.2) × 10 −27 N m Hz −1/2 and achieve a record-high rotational speed exceeding 5 GHz for a nanorotor. The measured torque sensitivity of our system at room temperature is several orders of magnitude better than that of the state-of-the-art nanofabricated torque sensor at millikelvin temperatures 6 . Our system should be suitable for detecting the vacuum friction 7 , 8 , 9 , 10 . If the rotating nanoparticle contains an electron spin (for example, a diamond nitrogen-vacancy centre), the quantum geometric phase 11 can be observed. It can also be used to study nanoscale magnetism, especially the Einstein–de Haas effect and the Barnett effect 3 . Methods Silica nanoparticles are diluted in water and launched into air by an ultrasonic nebulizer. Once a nanoparticle is trapped by a linearly polarized 1,550-nm optical tweezer, the air pressure is lowered to about 0.001 torr to remove unwanted nanoparticles in the vacuum chamber. Subsequently, the pressure is increased to 10 torr, where the geometry of the trapped nanoparticle is verified by the ratio of damping coefficients along different directions. After a nanosphere or a nanodumbbell is verified, the polarization of the trapping laser is changed from linear to circular and the pressure is lowered.
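A side note on equation (4): the statement that M_vac is proportional to the rotation frequency when that frequency is much smaller than k_B*T/hbar follows from linearizing the Bose-Einstein occupation difference. A quick numerical check in reduced units (x = hbar*omega/(k_B*T); the sample frequency x = 0.5 is arbitrary):

```python
import numpy as np

# The occupation difference entering equation (4), in reduced units where
# x = hbar * omega / (k_B * T). For a small rotation frequency W (same units),
# n(x - W) - n(x) ~ -W * n'(x), so the integrand, and hence the torque,
# scales linearly with the rotation frequency.

def n_be(x):
    """Bose-Einstein occupation 1 / (exp(x) - 1)."""
    return 1.0 / np.expm1(x)

x = 0.5                        # an arbitrary representative photon frequency
ratios = []
for W in (1e-3, 1e-4, 1e-5):   # reduced rotation frequencies, all << 1
    diff = n_be(x - W) - n_be(x)
    ratios.append(diff / W)    # converges to -n'(x), independent of W

print(ratios)
```

At a rotation frequency of 1 GHz and T of at least 300 K, hbar*Omega/(k_B*T) is of order 10^-4, so this linearization is comfortably valid for the regime considered in Supplementary Fig. 3.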
Balanced photodetectors are used to monitor the centre-of-mass, torsional and rotational motion of a trapped nanoparticle. The rotational signal from the balanced photodetector is sent to a radio-frequency amplifier and then to a spectrum analyser that monitors the rotation in real time. We typically collect the rotational spectrum of the nanoparticle every 10 ms. An additional 1,020-nm laser is aligned to the trapped nanoparticle to apply an external torque on the nanoparticle. For the measurement of the external torque, the power of the 1,020-nm laser is altered with a 200-mHz sinusoidal modulation voltage and the spectrogram of the rotation signal is collected for 100 s in each measurement. To observe the modulation voltage with the spectrum analyser, the modulation voltage is connected to a VCO. The VCO generates a frequency-modulated radio-frequency signal depending on the modulation voltage. The radio-frequency signal generated by the VCO and the rotational signal of the nanoparticle are detected by the spectrum analyser simultaneously. We then perform a Fourier transform and calculate the PSD of the collected spectrogram of the rotation frequency of the nanoparticle. The external torque is then obtained from the rotational frequency as a function of time following equation ( 2 ). Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding author on reasonable request. | A team of physicists at Purdue University has built the most sensitive torque measuring device ever. In their paper published in the journal Nature Nanotechnology, the team describes their new device and outlines how it might be used. Torque is a twisting force that often leads to rotation. Devices built to measure torque in a system take many forms and come in many sizes. In recent years, scientists have been working on ways to downsize torque sensors with the goal of measuring very small amounts of torque.
Tiny devices that use nanofabrication and cryogenic cooling have been developed to study such things as the Casimir effect and small-scale magnetism. Prior to this new effort, the most sensitive torque sensor had achieved a sensitivity of 2.9 × 10−24 N m Hz−1/2 at millikelvin temperatures. The team at Purdue set themselves the goal of breaking that record. The new device consisted of a silica nanoparticle suspended inside a vacuum chamber by a 500-mW, 1,550-nm laser beam. The team applied torque to the nanoparticle by firing a pulsating, circularly polarized 1,020-nm laser beam at it for 100 seconds at a time. The researchers used a quarter waveplate to control polarization. The rotating waves in the electromagnetic beam imparted a twisting action on the nanoparticle, making it spin at 300 billion rpm—the fastest man-made rotor ever built. The team was able to measure the amount of torque in the device by measuring how much the particle's spin speed changed during the on and off cycles using an optical sensor. The researchers point out that their system, unlike others being developed, did not require intricate nanofabrication. Using the device, the researchers were able to measure torques smaller than 10−27 newton-meters—making it approximately 700 times as sensitive as the previous record holder. They claim that their device will be the first to measure vacuum friction—in which quantum mechanics suggests an object spinning in a vacuum experiences drag due to electromagnetic fields that constantly appear and disappear. The team also claims that the device could be used for nanoscale magnetism research and for studying the quantum geometric phase. | 10.1038/s41565-019-0605-9 |
Medicine | Schizophrenia and memory deficits: Solving the mystery behind a most stubborn symptom | Impaired hippocampal place cell dynamics in a mouse model of the 22q11.2 deletion, Nature Neuroscience (2017). DOI: 10.1038/nn.4634 Journal information: Nature Neuroscience | http://dx.doi.org/10.1038/nn.4634 | https://medicalxpress.com/news/2017-09-schizophrenia-memory-deficits-mystery-stubborn.html | Abstract Hippocampal place cells represent the cellular substrate of episodic memory. Place cell ensembles reorganize to support learning but must also maintain stable representations to facilitate memory recall. Despite extensive research, the learning-related role of place cell dynamics in health and disease remains elusive. Using chronic two-photon Ca 2+ imaging in hippocampal area CA1 of wild-type and Df(16)A +/− mice, an animal model of 22q11.2 deletion syndrome, one of the most common genetic risk factors for cognitive dysfunction and schizophrenia, we found that goal-oriented learning in wild-type mice was supported by stable spatial maps and robust remapping of place fields toward the goal location. Df(16)A +/− mice showed a significant learning deficit accompanied by reduced spatial map stability and the absence of goal-directed place cell reorganization. These results expand our understanding of the hippocampal ensemble dynamics supporting cognitive flexibility and demonstrate their importance in a model of 22q11.2-associated cognitive dysfunction. Main Episodic memory, the encoding of personal experience organized in space and time, is a fundamental aspect of cognition 1 . Episodic memory dysfunctions are highly debilitating symptoms of various neurological, cognitive and psychiatric disorders, including schizophrenia (SCZ) 2 . Cognitive deficits in general appear to be the strongest predictor of SCZ patients' functional outcomes 3 ; however, neural circuit dynamics supporting episodic memory and the manner in which they fail in SCZ remains poorly understood. 
To this end, we studied a well-characterized animal model of cognitive dysfunction and SCZ, the Df(16)A +/− mouse model of the 22q11.2 deletion syndrome (22q11.2DS) 4 . The well-documented role of the hippocampus in episodic and spatial memory 1 , 5 , 6 , 7 , combined with morphological and functional alterations of the hippocampus in SCZ patients 8 , 9 , collectively points to a central role of this brain area in the pathophysiology of cognitive memory deficits in SCZ 10 . In particular, physiological and morphological alterations have been reported specifically in area CA1—the hippocampal output node—in SCZ patients 11 , suggesting a potentially primary role for this area in disease pathophysiology. Principal cells throughout the hippocampus are selectively active in specific locations within an environment (place cells) 12 . Place cells collectively form cognitive maps representing spatial components of episodic memories 6 , 13 , the long-term stability of which is a widely posited prerequisite for reliable learning 14 , 15 , 16 , 17 , 18 . Place cell map stability is affected by attentional and task demands, and place cell maps also incorporate goal-related information during learning 15 , 19 , 20 , 21 , 22 , 23 , 24 , 25 . In particular, reorganizing place cell maps to enrich goal locations was found to predict memory performance 26 . Therefore, monitoring place cell ensemble dynamics during goal-directed learning may provide a tractable entry point for understanding how episodic memory deficits arise from genetic mutations associated with SCZ. Two-photon Ca 2+ imaging in awake mice during head-fixed behaviors allows for the chronic recording of physiological activity from individual place cells, as well as their ensemble activity as a whole.
By tracking the activity of place cell populations in Df(16)A +/− mice and wild-type (WT) littermates through each phase of a goal-oriented learning task, we identified specific aspects of place cell map stability that evolved with learning, as well as alterations in the stability and plasticity of these cognitive maps in the mutant mice. Our findings highlight reduced stability and impaired goal-directed reorganization of hippocampal place cells as fundamental components of 22q11.2-deletion-linked cognitive dysfunction. Results Df(16)A +/− mice are impaired in a head-fixed goal-oriented learning task upon changes in both context and reward location To facilitate chronic recording from hippocampal CA1 place cells during learning, we designed a head-fixed variation of goal-oriented learning (GOL; Fig. 1a,b and Online Methods ) tasks that have been previously used in freely moving rodents 26 , allowing for chronic two-photon functional Ca 2+ imaging. Our task consisted of three sessions per day, with 3 days (d) for each of three conditions (27 total sessions per mouse). In Condition I, mice learned a single fixed reward location, then remembered that location while the environmental context and local cues were altered (Online Methods ) in Condition II, and the reward was moved in Condition III. Figure 1: Differences in learning performance between Df(16)A +/− and WT mice in GOL task. ( a ) The three conditions of the GOL task. Mice spend 3 d in each condition. Contexts A and A′ are composed of different auditory, visual, olfactory and tactile cues (Online Methods ), varied between Condition I and Condition II. The location of the hidden reward (blue circles, Rew 1 and Rew 2) is switched between Condition II and Condition III. 
Water-deprived mice trained to run on a linear treadmill were introduced on the first day of the experiment to a novel environmental context (Context A), consisting of a feature-rich fabric belt and a specific background of nonspatial odor, tones and blinking light patterns. Operant water rewards were available at a single unmarked location on the belt (Rew 1 in Conditions I and II; Rew 2 in Condition III); water was delivered only when the mouse licked within the reward location, and licks elsewhere went unrewarded (Condition I, 3 d and 3 sessions per d). The time of each lick as well as the position of the mouse on the treadmill were recorded, both to determine when to deliver water rewards and to provide a readout of learning. To test the ability of mice to adjust to changes in the task conditions, mice were exposed to an altered context (Context A′: same sequence of belt materials, shuffled local cues, different nonspatial odor, tone and light; Online Methods ), while maintaining the same reward location relative to the belt fabric sequence (Condition II, 3 d and 3 sessions per d). During the last part of the task, the location of the hidden reward was changed while maintaining the familiar context from Condition II (Condition III, 3 d and 3 sessions per d). ( b ) Example histograms of lick counts by position for a WT mouse (blue) and a Df(16)A +/− mouse (red) on the first and last days of each condition. Green bars, reward locations. As the mice learned the reward location, they switched from exploratory licking along the entire belt to focused licking only at the reward location, suppressing licking elsewhere, so that licking became increasingly specific for the reward location. 
( c ) Learning performance of WT and Df(16)A +/− mice based on fraction of licks in the reward zone ( n = 6 mice per genotype, main effects described in Results section, post hoc tests with Benjamini-Hochberg correction, Condition I, two-way mixed-design RM ANOVA, main effect of day: F 2,20 = 28.235, P < 0.0001; Condition II, two-way mixed-design RM ANOVA, main effect of genotype: F 1,10 = 6.297, P = 0.031; main effect of day: F 2,20 = 4.076, P = 0.033; Day 1: P = 0.015 ; Condition III, two-way mixed-design RM ANOVA, main effects of day: F 2,20 = 15.762, P < 0.0001; main effect of genotype: F 1,10 = 7.768, P = 0.019; day × genotype interaction P = 0.932: n.s.; Day 3: P = 0.022). Error bars represent s.e.m. ( d ) Learning performance for WT and Df(16)A +/− mice on the first and last session of each day by Condition. Across all conditions, both genotypes performed better at the end of the day. During Condition I, WT and Df(16)A +/− mice performed similarly throughout the day (main effects described in Results section), while in Condition II, Df(16)A +/− mice were more impaired at the start of the day ( post hoc tests with Benjamini-Hochberg correction: two-way mixed-design RM ANOVA, main effect of session: F 1,10 = 40.506, P < 0.0001; genotype × session interaction: F 1,10 = 6.404, P = 0.030; main effect of genotype, P = 0.213: n.s.), and in Condition III they additionally never reached WT levels ( post hoc tests with Benjamini-Hochberg correction: two-way mixed-design RM ANOVA, main effect of genotype: F 1,10 = 6.433, P = 0.030; main effect of session: F 1,10 = 53.237, P < 0.0001; genotype × session interaction P = 0.085: n.s.). Center line in box plot is the median, the top and bottom of the box denote the 1st and 3rd quartile of the data, respectively, and the whiskers mark the full range of the data. * P < 0.05. 
Full size image Our overall analysis of behavior during the GOL task revealed general differences among genotypes, which developed in a condition-dependent manner (linear mixed-effects model, fixed effects for genotype and condition, day nested under condition as covariate, mouse as random effect; significant effect of: genotype: F 1,83.049 = 5.675, P = 0.019; genotype × Condition interaction: F 2,82.988 = 5.450, P = 0.006; Condition: F 2,82.988 = 23.554, P < 0.0001). Specifically, the initial analysis revealed significant differences for Conditions II ( P = 0.028) and III ( P = 0.0008) among the two genotypes (for formulas, see Online Methods ). We found that both Df(16)A +/− mice and WT littermates had a similar ability to learn the initial location of the hidden reward, as assessed by the suppression of unrewarded licks outside of the reward zone and an increase in the fraction of licks within the reward zone ( post hoc tests for Condition I with Benjamini-Hochberg correction: two-way mixed-design repeated-measures (RM) ANOVA, main effect of day: F 2,20 = 28.235, P < 0.0001; genotype × day interaction, P = 0.608 and main effect of genotype, P = 0.319: nonsignificant (n.s.); Fig. 1b,c and Supplementary Fig. 1 ). During this initial learning period, both WT and Df(16)A +/− mice explored the task at similar levels ( Supplementary Fig. 2 ). A change in the environmental context (Condition II) had no detectable effect on WT animals, as their learning of the reward location in the new context continued to improve until their performance plateaued. However, in the Df(16)A +/− mice, task performance dropped on the first day and was overall worse than WT mice during Condition II ( post hoc tests with Benjamini-Hochberg correction: two-way mixed-design RM ANOVA, main effect of genotype: F 1,10 = 6.297, P = 0.031; main effect of day: F 2,20 = 4.076, P = 0.033; day 1: P = 0.015; day × genotype interaction: P = 0.239, n.s.; Fig. 1b,c ). 
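The behavioral readout used throughout this analysis, the fraction of licks falling inside the reward zone, amounts to a circular-distance computation on belt coordinates. The sketch below is an illustrative reconstruction, not the authors' analysis code; the function name, belt length and zone half-width are hypothetical.

```python
import numpy as np

def fraction_licks_in_reward_zone(lick_positions, reward_center, zone_halfwidth, belt_length):
    """Fraction of licks falling inside the reward zone on a circular belt.

    All positions are in belt coordinates [0, belt_length); the zone is
    reward_center +/- zone_halfwidth and wraps around the belt seam.
    """
    lick_positions = np.asarray(lick_positions, dtype=float)
    # Circular distance from each lick to the reward center.
    d = np.abs(lick_positions - reward_center)
    d = np.minimum(d, belt_length - d)
    return float(np.mean(d <= zone_halfwidth))

# Hypothetical numbers: a 200 cm belt, reward at 50 cm, a +/-10 cm zone.
licks = [45.0, 52.0, 58.0, 120.0, 199.0]
print(fraction_licks_in_reward_zone(licks, 50.0, 10.0, 200.0))  # 3 of 5 licks in zone -> 0.6
```

A mouse that has learned the reward location suppresses licks elsewhere, driving this fraction toward 1.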
Although not significant, the drop in performance and increase in variability on the third day of Condition II for WT and Df(16)A +/− mice might have arisen from differences in attention or motivation. By this point mice had been training in the task for 6 consecutive days and were performing relatively well, so the task had become familiar and it is possible that they were less water-deprived and thus overall less motivated. Finally, changing the reward location while maintaining a familiar context (Condition III) challenged Df(16)A +/− mice to a greater degree than WT animals, as they were significantly impaired in acquiring the new reward location ( post hoc tests with Benjamini-Hochberg correction, Condition III, two-way mixed-design RM ANOVA, main effects of day: F 2,20 = 15.762, P < 0.0001; main effect of genotype: F 1,10 = 7.768, P = 0.019; day × genotype interaction: P = 0.932, n.s.; day 3: P = 0.022; Fig. 1b,c ). Thus, although Df(16)A +/− mice were initially able to perform a spatially guided reward task, learning deficits were revealed by manipulation of task parameters, specifically the environmental context or the reward location. Assessment of learning by anticipatory licking, although a much less sensitive behavioral readout, revealed the same pattern of learning performance and differences among the two genotypes ( Supplementary Fig. 3 ). We noticed during the task that Df(16)A +/− mice appeared to be relatively more impaired at the start of each day, so to identify differences in the overnight consolidation of the task memory, we compared task performance at the beginning and the end of the day during each condition ( Fig. 1d ). 
Overall, there was a strong effect of the first versus last session of the day and a significant genotype–condition interaction, as previously detected (linear mixed-effects model, fixed effects for condition, session and genotype, session as covariate, random effect for mouse; genotype effect: F 1,90.473 = 7.638, P = 0.007; session effect: F 1,159.244 = 22.612, P < 0.0001; genotype × condition interaction: F 2,90.347 = 3.732, P = 0.028). During Condition I, in which we observed no learning deficit in the Df(16)A +/− mice, both WT and Df(16)A +/− mice performed at comparable levels throughout the day, and both performed better at the end of the day than at the beginning (Condition I, post hoc tests with Benjamini-Hochberg correction: two-way mixed-design RM ANOVA, main effect of session: F 1,10 = 6.901, P = 0.025; main effect of genotype, P = 0.437 and genotype × session interaction, P = 0.975, n.s.). During Condition II, in which we observed an overall decrease in task performance in the Df(16)A +/− mice, we found that the Df(16)A +/− animals performed more poorly on the first session of each day, before reaching WT performance levels by the end of the day (Condition II, post hoc tests with Benjamini-Hochberg correction: two-way mixed-design RM ANOVA, main effect of session: F 1,10 = 40.506, P < 0.0001; genotype × session interaction: F 1,10 = 6.404, P = 0.030; main effect of genotype: P = 0.213, n.s.). Finally, during Condition III, in which we observed the most robust learning deficit in the Df(16)A +/− mice, we found that Df(16)A +/− mice performed significantly worse throughout the entire day (Condition III, post hoc tests with Benjamini-Hochberg correction: two-way mixed-design RM ANOVA, main effect of genotype: F 1,10 = 6.433, P = 0.030; main effect of session: F 1,10 = 53.237, P < 0.0001, genotype × session interaction: P = 0.085, n.s.). 
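The post hoc comparisons reported throughout are corrected with the Benjamini-Hochberg procedure. A minimal sketch of that standard step-up false-discovery-rate correction (not the authors' implementation) is:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up Benjamini-Hochberg procedure: return a reject flag per p value.

    Sort p values ascending, find the largest rank k with p_(k) <= (k/m) * alpha,
    and reject every hypothesis whose p value is at or below p_(k).
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Four hypothetical p values; only the smallest survives correction here.
print(benjamini_hochberg([0.001, 0.04, 0.03, 0.2]))  # [True, False, False, False]
```

Unlike Bonferroni, the procedure controls the expected fraction of false discoveries rather than the probability of any false positive, which is why it is the usual choice for families of post hoc tests like these.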
Collectively, these results indicate that deficits in overnight consolidation likely contributed to the differences we observed between genotypes. Differences in place cell properties at the neuronal population level in the Df(16)A +/− mice We used two-photon Ca 2+ imaging of large neuronal populations in the CA1 pyramidal layer during the GOL task to assess the basic coding properties of place cells ( Fig. 2a,b ). Spatially tuned Ca 2+ transients ( Fig. 2c ) were detected in both WT and Df(16)A +/− mice, but we found that the fraction of identified neurons that exhibited place cell properties was about 25% smaller in Df(16)A +/− than in WT mice across all sessions (place cell fraction: averaged by mouse, independent samples t test, t = 1.620, P = 0.140; linear mixed-effects model with mouse as random factor F 1,10.917 = 3.086, P = 0.107; Fig. 2d ). This effect was not driven by a silent fraction of cells in the Df(16)A +/− mice or by differences in our sampling of the pyramidal cells between the genotypes, as the available place cell population was similar cumulatively over all imaging sessions (lifetime place coding: P = 0.244; Supplementary Fig. 4a ), but individual cells were identified as place cells in fewer sessions (fraction of sessions a place cell: P < 0.0001; Supplementary Fig. 4b ). 
Furthermore, the spatial tuning of individual place cells in Df(16)A +/− mice was less diffuse, as indicated by differences in the number of place fields per place cell (place fields per place cell: WT, 1.180 ± 0.004, n = 12,571, place cells × sessions; Df(16)A +/− , 1.110 ± 0.004, n = 7,683, place cells × sessions; linear mixed-effects model with number of place fields and genotype as fixed factors and mouse ID as random factor: number of place fields × genotype interaction: F 3,38.000 = 5.054, P = 0.005; genotype effect for single place field: P = 0.0037; two fields per place cell: P = 0.010; three fields per place cell: P = 0.755, see formula in Online Methods ; Fig. 2e ), slightly narrower place fields (place field width: linear mixed-effects model with mouse as random factor: F 1,11.164 = 4.371, P = 0.060; Fig. 2f and Supplementary Fig. 4e ), less variability in Ca 2+ transient firing locations (circular variance; inset averaged by mouse, Welch's t test, t = 2.327, P = 0.0491; linear mixed-effects model with mouse as random factor: F 1,11.006 = 5.695, P = 0.036; Fig. 2g ) and less out-of-field firing (transient specificity: P < 0.0001; Supplementary Fig. 4d ). Figure 2: Altered place cell properties in Df(16)A +/− mice. ( a ) Schematic of head-fixed behavioral setup. Two-photon objective, 2-p obj. ( b ) Mice were injected with AAV1/2( Synapsin-GCaMP6f ) (rAAV( GCaMP6f )) in dorsal hippocampal area CA1 to express the genetically encoded Ca 2+ indicator GCaMP6f in neurons located in the CA1 pyramidal layer. Mice were then implanted with a head-post and imaging window to provide long-term optical access to the CA1 pyramidal layer. Left: schematic of two-photon Ca 2+ imaging in the CA1 pyramidal layer. Right: representative two-photon fields of view across the pyramidal layer showing cross-sections of GCaMP6f-expressing cell bodies from a WT mouse (middle) and a Df(16)A +/− mouse (right). 
We chronically imaged 179–621 regions of interest (ROIs; Online Methods ) corresponding to cell bodies in each field of view. ( c ) Left: GCaMP6f Ca 2+ fluorescence (Δ F / F ) traces from two example spatially tuned CA1 place cells in WT and Df(16)A +/− mice during 10-min sessions. Significant Ca 2+ transients are highlighted in blue or red, and treadmill position is shown below the traces. Middle: polar trajectory plots showing significant running-related transients for the same example cells. Animals' position (angle) over time (radius), gray; onset times of significant running-related calcium transients, colored circles. Shaded slices denote place fields. Right: transient vector plots showing the position (angle) and occupancy-normalized weight of each running-related transient (radius), as used to calculate occupancy-normalized transient rate histograms and transient circular variance. Green lines, transient resultant vector (magnitude = 1 – circular variance). ( d – g ) Compared to WT, Df(16)A +/− mice had ( d ) a smaller fraction of cells per experiment with significant spatial information (place cell fraction: WT: 0.2553 ± 0.0109, n = 124 sessions; Df(16)A +/− : 0.1924 ± 0.0079, n = 98 sessions; P < 0.0001; inset: averaged by mouse, independent samples t test, t = 1.620, P = 0.140; linear mixed-effects model with mouse as random factor: F 1,10.917 = 3.086, P = 0.107), ( e ) fewer multipeaked place cells (place fields per place cell; WT: 1.180 ± 0.004, n = 12,571 PC × sessions; Df(16)A +/− : 1.110 ± 0.004, n = 7,683 PC × sessions; linear mixed-effects model with number of place fields and genotype as fixed factors and mouse as random factor: number of place fields × genotype interaction: F 3,38.000 = 5.054, P = 0.005 ; genotype effect for single place field, P = 0.0037; for two fields per PC, P = 0.010; for three fields per PC, P = 0.755), ( f ) narrower place fields (place field width; WT: 32.531 ± 0.135, n = 12,571 PC × sessions; Df(16)A +/− : 29.532 ± 
0.144, n = 7,683 PC × sessions; linear mixed-effects model with mouse as random factor: F 1,11.164 = 4.371, P = 0.060; dashed vertical lines indicate means) and ( g ) lower circular variance (WT: 0.310 ± 0.0013, n = 43,068 cells × sessions; Df(16)A +/− : 0.189 ± 0.0014, n = 27,397 cells × sessions, linear mixed-effects model with mouse as random factor: F 1,11.006 = 5.695, P = 0.036; inset: averaged by mouse, Welch's t test, t = 2.327, P = 0.0491). * P < 0.05, ** P < 0.01. Full size image Spatial map is less stable in Df(16)A +/− compared to WT mice To examine the evolution of spatial maps throughout the GOL task, we repeatedly imaged the same populations of individually identified neurons throughout the 27 sessions of the GOL task (cells per mouse, mean ± s.d.; WT: 463 ± 37, n = 6 mice; Df(16)A +/− : 479 ± 84, n = 5 mice) and looked at two aspects of stability: place cell population stability (recurrence probability: probability of a cell being identified as a place cell in paired sessions) and individual pyramidal cell firing stability (centroid shift: distance between centroid of firing in paired sessions). Combining all conditions and sessions, we found that individual place cells recurred from day to day significantly above chance levels in WT and Df(16)A +/− mice (recurrence probability: WT versus shuffle: P < 0.0001; Df(16)A +/− versus shuffle: P < 0.0001; Fig. 3a ), but a significantly smaller fraction of place cells re-occurred from day to day in Df(16)A +/− than WT mice (WT versus Df(16)A +/− : independent samples t test, t = 5.72, P < 0.0001; aggregated by mouse, independent samples t test, t = 2.611, P = 0.028; Fig. 3a ). 
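The two stability metrics defined above, recurrence probability and centroid shift, can be sketched as follows. Positions are assumed to be expressed as fractions of the circular belt, matching how centroid shifts are reported in Fig. 3d; the function names and toy values are hypothetical, not the authors' code.

```python
def recurrence_probability(place_cells_a, place_cells_b):
    """Fraction of session-A place cells that are again place cells in session B."""
    a, b = set(place_cells_a), set(place_cells_b)
    return len(a & b) / len(a)

def centroid_shift(pos_a, pos_b, belt_length=1.0):
    """Circular distance between a cell's tuning centroids in two sessions,
    in the same units as belt_length (here: fraction of the belt)."""
    d = abs(pos_a - pos_b) % belt_length
    return min(d, belt_length - d)

# Toy cell IDs: 2 of the 5 day-1 place cells recur on day 2.
print(recurrence_probability([1, 2, 3, 4, 5], [2, 4, 6, 8]))  # 0.4

# A centroid moving from 0.95 to 0.05 crosses the belt seam: the shift is
# the shorter way around (about 0.1 of the belt), not 0.9.
print(centroid_shift(0.95, 0.05))
```

A shuffle of cell identities gives the chance level for both metrics, which is what the dotted lines in Fig. 3 represent.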
This decreased overlap in place cell population in the Df(16)A +/− mice was primarily driven by decreased stability overnight, as this difference was not observed within a day from session to session (linear mixed-effects model with genotype and elapsed time as fixed effects and mouse as random effect; genotype × elapsed time interaction: F 1,145.754 = 5.858, P = 0.017; post hoc analysis, WT versus Df(16)A +/− , session-to-session: F 1,10.659 = 0.664, P = 0.433; day-to-day: F 1,10.086 = 20.534, P = 0.001, significant after Benjamini-Hochberg correction; Fig. 3b ), again suggesting a disruption in overnight consolidation, as seen with the Df(16)A +/− behavioral performance ( Fig. 1d ). Figure 3: Disrupted stability of place cell population in Df(16)A +/− compared to WT mice. ( a ) Top: example of place cell recurrence. In a given field of view, a subset of all cells has significant spatial tuning each day (place cells, green). The overlap in this population is the recurrence probability (40% in this example). Bottom: distribution of recurrence fractions from day to day for WT and Df(16)A +/− mice for all sessions (dotted line is cell-identity shuffle distribution: WT: 0.456 ± 0.015, n = 74 sessions, Df(16)A +/− : 0.327 ± 0.017, n = 59 sessions, shuffle: 0.229 ± 0.009, n = 133 sessions; WT vs. shuffle: Welch's t test, t = 12.64, P < 0.0001; Df(16)A +/− vs. shuffle: Welch's t test, t = 5.124, P < 0.0001; WT vs. Df(16)A +/− : independent samples t test, t = 5.72, P < 0.0001) and aggregated by mouse (inset; horizontal dotted line is cell-identity shuffle: WT vs. Df(16)A +/− : independent samples t test, t = 2.611, P = 0.028). 
( b ) Mean fraction of cells that reoccur as place cells from session to session (S-S) or day to day (D-D) for WT and Df(16)A +/− mice (dotted line is mean place cell fraction; linear mixed-effects model with genotype and elapsed time as fixed effects and mouse ID as random effect; genotype × elapsed time interaction: F 1,145.754 = 5.858, P = 0.017; post hoc analysis, WT vs. Df(16)A +/− , S-S: F 1,10.659 = 0.664, P = 0.433; D-D: F 1,10.086 = 20.534, P = 0.001, significant after Benjamini-Hochberg correction). ( c ) Correlation of place cell recurrence with performance throughout the task. Solid lines, linear regression fit; shaded regions, 95% confidence intervals calculated from bootstrap resampling (Pearson's correlation coefficient, WT: 0.288, P = 0.013; Df(16)A +/− : 0.416, P = 0.001; WT correlation vs. Df(16)A +/− correlation, Fisher z -transformation of correlations, general linear model (GLM), univariate ANOVA: genotype × z recurrence probability interaction: F 1,132 = 0.599, P = 0.440; alternatively: linear mixed effects model with genotype as fixed effect, recurrence probability as covariate and mouse ID as random effect: genotype × recurrence probability interaction: F 1,129 = 1.083, P = 0.300; recurrence effect: F 1,129.000 = 18.197, P < 0.0001). ( d ) Top: preferred spatial tuning is represented as vectors where the angle is the position on the treadmill of maximal activity. Across three sessions (green, blue and orange lines), spatial preference is generally stable (green to blue sessions), though salient events or changes to the environment can induce remapping (blue to orange sessions). The centroid shift is the angle between these vectors, represented as the fraction of the belt. Bottom: distribution of mean centroid shift from day to day per session (dotted line is cell-identity shuffled distribution: WT: 0.204 ± 0.003, n = 74 sessions, Df(16)A +/− : 0.224 ± 0.003, n = 59 sessions, shuffle: 0.242 ± 0.002, n = 133; WT vs. 
shuffle: independent sample t test, t = −9.42, P < 0.0001; Df(16)A +/− vs. shuffle: independent samples t test, t = −4.25, P < 0.0001; WT vs. Df(16)A +/− : independent samples t test, t = −4.71, P < 0.0001) and aggregated by mouse (inset; horizontal dashed line is cell-identity shuffle; independent samples t test, t = 2.58, P = 0.0295). ( e ) Mean centroid shift from S-S or D-D for WT and Df(16)A +/− mice (dotted line is mean centroid shift, linear mixed-effects model with genotype and elapsed time as fixed effects and mouse ID as random effect, genotype × elapsed time interaction: F 1, 38,078.993 = 15.042, P < 0.0001; post hoc analysis, WT vs. Df(16)A +/− , S-S: F 1,11.137 = 0.303, P = 0.593; D-D, F 1,10.577 = 8.724, P = 0.014, significant after Benjamini-Hochberg correction). ( f ) Correlation of mean day-to-day stability with performance throughout the task. Solid line and shaded regions as in d (Pearson's correlation coefficient, WT: −0.306, P = 0.008; Df(16)A +/− : −0.218, P = 0.097; WT correlation vs. Df(16)A +/− correlation, Fisher z transformation of correlations, GLM, univariate ANOVA: genotype × R stability–probability interaction: F 1,132 = 0.268, P = 0.605; linear mixed effects model with genotype as fixed effect, centroid shift as covariate and mouse ID as random effect: genotype × centroid shift: F 1,133.000 = 0.001, P = 0.982; centroid shift: F 1,133.000 = 8.804, P = 0.004). In b and e , center line in box plot is the median, the top and bottom of the box denote the 1st and 3rd quartile of the data, respectively, and the whiskers mark the full range of the data. ( g – i ) Task performance and population stability by genotype follow similar trajectories across conditions. Error bars represent s.e.m. of total number of sessions by mouse. Bonferroni-corrected post hoc tests comparing genotype per condition. 
( g ) Fraction of licks in the reward zone by condition (two-way ANOVA, main effect of genotype P < 0.0001, main effect of condition P < 0.0001, genotype × condition interaction: P < 0.0001; post hoc comparisons: Condition II, P = 0.011; Condition III, P < 0.001). ( h ) Recurrence probability by condition (linear mixed-effects model with condition and genotype as fixed effects and mouse as random effect, genotype effect: F 1,11.084 = 7.293, P = 0.021, genotype × condition interaction: P = 0.083; post hoc comparisons: Condition III, P = 0.004). ( i ) Mean centroid shift by condition (linear mixed-effects model as before, genotype effect: F 1,10.107 = 6.771, P = 0.026). * P < 0.05, ** P < 0.01, *** P < 0.001. Full size image We next looked at the shift in firing locations in both WT and Df(16)A +/− mice to assess the similarity of spatial tuning from day to day. We found that while preferred firing locations of all cells were more stable than chance in both genotypes (centroid shift: WT versus shuffle: independent sample t test, t = −9.42, P < 0.0001; Df(16)A +/− versus shuffle: independent samples t test, t = −4.25, P < 0.0001; Fig. 3d ; place field correlation: WT versus shuffle: P < 0.0001; Df(16)A +/− versus shuffle: P < 0.0001; Supplementary Fig. 5a ), the spatial tuning in Df(16)A +/− mice was significantly less stable from day to day compared to WT mice (centroid shift: WT versus Df(16)A +/− : independent samples t test, t = −4.71, P < 0.0001) and aggregated by mouse (independent samples t test, t = 2.58, P = 0.0295; Fig. 3d ; place field correlation: WT versus Df(16)A +/− : P < 0.0001; Supplementary Fig. 5a ). 
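The chance levels quoted above (the "cell-identity shuffle" distributions) come from breaking the pairing between cells across sessions before recomputing the stability metric. A minimal sketch under the same assumptions as before (positions as fractions of a circular belt; an illustration, not the authors' code) is:

```python
import random

def shuffle_null_centroid_shift(centroids_day1, centroids_day2, n_shuffles=1000, seed=0):
    """Null distribution of the mean centroid shift obtained by breaking cell
    identity: day-2 centroids are randomly reassigned across cells before pairing."""
    rng = random.Random(seed)
    day2 = list(centroids_day2)
    null_means = []
    for _ in range(n_shuffles):
        rng.shuffle(day2)
        shifts = [min(abs(a - b), 1.0 - abs(a - b)) for a, b in zip(centroids_day1, day2)]
        null_means.append(sum(shifts) / len(shifts))
    return null_means

# Toy example: identical maps on both days give an observed shift of 0,
# while the identity shuffle yields a much larger chance level.
day1 = [i / 20 for i in range(20)]
null = shuffle_null_centroid_shift(day1, day1)
observed = 0.0  # perfectly stable toy map
print(observed < sorted(null)[50])  # observed shift falls below the 5th percentile of chance
```

An observed mean shift below the bulk of this null distribution indicates above-chance stability, which is the comparison made for both genotypes in Fig. 3d.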
Just as the active place cell population overlap was similar within a day between WT and Df(16)A +/− mice, spatial tuning also did not differ between genotypes from session to session within the same day (centroid shift: linear mixed-effects model with genotype and elapsed time as fixed effects and mouse as random effect, genotype × elapsed time interaction: F 1, 38,078.993 = 15.042, P < 0.0001; post hoc analysis, WT versus Df(16)A +/− , session-to-session: F 1,11.137 = 0.303, P = 0.593; day-to-day, F 1,10.577 = 8.724, P = 0.014, significant after Benjamini-Hochberg correction; Fig. 3e ; place field correlation: genotype × elapsed time interaction: P = 0.0051; post hoc analysis, WT versus Df(16)A +/− , session-to-session: P = 0.613; day-to-day, P < 0.0001; Supplementary Fig. 5b ). Taken together, spatial maps were less stable in Df(16)A +/− compared to WT mice from day to day (but not from session to session), as seen by lower recurrence of place cells and a larger shift in spatial tuning centroids, reflecting disrupted spatial maps in the mutant mice. Task performance correlates with spatial map stability If the stability of place fields over time provides the basis for spatial and episodic learning 15 , 16 , 17 , 18 , we would expect that the relative stability of place cell maps would reflect task performance. 
Indeed, on a per-session basis the overlap in the identity of place cells from day to day correlated with learning performance across all conditions of the GOL task for both groups (recurrence probability versus fraction of licks in reward zone: Pearson's correlation coefficient, WT: 0.288, P = 0.013; Df(16)A +/− : 0.416, P = 0.001; WT correlation versus Df(16)A +/− correlation, Fisher z transformation of correlations, general linear model, univariate ANOVA: genotype × z- recurrence probability interaction: F 1,132 = 0.599, P = 0.440; alternatively: linear mixed effects model with genotype as fixed effect, recurrence probability as covariate and mouse as random effect: genotype × recurrence probability interaction: F 1,129 = 1.083, P = 0.300; recurrence effect: F 1,129.000 = 18.197, P < 0.0001; Fig. 3c ). This finding suggests that this coding strategy is implemented by both WT and Df(16)A +/− mice, though the overall decreased population stability in the Df(16)A +/− mice contributes to the impaired task performance—the Df(16)A +/− mice are shifted lower on the recurrence–performance curve. In a similar manner to recurrence probability, place cell firing location stability also correlated with task performance for the WT mice and trended similarly in the Df(16)A +/− mice (centroid shift versus fraction of licks in reward zone: Pearson's correlation coefficient, WT: −0.306, P = 0.008; Df(16)A +/− : −0.218, P = 0.097; WT correlation versus Df(16)A +/− correlation, Fisher z -transformation of correlations, general linear model, univariate ANOVA: genotype × z stability interaction: F 1,132 = 0.268, P = 0.605; linear mixed effects model with genotype as fixed effect, centroid shift as covariate and mouse as random effect: genotype × centroid shift: F 1,133.000 = 0.001, P = 0.982; centroid shift F 1,133.000 = 8.804, P = 0.004; Fig. 
3f ; place field correlation; Spearman's correlation coefficient, WT: 0.335, P = 0.004; Pearson's correlation coefficient, Df(16)A +/− : 0.224, P = 0.088; Supplementary Fig. 5c ). In addition, as suggested by the overall correlation of task performance with stability, the trajectory of these metrics across conditions mirrors the trajectory of the behavioral deficit in the task. Namely, just as we did not see a difference in behavior during Condition I ( Figs. 1c and 3g ), stability was also similar between WT and Df(16)A +/− mice during Condition I, but while the WT place cell population continued to stabilize in Condition II and III, the Df(16)A +/− population stability dropped off as the task demands change (recurrence probability: linear mixed-effects model with condition and genotype as fixed effects and mouse as random effect, genotype effect: F 1,11.084 = 7.293, P = 0.021; condition × genotype interaction, ns; Fig. 3h ; centroid shift: linear mixed-effects model as before, genotype effect: F 1,10.107 = 6.771, P = 0.026; Fig. 3i ). Thus, the learning strategy employed by both genotypes did involve the formation and maintenance of stable hippocampal spatial maps, but the stability of these maps was impaired in Df(16)A +/− mice, particularly from day to day and when the task demands changed, as reflected in their decreased performance on the GOL task. Goal-oriented learning requires dorsal hippocampal area CA1 and relies on allocentric navigation To confirm the necessity of the hippocampus to our GOL task, we pharmacologically silenced bilateral dorsal hippocampus area CA1 using the GABA A -receptor agonist muscimol during initial learning of a fixed reward location (Online Methods ). Mice in which the hippocampus was silenced during initial learning of the reward location performed significantly worse than mice with an active hippocampus (Mann-Whitney U test, Days 1–3, muscimol to saline versus saline to muscimol: U = 126.5, P < 0.0001; Supplementary Fig. 
7 ) and mice that successfully learned the task with an active hippocampus showed a significant drop in task performance following dorsal hippocampus inactivation (saline to muscimol, Mann-Whitney U test, Days 1–3 versus Day 4: U = 111, P = 0.0235), now performing similarly to the initially silenced training group (Day 4, independent samples t test, saline to muscimol versus muscimol to saline: t = 0.633, P = 0.535). Local cues and fabric segments of the treadmill belt are intended primarily to provide an allocentric reference frame for spatial maps during the GOL task, but mice could in principle also use egocentric, path-integration strategies 5 , 21 , 27 to find the reward location. To elucidate the relative contributions of allocentric navigation and path integration to the learning task, we imaged WT mice in the absence of local cues on the treadmill belt, where we found that place cells were practically absent (place cell fraction, cue-rich versus cue-free: independent samples t test, t = 3.006, P = 0.004; Supplementary Fig. 8a,b ), and the tuning of all cells was significantly more diffuse (circular variance, cue-rich versus cue-free: Mann-Whitney U test, U = 499,131, P < 0.0001; Supplementary Fig. 8c ). Furthermore, in the case of path integration, we would expect that during the transition from Condition I to II, when fabric sequence is the only belt feature remaining constant, place cells near the fabric transitions would be more stable than place cells farther from the fabric transitions, as errors in path integration would accumulate with distance 21 , 27 . Instead, we found no difference in stability that could have been due to the distance from the initial preferred tuning to the nearest fabric transition (two-way ANOVA, main effect of binned distance: F 2,24 = 0.024, P = 0.977; Supplementary Fig. 8d ). 
These results together suggest that egocentric navigation alone would be insufficient to maintain place cell firing, and thus mice primarily employ allocentric navigational strategies for learning in the GOL task. Disrupted sharp wave-ripple activity in Df(16)A +/− mice Decreased task performance following long delays (i.e., overnight), coupled with the decreased recurrence and similarity of neuronal ensemble activity from day to day, suggests a consolidation deficit in the Df(16)A +/− mice. Reactivation and consolidation of memories of previous experiences are thought to occur during sharp wave-ripples (SWRs), large-amplitude and high-frequency events detected in the local field potential during quiet wakefulness and sleep 28 . To assess SWR activity in WT and Df(16)A +/− mice, in a separate cohort of mice we implanted electrodes in hippocampal area CA1 to record the local field potential and detect SWRs ( Supplementary Fig. 9a,b and Online Methods ). During periods of immobility, we found that Df(16)A +/− mice had significantly more SWRs (Wilcoxon rank-sum test, h = 3,777.5, P < 0.001; Supplementary Fig. 9c,f ), though the SWRs were irregular, as reflected by a higher mean ripple-band power (Wilcoxon rank-sum test, h = 98,423, P < 0.001; Supplementary Fig. 9d,g ) and a higher peak frequency in the ripple-band (Wilcoxon rank-sum test, h = 94,798, P < 0.001; Supplementary Fig. 9e,h ). This dysregulation of hippocampal excitability during periods of rest in Df(16)A +/− mice provides a possible mechanism behind their failure to efficiently retain a memory of the reward location. Change in context induces disrupted place cell stability in Df(16)A +/− mice In addition to deficits following the overnight period, Df(16)A +/− mice showed significantly impaired performance after a change in context during Condition II in the GOL task (Condition II, Day 1; Fig. 
1b,c ), which comprised a change in both the nonspatial (tone, light and odor) and proximal spatial cues (shuffled local cues on belt, constant fabric sequence). When we compared the day-to-day stability of place fields in WT and Df(16)A +/− mice across this transition (Condition I–Day 3 to Condition II–Day 1), we found that place fields in WT mice were significantly more stable than in Df(16)A +/− mice (WT: 0.195 ± 0.008, n = 6; Df(16)A +/− : 0.227 ± 0.002, n = 5; independent samples t test: t = −3.626, P = 0.0055; Fig. 4a ). Since this change of local cues from Condition I to Condition II dissociated position relative to the sequence of fabrics and position relative to the cues, we looked at coding of space relative to these two distinct reference frames in the WT and Df(16)A +/− mice. We looked at all the place cells that were active near a cue on the last day of Condition I and asked whether, on the first day in Condition II, each fired closer to that same cue ('cue-preferring') or to the position relative to the fabric sequence where the cue was previously ('position-preferring'; Online Methods ). We found a significantly different distribution of cue-preferring and position-preferring cells between WT and Df(16)A +/− mice (Pearson chi-square test: χ 2 = 85.7776, P < 0.0001; Fig. 4b ), with notably fewer position-preferring cells in the Df(16)A +/− mice and a significantly lower ratio of cue- to position-preferring cells per mouse (independent samples t test: t = −3.172, P = 0.0131; Fig. 4c ). Thus, changes to the nonspatial context and the shuffling of local cues induced remapping and disrupted the stability of spatial maps in Df(16)A +/− mice significantly more than in WT mice, and in particular, fewer cells remained anchored to the task-relevant belt reference space. Figure 4: Lack of context change and task-dependent stability of spatial maps in Df(16)A +/− mice.
( a ) Mean centroid shift from the last day of Condition I to the first day of Condition II (WT: 0.195 ± 0.008, n = 6; Df(16)A +/− : 0.227 ± 0.002, n = 5; independent samples t test: t = −3.626, P = 0.0055). Horizontal dashed line represents shuffled data. Error bars represent s.e.m. for mice. ( b ) Fraction of all cells classified as place-preferring (position), cue-preferring (cue) or neither (pooled across mice) for WT, Df(16)A +/− and shuffled data (Pearson chi-square test: χ 2 = 85.7776, P < 0.0001). ( c ) Ratio of the number of cue-preferring to place-preferring cells per mouse for WT mice, Df(16)A +/− mice (independent samples t test: t = −3.172, P = 0.0131) and shuffled data (horizontal dashed line). Error bars represent s.e.m. for mice. ( d ) Schematic of RF task. Rewards (blue circles) are presented randomly throughout the belt in the same context (Context B): same belt fabric sequence but different auditory, visual, olfactory and tactile cues. ( e ) Distribution of mean centroid shift per session from day to day during RF task (dotted line is cell-identity shuffled distribution; WT: 0.222 ± 0.004, n = 30 session pairs; Df(16)A +/− : 0.220 ± 0.004, n = 42 session pairs; shuffle: 0.244 ± 0.002, n = 72; WT vs. shuffle: independent samples t test: t = −5.05, P < 0.0001; Df(16)A +/− vs. shuffle: Welch's t test, t = −5.12, P < 0.0001; WT vs. Df(16)A +/− : independent samples t test: t = 0.451, P = 0.653) and aggregated by mouse (inset; horizontal dashed line is cell-identity shuffle; WT vs. Df(16)A +/− : independent samples t test: t = 0.799, P = 0.448). ( f ) Comparison of mean centroid shift per mouse in the RF and GOL tasks (GOL data replotted from Fig. 3e ; linear mixed-effects model, genotype × task interaction: F 1,19.471 = 4.316, P = 0.051; effect of task: F 1,19.471 = 4.924, P = 0.039; post hoc analysis: WT, GOL vs. RF: F 1,11.285 = 10.472, P = 0.008, significant after Benjamini-Hochberg correction; Df(16)A +/− , GOL vs.
RF: F 1,8.562 = 0.006, P = 0.940). In box plots, center line represents the median, the top and bottom of the box denote the 1st and 3rd quartile of the data, respectively, and the whiskers mark the full range of the data. * P < 0.05, ** P < 0.01. Task-dependent stabilization of place cell populations is impaired in Df(16)A +/− mice To better understand the conditions in which place cell stability is affected in the Df(16)A +/− mice, we used a separate random foraging (RF; Fig. 4d and Online Methods ) task that did not require spatial learning. From day to day, preferred firing locations were more stable than expected by chance in both WT and Df(16)A +/− mice (WT: 0.222 ± 0.004, n = 30 session pairs; Df(16)A +/− : 0.220 ± 0.004, n = 42 session pairs; shuffle: 0.244 ± 0.002, n = 72; WT versus shuffle: P < 0.0001; Df(16)A +/− versus shuffle: P < 0.0001), but, in contrast to the GOL task, they were not significantly different from each other (WT versus Df(16)A +/− : independent samples t test: t = 0.451, P = 0.653; and aggregated by mouse: independent samples t test: t = 0.799, P = 0.448; Fig. 4e ). More specifically, the WT spatial tuning was significantly stabilized in the GOL task while the Df(16)A +/− spatial tuning was not (linear mixed-effects model, genotype × task interaction: F 1,19.471 = 4.316, P = 0.051, effect of task: F 1,19.471 = 4.924, P = 0.039, post hoc analysis WT, GOL versus RF: F 1,11.285 = 10.472, P = 0.008, significant after Benjamini-Hochberg correction; Df(16)A +/− , GOL versus RF: F 1,8.562 = 0.006, P = 0.940; Fig. 4f ). This finding suggests that the presence of a salient reward location selectively stabilizes hippocampal spatial maps in WT mice, a phenomenon absent from Df(16)A +/− mice.
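The centroid-shift stability measure compared across tasks here is, in essence, a circular distance between a cell's preferred belt positions on consecutive days, normalized by belt length so that uniformly remapping cells average near 0.25 (the shuffle level). A small sketch under that normalization convention (the helper name and example values are hypothetical):

```python
import numpy as np

def centroid_shift(pos_a, pos_b, belt_length=1.0):
    """Circular distance between place field centroids on two days.

    Positions are given in belt units; the result is normalized by belt
    length, so it lies in [0, 0.5] (0.5 = opposite side of the belt);
    cells that remap uniformly at random average 0.25 (chance level).
    """
    d = np.abs((np.asarray(pos_a) - np.asarray(pos_b)) / belt_length) % 1.0
    return np.minimum(d, 1.0 - d)

# Mean shift across a small hypothetical population of tracked place cells.
day1 = np.array([0.10, 0.55, 0.90])
day2 = np.array([0.12, 0.50, 0.05])  # the last cell wraps around the belt
shifts = centroid_shift(day1, day2)
print(shifts)        # ~[0.02, 0.05, 0.15]
print(shifts.mean())
```

Lower mean shift than the shuffle indicates stable spatial maps, the quantity plotted per session pair and per mouse in Figure 4e,f.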
Enrichment of goal location by place cells in WT but not Df(16)A +/− mice Place maps incorporate goal-related information 19 , 20 , 22 , 23 , 24 , 26 , particularly as an over-representation of goal locations by place cells 23 , which has been shown to correlate with learning performance 26 . While we did not observe place cell enrichment during initial goal learning in a novel context (Condition I) or the subsequent change of context (Condition II), upon learning the new reward location in an already familiar context (Condition III), we found robust and organized remapping of place cells toward the new reward location in WT mice, though this goal-directed reorganization was strikingly absent in Df(16)A +/− mice (linear mixed-effects model, condition and genotype as fixed effects and day nested under condition as covariate, with mouse as random factor: genotype × condition interaction: F 2,174.406 = 4.257, P = 0.016; genotype × day (nested under condition) interaction: F 3,112.789 = 3.257, P = 0.024; post hoc analysis with Benjamini-Hochberg correction for multiple comparisons, Conditions I and II: no significant effect of genotype; Condition III genotype effect: F 1,71.243 = 8.776, P = 0.004; genotype × day interaction: F 1,64.041 = 12.307, P = 0.001; Day 3: P < 0.0001; Fig. 5a,b ). Additionally, we found that the magnitude of place cell enrichment at the goal location correlated with learning performance in WT mice (Pearson correlation, z = 0.362, P = 0.023; Fig. 5c ) but not in Df(16)A +/− mice (Pearson correlation, z = −0.068, P = 0.791; Fig. 5c ). Alternatively, use of a linear mixed-effects model with genotype as factor and fraction of place cells near the reward location as covariate in fixed effects and mouse as random factor showed no overall effect of the covariate on the fraction of licks in the reward zone ( F 1,72.829 = 0.229, P = 0.634; genotype × fraction of place cells near the reward location interaction, P = 0.143, n.s.).
However, post hoc analysis revealed a significant effect of the number of place cells near the reward location on the mouse performance for WT mice (linear mixed-effects model with fraction of place cells near the reward location as covariate in fixed effects and mouse ID as random factor, significant effect of the covariate: F 1,31.460 = 11.436, P = 0.002, significant after Benjamini-Hochberg correction). Thus, place cell enrichment supports learning of new reward locations in a familiar context in WT animals, while in Df(16)A +/− mice, the lack of place cell enrichment is associated with significantly worse performance during this phase of the GOL task. Figure 5: Place field enrichment of goal location. ( a ) Tuning profiles for all place cells in WT and Df(16)A +/− mice on the first and last days of Condition III. Each row is an individual place cell. The intensity corresponds to the normalized transient rate in each spatial bin along the x axis. Goal location is between dotted lines. WT mice show more place cells near the reward by Day 3, an enrichment lacking in Df(16)A +/− mice. ( b ) Fraction of place cells near the goal location (within 1/16 of the belt length) across all days of the experiment. Horizontal dotted line is uniformly distributed fraction (linear mixed-effects model, condition and genotype as fixed effects and day nested under condition as covariate, with mouse ID as random factor: genotype × condition interaction: F 2,174.406 = 4.257, P = 0.016; genotype × day (nested under condition) interaction: F 3,112.789 = 3.257, P = 0.024; post hoc analysis with Benjamini-Hochberg correction for multiple comparisons, Conditions I and II: no significant effect of genotype; Condition III, genotype effect: F 1,71.243 = 8.776, P = 0.004; genotype × day interaction: F 1,64.041 = 12.307, P = 0.001; Day 3: t = 4.669, P < 0.0001). Error bars are s.e.m. of place cell number.
( c ) Place cell goal-zone enrichment is correlated with task performance during Condition III in WT but not Df(16)A +/− mice (WT: Pearson correlation: 0.362, P = 0.023; fraction of licks in the reward zone: F 1,31.460 = 11.436, P = 0.002, significant after Benjamini-Hochberg correction; Df(16)A +/− : Pearson correlation: −0.068, P = 0.791). Linear regression and confidence intervals as in Figure 3c . *** P < 0.001. Modeling of place cell dynamics suggests that place field shift is the primary factor leading to reward enrichment Several aspects of place cell population dynamics may explain the enrichment of firing fields at the goal location in the familiar context. For example, place cells within the reward zone may be more likely to recur; existing place fields may shift toward the reward 29 ; or place fields at the reward location may be selectively stabilized ( Supplementary Fig. 10 ). To distinguish between these possibilities, we calculated the mean position-dependent recurrence probability and centroid shift ( Fig. 6 ). We found a slight increase in the recurrence probability of place cells that were active immediately preceding the reward ( Fig. 6b ), and, on average, place fields drifted toward a location on the belt just after the reward zone, such that fields preceding it tended to shift forward and fields following it tended to shift backwards ( Fig. 6c,d ). In addition, place fields just after the reward location shifted more consistently, as evidenced by a relatively lower place field shift variance ( Fig. 6c,e ). We next modeled day-to-day shifts in the population of spatially active cells and their preferred spatial tuning based on these parameters ( Fig. 7a , Supplementary Fig. 11 and Online Methods ). Figure 6: Session-to-session place field shift dynamics. ( a ) Example fields of view from two consecutive sessions. Background is time-averaged GCaMP6f movie.
Place cells are colored corresponding to their spatial tuning within each session. The color bar shows the mapping of place field location on the belt. The reward zone for these sessions was between the dotted lines. Place fields are generally stable (red arrow), but some shift their place field (yellow arrow), while others stop being spatially active (blue arrow). ( b ) Recurrence probability as a function of original distance from the reward. For all pairs of consecutive sessions during Condition III, each place cell during the first session is plotted with the centroid of its place field along the x axis and whether or not it was also a place cell in the second session on the y axis (top cluster is place cell in second session; bottom cluster is not a place cell; random y -axis jitter within each cluster for visualization). Cyclic logistic regression fit with 95% confidence interval from cross-validation plotted on left axis. ( c ) Session-to-session place field shift as a function of original distance from the reward. For all pairs of consecutive sessions during Condition III, each place cell during the first session is plotted with the centroid of its place field along the x axis and the change in centroid position in the second session along the y axis. Data is fit as a continuous series of von Mises distributions for each position, with the offset (solid purple line) and variance (shaded band, 1/ κ , where κ is the concentration parameter) shown. Green dotted line denotes cells that move directly to the reward position in the second session. While it is possible that a subset of the goal-enriching cells are reward cells that directly follow the reward, the larger effect is the gradual drift of the entire place cell population toward the reward, not the active recruitment of reward cells directly remapping to the reward location (for example, lack of cells clustered around green dotted line). 
( d ) Same offset curve (solid line, shaded region is 90% confidence interval calculated from refitting bootstrap resampled data) as in c . Positive values to the left of the zero-crossing and negative values to the right correspond to drift toward the reward position. ( e ) Same variance fit as in c , plotted independently. Shaded region represents 90% confidence interval calculated from refitting bootstrap resampled data. Place field shift is most consistent (minimum variance) at a position that corresponds to the most stable place field location from d , just after the goal location. Full size image Figure 7: Place field drift toward reward drives enrichment in the place field dynamics model. ( a ) Schematic of place cell recurrence (left) and stability (right) model including the four parameters that were fit from our data: non-place cell to place cell transition probability ( P on ), place cell recurrence probability by position ( P recur ), session-to-session place field shift variance and session-to-session place field shift offset. ( b ) Mean population enrichment by simulated iteration (solid lines) for WT and flat parameter sets (dashed lines: 90% confidence intervals from 100 simulations). WT parameters reproduce the enrichment observed during Condition III. ( c ) Final distribution of place fields after eight iterations for WT and flat-model parameters. Vertical dashed line denotes reward location. ( d ) Mean population enrichment after eight iterations with true-fit parameters and then with swapping each set of position-dependent parameters individually between WT and the flat model: recurrence probability ( P recur ), place field shift variance and place field shift offset. ( e ) Final WT place field distributions after eight iterations with the same parameter swaps as in d . Mean place field shift (offset) toward the reward is revealed as the main factor underlying enrichment in GOL. 
Simulating the same number of session transitions as in our experimental scheme, our model shows gradual enrichment of the goal location similar to the observed enrichment we saw in the WT mice during Condition III ( Fig. 7b,c ). In contrast, when we run the model with data taken from Condition I or II, we do not see enrichment of the reward location, as expected ( Fig. 7b,c and Supplementary Fig. 12 ). We next swapped parameters one by one between our WT model and flat model (flattened parameter fits; see “Modeling goal-directed remapping” section in Online Methods ) and reran the simulation to see the effect of each parameter individually on the final goal enrichment. The only parameter with a significant effect on the final place field enrichment is the place field shift offset ( Fig. 7d,e and Supplementary Fig. 13 ). We conclude that place field enrichment of goal locations is driven by an active recruitment of place fields shifting coherently toward the reward location: place fields before the reward shift forward, and place fields after the reward shift backwards. Absence of enrichment in the Df(16)A +/− mice is through lack of place field shift toward reward While we saw robust place field enrichment of the reward location in WT mice following a change in the reward location, population enrichment was completely absent in Df(16)A +/− mice. We calculated the position-dependent recurrence probability and centroid shift (as in Fig. 6 ) during Condition III with the Df(16)A +/− data and saw no dependence on position for any of these properties ( Fig. 8a–d ). Consistent with previous studies 30 , across the entire belt, on average, place fields shifted slightly backwards ( Fig. 8c ), and when we simulate session-to-session place field shifts with our model we do not see any enrichment of the goal location ( Fig. 8e,f ).
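The recurrence/shift model simulated above can be caricatured in a few lines: recurring place fields drift toward the reward with a mean offset plus noise, while lost fields are replaced at random positions. The sketch below is not the study's fitted model; it uses Gaussian noise in place of the von Mises fits and illustrative parameter values, but it reproduces the qualitative result that a nonzero mean shift toward the goal is sufficient to enrich the goal location:

```python
import numpy as np

rng = np.random.default_rng(0)

def signed_circ_dist(x, target):
    """Signed circular distance from x toward target on a unit-length belt,
    in (-0.5, 0.5]."""
    d = (target - np.asarray(x)) % 1.0
    return np.where(d > 0.5, d - 1.0, d)

def simulate(n_cells=5000, n_iter=8, target=0.5,
             p_on=0.2, p_recur=0.6, drift_gain=0.25, noise_sd=0.03):
    """Toy recurrence/shift model; all parameter values are illustrative."""
    pos = rng.random(n_cells)            # place field centroids (belt units)
    active = rng.random(n_cells) < 0.5   # which cells are currently place cells
    for _ in range(n_iter):
        recur = active & (rng.random(n_cells) < p_recur)
        # recurring fields shift toward the target (mean offset) plus noise
        shifted = (pos + drift_gain * signed_circ_dist(pos, target)
                   + noise_sd * rng.standard_normal(n_cells)) % 1.0
        pos = np.where(recur, shifted, pos)
        # non-place cells turn on at random belt positions with probability p_on
        turn_on = ~active & (rng.random(n_cells) < p_on)
        pos = np.where(turn_on, rng.random(n_cells), pos)
        active = recur | turn_on
    return pos[active]

fields = simulate()
near_goal = np.abs(signed_circ_dist(fields, 0.5)) < 1.0 / 32  # 1/16 of belt
print(near_goal.mean())  # exceeds the uniform expectation of 1/16
```

Setting `drift_gain` to zero (a Df(16)A +/− -like, position-independent parameter set) removes the enrichment, mirroring the flat-model and mutant simulations.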
So, while WT place fields shift toward the reward location, leading to an over-representation of this location, this effect is disrupted in Df(16)A +/− mice. Figure 8: Df(16)A +/− place fields do not drift toward the goal and the model produces no enrichment. ( a ) Place cell recurrence by distance from reward, as in Figure 6b . ( b ) Session-to-session place field shift as a function of original distance from the reward, as in Figure 6c . ( c , d ) Place field shift and variance fits from b , as in Figure 6d,e (shaded region is 90% confidence interval calculated from refitting bootstrap resampled data). ( e ) Unlike the WT model, the enrichment model with Df(16)A +/− parameters shows no enrichment (see Fig. 7b ). ( f ) Final distribution of place fields after eight iterations for Df(16)A +/− parameters. Discussion Our study provides a comparative characterization of learning-related neural population dynamics in hippocampal area CA1 in WT mice and mutant mice carrying a SCZ-predisposing genetic lesion. We found that mice carrying the 22q11.2 deletion, one of the strongest genetic risk factors for cognitive dysfunction and SCZ, exhibit compromised stability and plasticity of hippocampal place cell maps during spatially guided reward learning. By tracking place cell dynamics over different phases of a multiday learning task, our study extends previous findings 14 , 15 , 16 , 17 , 18 by showing a positive correlation between place cell map stability and learning performance in both WT and Df(16)A +/− mice. Indeed, task performances and spatial map stabilities for each genotype followed a similar trajectory as task demands changed; task performance and stability were most similar during Condition I, Df(16)A +/− mice were slightly impaired in Condition II and the largest difference was observed in Condition III ( Supplementary Fig. 4 ).
These findings suggest that the neural coding strategy employed during all phases of a spatial reward learning task relies on the formation and maintenance of stable hippocampal representations. Our results also show task-dependent stabilization of spatial maps in WT mice, an effect possibly mediated by the attentional demands of GOL 15 , 24 , 25 , 31 . In contrast, place field stability between GOL and RF tasks was indistinguishable in Df(16)A +/− mice, indicative of a failure to conditionally stabilize spatial maps. However, Df(16)A +/− mice were comparable to WT littermates in their ability to initially learn a reward location, as well as in baseline place cell stability, which suggests that the Df(16)A +/− learning deficit was related to the stabilization and rearrangement of spatial maps in response to changing task demands (Condition I; Fig. 1c,d and Supplementary Fig. 4 ). Our results also suggest that the memory deficit throughout the GOL task may result from impaired consolidation processes. Df(16)A +/− mice were capable of solving this task, but they were significantly impaired at the beginning of each day ( Fig. 1d ), and their spatial maps were less stable overnight than those of the WT mice ( Fig. 3b,e ). In that respect, the altered SWR activity we observed in the Df(16)A +/− mice may underlie the decreased stability of spatial maps. Although we did not directly assess SWR-related place cell reactivation 28 , the increased rate and power of SWRs we observe in the Df(16)A +/− mice ( Supplementary Fig. 9 ), similarly to the effect seen in other SCZ mouse models 32 , 33 , could reflect either a failure to selectively reactivate task-related representations or a compensatory mechanism, as aberrant SWRs are not efficiently consolidating task memories. As an alternative explanation, it is possible that compromised forgetting mechanisms drive the behavioral deficits of the Df(16)A +/− mice. 
Place cell remapping has been proposed as a population-based coding mechanism that supports storage of similar memories with minimal interference 34 , 35 , 36 . Therefore, compromised forgetting mechanisms in terms of place cell remapping toward the reward location in the Df(16)A +/− mice might reflect enhanced interference in forming new memories or recalling earlier memories. Another possibility is that an inability to suppress licking due to impaired impulsivity control could also contribute to the lagging performance in the Df(16)A +/− mice. While the effect of time cannot be completely excluded over the course of our multiday learning model, our results are more consistent with an interpretation in which distinct hippocampal coding strategies are employed as learning demands change in our task. Specifically, learning of a reward location in a novel environment is primarily supported by the stability of spatial maps, while learning of a change in reward location in an otherwise familiar environment is additionally dependent on the plasticity of these maps, as place cells shift toward the new reward location in WT mice ( Figs. 5 , 6 , 7 , 8 ). Prior studies of goal-directed learning were mostly performed in familiar environments 19 , 20 , 22 , 23 , 24 , 26 , and our results here are in line with previous observations showing that prominent changes in place cell firing in response to a goal were elicited when the pattern of the reinforcement was changed in the same environment 19 , 20 , 24 , 25 , following several trials in the same maze 26 , translocation of a reward location 19 , 24 or during the probe trial in an annular water maze 23 . Previous literature has also demonstrated that CA1 place fields undergo experience-dependent stabilization during the transition from a novel to familiar context 37 , 38 , 39 , including experience-dependent changes in place field shift and directionality 30 , 40 .
Therefore, one potential explanation for the lack of reward-related enrichment during Conditions I and II is that goal-directed place cell dynamics were obscured by conflicting demands related to the formation of stable contextual representations. Additional changes within a stabilized (or well-encoded) context, such as the incorporation of reward-related information, could occur through goal-directed reorganization of place fields, resulting in over-representation of place fields near the reward in WT mice. We find that WT place fields before the reward tended to shift forward, while place fields after the reward shifted backwards during learning of a new reward location in a familiar environment ( Fig. 6c,d ). This finding is consistent with the prior observation of gradual shifts of place fields toward goal locations 29 . Df(16)A +/− mice failed to employ a goal-location enrichment-coding strategy and were significantly delayed in learning the new reward location. The fact that the Df(16)A +/− mice still improved their behavioral performance despite their lack of goal-related remapping implies that Df(16)A +/− mice rely on alternative, albeit less-efficient, strategies to find the reward location. Although we find that self-motion-generated information alone is not sufficient for the maintenance of stable firing fields, local cues and fabric segments of the belt could, in principle, also serve as anchors to reduce error accumulation in path integration 21 , 27 . Df(16)A +/− mice did not track position as accurately as WT mice when local cues were shuffled (Condition II, Day 1; Figs. 1c and 4a ), indicating that Df(16)A +/− mice overly rely on local cues (but not the fabric segments; Supplementary Fig. 6d ) and potentially egocentric navigational strategies. Nonetheless, this topic deserves further exploration.
The observed decrease in the fraction of spatially tuned cells and altered firing field properties at the neuronal population level, although only partially corroborated at the level of the mouse population, could be an additional contributor to the disruption in the processing of spatial information in the Df(16)A +/− mice. The tendency toward unimodal and narrower place fields we observed in the Df(16)A +/− mice may be sufficient to support accurate spatial coding by combining independent location estimates from individual cells under basal conditions. In contrast, during modification of task contingencies 41 , place cells with multiple fields and high rates of spatial information could increasingly contribute to population coding in WT but not in Df(16)A +/− mice. Bidirectional interactions between the hippocampus and the prefrontal cortex play a critical role in normal memory processing 42 . In this respect, we note that learning deficits were revealed in the mutant mice by manipulations of the environmental context and the reward location, conditions requiring cognitive flexibility, and that the decreased fraction of cells tracking the place reference frame following a shuffling of the local cues during the GOL task ( Fig. 4b,c ) suggests a misattribution of salience to irrelevant cues. These behavioral and neuronal abnormalities together point to impaired interactions between the prefrontal cortex and the hippocampus, a feature of both SCZ patients and animal models of SCZ 43 , 44 . Disrupted spatial map stability and plastic reorganization in the Df(16)A +/− mice could result from deficits in local circuit dynamics, long-range communication or neuromodulation, all of which would presumably be attributable to the deficiency of one or more genes in the 22q11.2 locus 4 , 43 , 45 .
While the precise mechanisms remain to be determined, our findings indicate that impaired stability and the inability of hippocampal place fields to reorganize in response to salient information together represent important neuronal correlates of memory deficits in Df(16)A +/− mice. Given that the memory deficit revealed by the GOL task is reminiscent of episodic memory deficits and learning impairments in 22q11.2-deletion carriers 46 , 47 , impaired hippocampal ensemble dynamics may be a central component of cognitive memory dysfunctions emerging from the 22q11.2DS and SCZ in general 48 . In SCZ patients with and without the 22q11.2 deletion, cognitive dysfunction is a key manifestation of SCZ, is highly correlated with functional outcome and is a robust indicator of the risk of developing a psychotic illness 49 , 50 . Investigations of 22q11.2DS as a genetic model could thus aid in elucidating neurobiological mechanisms underlying the development of cognitive dysfunction under the assumption that the diversity of dysfunction that occurs at the molecular, cellular and synaptic levels could be functionally convergent at the level of altered neuronal ensembles 45 . Methods All experiments were conducted in accordance with the US National Institutes of Health guidelines and with the approval of the Columbia University Institutional Animal Care and Use Committee. No statistical methods were used to predetermine sample sizes but our sample sizes are similar to those reported in previous publications 51 , 52 , 53 . Mice and viruses. For all experiments we used adult (8–12 weeks) male and female Df(16)A +/− and wild-type (WT) littermates that had been backcrossed into C57BL/6J background for over ten generations. Hemizygous Df(16)A +/− mice carry a 1.3-Mb deficiency on chromosome 16, in a region syntenic to the human 22q11.2 region and encompassing 27 genes, from Dgcr2 to Hira 51 , 54 . 
Mice were housed in the Columbia University vivarium (1–5 mice per cage) and were maintained on a 12-h light/dark cycle. Experiments were performed during the second half of the light portion of the cycle. GCaMP6f expression in neurons located in the hippocampal CA1 pyramidal layer was induced with a recombinant adeno-associated virus (rAAV) expressing GCaMP6f 55 under a Synapsin promoter (rAAV1/2( Synapsin-GCaMP6f )). Viral delivery to dorsal CA1 was performed by stereotactically injecting 50 nL (10-nL pulses) of rAAV at three dorsoventral locations using a Nanoject syringe (−2.3 mm AP; −1.5 mm ML; −0.9, −1.05 and −1.2 mm DV relative to bregma). Optimal levels of viral expression of GCaMP6f occur 3–4 weeks postinjection. A subset of Df(16)A +/− mice were crossed with mice expressing Cre-recombinase under interneuron promoters ( Som , Pvalb , VIP ) 52 to identify interneurons located in the CA1 pyramidal layer to exclude them from further analysis. However, none of these crosses completely label the interneuron population in the pyramidal layer (data not shown), and therefore putative interneurons were identified and excluded from image analysis based on morphological criteria (see the “Data processing for Ca 2+ imaging” section, below). In total, 6 WT mice (5 males and 1 female) and 6 Df(16)A +/− mice (4 males and 2 females) were used for behavioral analysis. One of the male Df(16)A +/− mice was excluded from the imaging analysis due to poor quality of the imaging window. Experimenters were blind to mouse genotype throughout the experiment and initial data preprocessing steps. Littermate mutant and wild-type mice from at least five different litters were used and randomly assigned to each experiment (GOL and RF). For the hippocampus inactivation experiment, wild-type C57BL/6J mice were used and were randomly assigned to the two experimental groups. Imaging window implant.
Mice were surgically implanted with an imaging window over the left dorsal hippocampus, along with a steel head-post for head-fixation during the experiments as described previously 52 , 53 . Imaging cannulas were constructed by gluing (Norland optical adhesive) a 3-mm glass coverslip (64-0720, Warner) to a cylindrical steel cannula (3.0-mm diameter, 1.5-mm height). The surgical protocol was performed as described previously 52 , 56 . Analgesia was continued for 3 d postoperatively. Behavioral training. After recovery from surgery but before the beginning of the behavioral experiments, mice were water deprived (>85% predeprivation weight) and habituated to handling and to the experimental setup including imaging equipment (shutter sounds, laser, objective). Next, water-deprived mice were head-fixed and trained to operantly lick to receive water rewards (water delivered in response to tongue contact with a capacitive sensor) at random hidden locations while running on a single-fabric, cue-free treadmill for 10 d (one 15-min trial/d). Mice initially received 40 randomly placed rewards per lap, and the reward frequency was decreased until the mice ran reliably for 3 randomly placed rewards per lap at a rate of at least 1 lap per min. Upon entering the reward zone, a drop of water was delivered in response to every other lick from the mouse. Water delivery stopped either when the mouse traveled 10 cm past the beginning of the reward zone or 3 s had elapsed. Randomization of reward zones during training encouraged mice to continuously run and lick simultaneously. Goal-oriented learning . For GOL, the reward location was fixed to a 20-cm reward zone within the ∼ 2-m long treadmill belt (180–200 cm) during context presentation as described below (see “Contexts” section). Under this set up, each mouse was trained to learn the initial reward position for 3 × 10-min trials/d, separated by ∼ 1 h for three consecutive days (Days 1–3, Context A, Condition I, 9 sessions total). 
We then changed the treadmill belt and nonspatial context, and mice were given 3 × 10-min trials/d for three consecutive days (Days 4–6, Context A′, Condition II) under this changed context. In Condition III, the reward zone was moved to a new location, while the other features of the belt and context were kept the same as in Condition II. Mice were given 3 × 10-min trials/d for three consecutive days to learn the new reward position (Days 7–9, Context A′, Condition III). As a point of clarity, we use the term 'context' to refer to the entire environment and set of features present during the experiment, including the fabric belts, local cues, nonspatial odor, tone and light, as well as the head-fix apparatus and the microscope itself, but, notably, not the uncued reward location. We always use 'position' in reference to the sequence of three distinct belt fabrics, which were always in the same order throughout all conditions of the experiment. Random foraging. For RF, water-deprived mice were trained to run for water rewards that were randomly administered nonoperantly throughout cue-rich belts. When the experiment started, mice received, on average, 3 water rewards per lap, but positions of the rewards from lap to lap remained random. This task involved no learning of a particular reward position, as the reward schedule was changed such that water was presented to the mice probabilistically as they ran, independent of both position on the belt and whether or not they licked. Mice ran 2 sessions per d, either in the same context or in paired contexts as described below (see “Contexts”). Behavioral readout. We used the location and quantity of licks to measure performance on the goal-directed task. As a measure of learning, we computed the fraction of licks in the goal window, where the goal window was spatiotemporally defined as the time when the animal was eligible for rewards (within both the 20-cm spatial zone and the 3-s temporal window). 
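As a minimal illustration of this behavioral readout, the fraction of licks falling inside the spatiotemporal goal window could be computed as below; the lick representation (position plus time since zone entry) is an assumption for the sketch.

```python
def goal_lick_fraction(licks, zone_start, zone_len=20.0, max_time_s=3.0):
    """Fraction of licks in the goal window, i.e. inside both the 20-cm
    reward zone and the 3-s temporal window after zone entry.

    licks: list of (position_cm, seconds_since_zone_entry) tuples.
    """
    if not licks:
        return float('nan')
    eligible = sum(1 for pos, dt in licks
                   if zone_start <= pos < zone_start + zone_len
                   and dt <= max_time_s)
    return eligible / len(licks)
```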
Comparison of GOL task to freely moving goal-directed learning task in Dupret et al . The GOL task used in this study was motivated by the cheeseboard maze task used by Dupret and colleagues 26 . The hidden reward cheeseboard maze used by Dupret et al . requires rats to learn the location of hidden food rewards over successive trials. In their primary task, these locations were uncued, and following learning, rats would travel directly to each baited location to retrieve the food reward. To facilitate chronic two-photon functional imaging from hippocampal CA1 place cells throughout learning, we designed this head-fixed paradigm for mice on a linear treadmill instead of a freely moving maze. Our head-fixed goal-oriented learning task required mice to learn the unmarked ('hidden') location (a single location instead of three as in Dupret et al . 26 ) of water rewards (instead of food) over successive laps (instead of discrete trials). Mice searched for these rewards by sampling the lick port, which only dispensed water in the correct location, while traversing a circular treadmill. In Dupret et al . 26 , rats moved around the cheeseboard maze and sampled each well to find the baited reward locations. Both tasks use measures of behavioral efficiency to determine the degree of learning; Dupret et al . 26 looked at the length of the path taken by the rats to collect all of the rewards, and in our task we (in effect) looked for the suppression of wasted/unrewarded licks. In essence, both of these tasks require animals to remember a location in space where a reward had previously been received and effectively return to that reward location to receive another reward. Both of these tasks depend on normal activity in hippocampal area CA1 to complete this task ( Supplementary Fig. 5 ). Stimulus presentation. Visual, auditory and olfactory stimuli were presented and all behavior signals digitized as described previously 52 , 53 , 56 . 
To track the linear position of the treadmill, we established three registration anchors at known positions along the belts and interpolated between them using a quadrature-encoded movement signal tied to the rotation of the treadmill wheels. Registration anchors were marked by radio-frequency identification (RFID) buttons (16 mm, 125 kHz; SparkFun Electronics) at evenly spaced positions along the belt and were detected when they passed over a fixed RFID reader (ID-12LA, SparkFun). The rotational quadrature signal was produced by marking treadmill wheels with offset tick marks, and this signal was encoded by a pair of photodiodes (SEN-0024, SparkFun) aligned to the wheels (<0.5 cm resolution). Contexts. Distinct multisensory contexts were created using the system described in our previous work 52 . This included presentation of a constant odor (carvone or isopentyl acetate), blinking red LED (100-ms duration at 1 Hz or off) and either a pure tone (10 kHz) or continuous beeps (2 kHz, 100-ms duration at 1 Hz). All spatial information was presented to the mice via the treadmill belts. The 2-m long imaging belts used in these experiments were constructed by stitching together 3 fabrics and then adhering six local tactile cues. Paired contexts (A–A′) consisted of two belts with an identical sequence of three fabrics and the same local cues, but the order of the cues was randomized between the two belts. Preservation of the fabric order allowed for comparison of spatial representations between contexts. In addition, each belt was paired with a unique multisensory context, such that when a mouse experienced context A′ after A, the belt was the same fabric sequences as in A but paired with a new local cue order and with a change in the background odor, light and tone. The composition of the first context (A) was randomized between mice. Hippocampal inactivation. 
To selectively silence dorsal hippocampus during the GOL task, we infused the GABA A -agonist muscimol (Sigma) through chronically implanted cannulae. Guide cannulae (24-gauge stainless steel) were implanted in wild-type C57BL/6J mice bilaterally over dorsal area CA1 (anteroposterior, −1.7 mm; mediolateral, ±1.5 mm; dorsoventral, −1.0 mm) and plugged with dummy cannulae (31-gauge stainless steel wire) matching the inner dimension of the guide cannula. The injection cannulae (31-gauge stainless steel) extended 0.5 mm past the end of the guide cannulae, targeting CA1. Surgical procedures were similar to those for imaging window implantation, except that a modified head post was used to accommodate the bilateral guide cannulae. Following implantation, mice were given 3 d to recover before head-fixation habituation, followed by 2 weeks of GOL task training (see the “Behavior training” section above.). To test for effects of dorsal hippocampus silencing on GOL, we used a modified GOL task model that consisted of a single condition (all days used the same belt, context and reward location). On the first day, mice were randomly divided into two groups (saline, n = 4 and muscimol, n = 3). The saline group was infused with 0.9% saline (0.15 μL at 0.25 μL/min) for the first 3 d (3 sessions per d, 30 min between sessions) and then switched to muscimol (0.15 μL of 1 μg/μL at 0.25 μL/min) on the fourth day as a reversal trial. The muscimol group received the opposite drug schedule: muscimol on the first 3 d and saline on the fourth day. To allow for drug diffusion, injection cannulae were left in place for 2 min following infusion. Mice were briefly head-restrained on a separate training treadmill during drug infusion. Infusions were performed sequentially (one hemisphere at a time) with a 5-μL Hamilton syringe and microinfusion pump (World Precision Instruments). 
Following infusions, the dummy cannulae were replaced and mice returned to the homecage for 30 min before behavior training/testing. In vivo two-photon imaging. All imaging was conducted using a two-photon 8-kHz resonant scanner (Bruker). We acquired 300-μm × 300-μm images (512 × 512 pixels) at 7–30 Hz using a 920-nm laser (50–100 mW, Coherent) through the approximate midline of the CA1 pyramidal cell body layer. To align the CA1 pyramidal layer with the horizontal two-photon imaging plane, we adjusted the angle of the mouse's head using two goniometers (±10° range, Edmund Optics). All images were acquired with a Nikon 40× NIR water-immersion objective (0.8 NA, 3.5 mm WD) in distilled water. Green (GCaMP6f) fluorescence was detected with a GaAsP PMT (Hamamatsu Model 7422P-40). A custom dual stage preamp was used for optimal signal amplification before digitization (Bruker). Data processing for Ca 2+ imaging. All imaging data were analyzed using the SIMA software package written in Python ( ) 5 . Motion-artifact correction was achieved by implementing a plane-wise version of the 2D hidden Markov model 56 , 57 , 58 . Segmentation was performed on each field of view (FOV) by manually drawing polygons around GCaMP6f-labeled somata for the first imaged session of each FOV. Polygons were drawn along the inner edge of the cytosolic border to minimize neuropil contamination. Putative interneurons in the pyramidal layers, predominantly GABAergic basket cells 59 , 60 , 61 , were identified and excluded from further analysis based on their multipolar and morphologically larger soma diameter compared to CA1 pyramidal cells 62 , 63 , 64 and on their higher baseline and nuclear fluorescence, consistent with their higher baseline tonic firing rate in vivo 61 , 65 , 66 , 67 . Regions of interest were imported to the SIMA project's ROI Buddy graphical user interface 58 and were transformed to the other imaging sessions of the same FOV using a piecewise-affine transformation. 
This tool also allowed for registration of the regions of interest (ROIs) across experiments, allowing us to track identified cells across imaging sessions. GCaMP6f fluorescence time-series were extracted from the ROIs using SIMA as previously described 58 . We computed the relative fluorescence changes (Δ F / F ) as previously described 68 , with a uniform smoothing window t 1 = 3 s and baseline size t 2 = 60 s. To identify significant calcium events, we modified a method first implemented by Dombeck et al . in 2007 69 and since used by both our lab 52 , 53 , 70 and others 57 , 71 . The general idea is that for a Δ F / F calcium trace, positive and negative deflections from 0 should occur with equal probability for any noise associated with the photon counting/image acquisition and also for uncorrectable motion along the dorsoventral axis ( z axis) of the mouse. This assumption allows us to empirically calculate the false-positive rate for each putative event and thus identify a duration and amplitude threshold above which an event has a fixed (5%) maximum false-positive rate, the level at which there are 20 times more positive events than negative events. To implement this approach, we identified putative events by finding consecutive imaging frames that started 2 s.d. above or below the mean, ended when the signal fell down to 0.5 s.d. above/below the mean and lasted for at least 250 ms. These events were classified by their duration and amplitude (in sigma, s.d.) and binned into 0.5-sigma amplitude and 250-ms duration bins. For each bin, we then calculated the associated false-positive rate as the ratio of negative to positive events. We only included positive events from amplitude–duration bins with a false-positive rate less than or equal to 0.05. Data analysis. Selection of spatially tuned cells. 
When evaluating the spatial tuning of pyramidal cells, we restricted our analysis to running-related epochs, defined as consecutive frames of forward locomotion (an imaging frame in which at least one forward pair of beam breaks occurred) at least 1 s in duration and with a minimum peak speed of 5 cm/s. Consecutive epochs separated by <0.5 s were merged. Running-related transients were defined as those that were initiated during a running-related epoch. To identify cells with significant spatial tuning, we calculated the spatial information relative to an empirically calculated shuffle distribution. For each cell we first computed the spatial information 72 content as $$I_N = \sum_{i=1}^{N} p_i \frac{\lambda_i}{\lambda} \log_2\!\left(\frac{\lambda_i}{\lambda}\right)$$ where λ i and p i are the transient rate and fraction of time spent in the i th bin, λ is the overall firing rate, and N is the number of bins. We computed I N for multiple values of N = {2,4,5,10,20,25,50,100}. We then created 1,000 random reassignments of the transient onset times within the running-related epochs and recomputed the values of $I_N^s$, where s is the index of the shuffle. To roughly correct for biases in the calculation of mutual information, we then subtracted the mean of this null distribution from all estimates to obtain values $$\tilde{I}_N = I_N - \frac{1}{1000}\sum_{s=1}^{1000} I_N^s, \qquad \tilde{I}_N^s = I_N^s - \frac{1}{1000}\sum_{s'=1}^{1000} I_N^{s'}$$ Finally, we computed a single estimate of the information content, averaged over the values of N, for the true transient onset times, $\tilde{I} = \langle \tilde{I}_N \rangle_N$, and for the shuffles, $\tilde{I}^s = \langle \tilde{I}_N^s \rangle_N$. The spatial tuning P value was taken as the fraction of values of s for which $\tilde{I}^s$ exceeded $\tilde{I}$. Cells falling in the top 5% of their respective shuffle distributions were classified as place cells on the basis of their spatial information content. For all cells, rate maps were formed by dividing the number of transients initiated in each spatial bin by the occupancy of that bin. We calculated rate maps with 100 position bins and smoothed with a Gaussian kernel (σ = 3 bins).
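A simplified, self-contained version of this tuning test is sketched below for a single bin count N. It computes the Skaggs-style information from binned transient counts and occupancy times, and builds the null distribution by circularly rotating the counts rather than reassigning individual onset times; that shuffle strategy is a simplification of the procedure above.

```python
import math
import random

def spatial_information(counts, occupancy_s):
    """Information content for one binning: sum_i p_i (lam_i/lam) log2(lam_i/lam).

    counts: transients initiated in each spatial bin.
    occupancy_s: time (s) spent in each bin.
    """
    total_t = sum(occupancy_s)
    lam = sum(counts) / total_t                  # overall transient rate
    info = 0.0
    for c, o in zip(counts, occupancy_s):
        if o == 0 or c == 0:
            continue
        p_i, lam_i = o / total_t, c / o          # occupancy fraction, bin rate
        info += p_i * (lam_i / lam) * math.log2(lam_i / lam)
    return info

def spatial_tuning_p(counts, occupancy_s, n_shuffles=1000, seed=0):
    """P value = fraction of shuffles whose information exceeds the observed."""
    rng = random.Random(seed)
    obs = spatial_information(counts, occupancy_s)
    null = []
    for _ in range(n_shuffles):
        k = rng.randrange(len(counts))           # random circular rotation
        null.append(spatial_information(counts[k:] + counts[:k], occupancy_s))
    return sum(s > obs for s in null) / n_shuffles
```

Subtracting the null mean from both the observed and shuffled values, as in the text, cancels in this single-N comparison; the correction matters when estimates from several values of N are combined.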
To define place fields for cells that were identified as containing significant spatial information, we fit each local maximum in the rate map with a Gaussian, merged overlapping putative fields and then discarded any with an area less than 50% of the largest. Place cell properties. For each cell, we calculated a spatial tuning vector as $$\vec{r} = \frac{\sum_j e^{i\theta_j} / o(\theta_j)}{\sum_j 1 / o(\theta_j)}$$ where θ j is the position of the mouse at the onset time of the j th running transient, and o ( θ j ) is the fraction of running frames acquired at position θ j . The circular variance is defined as 1 minus the magnitude of this mean resultant vector (smaller values convey sharper tuning specificity). Transient sensitivity is defined for a place cell as the fraction of laps in which a significant Ca 2+ transient occurred in a place field. Transient specificity is defined as the fraction of significant Ca 2+ transients that occurred within a place field. Single-cell sparsity is defined as described previously 73 as $$\text{sparsity} = \frac{\left(\sum_{i=1}^{n} r_i / n\right)^2}{\sum_{i=1}^{n} r_i^2 / n}$$ where r i is the transient rate in spatial bin i of n total bins. Lifetime place coding is the fraction of all cells that were ever previously identified as a place cell by the n th session they were imaged. Remapping analysis. Recurrence probability was defined for a given pair of experiments as the fraction of place cells in the first experiment that were also identified as a place cell in the second experiment. The centroid shift for each cell was defined as the distance between the spatial tuning vectors calculated for a pair of experiments. As noted above (see “Goal-oriented learning” section), the actual treadmill belts used for the experiments ranged from 180 to 200 cm, so we normalized the values to the length of the belt to directly compare centroid shift values. These values range on the interval [−0.5, 0.5), and the units have been labeled 'fraction of belt'. In Figures 3 and 4 we plot the absolute value of this shift.
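The spatial tuning vector and circular variance described above can be sketched directly; the occupancy-normalized mean-resultant form (each onset weighted by 1/o(θ_j)) is a reconstruction consistent with the text, not verified against the original analysis code.

```python
import cmath
import math

def spatial_tuning_vector(onsets_rad, occupancy_frac):
    """Occupancy-weighted mean resultant vector of transient-onset positions.

    onsets_rad: belt position (radians) at each running-transient onset.
    occupancy_frac: o(theta_j), fraction of running frames at that position.
    Weighting by 1/o(theta_j) corrects for uneven occupancy along the belt.
    """
    weights = [1.0 / o for o in occupancy_frac]
    resultant = sum(w * cmath.exp(1j * th) for w, th in zip(weights, onsets_rad))
    return resultant / sum(weights)

def circular_variance(onsets_rad, occupancy_frac):
    """1 - |mean resultant vector|; smaller values mean sharper tuning."""
    return 1.0 - abs(spatial_tuning_vector(onsets_rad, occupancy_frac))
```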
A cell was required to have fired at least one transient in both experiments for inclusion. In our analysis of cell firing location following the shifting of local cues, we define a 'cueness' metric for all cells that fired within ±5% (belt units) of the cue before cue shift as $$c = \frac{d_p}{d_p + d_c}$$ where d p is the distance from the activity centroid after cue shift to the position of the preferred cue on the fabric sequence before the cue shift, so a d p value of 0 means that cell maintained its firing at the location where the cue had been. We defined d c similarly, as the distance from activity centroid after cue shift to the new position of the cue after the cue shift, so that a d c value of 0 means that a cell's activity followed the movement of the cue exactly (and a value of 0.5 means it is now at the opposite side of the belt). So the cueness metric, c , has a value of 1 for a cell that followed the cue and a value of 0 for a cell that stayed at the original cue position. All cells with cueness >0.67 were classified as 'cue-preferring' and all cells with cueness < 0.33 were classified as 'place-preferring'. Cueness shuffle distributions were calculated by randomizing the cell identities before and after the cue shift. The fraction of place fields near the reward was defined as the fraction of place cells with a spatial tuning vector within 1/16 of a belt length of the reward zone. Shuffle distributions. For recurrence probability shuffle distributions, we selected every pair of experiments and calculated the fraction of place cells in the first experiment that were still place cells in the second experiment (recurrence probability), as well as the fraction of all cells in the second experiment that were identified as place cells (recurrence probability chance level). We pooled this chance-level calculation across all pairs of experiments in both genotypes to create the shuffle CDF and inset bar.
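A compact implementation of the cueness classification described above, using circular distances in belt-fraction units, might look like the following; the exact distance convention is an assumption of the sketch.

```python
def circular_distance(a, b):
    """Shortest distance between two belt positions, in belt-fraction units."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def cueness(centroid_after, cue_before, cue_after):
    """1.0 if the cell's activity followed the cue, 0.0 if it stayed put."""
    d_p = circular_distance(centroid_after, cue_before)  # stayed at old cue site?
    d_c = circular_distance(centroid_after, cue_after)   # followed the cue?
    if d_p + d_c == 0.0:
        return float('nan')      # degenerate: cue did not move for this cell
    return d_p / (d_p + d_c)

def classify(c):
    """Apply the >0.67 / <0.33 thresholds from the analysis above."""
    if c > 0.67:
        return 'cue-preferring'
    if c < 0.33:
        return 'place-preferring'
    return 'unclassified'
```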
Centroid shift and place/cue-preferring shuffle distributions were calculated by randomly choosing 10,000 pairs of activity centroids (taken from correctly paired experiments but ignoring cell identity) and calculating the difference in centroid position or the distance to the cue/position. Recurrence and stability by position. Recurrence and stability as a function of position were calculated from all data during Condition III, the only condition during which we detected remapping toward the reward location. For every session, we first identified the significantly spatially tuned cells, and then for these place cells we calculated the activity centroid position relative to the reward location (positions after the reward are positive; those before the reward are negative). To get a continuous estimate of recurrence as a function of position, we used nonparametric logistic regression to fit a cyclic cubic spline to whether a place cell recurred (1) or not (0) for all place cell pairs of sessions. Overfitting was controlled for by leave-one-out cross-validation, which determined an appropriate smoothness penalty on the spline. Confidence intervals were calculated by generating 1,000 new datasets of the same size as the original, by resampling with replacement. Splines were fit to each new dataset, and the confidence interval was defined as the 5th and 95th percentiles of the fit values 74 , 75 . Session-to-session place field shift by position was modeled as a continuous series of von Mises distributions, defined as $$f(x \mid \mu, \kappa) = \frac{\exp[\kappa \cos(x - \mu)]}{2\pi I_0(\kappa)}$$ where x is the distance from the reward, $I_0$ is the modified Bessel function of order 0, μ is the offset (mean of the distribution) and κ is the concentration (1/ κ is analogous to variance). Both the offset and concentration parameters are assumed to change smoothly across the belt.
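The von Mises density used here is the standard circular distribution and can be evaluated directly; mapping belt-fraction distances to angles by a factor of 2π is an assumption of this sketch.

```python
import numpy as np

def von_mises_pdf(x, mu, kappa):
    """f(x | mu, kappa) = exp(kappa cos(x - mu)) / (2 pi I0(kappa)), x in radians.

    kappa = 0 gives the uniform density 1/(2 pi); larger kappa concentrates
    mass around mu (1/kappa is analogous to a variance).
    """
    return np.exp(kappa * np.cos(x - mu)) / (2.0 * np.pi * np.i0(kappa))

def belt_shift_pdf(shift_frac, mu_frac, kappa):
    """Density over place-field shifts expressed in belt fractions [-0.5, 0.5)."""
    # map belt fractions to angles; the 2*pi factor preserves unit total mass
    return 2.0 * np.pi * von_mises_pdf(2.0 * np.pi * shift_frac,
                                       2.0 * np.pi * mu_frac, kappa)
```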
We first fit the mean shift (offset) of place fields as a function of their initial position as a cyclic cubic spline, minimizing mean squared error between the predicted and actual second session shift. Using this fit as the offset for the von Mises distributions, we fit the concentration factor again as a cyclic cubic spline, minimizing the negative log-likelihood of the actual data. Similarly, overfitting was controlled by leave-one-out cross-validation to determine the penalty on the second derivative of the splines. Confidence intervals were calculated by resampling the original dataset as described above. LFP acquisition and SWR analysis . Wideband signals were acquired at 25 kHz using a digital acquisition system (Intan Technologies, Los Angeles) from n = 2 WT and 2 Df(16)A +/− mice. For each mouse, LFP signals from a four-channel silicon probe (NeuroNexus, Ann Arbor) centered around the stratum pyramidale layer of CA1 were recorded for 20 min while the mouse was head-fixed on a cue-free treadmill belt, with randomly distributed water rewards. LFP signals were subsequently derived by bandpass filtering wideband signals between 0.1 and 625 Hz and downsampling to 1,250 Hz. For each animal, a pyramidal-layer recording site was chosen based on the amplitude of LFP ripple-events and on its location dorsal to the sites showing prominent negative sharp-waves, which are visible in the stratum radiatum. LFP signals originating in the pyramidal layer during epochs that did not show evidence of muscle-related electrical artifacts and in which the animal was immobile (velocity < 3 cm/s) were included in the analysis. Gabor wavelet spectrograms were computed between 1 and 250 Hz; power within each frequency band was subsequently z -scored within each session. To detect sharp-wave/ripple events, the pyramidal layer LFPs were bandpass filtered at the ripple-band frequency (125 to 250 Hz), rectified, smoothed with a 25-ms STD Gaussian kernel and z -scored. 
For the main analysis, ripples were detected as 'trigger' peaks at least 6 s.d. above the mean, with the ripple 'edges' set at 2 s.d. above the mean. In additional analyses, the trigger threshold was also varied between 2 and 9 s.d. above the mean. Across all conditions, candidate ripple events occurring within 30 ms of each other were concatenated, and only ripples lasting at least 30 ms were included. Ripple incidence rates were calculated by binning immobility epochs into non-overlapping 30-s bins and calculating the ripple incidence within these bins. Modeling goal-directed remapping. All parameters in our model of session-to-session recurrence and remapping were fit from our WT and Df(16)A +/− place cell data, separately for each condition. To determine how the reward affected stability of place cells, we used WT mouse data from sessions during Condition III, the sessions in which we saw robust remapping toward the reward location, to calculate the session-to-session place cell recurrence probability as well as the mean and variance of the place field centroid shift, as a function of the original place field's distance from the reward location ( Fig. 6 and see the “Recurrence and stability by position” section). For the flat model, recurrence, shift offset and shift variance were all set to the mean across all positions from the WT fits. Our model assumes that every cell has a preferred spatial tuning each day and that the tuning is either latent (non-place cell) or expressed as significant spatial activity (place cell). This assumption is supported by the observation that even when a cell is not identified as a place cell, it retains a 'memory' of the last time it was spatially active, firing more closely to the old place field than expected by chance.
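The ripple-detection scheme described above (6 s.d. trigger, 2 s.d. edges, 30-ms concatenation, 30-ms minimum duration) can be sketched directly on a z-scored envelope. This is an illustrative reimplementation, not the original analysis code; in particular, candidate events are threshold-filtered before concatenation here, a simplification of the published order of operations.

```python
import numpy as np

def detect_ripples(z, fs, trigger=6.0, edge=2.0, merge_s=0.030, min_dur_s=0.030):
    """Detect ripple events in a z-scored ripple-band envelope.

    z: 1D array (bandpassed, rectified, smoothed, z-scored LFP envelope).
    fs: sampling rate in Hz. Returns (start, stop) sample-index pairs.
    """
    above = z > edge
    d = np.diff(above.astype(int))
    starts = np.where(d == 1)[0] + 1
    stops = np.where(d == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        stops = np.r_[stops, len(z)]
    # keep supra-edge epochs whose peak reaches the trigger threshold
    events = [(s, e) for s, e in zip(starts, stops) if z[s:e].max() >= trigger]
    merged = []
    for s, e in events:
        if merged and s - merged[-1][1] <= merge_s * fs:
            merged[-1] = (merged[-1][0], e)   # concatenate events <30 ms apart
        else:
            merged.append((s, e))
    # discard events shorter than the minimum duration
    return [(s, e) for s, e in merged if (e - s) / fs >= min_dur_s]
```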
Specifically, for all place cells from pairs of experiments separated by one session, the mean place field centroid shift variance between those two sessions was independent of the spatial information in the middle session ( Supplementary Fig. 11 ). At each iteration (similar to one elapsed session) of the model ( Fig. 7a ) a fixed fraction of non-place cells become spatially active, which we denote as P on (WT: 24.83%; Df(16)A +/− : 20.58%). Place cells remain spatially active as place cells with a position-dependent recurrence probability, which we denote as P recur ( x i ). Finally, all cells shift their place field location, with the new position being drawn from a von Mises distribution with position-dependent offset μ( x i ) and concentration κ( x i ), as described above. For all simulations, we ran eight iterations, similar to the eight transitions between the nine sessions within each condition of our experiment. We calculated the mean enrichment as the mean absolute centroid distance to the reward across all place cells minus the expected mean distance from the reward (0.25). For all model simulations, initial spatial tuning and place cell identity were chosen pseudorandomly; initial place cell identities and masks were randomized until the mean distance to the reward was less than 0.00001 but then held constant across 100 simulations of eight iterations each. For enrichment by iteration curves ( Figs. 7b and 8e and Supplementary Figs. 10a,b and 11 ), the mean and 90% confidence intervals are calculated from the 100 simulations. Final distribution histograms ( Figs. 7c,e and 8f and Supplementary Figs. 10c,d and 11 ) are aggregated across all simulations. To compare the influence of each set of parameters on the final enrichment, we reran the simulation with each of the parameters swapped between WT and flat-model fits ( Fig. 7d,e and Supplementary Fig. 13 ).
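To make the iteration scheme concrete, here is a minimal flat-parameter version of the model, with uniform P_on, P_recur and shift distribution across positions. The values used for P_recur, κ and the initial place-cell fraction are placeholders for illustration, not fitted values from the paper, and the position-dependent fits P_recur(x), μ(x), κ(x) are replaced by constants.

```python
import numpy as np

def simulate_enrichment(n_cells=500, n_iter=8, p_on=0.2483, p_recur=0.5,
                        mu=0.0, kappa=4.0, seed=0):
    """Flat-parameter remapping model: enrichment after each of n_iter steps.

    Positions are belt fractions in [-0.5, 0.5) relative to the reward at 0.
    Enrichment = mean |distance to reward| of place cells minus chance (0.25);
    negative values indicate accumulation of place fields near the reward.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-0.5, 0.5, n_cells)     # latent preferred positions
    is_pc = rng.random(n_cells) < 0.3         # initial place-cell mask (placeholder)
    out = []
    for _ in range(n_iter):
        turn_on = (~is_pc) & (rng.random(n_cells) < p_on)   # latent cells turn on
        recur = is_pc & (rng.random(n_cells) < p_recur)     # place cells recur
        is_pc = turn_on | recur
        # every cell's preferred position shifts by a von Mises draw
        shift = rng.vonmises(2.0 * np.pi * mu, kappa, n_cells) / (2.0 * np.pi)
        pos = ((pos + shift + 0.5) % 1.0) - 0.5
        d = np.abs(pos[is_pc])                # circular distance to reward at 0
        d = np.minimum(d, 1.0 - d)
        out.append(d.mean() - 0.25)
    return out
```

With flat parameters the enrichment should hover around zero; making P_recur or κ peak near the reward, as in the WT fits, is what drives accumulation of fields at the reward location.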
For example, the WT enrichment for swap P recur is the simulation run with all WT-fit parameters, except with P recur kept the same for all positions and equal to the mean, effectively removing the dependence on the distance to reward by flattening out the fits. Statistics. Behavioral results were analyzed with a linear mixed-effect model or mixed-design repeated-measures two-way ANOVA. Differences between conditions were assessed using single linear combinations of the model parameters; the resulting Z statistic was then converted to a P value. All data were tested for equal variance (Levene's test) and for normal distribution (Kolmogorov-Smirnov normality test). Means were compared by two-sample unpaired t tests, unless the variances were significantly different or the data were not normally distributed, in which cases we used Welch's t test or the Mann-Whitney U test, respectively. Wilcoxon rank-sum tests were used to compare genotypes in SWR data. Chi-squared tests, Cox regression or two-way mixed-effects ANOVA were used for all other individual parameter comparisons. The Benjamini-Hochberg procedure was used for multiple-comparison post hoc analyses with FDR = 0.1. Linear regression analyses with Pearson's correlation coefficient were calculated for correlations of behavioral performance and place cell stability and enrichment ( Figs. 3 and 5 ). Comparisons of significant correlations between groups were made with GLM after z -score transformation, with the z -transformed variables on the x axis as covariates. Cox regression was used to compare lifetime place coding between genotypes ( Supplementary Fig. 4a ). All analyses were performed with SPSS software. Unless otherwise noted, values are plotted as mean ± s.e.m. A Life Sciences Reporting Summary is available online. Plotting. For all box and whisker plots ( Figs. 1d, 3b,e and 4f, and Supplementary Fig.
5b ), the center line is the median, the top and bottom of the box denote the 1st and 3rd quartile of the data, respectively, and the whiskers mark the full range of the data. Data availability and code availability. The datasets generated and analyzed during the current study are available in the Dryad Digital Repository at . Data analysis and simulation code are available on GitHub at . Additional information Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | A team of Columbia scientists has found that disruptions to the brain's center for spatial navigation—its internal GPS—result in some of the severe memory deficits seen in schizophrenia. The new study in mouse models of the disorder marks the first time that schizophrenia's effects have been observed in the behavior of living animals—and at the level of individual brain cells—with such high resolution, precision and clarity. The findings offer a promising entry point for attacking a near-universal and debilitating symptom of schizophrenia, memory deficits, which has thus far withstood all forms of treatment. The results of this study were published today in Nature Neuroscience. "An almost intractably complex disorder, schizophrenia is nearly impossible to fully treat—in large part because it acts as two disorders in one," said Joseph Gogos, MD, PhD, a principal investigator at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute and the paper's co-senior author. "On one hand, you have paranoia, hallucinations and delusions; while on the other you have severe memory deficits. Antipsychotic drugs, which treat the first class of symptoms, are entirely ineffective when dealing with the second. The reasons for this are simple: we do not yet understand what happens in the brains of schizophrenia patients." Cracking schizophrenia's code must therefore start with deciphering its biological origins, says Dr.
Gogos, who is also professor of physiology, cellular biophysics and neuroscience at Columbia University Medical Center (CUMC). This has led to a recent focus on the memory impairments that are so common among schizophrenia patients. In this new study, Dr. Gogos teamed up with Attila Losonczy, MD, PhD, a fellow Zuckerman Institute principal investigator, to investigate episodic memory, which is severely impaired in cases of schizophrenia. "Episodic memory is the brain's repository of information about the past; a way of traveling backwards to recall a specific moment in time," said Dr. Losonczy, who was also a senior author. "This type of memory is critical for learning about and functioning in everyday life." For this study, the team focused on a brain region called CA1, located in the hippocampus, which plays a role in both navigation and in episodic memory. Physical alterations to CA1 have been previously reported among schizophrenia patients. CA1 is home to place cells, which collectively form internal maps in the brain critical for navigating one's present surroundings. The CA1 place cells also encode the spatial aspects of episodic memories, such as where you were when you last saw your best friend, or the place your parents always kept the holiday decorations. Video showing neuronal activity during memory formation in the brain of a mouse genetically modified to mimic schizophrenia. Credit: Jeffrey Zaremba/Losonczy Lab/Columbia's Zuckerman Institute "Recent advances in imaging technologies now give us the power to watch the activity of hundreds of place cells in the CA1 in real time while an animal forms and recalls memories," said Dr. Losonczy, who is also an associate professor of neuroscience at CUMC. "We developed experiments to record CA1 activity in mice that were genetically modified to mimic schizophrenia, and compared them to normal, healthy mice." 
The researchers placed both groups of animals on a treadmill under a high-resolution, two-photon microscope, where they were exposed to a variety of sights, sounds and smells (including a water reward placed at unmarked locations on the treadmill). These experiments were designed to test the animals' ability to navigate a new environment, remember how to navigate a familiar one and adapt quickly when that environment was altered. The two groups of mice showed striking differences in behavior and in cell activity. While both groups could successfully navigate a new environment, the schizophrenia-like mice had more trouble remembering familiar environments from day to day, as well as adapting when aspects of that environment changed. By simultaneously tracking the animals' place cells via the two-photon microscope, the team spotted the difference. "When the healthy mice approached something familiar, such as water, their place cells fired with increasing intensity, and then quieted down as the animals moved away," explained Dr. Losonczy. "And when we moved the location of the water, and gave the animals a chance to relearn where it was, the activity of their place cells reflected the new location." But the brains of the schizophrenia-like mice were different. Their place cells did not shift when the water reward was moved. The brain cells' lack of adaptability, the scientists argue, could reflect a key and more general mechanism of memory deficits in schizophrenia. It could also represent a new target for drug intervention. "These studies are helping to build an understanding of a disorder that has remained a biological mystery," said Dr. Gogos. "By pinpointing schizophrenia's many causes, we are opening up multiple points of intervention to slow, halt and even prevent the disorder—which stands to dramatically improve the lives of patients and their families." | 10.1038/nn.4634 |
Medicine | Scientists curb growth of cancer cells by blocking access to key nutrients | Pharmacological activation of REV-ERBs is lethal in cancer and oncogene-induced senescence, Nature (2018). nature.com/articles/doi:10.1038/nature25170 Journal information: Nature | http://nature.com/articles/doi:10.1038/nature25170 | https://medicalxpress.com/news/2018-01-scientists-curb-growth-cancer-cells.html | Abstract The circadian clock imposes daily rhythms in cell proliferation, metabolism, inflammation and DNA damage response 1 , 2 . Perturbations of these processes are hallmarks of cancer 3 and chronic circadian rhythm disruption predisposes individuals to tumour development 1 , 4 . This raises the hypothesis that pharmacological modulation of the circadian machinery may be an effective therapeutic strategy for combating cancer. REV-ERBs, the nuclear hormone receptors REV-ERBα (also known as NR1D1) and REV-ERBβ (also known as NR1D2), are essential components of the circadian clock 5 , 6 . Here we show that two agonists of REV-ERBs—SR9009 and SR9011—are specifically lethal to cancer cells and oncogene-induced senescent cells, including melanocytic naevi, and have no effect on the viability of normal cells or tissues. The anticancer activity of SR9009 and SR9011 affects a number of oncogenic drivers (such as HRAS, BRAF, PIK3CA and others) and persists in the absence of p53 and under hypoxic conditions. The regulation of autophagy and de novo lipogenesis by SR9009 and SR9011 has a critical role in evoking an apoptotic response in malignant cells. Notably, the selective anticancer properties of these REV-ERB agonists impair glioblastoma growth in vivo and improve survival without causing overt toxicity in mice. These results indicate that pharmacological modulation of circadian regulators is an effective antitumour strategy, identifying a class of anticancer agents with a wide therapeutic window. 
We propose that REV-ERB agonists are inhibitors of autophagy and de novo lipogenesis, with selective activity towards malignant and benign neoplasms. Main The cell-autonomous circadian clock pleiotropically coordinates a complex network of physiological processes 1 . In both mice and humans, disruption of circadian rhythms increases cancer incidence 1 , 7 . Given the unique ability of the circadian clock to directly control several pathways that are crucial for tumorigenesis 2 , 8 , 9 , 10 , 11 , pharmacological modulation of circadian components may offer promising selective anticancer strategies. REV-ERBs are haem-binding circadian clock components 6 , 12 , 13 that act as repressors of processes involved in tumorigenesis, including metabolism 5 , 14 , 15 , proliferation 16 and inflammation 2 . Binding to tetrapyrrole haem enhances the repressive function of REV-ERBs 13 . The development of the pyrrole derivatives SR9009 and SR9011 14 as specific agonists of REV-ERBs, with potent in vivo activity, prompted us to investigate whether pharmacological activation of these circadian repressors affects cancer cell viability by restraining pathways that are aberrantly activated in cancer. SR9009 had a cytotoxic effect on cancer cells derived from a range of tumour types, namely brain cancer, leukaemia, breast cancer, colon cancer and melanoma ( Fig. 1a, d, g, j, o ). SR9011 displayed similar cytotoxic properties against the same cancer cell lines ( Extended Data Fig. 1a–j ). Notably, SR9009 and SR9011 are effective against tumour cell lines that harbour a range of oncogenic drivers, including HRAS, KRAS, BRAF, PTEN deficiency and β-catenin ( Fig. 1 , Extended Data Fig. 1 ), but have little or no toxic effect on normal cells at comparable concentrations ( Fig. 1a, b , Extended Data Fig. 1a, b ). Therefore, the antitumour activity of REV-ERB agonists is not limited to a single oncogenic driver, but is instead effective against a broad spectrum of tumorigenic pathways. 
Figure 1: SR9009 is selectively lethal in cancer cell lines driven by different oncogenic signalling. a , SR9009 treatment is cytotoxic specifically in cancer cells (72 h). One-way ANOVA. n indicates biological replicates: astrocytes, n = 12 (mock), n = 12 (2.5 μM), n = 12 (5 μM), n = 15 (10 μM), n = 18 (20 μM); astrocytomas, n = 8 (mock), n = 9 (2.5 μM), n = 10 (5 μM), n = 11 (10 μM) and n = 6 (20 μM), * P = 0.037; and brain-tumour-initiating cells (BTICs), n = 10 (mock), n = 9 (2.5 μM), n = 9 (5 μM), n = 15 (10 μM) and n = 18 (20 μM), ****P < 0.0001. b , SR9009 treatment impairs the viability of BJ-ELR, but not BJ, cells (proliferation assay, 7 days, 20 μM). LT/ST, large T antigen, small T antigen. c , Expression of REV-ERBs in BJ and BJ-ELR cells; quantitative PCR with reverse transcription (qRT–PCR), n = 3 biologically independent samples, two-tailed Mann–Whitney test. Expression of mRNA shown relative to housekeeper RPLP0 . d , Jurkat cell viability is reduced by SR9009; n = 12 biological replicates, 72 h 20 μM, one-tailed Mann–Whitney test, ****P < 0.0001. e , f , Immunostaining ( e ) and quantification ( f ) of cleaved caspase 3 (Cl. Casp. 3) and TUNEL assays (72 h, 20 μM); in f , n = 5 (mock) and 6 (SR9009) biologically independent samples, one-tailed Mann–Whitney test, cleaved caspase 3 assay, ** P = 0.0022; TUNEL assay, ** P = 0.0022. g , Breast cancer cell line MCF-7 viability is reduced by SR9009; n = 12 (mock) or 8 (SR9009) biological replicates, 72 h 20 μM, one-tailed Mann–Whitney test, ****P < 0.0001. h , i , Immunostaining ( h ) and quantification ( i ) of cleaved caspase 3 and TUNEL assays (72 h, 20 μM). In i , n = 5 biologically independent samples, one-tailed Mann–Whitney test; cleaved caspase 3 assay, ** P = 0.004; TUNEL assay, ** P = 0.004. j , Colon cancer cell line HCT116 viability is reduced by SR9009; n = 8 biological replicates, water-soluble tetrazolium salt (WST-1) assay, 72 h, one-tailed Mann–Whitney test, ****P < 0.0001.
k , l , Induction of apoptosis is shown by cleaved caspase 3 and TUNEL staining ( k , 72 h, 20 μM); for quantification in l , n = 8 (mock) or 5 (SR9009) biologically independent samples, one-tailed Mann–Whitney test; cleaved caspase 3 assay, *** P = 0.0008; TUNEL assay ** P = 0.0021. m – o , Prolonged SR9009 treatment eradicates cancer cells (7 days, 20 μM), but does not affect cells that express NR1D1 and NR1D2 shRNA. shNS, non-silencing shRNA. p , NR1D1 and NR1D2 qRT–PCR; n = 4 biologically independent samples; one-tailed Mann–Whitney test, * P = 0.0286. NS, not significant. AU, arbitrary unit, shREV-ERBs, NR1D1 and NR1D2 shRNA. Scale bars, 50 μm. All panels representative of three biologically independent experiments. Data are mean ± s.e.m., except c , mean ± s.d. Levels of REV-ERB mRNA are comparable between normal cells and their transformed counterparts ( Fig. 1c ). The anticancer activity of SR9009 and SR9011 is abolished following the downregulation of REV-ERBs by multiple short hairpin RNAs (shRNAs) ( Fig. 1o, p , Extended Data Fig. 1d, l ). The impairment of cancer cell viability on treatment with SR9009 and SR9011 is due to induction of apoptosis, as assessed by cleaved caspase 3 and terminal deoxynucleotidyl transferase (TdT) dUTP nick-end labelling (TUNEL) assays and further verified by electron microscopy ( Fig. 1e, f, h, i, k, l , Extended Data Fig. 1g–k ). As the tumour suppressor p53 has an important role in apoptosis and is often inactivated in cancer, we tested whether the induction of apoptosis by agonists of REV-ERBs requires p53. Agonist-induced apoptosis was largely intact in cells with compromised p53 function (mutation, deletion or shRNA-mediated downregulation; Fig. 1a, b , Extended Data Fig. 2a–j ), which indicates that the downstream apoptosis trigger is independent of p53.
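The pairwise group comparisons reported throughout these figure legends are one-tailed Mann–Whitney U tests. A minimal sketch of such a comparison in Python; the per-sample percentages below are hypothetical placeholders for illustration, not the paper's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-sample percentages of cleaved-caspase-3-positive cells
mock   = [1.2, 0.8, 1.5, 1.1, 0.9]
sr9009 = [6.4, 7.1, 5.9, 8.2, 6.8]

# One-tailed test: is apoptosis greater under SR9009 than under mock?
res = mannwhitneyu(sr9009, mock, alternative="greater")
print(f"U = {res.statistic}, P = {res.pvalue:.4f}")
```

With small samples and no ties, SciPy computes the exact permutation P value, matching how such n ≈ 5–12 comparisons are usually evaluated.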
Agonists of REV-ERBs do not, therefore, require the presence of wild-type p53 and are effective against several oncogenic pathways; these observations expand the potential therapeutic repertoire of agonists of REV-ERBs against multiple tumour types. The selectivity of agonists of REV-ERBs towards cancer cells suggests that SR9009 and SR9011 may affect cellular processes that are critical specifically for the survival of tumour cells, and not essential for normal cells. The increased production of reactive oxygen species (ROS) is detrimental specifically to cancer cells 17 , insofar as normal cells exhibit a greater tolerance for increased ROS production than do cancer cells. Agonists of REV-ERBs and other circadian clock components regulate mitochondrial metabolism and its oxidative activity 15 , 18 . If ROS overproduction is involved in the enhanced sensitivity of cancer cells to agonists of REV-ERBs, lowering oxidative stress would protect them against the agonists. We co-treated cancer cells with agonists of REV-ERBs and the antioxidant N -acetyl- l -cysteine (NAC). As a second way of lowering oxidative stress, we administered agonists of REV-ERBs under hypoxic conditions. In neither experimental setting was the ability of agonists of REV-ERBs to trigger apoptosis in cancer cells impaired ( Extended Data Figs 2k–n , 3 ), which suggests that excessive ROS production is not involved in the enhanced sensitivity of cancer cells to these agonists. Next we investigated whether agonists of REV-ERBs target anabolic pathways that are selectively critical for cancer cell survival. REV-ERBs tightly control lipid metabolism by repressing several lipogenic enzymes, including fatty acid synthase (FAS) and stearoyl-CoA desaturase 1 (SCD1) 14 . Unlike normal cells, cancer cells are highly dependent on de novo lipogenesis; major efforts are underway to develop cancer therapeutics on the basis of specific inhibitors of FAS and SCD1 19 . 
Agonists of REV-ERBs strongly reduced both mRNA and protein expression of these two key rate-limiting enzymes, which are involved in de novo lipogenesis ( Extended Data Fig. 4a, b ). This reduction led to the perturbation of several fatty acids and phospholipids ( Extended Data Fig. 4c–i ). Because oleic acid is the final product of SCD1 ( Extended Data Fig. 4j ), we investigated whether supplementing culture medium with oleic acid could attenuate the anticancer activity of agonists of REV-ERBs. Oleic acid impaired the anticancer activity of REV-ERB agonists ( Extended Data Fig. 4k ) but did not completely abrogate cytotoxicity, which suggests the involvement of additional mechanisms. By contrast, palmitic acid supplementation did not confer any protection ( Extended Data Fig. 4l ). Cancer cells deal with their high metabolic demands through complex metabolic rewiring that involves the hyperactivation of autophagy 20 . Autophagy is essential for cancer cell survival, whereas normal cells depend on this catabolic cellular process only in starvation conditions 20 . Accordingly, inhibition of autophagy is a promising therapeutic strategy. However, chloroquine and its derivatives, which are the most common autophagy inhibitors, lack specificity and are toxic at high doses, potentially limiting their utility in clinical settings 21 . Autophagy is modulated in a circadian fashion and is controlled by NR1D1 15 , 22 . These observations prompted us to investigate whether the inhibition of autophagy is involved in the anticancer activity of agonists of REV-ERBs. We initially analysed the autophagosome marker LC3B to investigate whether agonists of REV-ERBs affect the numbers of autophagosomes; both SR9009 and SR9011 reduced the number of autophagosomes ( Fig. 2a, b , Extended Data Fig. 5a, b ). This decrease in autophagosomes suggests that the administration of agonists of REV-ERBs inhibited autophagy. 
To expand upon this observation, we tested whether p62 (also known as sequestosome-1), which is a protein that is specifically degraded by autophagy, accumulates following treatment with agonists of REV-ERBs. These agonists induced a marked accumulation of p62 in a range of cancer cell lines ( Fig. 2c–e , Extended Data Fig. 5c–e ). If autophagy has a dominant role in the induction of apoptosis triggered by agonists of REV-ERBs, autophagy inhibition should precede apoptosis induction: indeed, the blockage of autophagy shown by p62 accumulation occurred before the induction of apoptosis ( Fig. 2f, g , Extended Data Fig. 5f, g ). The inhibition of autophagy was further confirmed by analysis of the autophagic flux and by electron microscopy, with the latter showing that autophagosome formation was also impaired on starvation ( Extended Data Fig. 6a–c ). In addition, treatment with agonists of REV-ERBs prevented lysosome turnover, as shown by an increase in the lysosomal protein LAMP1 and by the enhanced activity of LysoTracker Red, which stains acidic vesicles ( Extended Data Fig. 6d, e ). The accumulation of lysosomes was also observed by transmission electron microscopy ( Extended Data Fig. 6f ). Together, these results indicate that agonists of REV-ERBs potently inhibit autophagy. Figure 2: SR9009 agonist of REV-ERBs inhibits autophagy. a , b , SR9009 treatment reduces the number of autophagosomes, as shown by immunofluorescence of LC3B. n indicates biologically independent samples. MCF-7, n = 6 (mock) or 5 (SR9009); breast cancer cell line T47D, n = 5 (mock) or 4 (SR9009). One-tailed Mann–Whitney test; MCF-7 20 μM 24 h, * P = 0.0152, T47D 20 μM 48 h, ** P = 0.0079. c , d , SR9009 induces accumulation of p62 as shown by immunofluorescence. n indicates biologically independent samples; MCF-7, n = 3 (mock) or 8 (SR9009); T47D, n = 5 (mock) or 4 (SR9009). One-tailed Mann–Whitney test; MCF-7 p62 48 h, ** P = 0.0061; T47D 48 h, ** P = 0.0079.
e , Inhibition of autophagy is confirmed by the immunoblot for p62 (20 μM, 48 h, melanoma cell line A375). f , g , Inhibition of autophagy precedes apoptosis induction as shown by immunofluorescence of p62, cleaved caspase 3 and TUNEL assays. n indicates biologically independent samples. One-tailed Mann–Whitney test; A375 20 μM, cleaved caspase 3 assay 48 h, n = 3, * P = 0.0179; cleaved caspase 3 assay 72 h, n = 7, **** P < 0.0001; TUNEL assay 48 h, n = 3, * P = 0.0179; TUNEL assay 72 h, n = 7, **** P < 0.0001; p62 48 h, n = 8, **** P < 0.0001; p62 72 h, n = 9, **** P < 0.0001. h , Starvation markedly accelerates the cytotoxic effect of the REV-ERB agonist SR9009 (A375, 3 days, 20 μM, starvation time 24 h). i , Overexpression of ULK3 impairs SR9009 induction of apoptosis (MCF-7, 6 days, 20 μM). j , Overexpression of ULK2 and LKB1 impairs SR9009 induction of apoptosis (A375, 6 days, 20 μM). k , WST-1 viability assay shows abrogation of apoptosis in ULK2 (left panel) and LKB1 (right panel) overexpressing cells (6 days). n indicates biological replicates. One-tailed Mann–Whitney test; left panel A375, 20 μM, n = 12, empty vector (EV) mock versus EV SR9009, **** P < 0.0001; ULK2 mock ( n = 12) versus ULK2 SR9009 ( n = 10), **** P < 0.0001; right panel A375, n = 12, EV mock versus EV SR9009, **** P < 0.0001; LKB1 mock versus LKB1 SR9009, ** P = 0.0028. Scale bars, 50 μm. All panels representative of three biologically independent experiments with similar results. All data are mean ± s.e.m. For gel source data, see Supplementary Fig. 1 . When challenged with starvation, cancer cells are extremely sensitive to the inhibition of autophagy. The cytotoxicity of SR9009 and SR9011 was enhanced by starvation in a range of cancer cell lines ( Fig. 2h , Extended Data Fig. 6g, h ), which indicates the involvement of autophagy inhibition. Starvation did not induce the expression of REV-ERBs ( Extended Data Fig.
6i, j ), showing that autophagy inhibition is responsible for the increased sensitivity to agonists of REV-ERBs in starved cancer cells. Finally, the overexpression of core autophagy genes ( ULK2 , ULK3 and LKB1 (also known as STK11 )) abrogated induction of apoptosis in various tumour cell lines ( Fig. 2i–k , Extended Data Fig. 6k–p ). Next, we sought to determine how agonists of REV-ERBs block autophagy. We initially compared differential autophagy outcomes observed between chloroquine and agonists of REV-ERBs. Chloroquine inhibits autophagy at a late stage by blocking the fusion of autophagosomes and lysosomes, and thereby leads to the accumulation of autophagosomes ( Extended Data Fig. 6a–c ). By contrast, agonists of REV-ERBs decreased the number of autophagosomes, which suggests that they block autophagy at an early stage. To gain additional mechanistic insights, we investigated whether agonists of REV-ERBs can regulate the expression of core autophagy genes. Analysis of a published report 5 on chromatin occupancy of REV-ERBs revealed the presence of peaks in Ulk3 , Ulk1 , Becn1 and Atg7 ( Extended Data Fig. 7a ). Using HOMER software ( ), we found that NR1D1 and NR1D2 consensus binding sites were also present within these genetic loci ( Extended Data Fig. 7b–e ). Accordingly, ULK3 , ULK1 , BECN1 and ATG7 mRNA and protein levels were downregulated on treatment with agonists of REV-ERBs, whereas the expression of these genes was induced following depletion of REV-ERBs by shRNA ( Extended Data Figs 7f–j , 8a–e ). Furthermore, in REV-ERB-depleted cells, agonists of REV-ERBs did not repress autophagy genes ( Extended Data Figs 7k , 8f ). The activation of aberrant oncogenic stimuli is an early step in tumorigenesis. Oncogene-induced senescence (OIS) arises in normal cells to limit the expansion of cells affected by oncogenic stress 23 , 24 .
Although this provides an immediate benefit by arresting potentially dangerous cells, the accumulation of senescent cells over long periods of time contributes to tumour formation, tumour progression and age-related diseases 25 . Furthermore, the induction of cellular senescence upon anticancer chemotherapy treatment promotes chemotherapy resistance and generates an environment that may support uncontrolled growth of neighbouring cells and fuel relapse 25 ; this highlights the need for senolytic agents. Although de novo lipogenesis is upregulated in cancer cells but not in OIS cells 26 , an elevated level of autophagy is a known characteristic of OIS cells 27 . Agonists of REV-ERBs are lethal when administered to cells characterized by oncogenic RAS signalling ( Fig. 1 , Extended Data Fig. 1 ), affect slowly proliferating cancer stem cells and potently inhibit autophagy ( Fig. 2 , Extended Data Figs 5 , 6 , 8g–l ). We investigated whether treatment of OIS cells with agonists of REV-ERBs would block autophagy and lead to apoptosis. The overexpression of the HRAS proto-oncogene GTPase with the oncogenic mutation G12V (HRAS G12V ) ( Extended Data Fig. 9a ) established OIS, as shown by an increase in senescence-associated β-galactosidase activity and by upregulation of cell-cycle inhibitors ( Extended Data Fig. 9b, c ). Notably, agonists of REV-ERBs triggered the induction of apoptosis in OIS cells without affecting normal proliferating or quiescent cells ( Fig. 3a–c , Extended Data Fig. 9d–g ). In agreement with previous results ( Fig. 2c–e , Extended Data Fig. 5c–e ), treatment with agonists of REV-ERBs led to the accumulation of p62 and lysosomes and a reduction in autophagosomes ( Fig. 3d, e , Extended Data Fig. 9h, i ). Therefore, agonists of REV-ERBs inhibit autophagy in OIS cells. Finally, on overexpression of ULK3, the pro-apoptotic ability of the agonists of REV-ERBs was impaired ( Fig. 3f ). 
These results show that through their ability to block autophagy, these agonists can target a premalignant non-proliferating cellular population. Therefore, agonists of REV-ERBs display senolytic activity in addition to their oncolytic effects. Figure 3: SR9009 and SR9011 treatment evokes an apoptotic response and induces inhibition of autophagy in OIS cells. a , Proliferation assay shows that agonists of REV-ERBs impair viability of OIS cells (6 days, 20 μM). b , c , Immunofluorescence assay for cleaved caspase 3 and TUNEL assay show apoptosis specifically in OIS cells. n indicates biologically independent samples; n = 7 (mock), n = 9 (SR9009) and n = 14 (SR9011). One-way ANOVA, 72 h, 20 μM; cleaved caspase 3 assay, **** P < 0.0001; TUNEL assay, **** P < 0.0001; mean ± s.e.m. d , e , p62 accumulates on treatment with agonists of REV-ERBs, as assayed by immunofluorescence for p62. n indicates biologically independent samples, n = 11 (mock), n = 10 (SR9009) and n = 8 (SR9011). One-way ANOVA, 72 h, 20 μM, **** P < 0.0001; mean ± s.e.m. f , ULK3 overexpression protects OIS cells from cytotoxicity induced by agonists of REV-ERBs; 6 days, 20 μM. Scale bars, 50 μm. All panels representative of three biologically independent experiments with similar results. To understand whether agonists of REV-ERBs represent an effective therapeutic strategy, we investigated whether agonists of REV-ERBs affect OIS and tumour viability in vivo . Naevi are benign lesions consisting of cutaneous melanocytes that have undergone OIS upon aberrant activation of RAS signalling 28 . Consistent with the in vitro observations discussed earlier, SR9009 treatment led to an increase in apoptosis in NRAS-induced naevi in mice, and the repression of autophagy genes ( Fig. 4a–c , Extended Data Fig. 10a ). This indicates the potential for non-proliferating premalignant cells to be selectively targeted by agonists of REV-ERBs in vivo .
However, although naevi have been used as a model for studying OIS in vivo 28 , they do not affect neighbouring tissues and do not develop into melanomas: further studies are necessary to assess the therapeutic relevance of agonists of REV-ERBs as senolytic tools 25 , 29 . As previously reported 14 , 15 , agonists of REV-ERBs do not show overt toxicity; our TUNEL analyses—performed in normal skin and brain tissues—and our body weight analyses confirm this finding ( Extended Data Fig. 10b–d ). By contrast, established anticancer agents such as temozolomide are characterized by several side effects ( Extended Data Fig. 10d ). Figure 4: SR9009 impairs viability of NRAS-driven naevi and glioblastoma growth and extends survival. a , b , SR9009 treatment induces apoptosis in vivo in NRAS-driven naevi, as assayed by immunofluorescence analysis (representative images of two independent experiments with similar results, TRP2 melanocytic marker and TUNEL assay); one-tailed Mann–Whitney test, ** P = 0.0058. n indicates biologically independent samples, n = 7 (mock) and 6 (SR9009), 12 days of SR9009, 20 μM, four mice. Scale bar, 10 μm. c , Autophagy genes are downregulated after treatment of NRAS-driven naevi, n = 4 mice. One-tailed Mann–Whitney; Ulk1 , * P = 0.0249 and Atg7 , ** P = 0.007. d , NR1D2 expression correlates with survival in patients with brain cancer. n indicates biologically independent samples. Yellow line, intermediate expression ( n = 224); green line, downregulated ( n = 119). NIH Rembrandt database ( ); two-sided log-rank, ****P < 0.0001. e , f , SR9009 treatment impairs in vivo growth of glioblastoma. Representative images of one experiment, n = 5 mice, 6 days, 200 mg kg −1 SR9009 twice a day. One-tailed Mann–Whitney test, ** P = 0.004. g , h , SR9009 induces apoptosis in glioblastoma, as shown by TUNEL assay; tumour cells are GFP-positive. Representative images of one independent experiment, 6 days 200 mg kg −1 twice a day.
One-tailed Mann–Whitney test, * P = 0.02. n indicates biologically independent samples, n = 7 (mock) or 8 (SR9009), five mice. i , In vivo treatment with SR9009 results in downregulation of main autophagy genes. Six days, 200 mg kg −1 twice a day, n = 5 mice. One-tailed Mann–Whitney test, * P = 0.0476. j , SR9009 improves survival of mice affected by glioblastoma. SR9009, 100 mg kg −1 . Vehicle, n = 8; SR9009, n = 9 mice; two-tailed log-rank, *** P = 0.0009. k , Scheme illustrating how agonists of REV-ERBs selectively affect OIS and cancer cells. All bar charts mean ± s.e.m. SR9009 is known to cross the blood–brain barrier 14 , and several glioblastoma cell lines—including brain-tumour initiating cells (005 and RIGH), A172 and glioblastoma stem cells derived from patients—are sensitive to treatment with agonists of REV-ERBs in vitro ( Fig. 1a , Extended Data Figs 1a , 2f, g , 8g and 10e ). On analysis of REV-ERB status in glioblastoma data from The Cancer Genome Atlas ( ) 30 , 31 , we observed that NR1D1 and NR1D2 status was unaltered in nearly all patients with glioblastoma ( Extended Data Fig. 10f , and data not shown), which suggests a possible therapeutic use for agonists of REV-ERBs in clinical settings. Finally, NR1D2 expression was positively correlated with survival in brain cancer patients ( Fig. 4d , NR1D1 data are not available). For the above reasons, and because of the low toxicity of agonists of REV-ERBs, we tested SR9009 for treating brain tumours. Brain-tumour-initiating cells were transplanted into mouse brains by stereotaxic injection, and on tumour establishment, SR9009 treatment was initiated. SR9009 reduced glioblastoma growth, triggered apoptosis and downregulated the expression of autophagy genes ( Fig. 4e–i , Extended Data Fig. 10g ). Additionally, SR9009 reduced tumour growth in a xenograft model of a patient-derived glioblastoma ( Extended Data Fig. 10h, i ).
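The survival comparisons above are Kaplan–Meier analyses assessed by log-rank tests. A minimal sketch of the Kaplan–Meier estimator itself, in pure Python with hypothetical survival times (the log-rank statistic and the paper's actual cohorts are not reproduced here):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time per subject; events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival probability) at each time with a death."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)   # events at this time
        total = sum(1 for tt, _ in data if tt == t)    # subjects leaving risk set
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= total
        i += total
    return curve

# Hypothetical survival times (days) for vehicle- vs SR9009-treated mice
vehicle = kaplan_meier([20, 22, 25, 27, 30], [1, 1, 1, 1, 1])
treated = kaplan_meier([28, 33, 35, 40, 45], [1, 1, 1, 0, 1])
print(vehicle)
print(treated)
```

Censored subjects (event = 0) leave the risk set without producing a step in the curve, which is why the treated curve has one fewer step than subjects.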
Most notably, SR9009 effectively and significantly improved survival in two glioblastoma models, including a xenograft derived from a patient ( Fig. 4j , Extended Data Fig. 10j ). The anticancer activity of SR9009 was similar to that of temozolomide, the current therapeutic standard for glioblastoma ( Extended Data Fig. 10j ). Unlike temozolomide, however, SR9009 lacked toxicity. Together, these results indicate that agonists of REV-ERBs are pharmacological tools that target a potentially wide spectrum of tumours and therapeutic windows, are highly selective and exhibit low toxicity ( Fig. 4k ). Importantly, REV-ERB agonists selectively target slowly proliferating tumorigenic and premalignant populations, such as naevi ( Fig. 4k ). We propose that the anticancer activity of agonists of REV-ERBs involves the inactivation of at least two cancer hallmarks 3 : de novo lipogenesis and autophagy, with autophagy—given its importance in meeting the metabolic demands of cancer cells—possibly having a major role. By simultaneously targeting two essential cancer hallmarks, agonists of REV-ERBs may represent an improved therapeutic strategy for treating cancer. These results strongly indicate that pharmacological modulation of circadian machinery is an innovative and selective strategy for cancer treatment ( Fig. 4k ). Methods Cell culture and treatments BJ, WI38, BJ-ELR, A375, Jurkat, MCF7, T47D, HCT116, Becker (astrocytoma line, JCRB Cell Bank), PANC-1 and SK-MEL28 cells were grown under standard tissue-culture conditions and obtained through ATCC. No further authentication was performed. HCT116 p53 −/− cells were a gift from B. Amati. BTICs (005, RIGH) were cultured as previously described 32 . OIS cells were generated as previously described 33 . Quiescent cells were obtained by contact inhibition. Cell lines were tested for mycoplasma contamination. Senescence-associated (SA)-β-galactosidase assay (Cell Biolabs) was performed as previously described 33 . 
SR9009 (Xcessbio, Millipore) and SR9011 (Xcessbio) were dissolved in DMSO for in vitro studies and for ear topical administration; SR9009 and temozolomide (Cayman Chemicals) were dissolved in 15% Cremophor (Sigma-Aldrich) in sterile water for in vivo studies. Hypoxia was induced by lowering the incubator oxygen percentage to 1–2%; oxidative stress was alternatively lowered by NAC supplementation in the medium (10 mM; Sigma-Aldrich). EBSS (Life Technologies) was used to induce starvation. Proliferation assays were performed to assess the cytotoxicity of SR9009 and SR9011 in normal and cancer cells by using crystal violet and cell proliferation reagent WST-1 (Roche); all treatments started when cells were 80% confluent (except for the BTIC experiments). MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) assays were performed according to the manufacturer’s instructions (CellTiter 96 Aqueous One, Promega). Human samples Glioblastoma stem cells (GSCs) were isolated from specimens from patients with glioblastoma, who had undergone surgery at the University of Texas MD Anderson Cancer Center (UTMDACC) 34 . The diagnosis of glioblastoma was established by histological examination, according to the WHO (World Health Organization) classification. Samples derived from patients were obtained with the consent of patients, under an ethically approved Institutional Review Board protocol LAB04-0001 chaired by F. F. Lang (UTMDACC). Tumour specimens were dissociated and resuspended in Dulbecco’s modified Eagle’s medium/F12 (Gibco) supplemented with B27 (×1, Invitrogen), bFGF (20 ng ml −1 , Peprotech), and EGF (20 ng ml −1 , Peprotech). Cells were cultured as neurospheres and passaged every 5–7 days, on the basis of sphere size. Plasmids pBABE-Puro and pBABE-Puro H-RasV12 were used as previously described 33 .
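Viability in assays such as WST-1 is conventionally expressed relative to the mock-treated mean and reported as mean ± s.e.m. A minimal sketch of that normalization, using hypothetical absorbance readings rather than the paper's measurements:

```python
from math import sqrt
from statistics import mean, stdev

def percent_viability(treated_od, mock_od):
    """Normalize WST-1 absorbance readings to the mock-treated mean.
    Returns (mean percent viability, s.e.m.) for the treated group."""
    baseline = mean(mock_od)
    pct = [100.0 * od / baseline for od in treated_od]
    sem = stdev(pct) / sqrt(len(pct))  # s.e.m. = s.d. / sqrt(n)
    return mean(pct), sem

# Hypothetical optical-density readings (arbitrary units)
mock   = [1.00, 1.05, 0.95, 1.02]
sr9009 = [0.40, 0.44, 0.38, 0.42]
m, s = percent_viability(sr9009, mock)
print(f"{m:.1f}% ± {s:.1f}%")
```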
pLKO.1 NR1D1 shRNA (shNR1D1), pLKO.1 NR1D2 shRNA (shNR1D2) (Sigma-Aldrich), pLPCULK3, pLPCLC3B (gift from the Narita laboratory) pLenti-ULK2 (ABM), pBABE-LKB1 (gift from the Shaw laboratory) were obtained as indicated. shNR1D1 no. 1: CCGGGCGCTTTGCTTCGTTGTTCAACTCGAGTTGAACAACGAAGCAAAGCGCTTTTT; shNR1D1 no. 2: CCGGCCAGCCCTGAATCCCTCTATACTCGAGTATAGAGGGATTCAGGGCTGGTTTTT; shNR1D2 no. 1: CCGGGCCCTCCAACTTAGTGATGAACTCGAGTTCATCACTAAGTTGGAGGGCTTTTT; shNR1D2 no. 2: CCGGCCAGTACAAGAAGTGCCTGAACTCGAGTTCAGGCACTTCTTGTACTGGTTTTT. Immunofluorescence microscopy For brain-tissue immunofluorescence, all mice were perfused with 0.9% NaCl followed by 4% paraformaldehyde in PBS. The brains were collected, fixed overnight and transferred to 30% sucrose in PBS. For fluorescent staining, 40-μm coronal sections on a sliding microtome were prepared and imaged with the Zeiss LSM 780 Side Port Laser Scanning Confocal microscope. Mouse ears were fixed in 4% paraformaldehyde and subjected to histology. Paraffin-embedded sections were stained with anti-TRP2 at 2 μg ml −1 (Santa Cruz), TUNEL In Situ Cell Death Detection Kit, Fluorescein (Roche). Cells were fixed and probed as previously described 33 . Images and confocal sections were acquired using the Zeiss LSM 780 Side Port Laser Scanning Confocal microscope. Comparative immunofluorescence microscopy analyses were performed in parallel, with identical acquisition and analysis parameters. ImageJ software (v1.49g) was used to perform quantitative analyses and to assay intensity differences. To count LC3B puncta, after selecting a threshold to minimize any effect of background signal analyses were performed on projected stack using the ImageJ function ‘analyse particles’. 3D Objects Counter was used to analyse intensity. Apoptosis was evaluated by the immunostaining of cleaved caspase 3 (Cell Signaling No. 9664 1:200), and by TUNEL assay using In Situ Cell Death Detection Kit, Fluorescein or TMR red (Roche). Antibodies used were LC3B (Cell Signaling No. 
3868 1:200), Lamp1 (Cell Signaling #9091 1:200), Sqstm1/p62 (Abcam ab56416 1:100), Sqstm1/p62 (MBL PM045) and Ras (BD #610002, 1:200). LysoTracker Red DND-99 (Lifetech) was used to visualize lysosomes. Electron microscopy Cells grown on ACLAR coverslips were fixed in 2.5% glutaraldehyde with 2% paraformaldehyde in 0.15 M cacodylate buffer containing 2 mM calcium chloride, pH 7.4, at 37 °C for five minutes, followed by an hour at 4 °C. The coverslips were then washed in 0.15 M cacodylate buffer containing 2 mM calcium chloride, and secondarily fixed in 1% osmium tetroxide and 0.3% potassium ferrocyanide in the same buffer. Subsequently, the coverslips were washed in water and stained en bloc with 2% uranyl acetate, followed by a graded dehydration in ethanol (35%, 50%, 70%, 90%, 100% and 100%). Samples were then rapidly infiltrated in EPON resin using a Pelco BioWave microwave processing unit (Ted Pella), flat embedded and cured at 60 °C overnight. Regions of interest were excised and remounted on blank resin stubs. Ultrathin (70 nm) sections were then cut on a UC7 ultramicrotome (Leica) and cells were imaged on a transmission electron microscope at 120 kV (Zeiss Libra 120 PLUS). Immunoblotting Cells were lysed in sample buffer and 20–50 μg of whole cell lysate was resolved by gel electrophoresis (Bolt 4–12% Bis-Tris Plus Gels, Life Technologies), transferred to nitrocellulose (iBlot Transfer Stack, nitrocellulose, Life Technologies) and probed with the following antibodies: anti-cleaved caspase 3 (1:250 Cell Signaling No. 9664); anti-vinculin clone hVIN-1 (SIGMA; 1:10,000), anti Sqstm1/p62 (Abcam ab56416; 1:500), BECN1 (Santacruz H-300, sc-11427; 1:250), ATG7 (Sigma-Aldrich, A2856; 1:1,000), ULK1 (Sigma-Aldrich, A7481; 1:250), ULK3 (Sigma-Aldrich, SAB4200132; 1:500), SCD1 (Abcam, ab19862; 1:1,000), FASN (Cell Signaling, 3180; 1:1,000) and Tubulin (Millipore, 05-829; 1:5,000). 
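The LC3B puncta quantification described in the immunofluorescence section (thresholding followed by ImageJ's 'analyse particles') amounts to counting connected above-threshold regions. A rough Python equivalent on a synthetic image, assuming NumPy/SciPy in place of ImageJ:

```python
import numpy as np
from scipy import ndimage

def count_puncta(image, threshold):
    """Count connected above-threshold regions, analogous to ImageJ's
    'analyse particles' applied after thresholding."""
    mask = image > threshold
    _, n = ndimage.label(mask)  # default 4-connectivity in 2D
    return n

# Synthetic 8x8 "image" with two bright puncta on a dark background
img = np.zeros((8, 8))
img[1:3, 1:3] = 200   # punctum 1
img[5:7, 5:7] = 180   # punctum 2
print(count_puncta(img, threshold=50))  # -> 2
```

As in the Methods, choosing the threshold carefully minimizes the contribution of background signal to the count.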
qRT–PCR Total RNA was extracted from cells and tissues using RNeasy (Qiagen) according to the manufacturer’s instructions, and treated with DNase before reverse transcription. cDNA was generated using qScript cDNA SuperMix (Quanta BioSciences). The cDNA was used as a template in real-time quantitative PCR reactions with specific primers on an ABI 7900HT Fast Real-Time PCR System. The reactions were prepared using SYBR Green reaction mix from Roche. The gene (RPLP0) encoding ribosomal protein P0 (RPP0) was used as a control for normalization. Human primer sequences for qRT–PCR: RPLP0-fw, 5′-TTCATTGTGGGAGCAGAC-3′; RPLP0-rev, 5′-CAGCAGTTTCTCCAGAGC-3′; NR1D1-fw, 5′-GCATGGAGAATTCCGCTTC-3′; NR1D1-rev, 5′-CGGTTCTTCAGCACCAGAG-3′; NR1D2-fw, 5′-CATTTCTATATTTGAAAGTAGCCCAAT-3′; NR1D2-rev, 5′-ACTCAATCAAAGAATGTGCTTGTAA-3′; ULK2-fw, 5′-TCAAGCATCTTCCAACCTGTT-3′; ULK2-rev, 5′-ATTCCCGTGGCTCATTCCCAT-3′; LKB1-fw, 5′-GAGCTGATGTCGGTGGGTATG-3′; LKB1-rev, 5′-CACCTTGCCGTAAGAGCCT-3′; ULK1-fw, 5′-AAGCACGATTTGGAGGTCGC-3′; ULK1-rev, 5′-TGATTTCCTTCCCCAGCAGC-3′; BECN1-fw, 5′-CCATGCAGGTGAGCTTCGT-3′; BECN1-rev, 5′-GAATCTGCGAGAGACACCATC-3′; ULK3-fw, 5′-TGAAGGAGCAGGTCAAGATGA-3′; ULK3-rev, 5′-GCTACGAACAGATTCCGACAG-3′; CDKN2B-fw, 5′-GCGGGGACTAGTGGAGAAG-3′; CDKN2B-rev, 5′-CTGCCCATCATCATGACCT-3′; CDKN2A-fw, 5′-CCCAACGCACCGAATAGTTAC-3′; CDKN2A-rev, 5′-ATTCCAATTCCCCTGCAAACT-3′; SCD1-fw, 5′-GACGATGAGCTCCTGCTGTT-3′; SCD1-rev, 5′-CTCTGCTACACTTGGGAGCC-3′; FASN-fw, 5′-CATCGGCTCCACCAAGTC-3′; FASN-rev, 5′-GCTATGGAAGTGCAGGTTGG-3′. Mouse primer sequences for qRT–PCR: ulk1-fw, 5′-GAGCCGAGAGTGGGGCTTTGC-3′; ulk1-rev, 5′-GCCCTGGCAGGATACCACGC-3′; atg7-fw, 5′-CCGGTGGCTTCCTACTGTTA-3′; atg7-rev, 5′-AAGGCAGCGTTGATGACC-3′. Chromatin immunoprecipitation with sequencing data analysis Peak calling was performed using model-based analysis of chromatin immunoprecipitation with sequencing (ChIP–seq) (MACS) 35 with Galaxy Tool Version 1.0.1 35 .
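The qRT–PCR section states only that RPLP0 was used for normalization. The conventional way to turn such Ct values into relative expression is the 2^−ΔΔCt method; the sketch below assumes that method (it is not spelled out in the text) and uses invented Ct values:

```python
def delta_ct(ct_target, ct_ref):
    """Normalize a target gene's Ct to the reference gene (RPLP0 here)."""
    return ct_target - ct_ref

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the standard 2^-ddCt method (assumed,
    not stated in the paper).  All Ct values below are invented."""
    ddct = (delta_ct(ct_target_treated, ct_ref_treated)
            - delta_ct(ct_target_control, ct_ref_control))
    return 2.0 ** -ddct

# Example: the target amplifies one cycle earlier relative to RPLP0
# in treated cells, i.e. roughly a two-fold increase in expression.
print(fold_change(25.0, 18.0, 26.0, 18.0))  # → 2.0
```

Because PCR doubles template each cycle, a one-cycle shift in ΔCt corresponds to a two-fold change, which is what the example returns.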
P -value cutoff for peak detection was selected as ≤10 −5 , and the false discovery rate as ≤0.05. Promoter motif analysis Regions that are 2,000 bp upstream and 100 bp downstream from the transcription start sites of Becn1 , Atg7 , Ulk1 and Ulk3 were scanned using the mouse (mm10) genome annotation using the HOMER v4.9.1 findMotifs.pl script with start = −2,000 and end = 100 and a log odds-score threshold of 5, looking for the motif ‘GTAGGTCACTGGGTCA’ and trained on data from a previous study 36 . Lipid extraction Lipid extraction was performed as previously described 37 , 38 . A172 cells were vortexed for 30 s in a mixture of 1:1:2 PBS:methanol:chloroform. A 13 C 16 -palmitic acid standard (200 pmol per sample) was added to chloroform before extraction. The resulting mixture was centrifuged at 2,200 g , 5 min, 4 °C to separate organic and aqueous phases. The organic phase (bottom layer) was collected and dried under a stream of nitrogen. Lipidomic analysis Lipidomic analysis was performed as previously described 39 . In brief, a Bio-Bond 5U C4 column (Dikma) was used to achieve separation of lipids. The liquid chromatography solvents were as follows: buffer A, 95:5 H 2 O:methanol + 0.03% NH 4 OH; buffer B, 60:35:5 isopropanol:methanol:H 2 O + 0.03% NH 4 OH. A typical liquid chromatography run consisted of the following for 70 min after injection: 0.1 ml min −1 100% buffer A for 5 min, 0.4 ml min −1 linear gradient from 20% buffer B to 100% buffer B over 50 min, 0.5 ml min −1 100% buffer B for 8 min and equilibration with 0.5 ml min −1 100% buffer A for 7 min. Lipidomic analysis was performed using a Thermo Fisher Scientific Q Exactive Plus fitted with a heated electrospray ionization source in negative ionization mode. The MS source parameters were 4-kV spray voltage, with a probe temperature of 437.5 °C and capillary temperature of 268.75 °C. 
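Returning to the promoter motif analysis above: HOMER's findMotifs.pl slides the 'GTAGGTCACTGGGTCA' matrix along each promoter and keeps windows whose log-odds score reaches the threshold of 5. A toy re-implementation of log-odds PWM scanning; the 0.97/0.01 base probabilities are invented for illustration and are not HOMER's trained matrix:

```python
import math

def log_odds_matrix(prob_matrix, background=0.25):
    """Per-position log2 odds of each base versus a uniform background."""
    return [{b: math.log2(p[b] / background) for b in "ACGT"}
            for p in prob_matrix]

def scan(sequence, log_odds, threshold=5.0):
    """Return 0-based start positions scoring at or above the threshold,
    mirroring the log-odds cutoff of 5 used in the text."""
    w = len(log_odds)
    return [i for i in range(len(sequence) - w + 1)
            if sum(log_odds[j][sequence[i + j]] for j in range(w)) >= threshold]

# Toy matrix built around the motif named in the text; 0.97/0.01 are
# invented per-position base probabilities (rows sum to 1).
MOTIF = "GTAGGTCACTGGGTCA"
probs = [{b: (0.97 if b == base else 0.01) for b in "ACGT"} for base in MOTIF]
lo = log_odds_matrix(probs)

print(scan("TTTT" + MOTIF + "AAAA", lo))  # → [4]
```

With these probabilities an exact match scores about 31 bits, well over the cutoff, while even a single mismatch costs ~4.6 bits, so only near-perfect matches survive a threshold of 5 per the same arithmetic HOMER applies.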
Full-scan mass spectrometry data were collected at a resolution of 70 k, automatic gain control target 1 × 10⁶, maximum injection time of 100 ms and a scan range of 150–2,000 m / z . Data-dependent mass spectrometry (top 5 mode) was acquired at a resolution of 35 k, automatic gain control target 1 × 10⁵, maximum injection time of 50 ms, isolation window 1 m / z , scan range 200–2,000 m / z and a stepped normalized collision energy of 20, 30 and 40. Extracted ion chromatograms for each lipid were generated using a threshold of 5 p.p.m. around the molecular anion [M−H] − exact mass. Lipids, acyl chain composition and degree of unsaturation were validated by analysing the product ions in the corresponding tandem mass spectra. Relative quantification of lipids was performed by measuring the area under the peak and dividing this number by the area under the peak for the internal standard 13 C 16 -palmitic acid. In vivo experiments Mice were purchased from The Jackson Laboratories. All the colonies were bred as indicated by Jackson Laboratories and maintained in pathogen-free conditions at The Salk Institute, except NOD.Cg- Prkdc scid Il2rg tm1Wjl /SzJ mice, which were maintained at The University of Texas MD Anderson Cancer Center, and Tyr-Nras Q61K mice and their wild-type counterparts, which were maintained at the University of California, Irvine. C57BL/6J, NOD.Cg- Prkdc scid Il2rg tm1Wjl /SzJ and Tyr-Nras Q61K 3–14-week-old male and female mice were used. No statistical methods were used to predetermine sample size, but it was determined according to previous experimental observations. All the procedures performed in this study were approved by the Institutional Animal Care and Use Committee of the Salk Institute, the University of California, Irvine and the University of Texas MD Anderson Cancer Center. In all experiments, mice were monitored daily for signs of illness and were euthanized when they reached endpoints.
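The lipid quantification described above reduces to two operations: selecting signal within ±5 p.p.m. of the [M−H]− exact mass, and dividing the resulting peak area by that of the 13C16-palmitate internal standard. A hedged sketch of both steps; the spectrum is invented, 255.2330 is the standard monoisotopic [M−H]− mass of palmitate, and 271.2866 approximates the 13C16-labelled standard (16 × ~1.00336 Da heavier):

```python
def ppm_window(exact_mass, ppm=5.0):
    """Symmetric m/z window of +/- ppm around an exact mass."""
    tol = exact_mass * ppm / 1e6
    return exact_mass - tol, exact_mass + tol

def eic_area(peaks, exact_mass, ppm=5.0):
    """Sum intensities of peaks inside the ppm window, a stand-in for
    integrating the extracted ion chromatogram."""
    low, high = ppm_window(exact_mass, ppm)
    return sum(inten for mz, inten in peaks if low <= mz <= high)

# Invented spectrum as (m/z, intensity) pairs.
peaks = [(255.2330, 1000.0),   # palmitate [M-H]-
         (255.2334, 500.0),    # still inside the +/-5 ppm window
         (268.9000, 800.0),    # unrelated ion, excluded
         (271.2866, 400.0)]    # ~13C16-palmitate internal standard

relative = eic_area(peaks, 255.2330) / eic_area(peaks, 271.2866)
print(relative)  # → 3.75
```

At m/z 255 the 5 p.p.m. window is only about ±0.0013, which is why the second peak is captured while the ion at 268.9 is rejected.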
The 005 cells (5 × 10⁴ cells per 1.5 μl) or GSC 8.11 cells (1.5 × 10⁵ per 1.5 μl) were stereotaxically injected into anaesthetized 8–16-week-old mice (C57BL/6J for 005 cells, and NOD.Cg- Prkdc scid Il2rg tm1Wjl /SzJ for GSCs). The following coordinates were used: subventricular zone, 1.5 mm, 2.0 mm and 2.3 mm; hard palate, 2.0 mm, 1.5 mm and 2.3 mm; cerebral cortex, 1 mm, 1 mm, and 0.5 mm or 1.0 mm; 0 mm, 1 mm and 0.5 mm or 1.0 mm; and 2.0 mm, 1.5 mm and 0.5 mm; striatum, 0 mm, 1.4 mm and 3.0 mm (all measurements are posterior, lateral and dorsal to the bregma, respectively). To ensure that each experimental group had an equivalent starting tumour, after tumour sizes were quantified by magnetic resonance imaging, mice were divided into two groups (vehicle and SR9009) and, three weeks after injection, the treatment was started. Similarly, NOD.Cg- Prkdc scid Il2rg tm1Wjl /SzJ mice were imaged by bioluminescence imaging and, after tumour sizes were quantified, they were divided into two (vehicle and SR9009) or three groups (vehicle, SR9009 and temozolomide). The experiments were not randomized and investigators were not blinded to allocation during experiments and outcome assessment. For all bioluminescence imaging, d -luciferin (150 mg kg −1 ) was administered by subcutaneous injection to mice 10 min before imaging. Mice were fed with Picolab Diet 20 No. 5058. SR9009 was administered twice per day by intraperitoneal injection at the indicated concentrations for one week, and on subsequent days once a day unless otherwise stated. Temozolomide was administered once per day by intraperitoneal injection at 82.5 mg kg −1 for 5 days. All mice in this study were kept according to guidelines approved by the Animal Care and Use Committee of the Salk Institute. For the naevi studies, both ears of Tyr-Nras Q61K and wild-type littermate mice at postnatal day 21 were treated with SR9009 or DMSO.
Forty microlitres of drug SR9009 at 20 μM diluted with DMSO was applied to each ear twice daily for twelve consecutive days. Mice were killed one hour after the final treatment. Four mice (eight ears) were used in each group. Statistical analysis Results are shown as means ± s.d. or s.e.m., as indicated in the figure legends. P values were calculated as indicated in figure legends, with 95% confidence level. Data availability All data and reagents are available from the authors upon reasonable request. All gel source data are available in Supplementary Fig. 1 . Tumour source data ( Fig. 4f and Extended Data Fig. 9d, i ) are available as Source Data; ChIP–seq data 5 and The Cancer Genome Atlas data 30 , 31 have previously been published. | Salk researchers have discovered how to curb the growth of cancer cells by blocking the cells' access to certain nutrients. The approach, detailed in a new paper published today in Nature, took advantage of knowledge on how healthy cells use a 24-hour cycle to regulate the production of nutrients and was tested on glioblastoma brain tumors in mice. "When we block access to these resources, cancer cells starve to death but normal cells are already used to this constraint so they're not affected," says Satchidananda Panda, a professor in the Salk Institute's Regulatory Biology Laboratory and lead author of the paper. The circadian cycle, the intrinsic clock that exists in all living things, is known to help control when individual cells produce and use nutrients, among many other functions. Scientists previously discovered that proteins known as REV-ERBα and REV-ERBβ are responsible for turning on and off cells' ability to synthesize fats, as well as their ability to recycle materials—a process called autophagy—throughout the day. In healthy cells, fat synthesis and autophagy are allowed to occur for about 12 hours a day when REV-ERB protein levels remain low. 
The rest of the time, higher levels of the REV-ERB proteins block the processes so that the cells are not flooded with excessive fat synthesis and recycled nutrients. In the past, researchers developed compounds to activate REV-ERBs in the hopes of stopping fat synthesis to treat certain metabolic diseases. Panda and his colleagues wondered whether activating REV-ERBs would slow cancer growth, since cancer cells heavily rely on the products of both fat synthesis and autophagy to grow. "While current cancer research was investigating established cancer hallmarks/characteristic we decided to explore something completely new," says Research Associate Gabriele Sulli, the paper's first and co-corresponding author. "Given the importance of the circadian clock in the regulation of many cellular and physiological processes we hypothesize that targeting the circadian clock with drugs may open the way to novel anticancer strategies. This study is very exciting because it sheds light on a new uncharacterized way to treat cancer with very limited toxicity." Adds Panda, "We've always thought about ways to stop cancer cells from dividing. But once they divide, they also have to grow before they can divide again, and to grow they need all these raw materials that are normally in short supply. So cancer cells devise strategies to escape the daily constraints of the circadian clock." Salk researchers discover how to curb the growth of cancer cells by activating their biological clocks. Credit: Salk Institute Although cancer cells contain REV-ERB proteins, somehow they remain inactive. Panda's team used two REV-ERB activators that had already been developed—SR9009 and SR9011—in studies on a variety of cancer cells, including those from T cell leukemia, breast cancer, colorectal cancer, melanoma and glioblastoma. In each cell line, treatment with the REV-ERB activators was enough to kill the cells. The same treatment on healthy cells had no effect. 
"Activating REV-ERBs seemed to work in all the types of cancer we tried," says Panda. "That makes sense because irrespective of where or how a cancer started, all cancer cells need more nutrients and more recycled materials to build new cells." Panda then went on to test the drugs on a new mouse model of glioblastoma recently developed by Inder Verma, a professor in the Salk Institute's Laboratory of Genetics. Once again, the REV-ERB activators were successful at killing cancer cells and stopping tumor growth but seemed not to affect the rest of the mice's cells. Verma says the findings are not only exciting because they point toward existing REV-ERB activators as potential cancer drugs, but also because they help shine light on the importance of the link between the circadian cycle, metabolism and cancer. "These are all fundamental elements required by all living cells," says Verma. "By affecting REV-ERBs, you get to the heart of how cells grow and proliferate, but there are lots of other ways to get at this as well." Verma says his group is planning follow-up studies on how, exactly, the REV-ERB activators alter metabolism, as well as whether they may affect the metabolism of bacteria in the microbiome, the collection of microbes that live in the gut. Panda's team is hoping to study the role of other circadian cycle genes and proteins in cancer. Panda, Verma and coauthor Alan Saghatelian, also a Salk professor, are all members of the Salk Cancer Center, a National Cancer Institute–designated basic research cancer center. Other researchers on the study were Amy Rommel and Matthew J. Kolar of the Salk Institute; Xiaojie Wang and Maksim V. Plikus of the University of California, Irvine; and Francesca Puca of The University of Texas MD Anderson Cancer Center. | nature.com/articles/doi:10.1038/nature25170 |
Medicine | Strobe eyewear training improves visual memory | "Stroboscopic visual training improves information encoding in short-term memory," L. Gregory Appelbaum, Matthew S. Cain, Julia E. Schroeder, Elise F. Darling, and Stephen R. Mitroff, Attention, Perception, & Psychophysics, July 2012. DOI 10.3758/s13414-012-0344-6 | http://dx.doi.org/10.3758/s13414-012-0344-6 | https://medicalxpress.com/news/2012-07-strobe-eyewear-visual-memory.html | Abstract The visual system has developed to transform an undifferentiated and continuous flow of information into discrete and manageable representations, and this ability rests primarily on the uninterrupted nature of the input. Here we explore the impact of altering how visual information is accumulated over time by assessing how intermittent vision influences memory retention. Previous work has shown that intermittent, or stroboscopic, visual training (i.e., practicing while only experiencing snapshots of vision) can enhance visual–motor control and visual cognition, yet many questions remain unanswered about the mechanisms that are altered. In the present study, we used a partial-report memory paradigm to assess the possible changes in visual memory following training under stroboscopic conditions. In Experiment 1 , the memory task was completed before and immediately after a training phase, wherein participants engaged in physical activities (e.g., playing catch) while wearing either specialized stroboscopic eyewear or transparent control eyewear. In Experiment 2 , an additional group of participants underwent the same stroboscopic protocol but were delayed 24 h between training and assessment, so as to measure retention. In comparison to the control group, both stroboscopic groups (immediate and delayed retest) revealed enhanced retention of information in short-term memory, leading to better recall at longer stimulus-to-cue delays (640–2,560 ms). 
These results demonstrate that training under stroboscopic conditions has the capacity to enhance some aspects of visual memory, that these faculties generalize beyond the specific tasks that were trained, and that trained improvements can be maintained for at least a day. Exposing an organism to an altered visual environment often results in modifications to the visual system of the organism (e.g., Blake & Hirsch, 1975 ; Webster, Mizokami, & Webster, 2007 ). As humans, our typical visual experiences begin with a continuous and uninterrupted flow of incoming information, and our visual system has developed the ability to transform this information into a seamless and continuous perceptual experience of motion and form. What would happen if the stream of input to the visual system were altered to only allow successive glimpses of the world? Intermittent, or stroboscopic, vision provides an interesting manipulation because it interrupts the normal flow of visual information, and therefore reduces the feedback that is available to guide movements as they are carried out. For example, imagine trying to catch a ball while in a stroboscopic environment. Because you cannot see continuously, you are forced to extrapolate between discrete visual samples to correctly judge the ball’s trajectory. If the stroboscopic rate is sufficiently fast, visual feedback is frequent, and catching the ball is not complicated. However, as the rate decreases, more time passes between visual samples, and essential visual information is lost. Under these conditions, trying to catch a thrown ball becomes quite difficult. When placed in a stroboscopic environment, one must adjust to perform visual tasks adequately, and several possible mechanistic changes could be made.
For example, individuals may seek to compensate for their lack of continuous vision by preferentially storing information in visual memory to facilitate perception and/or motor control. This need for compensation suggests that stroboscopic environments may offer an especially powerful research tool for studying visual processing and the interplay between vision and motor actions. Previous research has used intermittent visual experience to study what aspects of sight are important for regulating perceptual–motor performance, such as during driving (Senders et al., 1967 ), one-handed catching (Lyons, Fontaine, & Elliot, 1997 ), or manual aiming (Elliott, Chua, & Pollock, 1994 ). More recently, training in a stroboscopic environment was shown to enhance visual sensitivity in a number of “low-level” perceptual domains, including foveal motion sensitivity and transient attentional selectivity (Appelbaum et al., 2011 ). For example, participants who practiced physical activities under stroboscopic conditions were more accurate afterward at reporting briefly presented stimuli in a dual-task setting than were matched controls who performed the same physical activities under full-vision conditions. Similarly, improvements in anticipatory timing have been observed after stroboscopic training (Smith & Mitroff, under review), and professional ice hockey players have been found to improve in their on-ice skills after stroboscopic training (Mitroff et al., under review). While not all aspects of visual cognition tested in these studies were altered by stroboscopic training (suggesting that the observed effects were not solely due to motivational differences; see the Discussion section), the findings above indicate that experience with stroboscopic vision can enhance some aspects of perception and visual–motor control. 
In the present study, we expanded upon these research findings to investigate the hypothesis that stroboscopic vision may force individuals to more robustly engage visual memory for successful motor planning (e.g., through greater retention of motion samples in order to estimate motion trajectories), and that this increased engagement may lead to enhancements in the early stages of visual memory. In particular, we focused here on the interplay between visual sensory memory and visual short-term memory. Visual sensory memory, also called “iconic memory” (Neisser, 1967 ), refers to the very brief, precategorical, representation of a visual stimulus that persists following the disappearance of that item (Coltheart, 1980 ; Gegenfurtner & Sperling, 1993 ; Sperling, 1960 ). This initial, short-lived, and high-capacity system is accessible for several hundred milliseconds and has been shown to be a critical component of motor planning (Elliott et al., 1990 , 1994 ). In the second stage of memory, “visual short-term memory,” a subset of the elements contained in the initial sensory buffer are amplified and sustained for a short period, thereby creating a more durable memory store (Chun & Potter, 1995 ; Hoffman, 1979 ). This second stage has been shown to depend on top-down factors, such as attention (e.g., Awh, Vogel, & Oh, 2006 ; Cowan & Morey, 2006 ), and to be susceptible to training-based plasticity and learning (Berry et al., 2010 ; Klingberg, 2010 ; Westerberg et al., 2007 ). To assess possible effects on visual memory due to stroboscopic experiences, participants in the present study either completed an experimental condition, in which they trained with stroboscopic eyewear, or a control condition, in which they underwent identical training with nonstroboscopic eyewear. Visual memory was then assessed either immediately after training (Exp. 1 ) or after a 24-h delay (Exp. 
2 ), which provided a means to test for both immediate and delayed changes to performance as a result of stroboscopic training. Stroboscopic vision was created via Nike Vapor Strobe™ eyewear, which contains liquid-crystal-filtered plastic lenses that can alternate between clear and opaque states. The clear state is held constant at 100 ms and the opaque state (which is a dark gray) can vary along eight levels between 67 and 900 ms. The nonstroboscopic eyewear was identical except that the lenses were replaced with clear plastic. The training consisted of multiple sessions of athletic activities during which participants performed simple drills such as throwing and catching. All participants were tested on a modified “partial-report task” (Lu et al., 2005 ; Sperling, 1960 ) before and after training. This task measures memory retention by cueing participants at various delays to report the identity of a subset of items from a larger display that was briefly presented and removed. Because participants do not know which items will be cued for recall, performance on this task can be regarded as a random sample of the memory for the entire display, thereby providing a means to assess memory retention at delays spanning sensory and short-term durations (Long, 1980 ). Alterations in the time course and capacity of memory were assessed through psychometric modeling of three parameters (see Lu et al., 2005 ): the initial visual availability of stimulus information ( a 1 ); the duration of sensory memory, determining how much information is available for transfer into short-term memory ( τ ); and the amount of information retained in short-term memory ( a 0 ). Experiment 1: Immediate assessment Method Participants For Experiment 1 , 84 participants completed the full training and testing protocol, and they were drawn from two cohorts: in-lab and team-based . 
In-lab participants were members of the Duke University community and were recruited through campus advertisements and the Psychology Department participant pool. Team-based participants were members of three Duke University varsity athletic teams (men’s soccer, women’s soccer, and men’s basketball) and were recruited through the team coaches and athletic trainers. For all analyses, we categorized participants into one of two training cohorts: in-lab participants or varsity athletes (there were no differences across the three participating teams, so we collapsed their data). Participants were advised not to participate if they had a history of seizures, migraines, or light sensitivity. Each participant was compensated for the computer-based testing with either cash or with experiment participation credit in partial fulfillment of a Psychology Department requirement. Voluntary informed consent was obtained for every session in accordance with the Duke University Institutional Review Board. Of the 84 participants, 43 were assigned to the strobe condition and 41 to the control. One participant from each group was excluded for failure to follow directions. The data were then prescreened for outliers whose overall accuracy on the partial-report task fell ± 2 standard deviations from the group average. On the basis of these criteria, the data from two strobe and three control participants were removed, leaving 40 strobe and 37 control participants in the final analysis. Study design Each participant completed two aspects of the study: computer-based assessments and visual–motor training. The computer-based assessments were administered prior to training and immediately after the final training session.
Participants were assigned to either the strobe training condition or the control training condition randomly, for the in-lab participants, and pseudorandomly by athletic skill set, for the team-based participants (e.g., half of the soccer forwards were randomly assigned to the strobe condition and half to the control condition). Computer-based iconic memory assessment Computer-based assessments were administered via Dell Inspiron computers running MATLAB R2010a and the Psychophysics Toolbox ( ). All computers were attached to CRT monitors that were calibrated in order to assure that all visual stimuli were the same size, regardless of slight differences in screen size. Monitors were set to a 75-Hz screen refresh and 1,280 × 1,024 resolution. Assessments for the in-lab participants were collected in the Visual Cognition Laboratory at Duke University with one or two participants at a time. Team-based participants either completed the assessments in the same lab or in a temporary computer lab created in the basketball practice facility, which could accommodate up to four participants at the same time. Some team-based participants also performed additional computer-based assessments measuring motion interpolation (flash-lag effect) and simple reaction times, but these tasks were not included in the final analysis because of the more limited sample size. The partial-report task was a modified version of the task from Lu et al. ( 2005 ). Visual stimuli (Fig. 1 ) were viewed by the participants from an approximate distance of 57 cm in a dimly lit room. Each trial began with a black fixation cross at the center of a gray (30 cd/m 2 ) background. After 400 ms, eight black uppercase letters (each 1.3º × 1.3º) appeared for 105 ms, arranged on an imaginary circle (3.50º radius) around fixation. The letters were drawn randomly from the set “D,” “F,” “J,” and “K,” with the constraint that no neighboring letters were the same. 
The letter display was replaced by a fixation cross and then a red line (1º in length with a circle at the end) appeared after a variable delay, or interstimulus interval (ISI), of 13, 40, 80, 160, 320, 640, 1,280, or 2,560 ms. The line pointed randomly at one of the eight letter locations and remained visible until a response was given. The participants were to report the identity of the letter at the cued location using the corresponding key on a standard keyboard. To provide a performance baseline with no memory component, some previous partial-report studies had implemented precue and simultaneous-cue conditions (e.g., in Lu et al., 2005 , where the arrow could also appear 147 ms before or simultaneously with the letter array). These conditions were omitted here for logistical reasons, as we wanted to present a difficult task and were under a time constraint for some testing sessions. Importantly, these baselines were not necessary for the present purposes, since we were comparing performance within individuals from their pretraining to their posttraining sessions. Fig. 1 Depiction of a sample trial of the partial-report task (not drawn to scale). Each trial started with a central fixation that was replaced by a circular array of eight letters. Participants were instructed to maintain central fixation, and then after a variable delay a red line (shown here in black) appeared at fixation indicating the location of the target letter, which the participant was to report via a keyboard buttonpress. Stroboscopic training regimens The activities engaged in during the stroboscopic training were tailored to each participant cohort, but importantly, the strobe condition and the control condition were always run in the same manner within each cohort. Prior to training, participants were instructed how to operate the eyewear.
They trained with the eyewear for a specified duration in each training session, as is described for the three training cohorts below and summarized in Table 1 . In general, training began at the fastest (i.e., easiest) strobe rate (6 Hz) and was made progressively harder by reducing the strobe rate (i.e., increasing the occlusion length) over the course of the training. (Note: The visible period was always constrained to be 100 ms.) Table 1 Summary of Experiment 1 training cohorts In-lab: participants, training, and assessments A group of 58 participants made two visits to the lab and undertook the same training regimen reported in Appelbaum et al. ( 2011 ). On the first day, individuals completed the computer-based assessments and then participated in a 27-min training session that consisted of forward-facing and turn-and-catch drills (see Appelbaum et al., 2011 , for more details). For the strobe group participants, the strobe eyewear started at Level 1 and was increased up to Level 6 on the basis of catching performance (control participants underwent the same procedure, but the eyewear remained transparent throughout). All participants returned to the lab within 48 h to complete a second, identical training session and then were immediately readministered the iconic memory computer-based assessments. In-lab training was conducted by members of the research team in a well-lit 20-foot hallway near the computer assessment room. Men’s and women’s varsity soccer: participants, training, and assessments Members of the men’s ( n = 8) and women’s ( n = 8) soccer teams participated in a multiday training version of this study. Computer assessments were administered in the Visual Cognition Laboratory within one week prior to the start of the first training session. The men’s team completed six training sessions, and the women’s team completed seven.
All sessions, except for the final one, were conducted at the teams’ practices and consisted of typical soccer activities, such as passing and dribbling drills. Participants on each team were split evenly into the strobe and control conditions. The stroboscopic frequency level for the strobe condition varied across training sessions—during some drills, training was done at a single rate, and for others the frequency was altered from faster to slower rates at set time intervals between organized drills. Overall, strobe condition participants primarily experienced Levels 2 – 4 (5 – 3 Hz). The control participants did everything exactly the same, including pressing the buttons on the eyewear to change the level, but their lenses remained transparent throughout. The final training session for each team member was conducted outside of the Visual Cognition Laboratory (in participant pairs or with a lab member). This session lasted 24 min, was modeled after the teams’ practice sessions, and was completed immediately before the posttraining computer-based assessments. The time from the initial training session to the final training session and posttraining assessment was no more than two weeks for any participant. Physical measures (cone dribbling times) were also collected, but are not reported here. Men’s varsity basketball: participants, training, and assessments Computer-based assessments were administered before and after training. All testing and training were conducted in the basketball training building on campus. Ten team members (six strobe/four control) participated in five or six total training sessions that were led by coaches, athletic trainers, or senior members of the team. These sessions were completed within an eight-day period. The training consisted of warm-up and agility drills, with variability in timing (between 15 and 40 min) and activities across sessions. 
The stroboscopic frequency level for the basketball team varied across training sessions, with participants primarily experiencing Levels 2–4 (5–3 Hz), and control participants mimicking the same procedure but with eyewear that remained transparent throughout. The final training session took place immediately before the posttraining computer-based assessments. A physical measure (free-throw shooting) was collected, but is not reported here.

Results

All participants produced typical memory decay functions, with high accuracy at short ISIs and performance falling to near chance at the longest ISI (Fig. 2a). At the pretraining assessment, performance did not differ between the strobe and control conditions at any ISI (all p values > .18). For both training conditions, performance improved from pretraining to posttraining; however, the magnitude of this improvement was significantly larger in the strobe participants than in the control participants (Fig. 2b). A mixed-model ANOVA performed on the accuracy data with the between-participants factors Cohort (in-lab vs. varsity) and Condition (strobe vs. control) and the within-participants factors Session (pre vs. post) and ISI (eight delays) revealed significant within-participants main effects of Session [F(1, 73) = 20.34, p < .001] and ISI [F(7, 289.34) = 670.00, p < .001], and a significant Session × Condition interaction [F(1, 73) = 4.95, p = .029]. The between-participants factor Cohort was not significant [F(1, 73) = 0.66, p = .42] and did not interact with any other factors. This lack of a cohort effect suggests that none of the potential differences between the in-lab and team-based participants (e.g., number of training sessions, initial athletic skill, knowledge of more than one experimental condition, or location of training) had a meaningful impact.

Fig. 2: (a) Accuracy as a function of interstimulus interval (ISI) for the strobe (black) and control (gray) measures at pretraining (dashed lines) and at posttraining (solid lines). (b) Retest improvement for the strobe (black) and control (gray) groups. The ISIs tested in the present experiment are indicated by arrows at the bottom of each plot.

Paired comparisons of the pre- to posttraining differences (Table 2) for the two conditions indicated that while both training regimens resulted in improved performance for several of the short ISIs, only the strobe training resulted in significantly improved performance at the longer ISIs (≥640 ms).

Table 2: The p values for paired comparisons between pretraining and posttraining accuracy at each ISI

Parameter estimates

In order to characterize the temporal properties of iconic memory, the data were converted from four-alternative forced identification to d' (a measure of sensitivity; see, e.g., Wickens, 2001), and the following exponential-decay function was fit to the data:

$$ d'(\text{ISI}) = a_0 + a_1 e^{-\text{ISI}/\tau}. $$ (1)

In this three-parameter function, a1 is the fast-decaying sensitivity that reflects the initial visual availability of stimulus information, τ is the time constant of the fast-decaying sensitivity that represents the duration of iconic memory, and a0 is the sensitivity at long delays that reflects the amount of information transferred into short-term memory without the benefit of cuing (see Lu et al., 2005). The corresponding model parameters were derived for each participant’s pretraining and posttraining performance (Table 3A).

Table 3: (A) Best-fitting parameters for the strobe and control conditions at pretraining and posttraining, as well as the paired-comparisons significance levels of their differences.
(B) Posttraining minus pretraining differences in parameter estimates with the between-subjects comparisons significance levels.

Paired comparisons (one-tailed) between the parameter estimates (see Table 3A and B) revealed a significant increase in the a1 parameters for both the strobe (p = .006) and control (p = .04) conditions, indicating that the fast-decaying sensitivity to the initial visual stimulus information was improved at retest for both conditions. This retest improvement was statistically equivalent for the two groups (p = .371). No difference was observed for either group on the τ parameter (and there was not a significant difference between conditions, p = .299), indicating that the time constant of the fast-decaying sensitivity that represents the duration of sensory memory was not changed at retest. Confirming the differences between conditions in retest improvement at the longer ISIs shown in Fig. 2, stroboscopic training resulted in a significantly higher a0 parameter estimate at retest (p = .019), while control training did not produce a statistically significant difference in this parameter (p = .303). The significant difference between conditions on retest improvement in this parameter (p = .035) indicates that stroboscopic training increased the amount of information retained in short-term memory.

Experiment 2: 24-hour delayed assessment

The goal of this experiment was to examine whether the training effects observed in Experiment 1 for the strobe group would last for 24 h after the final training session. Such retention is an important facet of learning, and here we probed this issue by testing a separate group of participants with an approximately 24-h delay inserted between training and the posttraining assessment.
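The two analysis steps described above, converting four-alternative forced-choice accuracy to d' and fitting the three-parameter decay of Eq. 1, can be sketched as follows. This is an illustrative re-implementation on synthetic data, assuming NumPy/SciPy; it is not the authors' analysis code. The d' conversion inverts the standard m-AFC signal-detection relation, and the synthetic parameter values are arbitrary.

```python
# Sketch of the analysis pipeline: (1) convert m-AFC proportion correct to d'
# by inverting Pc(d') = integral of phi(x - d') * Phi(x)^(m-1) dx, then
# (2) fit d'(ISI) = a0 + a1 * exp(-ISI / tau) with nonlinear least squares.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, curve_fit
from scipy.stats import norm

def pc_mafc(dprime, m=4):
    """Predicted proportion correct in an m-AFC task for a given d'."""
    integrand = lambda x: norm.pdf(x - dprime) * norm.cdf(x) ** (m - 1)
    return quad(integrand, -8, 8)[0]

def dprime_from_pc(pc, m=4):
    """Numerically invert pc_mafc; chance performance (1/m) maps to d' = 0."""
    return brentq(lambda d: pc_mafc(d, m) - pc, -1, 6)

def decay(isi, a0, a1, tau):
    """Eq. 1: sensitivity as a function of stimulus-to-cue delay (ms)."""
    return a0 + a1 * np.exp(-isi / tau)

# Synthetic participant with assumed parameters a0 = 0.8, a1 = 2.0, tau = 200 ms,
# evaluated at the eight ISIs used in the experiment.
isis = np.array([0, 40, 80, 160, 320, 640, 1280, 2560], dtype=float)
true = (0.8, 2.0, 200.0)
dprimes = decay(isis, *true)
est, _ = curve_fit(decay, isis, dprimes, p0=(1.0, 1.0, 100.0))
```

On noiseless synthetic data the fit recovers the generating parameters; on real participant data the same call would yield the per-participant a0, a1, and τ estimates compared in Table 3.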
We chose this time frame because it involves an additional night of sleep, and sleep-dependent memory consolidation is known to improve learning (e.g., Censor, Karni, & Sagi, 2006; Mednick et al., 2009).

Method

All aspects of this experiment were identical to those of Experiment 1, except for the following points. The participants (N = 33) were all run through the in-lab training protocol, and all completed training using the strobe eyewear. Two assessments were collected on each participant: the modified partial-report task described in Experiment 1, and a motion coherence task (see Appelbaum et al., 2011). These assessments were counterbalanced in order, and we will focus here on the memory task (the motion coherence data can be found in the supplementary materials). Participants were recruited for a three-day protocol: Day 1, pretraining assessments and first in-lab training session; Day 2, second in-lab training session; Day 3, posttraining assessments. The posttraining assessment was administered the day following the second and final training session and was scheduled to be as close to 24 h later as possible, given the participants’ scheduling constraints (mean = 23 h 5 min, standard deviation = 1.99 h). The data from two participants were excluded from the final analysis, as their mean accuracy was greater than two standard deviations from the group’s mean. By maintaining all other aspects of the strobe condition of Experiment 1, this “retention” condition provided a direct means to explore preservation of the stroboscopic training effects: We could simply compare the parameter estimates produced here to those from the strobe and control conditions from Experiment 1. If there was significant retention of the stroboscopic training, the parameters should be significantly different from those in Experiment 1’s control condition, but not from those in the strobe condition.
Results

Retention condition

As was observed in Experiment 1, the retention condition produced typical memory decay functions (Fig. 3a) and demonstrated improvement from pretraining (dashed) to posttraining (solid). An ANOVA performed on the accuracy data for the within-participants factors Session (pre vs. post) and ISI (eight delays) revealed significant main effects of Session [F(1, 30) = 25.36, p < .001] and ISI [F(3.54, 106.2) = 299.27, p < .001], as well as a nonsignificant interaction [F(4.66, 139.9) = 0.475, p = .78]. Paired comparisons of the pre- to posttraining differences, shown in Table 4, indicated that the main effect of retest was driven by significant improvement at a number of the short and long ISIs (gray shading).

Fig. 3: (a) Accuracy as a function of interstimulus interval (ISI) for the retention condition (Exp. 2) at pretraining (dashed line) and at posttraining (solid line). (b) Retest improvement for all three conditions (Exps. 1 and 2) across ISIs. The ISIs tested are indicated by arrows at the bottom of each plot.

Table 4: The p values for paired comparisons between pretraining and posttraining accuracy at each ISI for the retention condition

Comparison between the strobe, control (Exp. 1), and retention (Exp. 2) conditions

The primary goal of Experiment 2 was to examine whether the stroboscopic training effects observed in Experiment 1 could be retained over a 24-h delay. A significant main effect of session and significant paired-comparison t tests (cf. Table 3) demonstrated that the retention condition did, in fact, result in improved visual sensory memory performance. However, as can be seen in Fig. 3a, there were some differences between the retention condition and the strobe and control conditions from Experiment 1.
For example, the retention condition produced a significant difference from pretraining to posttraining for the 320-ms ISI (Table 4), whereas neither the strobe nor the control condition did so in Experiment 1 (Table 2). The parameter estimates a1, τ, and a0 offer the most direct means to compare the retention condition to the strobe and control conditions. Estimates were derived for 30 of the 31 retention participants, with one excluded participant having estimates that failed to converge. For the retention condition, paired-comparison one-tailed t tests for the pretraining and posttraining estimates revealed significant increases for a1 [t(29) = 2.43, p = .011] and a0 [t(29) = 1.88, p = .035], but not for τ [t(29) = 0.81, p = .22] (white bars in Fig. 4).

Fig. 4: Posttraining minus pretraining parameter estimates for the strobe, control, and retention conditions (black, gray, and white bars, respectively). A significant improvement in a1 emerged for all three conditions, and no significant training effect on τ for any condition. The strobe and retention conditions revealed significant improvements for a0, but the control condition did not, indicating improvements in visual memory encoding from stroboscopic training that were retained for at least 24 h following training. *p < .05

To compare the retention condition to the Experiment 1 conditions, a separate one-way ANOVA was conducted for each parameter estimate (see Fig. 4). The a1 parameter did not differ across the three conditions [F(2, 106) = 0.37, p = .69], indicating that comparable improvements were seen for all three groups. The τ parameter, which was not reliably changed from pre- to posttraining for any of the three conditions, also did not differ significantly across conditions [F(2, 106) = 0.54, p = .58]. The a0 parameter revealed a trending relationship across the three conditions [F(2, 106) = 2.26, p = .11].
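The comparison logic above (an omnibus one-way ANOVA on a parameter's pre-to-post change across the three conditions, followed by planned pairwise tests) can be sketched on simulated data. The group sizes, means, and spread below are illustrative assumptions, not the study's data, and SciPy is assumed.

```python
# Illustrative re-creation of the between-condition comparison: a one-way
# ANOVA on simulated a0 improvements for three groups, then planned pairwise
# t tests against the retention group. Numbers are made up for the sketch.
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)
# Simulated a0 changes: strobe and retention shifted upward relative to control.
strobe = rng.normal(0.25, 0.3, 40)
control = rng.normal(0.00, 0.3, 37)
retention = rng.normal(0.25, 0.3, 30)

# Omnibus test across the three conditions.
f_stat, p_omni = f_oneway(strobe, control, retention)

# Planned comparisons (two-tailed here; a one-tailed p is p/2 when the
# observed effect is in the predicted direction).
t_rs, p_rs = ttest_ind(retention, strobe)
t_rc, p_rc = ttest_ind(retention, control)
```

The qualitative prediction mirrors the reported pattern: retention should differ from control but not from strobe if the training effect survives the delay.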
Having already established a difference between the strobe and control conditions in Experiment 1, planned paired comparisons were conducted between each of these Experiment 1 conditions and the retention condition. These comparisons demonstrated that there was no significant difference between the retention and strobe conditions [t(68) = 0.33, p = .37], but there was between the retention and control conditions [t(65) = 1.84, p = .035], thereby supporting the view that training-based improvements were maintained for at least 24 h.

Discussion

Previous research has shown that stroboscopic training can improve performance, but much remains unknown about how it does so. This important question has begun to be answered with evidence that stroboscopic training can affect processes such as anticipatory timing (Smith & Mitroff, 2012), eye–hand coordination (Bennett et al., 2004; Mitroff et al., 2012), and basic perceptual abilities (Appelbaum et al., 2011). The goals of the present study were to determine whether measurable changes to visual memory could also be observed, how this would manifest across the different stages of visual memory, and whether such improvement could be retained for a day following the training. The results from Experiment 1 demonstrated an increase in short-term memory capacity following stroboscopic training. Parameter estimates of the time course of visual memory informed us about what aspects of the process were affected by the training. First, both the strobe and control conditions experienced significant increases in initial visual sensitivity at retest (parameter a1), suggesting a general test–retest improvement in the ability to process the incoming visual information. Second, neither group experienced a change in the duration of sensory memory (parameter τ), suggesting no effect of training on the ability to transfer information from sensory memory into short-term memory.
Third, and critically, only the strobe group experienced improved performance at long cue delays (parameter a0), indicating a specific benefit of stroboscopic training that led to an increase in the amount of information that could be retained in the more durable short-term memory store. The results of Experiment 2 replicated the strobe condition findings from Experiment 1 with a posttraining assessment coming after a 24-h delay. This indicates that the observed visual memory benefits were not only immediate, but were maintained after training.

Implications for the study of visual memory

Visual memory involves the ability to store and retrieve previously experienced visual sensations when the original stimuli are no longer present. This ability is typically conceptualized as reflecting three separate temporal stages: sensory memory, short-term memory, and long-term memory (Atkinson & Shiffrin, 1968). Residing at the front end of this memory cascade, sensory memory is a crucial faculty that allows some characteristics of our sensory experience to be preserved very briefly in a high-capacity, precategorical buffer after they have already disappeared from sight (Coltheart, 1980; Di Lollo, 1977; Long, 1980; Sperling, 1960). With very short presentations, individuals often report that they seem to “see” more than they can actually report, thereby supporting the notion that information that is not quickly transferred into a durable memory store is lost and inaccessible for later recollection. This transfer and retention of information through the early stages of memory has typically been explored using cued partial-report tasks, such as the one employed in the present study. In these tasks, an interval is introduced between a stimulus array and the presentation of a cue, which prompts the participant to report the contents of memory at a given location.
When the cue is presented, participants are able to focus attention on the cued content and begin transferring that information into the more durable short-term memory storage, a process that Gegenfurtner and Sperling (1993) called “selective transfer.” While such cued-retrieval strategies allow individuals to preferentially encode some information into short-term memory, it is also clearly the case that not all memory is the result of selective transfer. For example, at very long stimulus-to-cue delays, partial-report accuracy does not decay all the way to zero, indicating that “nonselective transfer” occurs in the absence of any explicit cue (Averbach & Coriell, 1961). In the present study, we tested for the effects of stroboscopic visual training on visual memory. The results indicated two forms of learning that may differentially influence selective and nonselective visual memory transfer. The most relevant for the present question is that stroboscopic training increased the overall number of items encoded into short-term memory at long stimulus-to-cue delays, a form of nonselective transfer. As reflected by the significant Session × Condition interaction, subsequent paired tests, and changes in the a0 model parameter estimates, training under stroboscopic conditions resulted in greater retest performance at the longer ISIs, whereas no such improvement was seen for the control participants. This finding indicates that more information is being registered into short-term memory, without the benefit of cuing, as a result of stroboscopic experience. Interestingly, this improvement in nonselective transfer was retained over a 24-h interval in the retention condition, indicating a relatively stable form of learning. The second form of learning revealed in the present study is that individuals in all three conditions (strobe, control, and retention) showed significant retest improvements at short ISIs (and, similarly, in the a1 model parameter estimates).
This finding suggests that at retest all three groups experienced a significant increase in their initial visual sensitivity and in the selective transfer of information into memory that occurs at short stimulus-to-cue intervals. The lack of any differential effect due to the training conditions, however, indicates that this stage of memory was not specifically affected by stroboscopic exposure, or by an additional 24-h delay before the retest, but rather was generally enhanced, likely due to practice with the partial-report task. Somewhat unexpectedly, the time constant of sensory memory decay (τ) did not differ at retest for any of the groups. It has been suggested that the duration of sensory memory, as captured by this decay parameter, reflects the rate at which individuals are able to switch attention between elements maintained in memory (Gegenfurtner & Sperling, 1993). While it is difficult to make strong inferences from the lack of retest or group differences, the present findings suggest that sensory memory duration (and thus the ability to rapidly shift attention through the memory store) was not the limiting factor for memory capacity in this task. Rather, it appears from these data that improved short-term memory may be the cause of the observed improvements in performance. In interpreting why stroboscopic training would selectively improve short-term memory retention, it is interesting to consider the nature of the training itself. For example, the in-lab participants played catch under stroboscopic conditions, which for all intents and purposes is a partial-report task. In this situation, the participants see a glimpse of the visual world with a ball about to be thrown or in flight, and then they are presented with an opaque visual environment, which is then again replaced by a glimpse of the visual world. They do their best to catch the ball, but there are multiple possible outcomes of where the ball could go and at what time it will arrive.
As such, the better they are at retaining visual information in a usable memory store, the better they will be after the visual interruption imposed by the eyewear.

Implications for the study of stroboscopic vision and training

Intermittent visual experience has been used to study which aspects of vision are important for regulating perceptual–motor performance (Elliott et al., 1994; Lyons et al., 1997; Senders et al., 1967). By manipulating visual input, researchers can test what information is important, when it is necessary, and for how long it must persist. Previous research has generally shown that although there are preferred sources of visual information, participants are able to adapt in degraded vision conditions by making optimal use of whatever sources of information the environment provides (Assaiante, Marchand, & Amblard, 1989; Bennett et al., 2006; Bennett et al., 2004; reviewed in Elliott, 1990). Such studies have employed a wide variety of techniques and manipulations, including comparing how novice and expert athletes react to intermittent vision (e.g., Bennett et al., 2004; Robertson et al., 1994) and focusing on adaptation during intermittent vision (e.g., Lyons et al., 1997; Olivier et al., 1998). Likewise, previous studies have shown that short-lived visual memories can serve as a substitute for direct vision when the visible environment has been disturbed by a mask (Elliott et al., 1994). The stroboscopic tool employed here, Nike Vapor Strobe™ eyewear, has been used previously to reveal effects on anticipatory timing (Smith & Mitroff, 2012), eye–hand coordination in professional athletes (Mitroff et al., 2012), and various aspects of perceptual processing (Appelbaum et al., 2011).
The present results add to this existing literature, suggesting that stroboscopic training can enhance visual memory abilities and that the effects can last for at least 24 h (see Smith & Mitroff, 2012, and Mitroff et al., 2012, for further discussions of retention). This 24-h retention finding is important, given that training protocols are most significant when they produce lasting effects (Karni & Sagi, 1993; Savion-Lemieux & Penhune, 2005; Scott et al., 2008). The findings above suggest that experience with stroboscopic vision can enhance aspects of perception and visual–motor control, yet it is also informative to know what stroboscopic training does not seem able to affect. To understand the mechanisms fostering these abilities, positive and negative effects can work in tandem to build a more complete story. For example, Appelbaum et al. (2011) found that stroboscopic visual training did not improve the ability to track multiple moving objects over several seconds. This suggests that stroboscopic training may not alter sustained attention abilities that do not involve the detection and processing of briefly presented information. Likewise, the same study found several occurrences of benefits for centrally presented visual information, but not for information in the periphery. More work is needed to further explore this result, but the effects of stroboscopic training for visual cognition abilities do not seem uniformly distributed across the visual field. Beyond constraining the possible underlying mechanism of stroboscopic training, these results also speak to concerns over placebo, motivational, or “Hawthorne” effects: The same participants who revealed significant effects in other paradigms run during the same testing session (and sometimes in the very same paradigm) did not reveal any significant differences on these measures.
These findings help to alleviate concerns over such alternative accounts of stroboscopic training effects (see Appelbaum et al., 2011, for further treatment of this issue). Moreover, while the portable eyewear employed here offers a convenient means to conduct training studies, since the stroboscopic environment can be restricted to some individuals and not others and the experimental and control groups can be simultaneously engaged in the same training, it also presents a potential drawback: the two groups may realize that there are two conditions and may be differentially motivated to perform (Boot, Blakely, & Simons, 2011). Under this situation, might the experimental group be more motivated to try harder? While we cannot completely discount this possibility, the lack of cohort differences between our in-lab participants (who all completed the experiment one participant at a time and did not know about the various experimental conditions) and our varsity athlete participants suggests that motivation was not a critical concern. Moreover, previous work conducted in the same fashion had found several factors that diminish such concerns (see Appelbaum et al., 2011).

Conclusion

In the present study, we investigated stroboscopic vision to learn about the malleability of visual memory. After undergoing stroboscopic training, participants revealed an improved ability to retain visual information in short-term memory. Furthermore, this improved ability was still present 24 h later. While this is only one specific means by which visual processing can adapt, it indicates that stroboscopic training can lead to general improvements in higher-level visual cognition. More broadly, this result advances the scientific study of perceptual processing by providing an example of generalized learning.
As well, this result informs athletic training by suggesting that stroboscopic experiences might be able to improve performance through benefits in visual memory. Sports often rely on the ability to keep fleeting information in memory (e.g., a basketball player making a no-look pass must remember the locations of his teammates and opponents), and any boost in visual memory abilities could manifest in improved performance.

Notes

For clarity, we use the term “cohort” to refer to the different collections of training participants (in-lab vs. varsity athletes) and “condition” to refer to the strobe versus control training regimens. | Stroboscopic training, performing a physical activity while using eyewear that simulates a strobe-like experience, has been found to increase visual short-term memory retention, and the effects last for 24 hours. Participants in a Duke University study engaged in physical activities, such as playing catch, while using either specialized eyewear that limits vision to only brief snapshots or while using eyewear with clear lenses that provides uninterrupted vision. Participants from the Duke community, including varsity athletes, completed a computer-based visual memory test before and after the physical activities. The study found that participants who trained with the strobe eyewear gained a boost in visual memory abilities. Participants completed a memory test that required them to note the identity of eight letters of the alphabet that were briefly displayed on a computer screen. After a variable delay, participants were asked to recall one of the eight letters. On easy-level trials, the recall prompt came immediately after the letters disappeared, but on more difficult trials, the prompt came as late as 2.5 seconds following the display. Because participants did not know which letter they would be asked to recall, they had to retain all of the items in memory.
"Humans have a memory buffer in their brain that keeps information alive for a certain short-lived period," said Greg Appelbaum, assistant professor of psychiatry at Duke and first author of the study. "Wearing the strobe eyewear during the physical training seemed to boost the ability to retain information in this buffer." The strobe eyewear disrupts vision by only allowing the user to see glimpses of the world. Users must adjust their visual processing in order to perform normally, and this adjustment produces a lingering benefit: once participants removed the strobe eyewear, there was an observed boost in their visual memory retention that was found to still be active 24 hours later. Earlier work by Appelbaum and the project's senior researcher, Stephen Mitroff, had shown that stroboscopic training improves visual perception, including the ability to detect subtle motion cues and the processing of briefly presented visual information. Yet the earlier study had not determined how long the benefits might last. "Our earlier work on stroboscopic training showed that it can improve perceptual abilities, but we don't know exactly how," said Mitroff, associate professor of psychology and neuroscience and member of the Duke Institute for Brain Sciences. "This project takes a big step by showing that these improved perceptual abilities are driven, at least in part, by improvements in visual memory." "Improving human cognition is an important goal with so many benefits," said Appelbaum, also a member of the Duke Institute for Brain Sciences. "Interestingly, our findings demonstrate one way in which visual experience has the capacity to improve cognition." Participants for the study came from the 2010-2011 Duke University men's and women's varsity soccer teams, Duke's 2010-2011 men's basketball team and members of the general university community. 
Mitroff reported that participants had little or no trouble with the stroboscopic training, and several participants later returned to inquire about how they could be involved as research assistants. The research was supported by Nike SPARQ Sensory Performance, which designed the eyewear and is marketing it as Nike SPARQ Vapor Strobe. The study appeared online July 19 in Attention, Perception, & Psychophysics. | DOI 10.3758/s13414-012-0344-6 |
Biology | Climate change may push some species to higher elevations—and out of harm's way | Paul R. Elsen et al, Topography and human pressure in mountain ranges alter expected species responses to climate change, Nature Communications (2020). DOI: 10.1038/s41467-020-15881-x Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-020-15881-x | https://phys.org/news/2020-04-climate-species-higher-elevationsand.html | Abstract Climate change is leading to widespread elevational shifts thought to increase species extinction risk in mountains. We integrate digital elevation models with a metric of human pressure to examine changes in the amount of intact land area available for species undergoing elevational range shifts in all major mountain ranges globally ( n = 1010). Nearly 60% of mountainous area is under intense human pressure, predominantly at low elevations and mountain bases. Consequently, upslope range shifts generally resulted in modeled species at lower elevations expanding into areas of lower human pressure and, due to complex topography, encountering more intact land area relative to their starting position. Such gains were often attenuated at high elevations as land-use constraints diminished and topographic constraints increased. Integrating patterns of topography and human pressure is essential for accurate species vulnerability assessments under climate change, as priorities for protecting, connecting, and restoring mountain landscapes may otherwise be misguided. Introduction Climate change is causing widespread elevational range shifts in plant and animal species in mountainous regions 1 , 2 , 3 , 4 , 5 , 6 . Such elevational range shifts are often thought to be associated with an increased risk of extinction as topographic constraints impose significant reductions in the amount of area available for species following range shifts 7 , 8 . 
However, owing to complex topography in mountain ranges, such topographic constraints can occur at virtually any position along elevational gradients 9 . For instance, topographic constraints are roughly uniform along elevations in mountain ranges with unimodal declines in surface area (‘pyramid’ mountains), such as the European Alps, whereas they are greatest at high elevations in mountain ranges with mid-elevation peaks of surface area (‘diamond’ mountains), such as the Rocky Mountains of North America 9 . Few studies account for topographic patterns using high-resolution data, which could lead to inaccurate expectations of where species may experience range contractions following climate change 10 . Another limitation in many approaches to forecasting changes in available area for species persistence under climate change involves accounting for how human pressures limit species distributions and movement required to shift their range to adapt to change 11 . Human pressure on montane landscapes is predominantly through resource extraction, infrastructure development, and habitat conversion. Indeed, habitat conversion for agriculture, pasture, and cropland is extensive in mountainous regions 12 , is a leading driver of biodiversity loss globally 13 , and is often expected to exacerbate the negative effects of climate change 14 . However, because human pressure is typically biased towards low elevations 15 and protection towards high elevations in mountain ranges globally 16 , higher elevation lands could provide refuge from human activities 11 , 15 . Ultimately, accurate assessments of changes in habitable area for species undergoing upslope range shifts rely on integrating fine-scale topography with current land use patterns. Predictions of area changes due to upslope movement under climate change may be inaccurate when omitting high-resolution topography in assessments 17 , 18 , 19 . 
While mountainous regions are generally considered well-protected 16 , 20 , there is significant variation in protection across continents and ecoregions 21 , and even greater variation across individual mountain ranges 16 , the spatial scale most aligned with typical montane species distributions 22 . Moreover, many of these regions have faced increasing pressure from human populations over recent decades 23 , including within protected areas 24 . Consequently, population responses and threats to extinction arising from elevational range shifts will likely be highly context specific and potentially deviate from expectations derived from considerations of topographic constraints or habitat availability in isolation. Despite the recognized importance of topography and land-use in constraining species distributions 11 , to date there has been no global assessment of how these two factors combined alter the availability of intact land area for species undergoing elevational range shifts in mountain ranges. We addressed this need by evaluating the elevational distributions of total and intact land area for 1010 global mountain ranges. Here, we adopt the definition of mountain ranges used by the Global Mountain Biodiversity Assessment 25 and use the term ‘intact’ to refer to areas that are not under intense human pressure that can negatively impact species persistence (see also sections “Results” and “Methods”). We followed existing approaches to classify mountain range topography to one of four mountain topography classes (pyramid, diamond, hourglass, and inverse pyramid) based on the statistical properties of area-elevation distributions for total area 9 . 
We then reclassified mountain ranges based only on intact area, i.e., areas not under intense human pressure, by applying a threshold 24 to the human footprint index (HFI), a spatially explicit map of weighted cumulative threat that combines eight separate direct threats from anthropogenic activities: croplands, pastures, roads, railways, navigable waterways, human population density, nighttime lights, and the built environment, circa 2009 26 (Supplementary Fig. 1 ; see section “Methods”). We then modeled range shifts on all mountain ranges for an extensive set of hypothetical species based on expected temperature changes from a suite of general circulation models (GCMs) under two warming scenarios, assuming species could occupy all available land area in one case and that they would be restricted only to intact land area in a second case. Our approach enabled us to quantify how interactions between topography and current patterns of human pressure potentially influence the amount of intact area available for species following range shifts across the full array of elevations for all the world’s mountain ranges. Results Global patterns of human pressure over elevation Human pressure in mountain ranges has resulted in 57% of all mountainous land being considered under intense human pressure (Supplementary Fig. 1 ). For roughly 24% of ranges (239 of 1010), the entire land area is under intense human pressure (Fig. 1b ). The average elevation of peak human pressure in mountain ranges occurred at ~1210 m (range −75 to 6550 m; Supplementary Fig. 2 ). While human pressure is generally highest at low elevations and declines with elevation, it is not restricted to low elevations: roughly 30% of all land in mountain ranges >4500 m is under intense human pressure (Supplementary Fig. 3 ). Furthermore, pressure is predominantly focused at the bases of mountains, which can sometimes occur thousands of meters above sea level. 
For example, the Altiplano in Peru, the Medicine Bow Mountains in the United States, and the Tibetan Plateau all have their bases >2000 m above sea level. Roughly 30% of ranges had peak human pressure within the bottom 5% of their elevational range, with pressure declining rapidly with increasing elevation (Supplementary Fig. 2 ). Fig. 1: The global distribution of topographic classes within mountain ranges. The distribution of classes when considering all land a and intact land b . Inset donut plot in a shows the proportion of mountain ranges per mountain class considering total land area. Inset donut plots in b show the proportion of ranges from their original classification in each mountain class (denoted by donut labels) that were then reclassified considering only intact land area. Mountain ranges classified as intensified have no intact land area remaining when removing all land under intense human pressure. c Comparisons of area-elevation distributions considering total land area (left column) and intact land area (right column) for four example mountain ranges, colored by mountain classification. Black rectangles with alphanumeric symbols in a and b indicate the geographical location of each mountain range in c . Full size image Overall, trends in human pressure over elevation are non-linear at both global and regional scales (Supplementary Fig. 3 ). For example, at the global scale, there is a greater proportion of intact land from sea level to 2000 m than from 2000 to 4000 m elevation (Supplementary Fig. 3b ). At continental scales, this trend holds for ranges in Africa, Asia, and Oceania, but not for ranges in Europe, North America, or South America, which have equal or greater proportions of intact land at higher elevations (Supplementary Fig. 3c ). 
Mountain classification accounting for human pressure The frequency and spatial distributions of our mountain topography classifications were consistent with previous classifications of global mountain topography using alternative data sources 9 (Fig. 1a ). Roughly 50% of ranges (507 of 1010) were reclassified when calculations were based on the availability of intact land area (Figs. 1b and S4 ): pyramid mountains accounted for 17% of all mountain ranges, diamond mountains accounted for 30% of ranges, hourglass mountains accounted for 28% of ranges; and inverse pyramid mountains accounted for 2% of ranges. The remaining ~24% of mountain ranges had no remaining intact land area after removing all area under intense human pressure from the analysis and were classified as ‘intensified’ (Fig. 1b ). Reclassifications were geographically heterogeneous; for example, the European Alps changed from originally being classified as a pyramid mountain range to being reclassified as an hourglass mountain range due to disproportionate amounts of human pressure at lower elevations (Fig. 1c : B1 and B2). By contrast, the Himalayas changed from an hourglass mountain range to an inverse pyramid mountain range (Fig. 1c : C1 and C2). Concentrations of such reclassifications occurred throughout the Great Basin in North America; along the Andes and in mountain ranges of Brazil’s Atlantic forest in South America; throughout the Atlas ranges of Morocco, the Ethiopian Highlands, and nearly all mountain ranges of Madagascar in Africa; throughout much of the Middle East and Central Asia, the Indian subcontinent, and mountain ranges of East Asia; throughout much of western and central Europe; and across New Guinea and New Zealand in Oceania (Fig. 1 ; Supplementary Fig. 4 ). Mountain ranges that were classified as ‘intensified’ occurred mainly in northern Africa; the Middle East; and in South, East, and Southeast Asia (Fig. 1b ). 
Constraints for species undergoing elevational range shifts We assessed how the availability of land area would change for a species shifting its elevational range on each mountain range under two cases—one case where species would occupy all land area within their elevational range, and a second case where species would only occupy intact land area within their elevational range (see examples in Fig. 2 ). This indicates how vulnerable a species may be to reductions in potential area in each mountain range under future climate change. We did this by modeling range shifts of a set of hypothetical montane species with a wide variety of elevational range sizes—meant to capture different ecologies, degrees of specialization, and climatic niche breadths—over the complete elevational gradient for all mountain ranges based on mountain range-specific average warming rates across 17 GCMs for two warming scenarios (representative concentration pathways, RCPs 4.5 and 8.5) and on mountain range-specific temperature lapse rates (Supplementary Fig. 5 ) under these two cases (see “Methods” section). For the purposes of our analysis, our modeled range shifts operate on the assumption that species will closely track shifting isotherms and therefore do not explicitly address lagged responses or disequilibrium dynamics (but see “Discussion” section). Fig. 2: Comparing projected area from modeled elevational range shifts with and without accounting for human pressure. Map of global mountain ranges shows the proportion of the elevational gradient for each range where percentage of change in intact area equals or exceeds the percentage of change in total area following range shifts for a suite of modeled hypothetical montane species.
Range shifts are calculated using mountain range-specific rates of mean annual temperature change averaged across 17 GCMs for RCP 8.5 in 2070, mountain range-specific adiabatic lapse rates, and varying elevational range sizes for hypothetical species (see “Methods” section and Supplementary Fig. 9 for descriptive and schematic overviews of the modeling procedure). Insets for select mountain ranges show mean and standard error percentage of changes in area across all modeled species in two cases, one where species are allowed to occupy all land area (blue lines ΔArea total ), and one where species only occupy intact land area (red lines ΔArea intact ). Values of –100% signify that the species have no available total or intact land area remaining and face local extinction. The x -axis elevation represents the lower boundary of elevation prior to the range shift. See the plot for Cordillera Occidental for a detailed legend to other insets. Therefore, the global map depicts the proportion of each mountain range’s elevational gradient where the change in area in the second case equals or exceeds the change in area in the first case (i.e., the proportion of elevation where the red curve equals or exceeds the blue curve). See also Supplementary Data 1 for analogous inset plots for all mountain ranges ( n = 1010) and Supplementary Fig. 7a for the analogous global map using RCP 4.5. Full size image Compared to the initial area of occupancy, the amount of projected area following range shifts was typically greater in the second case where the modeled species were only allowed to occupy intact land area (Fig. 3a ). That is, species more often experienced greater percentage of changes in area following range shifts in the intact land area case. This was observed when we averaged across all mountain ranges for nearly the entire elevational gradient for all mountain topography classes, and this effect was consistently more pronounced at low elevations (Fig. 3a ). 
For example, modeled range shifts where species closely track shifting isotherms in the European Alps in the first case led to continued reductions in projected area available for species as total land area tended to decline monotonically with elevation (Fig. 2 inset, blue line). However, in the second case, the modeled range shifts in the European Alps led to area gains up until about 600 m and then again from 1500 to 2500 m, due to shifting away from areas under greater human pressure at lower elevations (Fig. 2 inset, red line). At higher elevations (>2500 m), species encountered area reductions in both cases owing largely to topographic constraints. Fig. 3: Average projected area changes for species undergoing elevational range shifts. a Mean (lines) and standard error (shaded regions) percentage of change in projected total (gray lines) and intact (red lines) land area across all modeled species and all mountain ranges by mountain classification. b Elevation bin-wise comparison of percentage of change in projected total and intact area. Dashed line is 1:1 line. Cells are colored by the total number of elevation bands (50 m) across modeled species and all mountain ranges, with panels arranged by mountain classification based on total area availability. See text and “Methods” section for details of modeled range shifts and Supplementary Fig. 6 for analogous results using RCP 8.5 for panel a . Full size image However, average responses across all modeled species (i.e., across all elevational range class sizes considered) were variable across mountain ranges and there were some notable exceptions to this general trend described above. 
For example, in mountain ranges like the Sierra Madre de Chiapas of Mexico, the Kapuas Mountains on Borneo, the Anti-Atlas Range of Morocco, and the Ethiopian Highlands, current human pressure is expected to reduce land area available for species undergoing range shifts, compounding reductions in area driven by topographic constraints (Fig. 2 ; see also Supplementary Data 1 for results from all 1010 mountain ranges and Supplementary Software 1 for example code and data for modeling elevational range shifts). On average, modeled species with smaller elevational range sizes showed larger percentage gains in projected area than species with larger elevational ranges, especially at low elevations (Fig. 4a ). Differences were then attenuated above roughly 1500 m on diamond, hourglass, and inverse pyramid mountains, and above roughly 1000 m on pyramid mountains. This was true in both the case where species were allowed to occupy all land as well as where species were restricted to intact land only, but the effect was much more pronounced in the latter. This makes sense because as species’ elevational range sizes approach the amplitude of a given mountain range, upslope shifts are more likely to result in species losing available area as their upper limits exceed those of mountaintops. We found that the choice of modeled elevational range size influenced the overall proportion of mountain ranges where the percentage changes in area of intact land equaled or exceeded the percentage changes in area of total land (Fig. 4b ). In general, species with smaller modeled elevational range sizes tended to decrease this proportion at lower elevations, particularly for hourglass and pyramid mountains (Fig. 4b ). Fig. 4: The influence of species’ elevational range sizes on projected area changes following modeled elevational range shifts. 
a Mean and standard error percentage of change in projected total (gray) and intact (red) land area across all modeled species (within elevational range size classes, x -axis) and all mountain ranges by mountain classification. b Proportion of mountain ranges where percentage of change in intact land area equals or exceeds percentage of change in total land area following elevational range shifts across the elevational gradient by mountain classification. Blue lines indicate proportions for each elevational range size class considered in modeling (100–4000 m, in 100-m increments); red line indicates the mean response across all range class sizes. See text and “Methods” section for details of modeled range shifts. Full size image Patterns of mean percentage of changes in projected area following modeled range shifts in the two cases were generally similar under the two warming scenarios we considered (RCPs 4.5 and 8.5; compare Fig. 3 and Supplementary Fig. 6 ). However, mean percentage gains under RCP 8.5 were significantly greater for diamond and inverse pyramid mountains at lower elevations, and were significantly lower for hourglass and pyramid mountains at lower elevations. We found that the proportion of each mountain range’s elevational gradient where the percentage of changes in area of intact land equaled or exceeded the percentage of changes in area of total land following modeled range shifts was generally greater under the RCP 4.5 scenario, though there was significant geographic variation (Fig. 2 ; Supplementary Fig. 7 ). In addition, the choice of HFI threshold determining intact land area had little effect on the patterns we observed and was similar across a range of alternate threshold values (Supplementary Fig. 8 ; see “Methods” section). 
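As a rough illustration of the area-change comparison behind these results, the following sketch shifts a species' elevational window upslope by the isotherm displacement Δz = ΔT / lapse rate and compares band-wise area before and after. This is a simplified reconstruction, not the authors' code; the function name, the toy 50-m-band area histogram, and the parameter values are all illustrative assumptions.

```python
import numpy as np

def projected_area_change(band_area, lo, hi, d_temp, lapse_rate, band_m=50):
    """Percentage change in area for a species occupying elevations
    [lo, hi) metres after an upslope shift tracking a warming of d_temp
    (deg C) under the given lapse rate (deg C per km). Simplified sketch:
    the species window is translated by the isotherm displacement and
    clipped at the mountaintop."""
    shift = d_temp / lapse_rate * 1000.0          # isotherm shift in metres

    def area(a, b):
        i, j = int(a // band_m), int(b // band_m)
        return band_area[i:j].sum()

    before = area(lo, hi)
    top = len(band_area) * band_m                 # elevation of the peak
    after = area(min(lo + shift, top), min(hi + shift, top))
    if before == 0:
        return float("nan")
    return 100.0 * (after - before) / before      # -100 means local extinction

# Toy pyramid mountain: area declines monotonically with elevation
# (km^2 per 50-m band).
band_area = np.array([100, 90, 80, 70, 60, 50, 40, 30, 20, 10], float)
# A species spanning 0-200 m under 2 deg C warming and a 6.5 deg C/km lapse
# rate shifts ~308 m upslope into smaller bands:
change = projected_area_change(band_area, 0, 200, 2.0, 6.5)
```

On a pyramid-shaped mountain every upslope step loses area, whereas restricting the histogram to intact land (e.g. removing heavily used low bands) can flip the sign of the change, which is the contrast drawn in the two cases above.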
Our modeling procedure highlights changes in area arising from temperature-induced range shifts that are predominantly upslope, though we acknowledge species respond to other climatic factors and shift heterogeneously along elevational gradients 1 , 27 (see Supplementary Fig. 9 for a descriptive and schematic overview of the procedure). Discussion Topography and current patterns of human pressure across the world’s mountain ranges influence the extent of intact land area available to species undergoing elevational range shifts. By including topography and a metric of human pressure for all mountain ranges on Earth, we provide a more accurate estimate of species vulnerability to area loss as a result of elevational range shifts under climate change. While human pressure has undoubtedly reduced the amount of habitable area available with potentially severe consequences for biodiversity historically 28 , we found that human pressure in mountains has functionally changed the ‘shape’ of mountains when viewed from the perspective of species that are restricted to intact landscapes (Fig. 1 ). There is evidence from models as well as empirical documentation of species benefitting from climate change by expanding their ranges 27 , 29 and increasing their population size 30 . Our results suggest that montane species that are restricted to intact landscapes—particularly those at lower elevations—could potentially realize similar benefits following upslope range shifts in at least some portions of the elevational range for the majority of mountain ranges globally (Figs. 2 – 4 ). However, an important caveat is that projections of agriculture 31 and human population dynamics 32 suggest that patterns of human pressure might also show upslope trajectories, akin to those of species responding to warming temperatures. Thus, species shifting upslope might continue to face increasing human pressure over time.
Future land-use scenarios hold a high degree of uncertainty and depend on a host of factors, such as rates of agricultural intensification versus expansion 33 and human reliance on existing facilities and infrastructure 34 , among others. Moreover, future land-use scenarios do not universally predict increasing human pressure towards higher elevations. For instance, overall agricultural productivity is projected to decline in the tropics due to changes in the timing and length of the growing cycle and significantly reduced potential for multiple cropping 35 , 36 , which may reduce human pressure in tropical mountain ranges over time at all elevations. Higher elevation areas may also offer fewer opportunities for human activities, such as agriculture or road building because they are generally more topographically complex 36 . While we currently lack spatially explicit models of future land-use change scenarios at high enough resolutions in global mountain ranges to incorporate into our vulnerability assessments for species under climate change in this study, future studies would undoubtedly benefit from such information to improve predictions. While our results counter the existing paradigm that temperature-driven movements necessarily lead to overall area loss for species and may thus increase optimism, it is important to recognize that montane species will face numerous challenges in accessing and adapting to intact land at higher elevations. Nearly 400 million people permanently live in mountains, and the total number of people jumps to nearly 1.2 billion when including seasonal visitors to mountains 25 . 
Our analysis revealed that nearly 60% of all mountainous land has been transformed by human activities to a state of intensive human pressure that, in addition to potentially facilitating invasive species and having other indirect effects on ecosystems 37 , will likely reduce habitat connectivity and hinder species’ dispersal abilities to track climate change 38 . Moreover, while species undergoing range shifts may experience large percentage increases in area in many cases, it is important to note that the total amount of intact area may still be extremely small. For example, in the case of an upslope-shifting species at the base of the Karakorum Mountains, species may experience a >200% increase in area following a range shift (Fig. 2 ), but still only have <100 km 2 of intact area available to occupy (Fig. 1c ). This is an important consideration for all mountain ranges where human activities have reduced the overall area of intact land to nearly nothing, and suggests restoration and rehabilitation activities may still be necessary for long-term sustainability. We note that our focus on the amount of intact land area—and not its configuration along elevational gradients that could facilitate or hinder connectivity between intact landscapes or the physical environment 39 , 40 , 41 —is a limitation of our study that should be addressed in future work to more accurately understand montane biodiversity responses to global change. In fact, while protected areas are disproportionately biased towards high elevations, comprise more intact landscapes 24 , and reduce the rate of intact forest loss from timber harvest 42 , they are not evenly distributed or well-connected along elevational gradients in many cases 16 . 
Consequently, restoring degraded landscapes and strategically planning future protected areas to bolster elevational connectivity will be crucial to increase the chances that species can realize any benefits from increased intact land area at higher elevations. Indeed, global prioritizations for biodiversity conservation have heavily targeted mountain landscapes for additional protection and restoration. For example, a recent global synthesis identified that the highest global and regional priorities for protected area expansion are located primarily in mountainous regions, including in the Neotropics (Central America and along the Andes and the Brazilian coast), Africa (Madagascar, the Eastern Arc Mountains, and west African forests) and South and Southeast Asia (the Himalayan slopes, Indonesia, Papua New Guinea, and the Philippines) 43 . Where additional protection of intact land cannot meet current conservation targets, restoration priorities have been identified to meet shortfalls in several montane ecoregions, like the Sulaiman Range alpine meadows on the border of Afghanistan and Pakistan, the Madagascar subhumid forests along the Ankaratra Range, and the Huon Peninsula montane rain forests spanning the Finisterre and Saruwaged Ranges in Papua New Guinea 44 . Our results show that protecting and restoring montane landscapes might also act to provide important benefits for lowland biodiversity in mountains under climate change. Rates of plant richness have already increased due to warming-driven upslope range shifts across European mountain ranges 45 . If these patterns are consistent in other mountain landscapes and trends continue, species richness may continue to increase in mountains globally. 
Some evidence suggests that montane species may be more tolerant of land-use change than lowland species because they evolved in more variable climates and may thus be more adapted to cope with temperature changes arising from habitat modification (e.g., increased temperatures following logging) 46 , 47 , 48 . This could provide further rescue effects from pressure related to human activities. Related to this issue is whether species can successfully shift their ranges fast enough to keep up with the pace of warming, whether the habitats that species depend on shift in similar directions and with similar magnitudes, and whether species that do colonize higher elevation habitats can persist there. Our analysis assumes species will closely track shifting isotherms, but several studies of plant and animal communities have shown that species range shifts may lag behind shifting isotherms and that such lags influence disequilibrium dynamics between colonization credits and extinction debts 6 , 49 , 50 . While there is significant variation in lags across species owing to variation in species’ physiological and demographic responses, biotic interactions, and properties of the physical environment 50 , some assessments have reported significant extinction debt is looming for montane species that is more acute for endemic and cold-adapted, high-elevation species 51 , 52 . Expanding our models to incorporate disequilibrium dynamics, lagged responses, and extinction debt in future work would be an important step to ensuring realistic forecasts of extinction risk for range-shifting species. There are several ecological, demographic, and political processes that could facilitate landscape recovery in mountainous ecosystems that could potentially provide opportunities for species retreating upslope. 
Despite global reductions in the area of intact forested landscapes 42 , remote sensing of land cover change has indicated that mountain systems across all climate domains have experienced net tree canopy gain and net bare ground loss since the early 1980s 53 . This could represent an improvement in habitat quality and/or quantity for montane species restricted to forests. Similarly, historical and ongoing agricultural land abandonment—the ceasing of agricultural activities on croplands and grasslands due to a host of climatic, environmental, and socio-political factors 54 —can in some cases lead to natural regeneration and afforestation 55 . Furthermore, large-scale restoration efforts, such as China’s Grain-for-Green Program, have resulted in significant additional forest cover in mountainous landscapes 56 , though the net benefits to biodiversity may not equal those provided by natural forests 57 . Overall, our analysis reveals the importance of integrating topography and land use in examining available area for species conservation. Our results strongly suggest that extinction risk from climate-induced range shifts is highly context dependent in mountain ranges and can be driven in large part not only by the ‘fundamental shape’ of the mountain, but its ‘realized shape’ after accounting for human pressure. Indeed, conservation actions informed by analyses of topography alone may be misleading. For example, restoration actions might be targeted towards low elevations in the Cordillera Oriental in Colombia, a diamond mountain, when based on topographic patterns alone. However, factoring in human pressure reveals that the mountain range is shaped like a pyramid for species restricted to intact landscapes, which means conservation actions may better be targeted towards mid and high elevations where species may have limited access to intact land area when shifting upslope. 
More broadly, analyzing global topography and land use in concert reveals that species driven upslope in response to future warming may have access to more intact land away from intense human pressures that many montane species currently face. Conservation practitioners must integrate topography and human pressure to accurately assess species vulnerability to climate change. Methods Mountain ranges We obtained a previously published global dataset delineating 1010 mountain ranges that account for 26 million km 2 of terrestrial land (~17.5% by area) and 83.7% of the Earth’s mountainous terrain 25 . Delineations in the dataset were made by expert evaluation using maps, atlases, and inventories of mountain ranges, and boundaries were subsequently informed by terrain ruggedness, which is a measure of the elevational change between focal and neighboring cells of a digital elevation model (DEM). Boundary delineations were optimized to maximize the inclusion of rugged terrain while minimizing nonrugged terrain. To our knowledge, it is the most comprehensive mountain range dataset currently available, delineating the greatest number of ranges. Elevation data We obtained a high-resolution, near-global, void-filled DEM from the NASA Shuttle Radar Topography Mission (SRTM) at 3 arc-second (90 m) resolution (SRTM3). We then resampled the SRTM3 raster to the 1 km 2 resolution of the HFI raster using bilinear interpolation. The SRTM3 DEM extends from approximately 56°S to 60°N. Sixty mountain ranges extend beyond 60°N, so we combined the resampled SRTM3 DEM with a second DEM at 30 arc-second (1 km 2 ) resolution (SRTM30), which already matched the resolution of the HFI raster. This resulted in a seamless elevation dataset that matched the spatial resolution of the HFI (see below). All elevation data are available at . Human pressure We obtained a previously published global map of the HFI representing a metric of human pressure 23 .
The HFI is a weighted cumulative threat map at a spatial resolution of 30 arc-seconds (1 km 2 ) that combines eight separate direct threats from anthropogenic activities, circa 2009: croplands, pastures, roads, railways, navigable waterways, human population density, nighttime lights, and the built environment. Global maps of each threat were scaled by the original authors based on their degree of influence on the terrestrial environment: the built environment was given a value of 0 or 10 (all areas mapped as built given score of 10); human population density and nighttime lights were scaled to a continuous range of 0–10; croplands were given a value of 0 or 7 (all areas mapped as crops given score of 7); pasture was given a value of 0 or 4 (all areas mapped as pasture given score of 4); roads were scaled from 0 to 8; railways were given a value of 0 or 8 (all areas within 500 m of a railway given a score of 8); and navigable waterways were scaled to a range of 0–4 26 . Scores for individual threats were then summed and weighted by the original authors to make a composite map with a range of 0–50. HFI values were then extensively validated against satellite imagery 26 yielding high confidence they represent conditions of human pressure. Classifying mountain topography based on total and intact land area For each mountain range we calculated the skew and modality of the extracted elevation values. We followed established methods 9 to assign mountain ranges to one of four classes based on combinations of skew and modality. We assigned mountain ranges with a dip value (a test statistic that measures the degree of multimodality of an empirical distribution, calculated using Hartigan’s dip test 58 ) > 0.01 and with significant ( p < 0.05) deviations from unimodality to the hourglass classification (irrespective of skew). 
For all other mountain ranges, we assigned those with Type-I skewness 59 ≥ 0.5 to the pyramid classification, those with skewness ≤ –0.5 to the inverse pyramid classification, and the remainder to the diamond classification (Fig. 1a ). We then repeated this procedure after removing all elevation values that were derived from pixels under intense human pressure based on the HFI to assign each mountain a classification based on its distribution of intact land area over elevation. To do this, we followed previous analyses using the HFI to set a threshold that separates intact land from land under intense human pressure 24 , 60 , 61 . Consequently, we used a threshold value of ≥4—roughly equivalent to land being converted to (at least) pasture and reasonably approximating when human activities have converted land to a non-natural state—to indicate land area under intense human pressure, such that HFI values < 4 were considered intact (Supplementary Fig. 1 ; see also ‘Sensitivity analyses’ below for treatments using alternate thresholds). For those mountain ranges where all elevations were found to be under intense human pressure using our threshold, we assigned them to an ‘intensified’ classification (Fig. 1b ). Human pressure over elevation We clipped both global HFI and DEM rasters to the boundaries of the mountain range delineations and extracted the raster values from each layer for each mountain range. We created histograms of the elevation data binned within 50-m elevational bands—chosen to produce accurate elevation–area relationships while aligning with the spatial scale of empirically documented elevational range shifts on decadal-to-century time scales 4 , 62 —to visualize trends in total land area over elevation for (i) each mountain range, (ii) for all mountain ranges within continents, and (iii) for all mountain ranges combined. 
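A minimal sketch of this classification and intact-land filtering (not the authors' code): it uses a plain sample-skewness statistic as a stand-in for the Type-I skewness of ref. 59 and, to stay self-contained, omits Hartigan's dip test, so the hourglass class is not assigned here. The HFI threshold and skew cut-offs mirror the text; the function name and everything else are our assumptions.

```python
import numpy as np

def classify(elevations, hfi=None, hfi_threshold=4, skew_cut=0.5):
    """Assign a mountain range to pyramid / diamond / inverse pyramid from
    the skew of its area-elevation distribution. The hourglass class
    (Hartigan's dip test for multimodality) is omitted in this sketch,
    i.e. unimodality is assumed. If an HFI array is given, pixels under
    intense human pressure (HFI >= hfi_threshold) are dropped first."""
    elev = np.asarray(elevations, float)
    if hfi is not None:
        elev = elev[np.asarray(hfi) < hfi_threshold]
        if elev.size == 0:
            return "intensified"        # no intact land area remains
    m, s = elev.mean(), elev.std()
    skew = np.mean(((elev - m) / s) ** 3)   # sample-skewness stand-in
    if skew >= skew_cut:
        return "pyramid"                # area concentrated at low elevations
    if skew <= -skew_cut:
        return "inverse pyramid"        # area concentrated at high elevations
    return "diamond"
```

Running the same elevations through `classify` twice, once with and once without the HFI mask, reproduces the reclassification logic of Fig. 1: the ‘shape’ of a range can change once heavily used pixels are removed.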
To visualize trends in intact land area over elevation, we created a second set of histograms of the elevation data comprising only intact land using the threshold as defined above, again binned within 50-m elevational bands. For each range we also calculated the proportion of land area intact within each elevational band as the number of HFI values <4 in the band divided by the total number of raster values in the band (Supplementary Fig. 3 ). We also used this approach to calculate the overall proportion of intact land area for each entire mountain range (i.e., across all elevational bands). We assessed bias in human pressure over elevation in two ways. For each mountain range, we first calculated the elevation of peak human pressure. We did this by determining the elevational band where the proportion of intact land area was minimized. In cases where multiple elevational bands had equal proportions, we took the median value. We next calculated the relative elevational position of peak human pressure per mountain range by determining the position of the elevational band denoting peak human pressure relative to each mountain range’s amplitude. This value was then scaled from 0 (a mountain range’s base) to 1 (a mountain range’s peak; Supplementary Fig. 2 ). For example, two mountain ranges with peak human pressures at 1000 m where the first mountain ranges from sea level to 2000 m and the second mountain ranges from 1000 to 3000 m would receive values 0.5 (i.e., halfway up the elevational gradient) and 0, respectively. Modeled range shifts We modeled range shifts for a suite of hypothetical montane species on each mountain range in two cases—one with and one without removing land area under intense human pressure. To do this, we created a series of hypothetical species to subject to temperature-induced range shifts over all mountain ranges in the world. 
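The relative elevational position of peak human pressure can be computed as in the following sketch (a hypothetical function with illustrative inputs, not the authors' R code), reproducing the worked example above of two ranges with peak pressure at 1000 m:

```python
# Illustrative sketch of the 'relative elevational position of peak human
# pressure' calculation described above.
from statistics import median

def peak_pressure_position(band_elevs, prop_intact, base, peak):
    """Band with the lowest proportion of intact area, scaled from 0 (range
    base) to 1 (range peak); ties are resolved by taking the median band."""
    lowest = min(prop_intact)
    tied = [e for e, p in zip(band_elevs, prop_intact) if p == lowest]
    return (median(tied) - base) / (peak - base)

# The worked example above: peak pressure at 1000 m on two different ranges
bands1 = list(range(0, 2001, 50))
props1 = [0.2 if e == 1000 else 0.9 for e in bands1]
bands2 = list(range(1000, 3001, 50))
props2 = [0.2 if e == 1000 else 0.9 for e in bands2]
print(peak_pressure_position(bands1, props1, 0, 2000))     # 0.5
print(peak_pressure_position(bands2, props2, 1000, 3000))  # 0.0
```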
We first generated species with a wide variety of elevational range sizes, from 100 m up to the total amplitude of a given mountain range, in 100-m increments. For mountain ranges with amplitudes >4000 m, we set the maximum elevational range size to 4000 m. We then distributed each species within the suite such that its lower elevational range limit started every 50 m of elevation, starting at the base of the mountain. When species placements would result in the species’ upper elevational range limit extending beyond the elevational gradient of the given mountain range, we removed that species from the suite. This resulted in a species set per mountain range consisting of all possible combinations of species elevational range sizes (in 100-m increments) starting at all possible elevational bands (every 50 m of elevation) where species elevational ranges are completely contained within the elevational gradient. Consequently, the number of species modeled, s , varied by mountain range and was calculated by $$s_i = b_i \times (a_i/100)/2 \quad \text{for } a_i \le 4000\ \text{m when } b_i \text{ is even},$$ (1) $$s_i = (b_i - 1) \times ((a_i + 50)/100)/2 \quad \text{for } a_i \le 4000\ \text{m when } b_i \text{ is odd},$$ (2) $$s_i = 1600 + (b_i - 80) \times 40 \quad \text{for } a_i > 4000\ \text{m when } b_i \text{ is even, and}$$ (3) $$s_i = 1600 + (b_i - 80.5) \times 40 \quad \text{for } a_i > 4000\ \text{m when } b_i \text{ is odd},$$ (4) where b is the number of 50-m elevational bands and a is the amplitude (maximum minus minimum elevation) of mountain range i .
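A brute-force enumeration reproduces the species counts given by Eqs. (1)–(4). The sketch below (illustrative Python, not the authors' R implementation from the Supplementary Software) checks Eq. (1) against direct enumeration for a 1000–4000 m range, where b = 60 bands and a = 3000 m:

```python
# Brute-force enumeration of the hypothetical species suite described above.
# Species are represented as (lower, upper) elevational limits.

def enumerate_species(base, top, max_range=4000):
    amplitude = top - base
    species = []
    for size in range(100, min(amplitude, max_range) + 1, 100):  # 100-m steps
        lower = base
        while lower + size <= top:      # drop species overtopping the gradient
            species.append((lower, lower + size))
            lower += 50                 # next start, one 50-m band higher
    return species

# Check Eq. (1) for a 1000-4000 m range: b = 60 bands, a = 3000 m, b even
a, b = 3000, 60
species = enumerate_species(1000, 4000)
print(len(species), b * (a // 100) // 2)   # 900 900
```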
For example, a mountain range extending from 1000 to 4000 m would have b = 60 elevational bands and a = 3000 m amplitude for a total of (60 × (3000/100))/2 = 900 species modeled (a schematic and further details are presented in Supplementary Fig. 9 ). We then subjected each species to an elevational range shift on each mountain range depending on mountain range-specific warming scenarios and adiabatic lapse rates. We calculated rates of warming separately for each mountain range to capture geographic variation in warming across mountains globally 63 . To do this, we extracted projected mean annual temperature data from within each mountain range for 17 GCMs (ACCESS1-0, BCC-CSM1-1, CCSM4, CNRM-CM5, GFDL-CM3, GISS-E2-R, HadGEM2-AO, HadGEM2-CC, HadGEM2-ES, INMCM4, IPSL-CM5A-LR, MIROC-ESM-CHEM, MIROC-ESM, MIROC5, MPI-ESM-LR, MRI-CGCM3, and NorESM1-M) from two representative concentration pathways (RCPs 4.5 and 8.5) for 2070 (average for 2061–2080) from WorldClim v1.4 64 . We calculated the pixel-wise temperature difference between each projected temperature layer and current mean annual temperature, also using data from WorldClim v1.4 to enable unbiased comparisons with the future projections. We then averaged all resulting difference rasters and took the mean value across all pixels within each mountain range to determine the mountain range-specific average rate of warming for each RCP (Supplementary Fig. 5b, c ). We calculated adiabatic lapse rates separately for each mountain range to capture geographic variation in temperature–elevation relationships arising from complex topographic and orographic features in mountains 65 . To do this, we extracted mean annual temperature data from within each mountain range using the current (1970–2000) climate data from WorldClim v2.0 66 . We then fit linear models of temperature–elevation and used the slope of this relationship (the coefficient) as the lapse rate for each mountain range (Supplementary Fig.
5a ; Supplementary Data 2 ). For both cases (one with and one without removing land area under intense human pressure) and for each hypothetical species modeled, we calculated the amount of area within the range of the species prior to the range shift (Area baseline ) and after the range shift (Area projected ). We then calculated the percentage of change in area for a given hypothetical species starting from a given 50-m elevational band as $$\%\ \text{of change in area} = ((\text{Area}_{\text{projected}} / \text{Area}_{\text{baseline}}) - 1) \times 100$$ (5) The results for the first case reflect a situation where a species can occupy any area regardless of human pressure. By contrast, the second case relies on baseline and projected areas only from intact land area and therefore reflects a situation where a species can only occupy intact area where HFI values were <4. We contrasted the results of the two cases by separately plotting the average percentage of change in projected total area (ΔArea total ) and the average percentage change in intact land area (ΔArea intact ) across all hypothetical species over elevation, separately for each mountain range and also by mountain classification. We provide several detailed examples of this procedure as insets in Fig. 2 and provide the full set of plots for all mountain ranges in Supplementary Data 1 . Calculating the mean and standard error percentage of change in projected total and intact land area across all species separately per elevational band (Fig. 3a ), and creating a heatmap of expected changes in total versus intact land area at the scale of individual elevational bands across all species (Fig. 3b ), provides an assessment of the average response to elevational range shifts under the two cases considered. For descriptive and schematic overviews of our modeling procedure, we refer readers to Supplementary Fig. 9 .
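A minimal sketch of the projection step and Eq. (5) follows. The warming value, lapse rate, and band areas are placeholders, not the per-range numbers the authors derived from WorldClim, and the hypsograph is a toy cone-shaped range; the uplift is snapped to the 50-m band grid used throughout the analysis.

```python
# Illustrative sketch of the range-shift projection and Eq. (5).

def pct_change_in_area(area_baseline, area_projected):
    """Eq. (5): percentage of change in area after a range shift."""
    return (area_projected / area_baseline - 1) * 100

def shifted_range(lo, hi, warming, lapse_rate):
    """Shift a species' elevational range upslope to track warming.
    lapse_rate is in deg C per m (negative); uplift snaps to 50-m bands."""
    uplift = round(warming / abs(lapse_rate) / 50) * 50
    return lo + uplift, hi + uplift

# Toy cone-shaped hypsograph: available area per 50-m band shrinks upslope
band_area = {e: 3000 - e for e in range(0, 3000, 50)}

def area_between(lo, hi):
    return sum(a for e, a in band_area.items() if lo <= e < hi)

# 2.75 C of warming over a -0.0055 C/m lapse rate -> 500 m of uplift
lo2, hi2 = shifted_range(500, 1500, warming=2.75, lapse_rate=-0.0055)
print(lo2, hi2)                                                # 1000 2000
print(round(pct_change_in_area(area_between(500, 1500),
                               area_between(lo2, hi2)), 1))    # -24.7
```

On this pyramid-like toy range the species loses about a quarter of its total area after the shift, consistent with the qualitative pattern described for pyramid mountains.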
Sensitivity analyses We performed three sensitivity analyses to test whether and how our results would be influenced by the choice of (i) the HFI threshold value used to designate intact land area, (ii) the warming scenario (RCP) considered, and (iii) the elevational range size of a hypothetical montane species used in modeling range shifts. For our first sensitivity analysis, to test how our results were influenced by the choice of the HFI threshold used to designate intact land area, we repeated our analyses after using a more stringent (HFI value ≥ 3) and a more conservative (HFI value ≥ 7) threshold for intact land area following an existing methodology 24 . HFI values < 3 roughly denote land that has not been developed to the level of a pasture and contains no heavy infrastructure, while HFI values < 7 roughly denote land that has not been converted to cropland. Using these alternate thresholds, we repeated the mountain classification, human pressure, and modeled range shift analyses described above, using warming scenarios derived from RCP 8.5. The results using these two alternate thresholds were qualitatively similar to those obtained using our original threshold value of 4 (Supplementary Fig. 8 ). Generally speaking, using a threshold value of 7 resulted in a pattern more similar to that using no threshold value (i.e., the total land case), while using a threshold value of 3 resulted in a pattern where changes in projected area for species undergoing range shifts were slightly greater than those using the threshold value of 4. The largest discrepancies in results driven by the choice of threshold values were observed for inverse pyramid mountains when considering a threshold value of 7. 
For our second sensitivity analysis, to test how our results were influenced by the choice of the warming scenario used to model elevational range shifts, we repeated our analyses using RCP 4.5, which represents a more moderate warming scenario compared to RCP 8.5, again using projected mean annual temperature data using WorldClim v1.4 64 . On average, mountain-range specific warming rates using RCP 4.5 were two-thirds as high as when using RCP 8.5 (Supplementary Fig. 5b, c ). This led to smaller modeled elevational range shifts for most mountain ranges. Patterns of mean and standard error percentage of change in projected area assessments under RCP 4.5 were qualitatively similar to those using RCP 8.5 at the global scale (Supplementary Fig. 10 ), but mean percentage gains under RCP 4.5 were significantly lower for diamond and inverse pyramid mountains at lower elevations, and were significantly greater for hourglass and pyramid mountains at lower elevations (Fig. 3 ; Supplementary Fig. 6 ). The proportion of each mountain range’s elevational gradient where the percentage of change in area of intact land equaled or exceeded the percentage of change in area of total land following modeled range shifts was generally greater under the RCP 4.5 scenario, though not always (Fig. 2 ; Supplementary Fig. 7 ). Finally, for our third sensitivity analysis, to test how our results were influenced by the choice of the elevational range size of a hypothetical montane species used in modeling range shifts, we considered a wide variety of elevational range sizes from 100 m to a maximum of 4000 m meant to represent species with different ecologies, degrees of specialization, and climatic niche breadths in our modeled range shift procedure. 
We then plotted mean and standard error percentage of change in projected area following range shifts for all modeled species within elevational range size classes across all mountain ranges by mountain classification to assess the influence of elevational range size (Fig. 4a ). Across all mountain classes, smaller elevational range sizes were associated with larger mean percentage gains in projected area. Gains were generally similar for species with elevational range sizes >1500 m on diamond, hourglass, and inverse pyramid mountains, and for species with elevational range sizes >1000 m on pyramid mountains. We also investigated how the choice of elevational range size affected our calculation of the proportion of mountain ranges where the percentage of change in area of intact land equaled or exceeded the percentage of change in area of total land following modeled range shifts. In general, patterns were qualitatively similar across elevational range sizes on diamond, hourglass, and pyramid mountains. We found a trend for greater elevational range sizes to be associated with greater proportions for inverse pyramid mountains, and this trend was also slightly apparent for hourglass mountains. However, generally speaking, the overall average response (i.e., response averaged over all elevational range sizes) was similar to that of a given elevational range size. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The SRTM elevation data are freely available at . The Human Footprint index data are freely available from the original author’s Dryad Digital Repository ( ). The WorldClim current and future mean annual temperature data are freely available at . The mountain range boundary delineations from the Global Mountain Biodiversity Assessment are freely available at .
Code availability R scripts for performing the mountain classification analysis and modeling elevational range shifts for an example mountain range are available as Supplementary Software. | A new WCS-led study reveals that mountain-dwelling species fleeing warming temperatures by retreating to higher elevations may find refuge in areas of reduced human pressure. A new study published in Nature Communications by scientists at WCS, the University of California, Berkeley, and the United States Forest Service shows that nearly 60 percent of all mountainous area is under intense human pressure. Most of the pressure is at low elevations and mountain bases, which tend to be easier places for people to live, grow food, and build roads. The scientists then used climate models to make predictions about how species would move under climate change. Based on their predictions, they found that species tend to move to higher elevations, and that these higher elevations tend to have more intact land for species because there is less human pressure. Without factoring in human pressure, the authors warn that conservation actions may be misguided. Factoring in human pressure reveals the true 'shape' of a mountain for species that are restricted to intact landscapes, which are often the species of greatest conservation concern. Here, the 'true shape' refers to how much land area is potentially available as habitat for a species as it moves up in elevation, not simply how much total land area is available. The true shape can reveal where species will tend to lose versus gain intact land area as they shift under climate change: the elevations where species are expected to lose area represent the priority zones for conservation. Mountains are home to over 85 percent of the world's amphibians, birds, and mammals, making them global conservation priorities.
But mountain-dwelling species are at risk from human activities, such as agriculture, livestock grazing, and development that reduce their habitat, and climate change that threatens to push species upslope as they struggle to find tolerable temperatures. "Species are adapted to certain temperature conditions. As temperatures warm in mountains, scientists have documented species moving to higher elevations to maintain the same temperatures," said Paul Elsen, a WCS Climate Adaptation Scientist and lead author of the study. "This was always seen as a problem, because species would have less land area and less habitat to occupy at high elevations. But what we found is that as species move upslope, they tend to move away from areas that are already under intense human pressure and into areas with reduced human pressure. As a result, they can occupy more intact land area, even if the total amount of land area declines." The authors combined several global databases to make their assessments: high-resolution digital elevation models gave a picture about how much surface area is available at different elevations. The Human Footprint Index provided information on pressure from human activities. Global climate models projected how temperatures are likely to change by the late 21st century. The authors then used computer simulations to place hundreds of thousands of hypothetical 'species' across all mountain ranges at different elevations and then predicted how they would shift their ranges based on climate projections. For each simulation, they compared the amount of area the species had to begin with to the amount they would have after the range shift under climate change. Said Elsen: "We were surprised to find that many species had more intact land area available after the range shift compared to when they started." 
The results suggest that many species in mountain ranges may have more intact land area available in the future if they track warming temperatures to higher slopes, though there were exceptions. "Our results offer a glimmer of hope for montane species under climate change," Elsen said. "Montane species are still facing tremendous human pressure, especially at low elevations, but we have the opportunity now to protect intact habitats at higher elevations to give these species the best possible chance going forward." | 10.1038/s41467-020-15881-x |
Medicine | Researchers find biomarker for autism that may aid diagnostics | www.nature.com/tp/journal/v5/n … full/tp2015123a.html Journal information: Translational Psychiatry , Nature | http://www.nature.com/tp/journal/v5/n9/full/tp2015123a.html | https://medicalxpress.com/news/2015-09-biomarker-autism-aid-diagnostics.html | Abstract Autism spectrum disorder (ASD) affects 2% of children, and is characterized by impaired social and communication skills together with repetitive, stereotypic behavior. The pathophysiology of ASD is complex due to genetic and environmental heterogeneity, complicating the development of therapies and making diagnosis challenging. Growing genetic evidence supports a role of disrupted Ca 2+ signaling in ASD. Here, we report that patient-derived fibroblasts from three monogenic models of ASD—fragile X and tuberous sclerosis TSC1 and TSC2 syndromes—display depressed Ca 2+ release through inositol trisphosphate receptors (IP 3 Rs). This was apparent in Ca 2+ signals evoked by G protein-coupled receptors and by photoreleased IP 3 at the levels of both global and local elementary Ca 2+ events, suggesting fundamental defects in IP 3 R channel activity in ASD. Given the ubiquitous involvement of IP 3 R-mediated Ca 2+ signaling in neuronal excitability, synaptic plasticity, gene expression and neurodevelopment, we propose dysregulated IP 3 R signaling as a nexus where genes altered in ASD converge to exert their deleterious effect. These findings highlight potential pharmaceutical targets, and identify Ca 2+ screening in skin fibroblasts as a promising technique for early detection of individuals susceptible to ASD. Introduction Autism spectrum disorder (ASD) is a complex heterogeneous disorder 1 , 2 , 3 , 4 with a poorly defined etiology 5 , 6 , 7 , 8 and diagnosis criteria that are strictly clinical because there are as yet no objective biomarkers of the disorder. 
9 , 10 Its high heritability, however, suggests a strong genetic component, 8 and a wealth of genetic data now implicate a host of genes encoding ion channels and associated intracellular Ca 2+ signaling proteins in the molecular architecture of ASD, 5 , 6 , 7 , 8 placing Ca 2+ homeostasis at a central node. Cytosolic Ca 2+ homeostasis involves ion flux from intracellular organellar stores, as well as transport across the plasma membrane. Diseases of the intracellular organelles are an emerging area of medicine. Several prototypes are already well-developed for neurogenetic diseases of mitochondria and the lysosomes, 11 , 12 , 13 , 14 and increasing evidence implicates the endoplasmic reticulum (ER). 15 Ca 2+ release from the ER through inositol trisphosphate receptors (IP 3 Rs) has been shown to be altered in cognitive disorders including Alzheimer’s 16 , 17 and Huntington’s diseases, 18 and IP 3 Rs have recently been identified among the genes affected by rare de novo copy number variants in ASD patients. 19 In neurons, IP 3 R-mediated Ca 2+ release is involved in crucial functions—including synaptic plasticity and memory, 20 , 21 neuronal excitability, 22 , 23 neurotransmitter release, 24 , 25 axon growth 26 and long-term changes in gene expression 27 —highlighting the central integrating position played by IP 3 Rs. 28 Ca 2+ release is activated in response to the second messenger IP 3 , which is produced on stimulation of G protein-coupled receptors (GPCRs) 29 and tyrosine kinase-linked 30 cell surface receptors. The specificity of the resulting cellular responses is ensured by an exquisite temporo-spatial patterning of cytosolic Ca 2+ signals. 31 , 32 Opening of the IP 3 R channel requires not only IP 3 , but also binding of Ca 2+ to receptor sites on the cytosolic face. This leads to biphasic regulation, such that small elevations of cytosolic Ca 2+ induce channel opening, whereas larger elevations cause inactivation. 
33 The positive feedback by Ca 2+ (Ca 2+ -induced Ca 2+ release; CICR) may remain restricted to individual or clustered IP 3 Rs, producing local Ca 2+ signals known, respectively, as Ca 2+ blips and puffs, 34 or may propagate throughout the cell as a saltatory wave by successive cycles of Ca 2+ diffusion and CICR. Thus, IP 3 -mediated Ca 2+ signaling represents a hierarchy of Ca 2+ events of differing magnitudes. 35 , 36 The spatial patterning it orchestrates is critical to proper cellular function, and we hypothesize that disruptions in the magnitude and organization of neuronal Ca 2+ signals may contribute to the pathogenesis of ASD. Our understanding of the etiology of ASD 8 , 9 , 37 has been greatly advanced by studies of syndromic forms of ASD caused by rare single-gene mutations. Fragile X (FXS) is the most common monogenic cause of ASD, 38 and is a widely used and well-characterized model of ASD. 37 , 39 It results from silencing of the fragile X mental retardation ( FMR1 ) gene and absence of its corresponding protein, the fragile X mental retardation protein (FMRP). Tuberous sclerosis (TS) is a syndrome caused by dominant mutations in one of two genes, hamartin ( TSC1 ) or tuberin ( TSC2 ), causing ASD-like behaviors, seizures, intellectual disability and characteristic brain and skin lesions. Here, we used primary, untransformed skin fibroblasts derived from patients with FXS and TS to evaluate ASD-associated functional deficits in IP 3 -mediated Ca 2+ signaling. The physiology of IP 3 signaling in fibroblasts has been extensively characterized, 40 , 41 , 42 providing a validated and convenient model for the study of Ca 2+ signaling in ASD, with the further advantage that cell lines are readily obtained as clinical samples from both disease and matched control patient populations.
Moreover, identification of disease-specific signaling defects in skin cells has potential as a biomarker for diagnostic purposes, much as is now routine in other organelle diseases, such as Tay–Sachs and Niemann–Pick diseases, 43 , 44 and through which novel therapies for these diseases have emerged. 45 Our results demonstrate that IP 3 -mediated Ca 2+ signals are significantly depressed in fibroblasts from both FXS and TS patients and, by resolving signals at the single-channel level, we provide evidence of fundamental defects in IP 3 R channel activity in ASD. We thus propose dysregulated IP 3 R signaling as a nexus where genes altered in ASD converge to exert their deleterious effect. Materials and methods Materials The membrane-permeant caged IP 3 analog ci-IP 3 /PM (D-2,3-O-Isopropylidene-6-O-(2-nitro-4,5-dimethoxy)benzyl-myo-Inositol 1,4,5-trisphosphate-Hexakis (propionoxymethyl) Ester) was obtained from SiChem (Bremen, Germany), diluted in 20% pluronic F-127 solution in dimethylsulfoxide to a stock concentration of 200 μ M and frozen in 2-μl aliquots until needed. EGTA-AM and pluronic F-127 were from Molecular Probes/Invitrogen (Carlsbad, CA, USA). Fluo-8 AM and Cal520 were purchased from AAT Bioquest (Sunnyvale, CA, USA). Fibroblast cells Primary, untransformed human skin fibroblasts were purchased from Coriell Cell Repository (Camden, NJ, USA). ASD cell lines and matched controls with their corresponding Coriell numbers are as follows: FXS-1 (GM05848)/Ctr-1 (GM00498), FXS-2 (GM09497)/Ctr-2 (GM02912), FXS-3 (GM05185)/Ctr-3 (GM03440), FXS-4 (GM04026)/Ctr-4 (GM02185), FXS-5 (GM05131)/Ctr-5 (GM05659), TS1-A (GM06148)/Ctr-6 (GM01863), TS1-B (GM06149)/Ctr-3 (GM03440) and TS2 (GM06121)/Ctr-2 (GM02912). All cell lines came from male Caucasian patients.
Cells were cultured in Dulbecco’s Modified Eagle’s Media (ATCC 30-2002; ATCC, Manassas, VA, USA) supplemented with 10% (v/v) fetal bovine serum and 1 × antibiotic mix (penicillin/streptomycin) at 37 °C in a humidified incubator gassed with 95% air and 5% CO 2 , and used for up to 15 passages. Cells were harvested in Ca 2+ , Mg 2+ -free 0.25% trypsin-EDTA (Life Technologies, Grand Island, NY, USA) and sub-cultured for 2 days before use. High-throughput Ca 2+ imaging Skin fibroblasts were seeded in clear-bottom black 96-well plates (T-3026-16; Greiner Bio One, Monroe, NC, USA) at 1.3 × 10 4 cells per well and grown to confluency. On the day of the experiment, cells were loaded by incubation with 2 μ M of the membrane-permeant Ca 2+ indicator Fluo-8 AM 46 in standard buffer solution (130 m M NaCl, 2 m M CaCl 2 , 5 m M KCl, 10 m M glucose, 0.45 m M KH 2 PO 4 , 0.4 m M Na 2 HPO 4 , 8 m M MgSO 4 , 4.2 m M NaHCO 3 , 20 m M HEPES and 10 μ M probenecid) with 0.1% fetal bovine serum for 1 h at 37 °C, then washed with a standard buffer solution. Ca 2+ -free solution (120 m M NaCl, 4 m M KCl, 2 m M MgCl 2 , 10 m M glucose, 10 m M HEPES, 1 m M EGTA) was added to each well (100 μl), and cells were allowed to equilibrate for 5 min prior to assay with a Fluorometric Imaging Plate Reader (FLIPR; Molecular Devices, Sunnyvale, CA, USA). A basal read of fluorescence in each well (470–495 nm excitation and 515–575 nm emission, expressed in terms of a.u.) was read for 2 s. Next, 100 μl of 2 × ATP (1 μ M , 10 μ M , 100 μ M final concentration) or 100 μl of 2 × ionomycin (to 1 μ M final concentration) in Ca 2+ -free HBSS was added to each well. Only a single recording was obtained from a given well. Ionomycin-induced fluorescence changes from wells without prior addition of ATP were used to normalize ATP-evoked responses. Recordings were performed in triplicate. 
Whole-cell Ca 2+ imaging Cells seeded in glass-bottomed dishes were loaded for imaging using membrane-permeant esters of Fluo-8 and caged i-IP 3 (ci-IP 3 ). 47 , 48 Briefly, cells were incubated at room temperature in HEPES-buffered saline (2.5 m M CaCl 2 , 120 m M NaCl, 4 m M KCl, 2 m M MgCl 2 , 10 m M glucose, 10 m M HEPES) containing 1 μ M ci-IP 3 /PM for 45 min, after which 4 μ M Fluo-8 AM was added to the loading solution for a further 45 min before washing three times with the saline solution. [Ca 2+ ] i changes were imaged using a Nikon Eclipse microscope system (Nikon, Melville, NY, USA) with a × 40 (numerical aperture=1.30) oil objective. Fluo-8 fluorescence was excited by 488-nm laser light, and emitted fluorescence (λ>510 nm) was imaged at 30 frames per s using an electron-multiplying CCD camera iXon DU897 (Andor, Belfast, UK). A single flash of ultraviolet (UV) light (350–400 nm) from an arc lamp focused to uniformly illuminate a region slightly larger than the imaging field was used to uncage i-IP 3 , a metabolically stable isopropylidene analog of IP 3 , which evoked activity persisting for a few minutes. Image data were acquired as stack.nd2 files using Nikon Elements for offline analysis. Fluorescence signals are expressed as a ratio (Δ F / F 0 ) of changes in fluorescence (Δ F ) relative to the mean resting fluorescence at the same region before stimulation ( F 0 ). Recordings were performed in triplicate, and the measurement outcomes were compared using the Mann–Whitney test. Imaging local Ca 2+ events For experiments studying local Ca 2+ signals, cells were incubated at room temperature in HEPES buffer containing 1 μ M ci-IP 3 /PM and 4 μ M Cal520 for 1 h, 48 washed and further incubated with 10 μ M EGTA-AM for another hour. Cells were then washed three times and remained in buffer for 30 min to allow for de-esterification of loaded reagents.
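The ΔF/F0 normalization described above amounts to the following minimal sketch, using a hypothetical fluorescence trace; the actual traces were extracted offline from the acquired .nd2 stacks.

```python
# Minimal sketch of the dF/F0 normalization of a raw fluorescence trace.

def delta_f_over_f0(trace, n_baseline):
    """Express a raw fluorescence trace (a.u.) as (F - F0)/F0, where F0 is
    the mean of the first n_baseline (pre-stimulus) frames."""
    f0 = sum(trace[:n_baseline]) / n_baseline
    return [(f - f0) / f0 for f in trace]

# Hypothetical trace: resting ~100 a.u., then a transient after photorelease
trace = [100, 100, 100, 150, 200, 160, 120, 100]
print([round(v, 2) for v in delta_f_over_f0(trace, 3)])
# [0.0, 0.0, 0.0, 0.5, 1.0, 0.6, 0.2, 0.0]
```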
[Ca 2+ ] i signals were imaged using the Nikon Eclipse microscope system described above, but now utilizing an Apo total internal reflection fluorescence × 100 (numerical aperture=1.49) oil objective. The imaging region on the camera sensor was cropped to 128x512 pixels (20.48 × 81.92 μm) to enable rapid (129 frames per s) imaging. Cal520 fluorescence ( λ > 510 nm) was excited by 488-nm laser light within an evanescent field extending a few hundred nanometers into the cells. Image acquisition and processing was as described above for whole-cell imaging, except that local events were identified and analyzed using a custom-written algorithm based on MatLab. 48 Western blot analysis Cell lines were grown in triplicates and lysed in mammalian protein extraction reagent (Thermo Scientific, Waltham, MA, USA) with complete mini protease inhibitor cocktail tablets (Roche, Dallas, TX, USA) and phosphatase 2 inhibitor cocktail (Sigma-Aldrich, St. Louis, MO, USA). Lysates were subsequently centrifuged at 14 000 r.p.m. for 15 min at +4 °C. Protein levels in the cell lysate were measured using the Bradford method. 49 About 20 μg of protein was loaded per well with 5% β-mercaptoethanol on 3–8% gradient Tris-Acetate gels with Tris-Acetate SDS running buffer (Invitrogen) and separated by electrophoresis at 130 V. Proteins were transferred at 50 mA for 6 h to 0.2 μm nitrocellulose membranes, which were blocked in 5% nonfat milk in tris-buffered saline supplemented with 0.1% tween-20 for 1 h. Membranes were probed overnight at +4 °C with the following primary antibodies: rabbit polyclonal anti-IP 3 R1 (AB5882, Millipore, Billerica, MA, USA), rabbit polyclonal anti-IP 3 R2 (LS-C24911, LifeSpan Biosciences, Nottingham, UK), mouse monoclonal anti-IP 3 R3 (610312, BD Transduction Laboratories, Franklin Lakes, NJ, USA), rabbit polyclonal anti-IP 3 R1/2/3 (sc-28613, Santa-Cruz Biotechnology, Dallas, TX, USA), rabbit polyclonal anti-beta actin (ab8227, Abcam, Cambridge, MA, USA). 
Membranes were then incubated, as appropriate, with goat anti-rabbit (1:5000, Sigma-Aldrich) or goat anti-mouse (1:5000, Sigma-Aldrich) HRP-conjugated secondary antibodies for 1 h. Bands were visualized by an ImageQuant LAS 4000 imager (GE Healthcare, Uppsala, Sweden) using peroxidase substrate for enhanced chemiluminescence (ECL Prime; Amersham, Marlborough, MA, USA). Levels of protein expression were quantified via densitometry analysis using ImageJ ( ), and are expressed normalized to actin levels. Results Agonist-induced Ca 2+ signaling is depressed in FXS and TS fibroblasts To screen for defects in IP 3 -mediated signaling associated with ASD, we used a FLIPR to monitor cytosolic Ca 2+ changes in fibroblasts loaded with the Ca 2+ -sensitive fluorescent indicator Fluo-8. Primary skin fibroblasts derived from five FXS males and five ethnicity- and age-matched unaffected male donors were grown to confluency on 96-well plates. Cells were stimulated by application of ATP to activate purinergic P2Y receptors 50 , 51 and thereby evoke GPCR-mediated intracellular Ca 2+ release through IP 3 Rs. Recordings were made in Ca 2+ -free extracellular solution to exclude complication from Ca 2+ influx through plasmalemmal channels. Different concentrations of ATP were applied to individual wells containing FXS and matched control cells. Figure 1a (top panel) illustrates representative results, showing smaller ATP-evoked Ca 2+ signals in FXS cells. To determine whether differences in ATP-evoked signals may result from differences in filling of ER Ca 2+ stores, we recorded signals evoked in separate wells by application of 1 μM ionomycin in Ca 2+ -free medium to completely liberate all intracellular Ca 2+ stores ( Figure 1a , lower panel). No significant difference was observed between mean ionomycin-evoked Ca 2+ signals in FXS and control cells ( Figure 1b ), suggesting that there is no systematic defect in ER Ca 2+ store filling in FXS cells. 
To normalize for differences in store content among different cell lines and experimental days, we expressed ATP-evoked signals as a percentage of the ionomycin response obtained in parallel measurements in the same 96-well plate for each given cell line. Mean normalized Ca 2+ signals evoked by 100 μ M ATP were significantly depressed in all five FXS fibroblast lines in comparison to their matched controls ( Figure 1c ). A similar depression was observed at lower concentrations of ATP, pooling data across all five FXS and control cell lines ( Figure 1d ). These results were consistently reproducible across different experimental days and matched cell pairs (total of 12 paired trials). Figure 1 Ca 2+ responses to extracellular application of ATP in Ca 2+ -free solution are depressed in human skin fibroblasts from FXS patients as compared with matched controls. ( a ) Representative FLIPR traces showing response to various concentrations of extracellular ATP (top panel) and to the Ca 2+ ionophore ionomycin (lower panel) in control (Ctr) and FXS cells loaded with the Ca 2+ indicator Fluo-8. Traces show fluorescence in arbitrary units, and each recording was obtained from a separate well. ( b ) Peak Ca 2+ responses to 1 μ M ionomycin in five independent control and five independent FXS cell lines. Bars show mean and s.e.m. of triplicate measurements on five independent cell lines; n =5. ( c ) Cells from five FXS cell lines (gray bars) and matched controls (black bars) were stimulated with 100 μ M ATP in Ca 2+ -free solution to stimulate Ca 2+ release from intracellular Ca 2+ stores. Recordings were performed in triplicate, averaged and normalized with respect to corresponding ionomycin responses in Ca 2+ -free solution. n =3 in each group. ( d ) Normalized Ca 2+ responses to various concentrations of ATP derived by combining results from five FXS and five matched controls. 
All data in this and following figures are presented as mean±s.e.m.; * P <0.05; ** P <0.01 calculated from a two-sample Student’s t -test. FXS, fragile X; n/s, not significant. Full size image We further extended our findings to another genetic disorder with high co-morbidity with ASD, TS, caused by mutations in either of two distinct and independent genes—hamartin ( TSC1 ) or tuberin ( TSC2 ). Figure 2 shows data obtained by FLIPR screening in the same way as performed for Figure 1 . Three cell lines derived from TS patients demonstrated a consistent and highly significant deficit in ATP-evoked Ca 2+ signals as compared with matched controls ( Figures 2a–c ), but without any appreciable difference in intracellular Ca 2+ store content as assessed by ionomycin application ( Figure 2a , lower panel). These findings were consistently replicated on different experimental days (total of six paired trials). Figure 2 Ca 2+ responses are strongly depressed in TS1 and TS2 fibroblasts, but IP 3 receptor expression is not correlated with Ca 2+ signal depression in TS or FXS cells. ( a ) Representative FLIPR traces showing response to various concentrations of extracellular ATP (top panel) and to the Ca 2+ ionophore ionomycin (lower panel) in control (Ctr) and TS cells loaded with the Ca 2+ indicator Fluo-8. ( b ) Three cell lines from TS patients (gray bars) and matched controls (black bars) were stimulated with 100 μ M ATP in Ca 2+ -free solution to stimulate Ca 2+ release from intracellular Ca 2+ stores. Recordings were performed in triplicate, averaged and normalized with respect to corresponding ionomycin responses in Ca 2+ -free solution. ( c ) Normalized Ca 2+ responses to various concentrations of ATP derived by combining results from three TS and three matched controls. n = 3 cell lines in each group. All data are presented as mean ± s.e.m.; * P <0.05; ** P <0.01 calculated from a two-sample Student’s t -test. 
( d ) Scatter plot showing IP 3 R expression levels in TS and FXS cell lines determined by western blotting versus the mean ATP-evoked Ca 2+ signals in these cells relative to matched control cells. Different symbols represent different cell lines (TS2, downward arrow; TS1-B, circle; FXS-2, upward arrow and FXS-4, square), and different colors represent IP 3 R expression levels as determined using antibodies for type 1 (black), type 2 (red), type 3 (blue) IP 3 Rs and a non type-specific antibody (green). All data are normalized relative to matched control cells. Solid lines are regression fits to data for IP 3 R1 (black), IP 3 R2 (red), IP 3 R3 (blue), and total IP 3 Rs (green). The gray dashed line represents a one-to-one relationship between normalized Ca 2+ signal and normalized IP 3 R expression. FLIPR, fluorometric imaging plate reader; FXS, fragile X; IP 3 R, inositol trisphosphate receptor; n/s, not significant; TS, tuberous sclerosis. Full size image The diminished Ca 2+ signals in FXS and TS cells could result from lower expression levels of IP 3 R proteins. To investigate this, we performed western blot analysis on four cell lines selected as showing pronounced defects in Ca 2+ signaling (FXS-2, FXS-4, TS1-B and TS2), together with three matched control lines (Ctr-2, Ctr-3 and Ctr-4), using antibodies specific to type 1, 2 and 3 IP 3 Rs, as well as a non type-specific antibody ( Supplementary Figure 1 ). Our results showed an overall slight decrease in IP 3 R expression across all isotypes in FXS and TS cells relative to their matched controls ( Figure 2d ). However, in all cases the depression of IP 3 R expression was much smaller than the corresponding depression of Ca 2+ signaling as measured in the FLIPR experiments, and there was little or no correlation between IP 3 R expression and Ca 2+ signaling in the TS and FXS cells after normalizing relative to their matched controls ( Figure 2d ). 
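The normalization and statistics described earlier (ATP-evoked peaks expressed as a percentage of the mean ionomycin response from the same plate, compared with a two-sample Student's t-test) can be sketched as follows. All peak values here are hypothetical, chosen only to illustrate the calculation, not the study's measurements.

```python
import numpy as np
from scipy import stats

def normalize_to_ionomycin(atp_peaks, ionomycin_peaks):
    """Express ATP-evoked peak Ca2+ signals as a percentage of the mean
    ionomycin response measured in parallel on the same 96-well plate."""
    return 100.0 * np.asarray(atp_peaks, dtype=float) / np.mean(ionomycin_peaks)

# Hypothetical triplicate peak responses (arbitrary fluorescence units)
ctrl_atp = [820.0, 790.0, 860.0]
fxs_atp = [310.0, 280.0, 340.0]
iono = [1000.0, 980.0, 1020.0]  # shared ionomycin reference wells

ctrl_norm = normalize_to_ionomycin(ctrl_atp, iono)  # ~82% of store content
fxs_norm = normalize_to_ionomycin(fxs_atp, iono)    # ~31% of store content

# Two-sample Student's t-test, as quoted in the figure legends
t_stat, p_value = stats.ttest_ind(ctrl_norm, fxs_norm)
```

Normalizing within each plate cancels day-to-day and line-to-line differences in store filling and dye loading, so only the IP 3 -dependent fraction of release is compared between groups.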
IP 3 -induced Ca 2+ release is reduced in FXS and TS cells To discriminate whether the observed deficits in ATP-induced Ca 2+ signals in FXS and TS cell lines arose through defects in any of the intermediate steps from binding to purinergic GPCRs to generation of IP 3 or at the level of IP 3 -mediated Ca 2+ liberation itself, we circumvented upstream GPCR signaling by loading cells with a caged analog of IP 3 (ci-IP 3 ). 47 UV flash photolysis of ci-IP 3 to photorelease physiologically active i-IP 3 then allowed us to directly evoke Ca 2+ liberation through IP 3 Rs in a graded manner by regulating flash duration and intensity to control the amount of i-IP 3 that was photoreleased. Figure 3a illustrates images obtained by epifluorescence microscopy of FXS and control fibroblasts loaded with Fluo-8 and ci-IP 3 by incubation with membrane-permeant esters of these compounds. Figure 3b shows superimposed fluorescence ratio (Δ F / F 0 ) traces measured from several representative FXS-2 and matched control Ctr-2 cells in response to uniform photolysis flashes. Concordant with our observations of defects in ATP-induced global Ca 2+ signals, global cytosolic Ca 2+ responses evoked by equivalent photorelease of i-IP 3 in these FXS cells were smaller than in control cells ( Figure 3c ); and displayed a longer time to peak ( Figure 3d ) and slower rate of rise ( Figure 3e ). Similar results were obtained from two other FXS-Ctr cell pairs (FXS-1/Ctr-1: 20.7±3.9/44.6±12.2 %ΔF/F 0 , FXS-3/Ctr-3: 20.1±4.8/156.8±17.3). Moreover, we observed a consistent proportional depression of Ca 2+ signals for different relative UV flash strengths corresponding to photorelease of different i-IP 3 concentrations (25% flash strength, pooled FXS response 61% of control; 50% flash, 65% of control; 100% flash, 74% of control: n =13–17 cells for each flash duration). Figure 3 Ca 2+ release evoked by photoreleased IP 3 is depressed in FXS and TS cells. 
( a ) Representative frames taken from image sequences of control (top) and FXS fibroblasts (bottom) loaded with Fluo-8 and stimulated by photorelease of i-IP 3 . Increasing cytosolic [Ca 2+ ] (increasing fluorescence ratio % F / F 0 ) is depicted on a pseudocolor scale, as indicated by the color bar. Time-stamps indicate time from beginning of the record; the photolysis flash was delivered at 3 s. The monochrome panels on the left show resting fluorescence before stimulation to indicate cell outlines. ( b ) Superimposed traces of representative global single-cell Ca 2+ responses to uncaging of i-IP 3 in FXS (red) and control fibroblasts (black). Traces represent average fluorescence ratio signals (%Δ F / F 0 ) throughout regions-of-interest encompassing the whole cell. Arrow indicates time of the UV flash. Data are from the cell pair labeled as FXS-2/Ctr-2 in Figure 1c . ( c ) Mean peak amplitude of Ca 2+ responses is significantly depressed in FXS cells relative to matched controls. ( d ) Mean latency from time of photolysis flash to peak IP 3 -evoked Ca 2+ response is prolonged in FXS fibroblasts. ( e ) Mean rate of rise of Ca 2+ fluorescence signal (peak amplitude/time to peak) is reduced in FXS cells as compared with control cells. Data in ( c–e ) are from 13 control cells and 14 FXS cells. ( f–i ) Corresponding traces ( f ), and mean values of amplitude ( g ), latency ( h ) and rate of rise ( i ) derived from cells labeled as Ctr-3 and TS1-B in Figure 2b . Data are from 11 TS cells and 12 matched controls. All data are presented as mean±s.e.m.; * P <0.05; ** P <0.01 calculated from a two-sample Student’s t -test. FXS, fragile X; IP 3 R, inositol trisphosphate receptor; n/s, not significant; TS, tuberous sclerosis. Full size image TS cells also showed depressed and slowed Ca 2+ responses to photoreleased i-IP 3 . 
Measurements from the matched TS1-B and Ctr-3 cell lines ( Figure 3f ) revealed a pronounced deficit in average Ca 2+ signal amplitudes ( Figure 3g ); and again the time to peak was lengthened ( Figure 3h ) and the rate of rise slowed ( Figure 3i ). These differences were apparent employing two different relative UV flash strengths (15% flash strength, TS response 18% of control; 25% flash, 20% of control: n =13–15 cells for each flash duration). IP 3 -signaling is affected at the level of local events IP 3 -mediated cellular Ca 2+ signaling is organized as a hierarchy, wherein global, cell-wide signals, such as those discussed above, arise by recruitment of local, ‘elementary’ events involving individual IP 3 R channels or clusters of small numbers of IP 3 Rs. 34 , 52 We therefore imaged these elementary events to elucidate how deficits in the global Ca 2+ signals in FXS and TS cells may arise at the level of local IP 3 R clusters. We selected one FXS (FXS-3) fibroblast line, one TS1 (TS1-B) line and a common control (Ctr-3) cell line matched to both. Ca 2+ release from individual sites was resolved utilizing total internal reflection fluorescence microscopy of Cal520 (a Ca 2+ indicator that provides brighter signals than Fluo-8), in conjunction with cytosolic loading of the slow Ca 2+ buffer EGTA to inhibit Ca 2+ wave propagation. 53 This technique captures in real time the duration and magnitude of the underlying Ca 2+ flux, providing a close approximation of the channel gating kinetics as would be recorded by electrophysiological patch-clamp recordings. 54 Ca 2+ release evoked by spatially uniform photolysis of ci-IP 3 across the imaging field was apparent as localized fluorescent transients of varying amplitudes, arising at numerous discrete sites, widely distributed across the cell body ( Figure 4a ). 
Representative fluorescence traces illustrating responses at several sites (marked by large circles in Figure 4a ) are shown in Figures 4b – d , respectively, and illustrate the time course and spatial distribution of selected individual events. Figure 4 Local IP 3 -evoked Ca 2+ events. ( a ) Resting Cal520 fluorescence of a control fibroblast (outlined) imaged by TIRF microscopy. Circles mark all sites where Ca 2+ release events were identified within a 40 s imaging record following photorelease of i-IP 3 in a 128 × 512 pixel (20.48 × 81.92 μm) imaging field. Larger circles mark sites from which traces in b were obtained. ( b ) Representative traces from sites numbered in a . Dots underneath the traces mark events arising at that particular site; unmarked signals represent fluorescence bleed-through from events localized to adjacent but discrete sites. Arrow indicates the timing of the UV flash. ( c ) Examples of individual events shown on an expanded timescale to better illustrate their kinetics. ( d ) Surface intensity plot of three individual puffs near their peak times. ( e ) A single Ca 2+ event shown on an expanded scale to illustrate measurements of peak amplitude and event duration ( τ o ) at half-maximal amplitude. IP 3 , inositol trisphosphate; TIRF, total internal reflection fluorescence. Full size image To quantify differences in elementary Ca 2+ events between the cell lines, we utilized a custom-written, automated algorithm 48 to detect events and measure their amplitudes and durations ( Figure 4e ). A striking difference between control and ASD lines was apparent in the numbers of detected sites, with control cells showing on average 97 sites per imaging field, whereas FXS and TS cells showed only 12 and 29 sites, respectively ( Figure 5a ). 
The mean frequency of events per site appeared higher in control cells than in both FXS and TS cells ( Figure 5b ), but quantification was imprecise because many sites, particularly in the FXS and TS cells, showed only a single event. Using the latency between the UV flash and first event at each site as an alternative measure of the probability of event initiation 55 , 56 showed no significant difference among FXS, TS and control cell lines ( Figure 5c ). Mean event amplitudes were also similar among the three cell lines ( Figure 5d ). A second key difference between the control and FXS and TS cells was apparent in the durations of the local events. In all cell lines, event durations were statistically distributed as single-exponentials, as expected for stochastic events. However, the time constants fitted to these distributions were appreciably shorter in FXS and TS cells as compared with control cells ( Figure 5e ). Figure 5 IP 3 -mediated Ca 2+ signaling in FXS and TS fibroblasts is impaired at the level of local events. Data are from 17 FXS-3 cells, 17 TS1-B cells and 16 control cells (Ctr-3) matched to both experimental groups. Open black squares in a – d represent mean measurements from individual cells; histograms and error bars are overall mean+1 s.e.m. across all cells in each group. ( a ) Total numbers of Ca 2+ release sites detected within cells during 40 s imaging records following uniform photorelease of i-IP 3 . ( b ) Mean event frequency per site, calculated from the number of events observed per site throughout the recording period. ( c ) Mean latencies following the photolysis flash to the first event at each site within a cell. ( d ) Mean amplitudes of all events within each cell. ( e ) Distributions of event durations (at half-maximal amplitude) derived from all events identified in FXS (open diamonds), TS (stars) and control cells (black squares). 
The data are fit by single-exponential distributions with time constants t o of 15 ms (both FXS and TS) and 32 ms (control). Outcomes were compared using two-sample Mann–Whitney test. * P <0.05; ** P <0.01. FXS, fragile X; IP 3 , inositol trisphosphate; n/s, not significant; TS, tuberous sclerosis. Full size image Discussion We report abnormalities of IP 3 -mediated Ca 2+ signaling in three distinct genetic models that display high co-morbidity with ASD—FXS syndrome and two genetically-distinct forms of TS (TSC1 and TSC2). Ca 2+ responses evoked by agonist stimulation of GPCR-mediated IP 3 signaling were significantly smaller in fibroblasts derived from patients with FXS and TS, as compared with matched control cell lines. In contrast, we found no significant differences in Ca 2+ liberation evoked by application of the Ca 2+ ionophore ionomycin, indicating that the diminished responses to IP 3 do not result from diminished ER Ca 2+ store content. Moreover, Ca 2+ signals evoked by intracellular uncaging of IP 3 were depressed in FXS and TS cell lines, pointing to a deficit at the level of Ca 2+ liberation through IP 3 Rs and not solely because of diminished GPCR-mediated production of IP 3 . Finally, we conclude that the depression of Ca 2+ signals cannot be attributed entirely or substantially to reduced expression of IP 3 R proteins, because mean agonist-evoked Ca 2+ responses across four FXS and TS lines were about 22% of matched controls, whereas western blots showed mean IP 3 R levels to be about 80% of controls and uncorrelated with the extent of Ca 2+ signaling depression in these different cell lines. By resolving Ca 2+ liberation during ‘elementary’, local signals evoked by photoreleased IP 3 , 34 we further demonstrate that defects in global Ca 2+ signaling in these three distinct ASD-associated models are reflected at the level of Ca 2+ release through individual and small clusters of IP 3 Rs. 
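The single-exponential fits of event durations quoted above (time constants of roughly 15 ms for FXS/TS versus 32 ms for control) can be sketched as follows. For an exponential distribution the maximum-likelihood estimate of the time constant is simply the sample mean; the synthetic durations and sample sizes below are illustrative only.

```python
import numpy as np

def exponential_time_constant(durations_ms):
    """Maximum-likelihood time constant for event durations assumed to be
    single-exponentially distributed: the MLE is the sample mean."""
    return float(np.mean(durations_ms))

# Synthetic event durations drawn from the two reported regimes (not real data)
rng = np.random.default_rng(0)
ctrl_durations = rng.exponential(scale=32.0, size=5000)  # control, tau ~32 ms
fxs_durations = rng.exponential(scale=15.0, size=5000)   # FXS/TS, tau ~15 ms

tau_ctrl = exponential_time_constant(ctrl_durations)
tau_fxs = exponential_time_constant(fxs_durations)
```

A shorter time constant means the channel-open events terminate sooner on average, so less Ca 2+ flows per event even when event amplitudes are unchanged.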
In both FXS and TS cell lines, we observed fewer sites of local Ca 2+ release as compared with a control cell line, and the durations of these events were shorter. Because functional sites comprise clusters of small numbers of individual IP 3 Rs, the amplitude of the fluorescence signal at a site depends on the channel permeability, together with the number of active channels in the cluster. 34 We observed similar amplitudes of local Ca 2+ signals across the cell lines, suggesting that the Ca 2+ -permeation properties and cluster organization of IP 3 Rs are not appreciably affected in FXS and TS. However, the shorter average duration of local events points to a modulation of IP 3 R gating kinetics, and would lead to an overall decrease in the amount of Ca 2+ released over time. Compounding this, we found the numbers of local Ca 2+ release sites within a cell to be dramatically lower in FXS and TS cells as compared with control cells (reductions of 87% and 70%, respectively), although it is possible that the short-duration events observed in the mutants may have contributed to undercounting their release sites. Taken together, our findings on local IP 3 -mediated Ca 2+ signals indicate that the deleterious effects of FXS and TS mutations manifest at the level of the functional channel gating of IP 3 Rs, although the underlying molecular mechanism remains to be determined. The IP 3 R is a key signaling hub in the canonical metabotropic glutamate receptor (mGluR) pathway in neurons, 20 , 57 and the mGluR theory of fragile X 58 postulates that disrupted mGluR signaling underlies the pathogenesis of the disorder. Activation of mGluRs leads to a brief hyperpolarization followed by a more prolonged depolarization. 23 , 59 The initial outward current results from the opening of small conductance Ca 2+ -activated K + channels. 60 , 61 This current is proportional to the Ca 2+ signal amplitude; 23 and can be triggered directly by intracellular uncaging of IP 3 . 
23 , 59 As a result, IP 3 -evoked Ca 2+ release transiently hyperpolarizes the cell and briefly depresses neuronal excitability, leading to a reduction in firing frequency. 23 Suppressed IP 3 -mediated Ca 2+ release from the internal stores, as we report in diverse models of ASD, is thus expected to diminish the inhibitory K + conductance, and as such would tend to produce neuronal hyperexcitability, consistent with observations following mGluR stimulation of ASD-model neurons. 62 , 63 A complex array of downstream signals arises from mGluR activation, 64 whereas IP 3 R Ca 2+ signaling is one immediate downstream target; to our knowledge its function has not yet been molecularly dissected in ASD. At present, we cannot directly extrapolate our results to IP 3 -mediated signaling in neurons, given that fibroblasts predominantly express type 3 IP 3 Rs whereas neurons predominantly express type 1 IP 3 Rs. 65 Nevertheless, because expression levels of all three isotypes of IP 3 Rs are only slightly diminished in FXS and TS fibroblasts, we conclude that the pronounced depression of Ca 2+ signaling does not result from diminished expression of a specific isotype. Instead, the depressed Ca 2+ signals likely result from modulatory effects on IP 3 R function, which might extend across different isotypes. Depression of IP 3 -mediated Ca 2+ signaling may further disrupt neurodevelopment through separate mechanisms. IP 3 Rs have been shown to be central participants in autophagy. 66 , 67 , 68 , 69 Decreased levels of autophagy result in defective synaptic pruning, which has been repeatedly associated with ASD in humans and mouse models, 70 and promotion of autophagy rescues behavioral defects in mouse models of ASD. 
70 Because of the ubiquitous nature of IP 3 R signaling and its diverse roles in almost all cells of the body, deficits in IP 3 -mediated Ca 2+ signaling may not be limited to neurological correlates of ASD, but may also explain other characteristic ASD-associated heterogeneous symptoms, such as those of the gastrointestinal tract 71 , 72 and immune system. 73 , 74 Furthermore, since the ER serves as a sensor of a host of environmental stressors, this same mechanism may contribute to the known environmental component of the ASD phenotype, and holds the potential to reveal relevant stressors. In conclusion, our findings indicate that ER IP 3 R signaling is affected in three distinct genetic models of ASD, pointing to the ER as a functional ‘hub’ where different cellular signaling pathways merge to contribute to the pathogenesis of ASD. In addition to its role in Ca 2+ homeostasis, the ER serves as a key integrator of environmental stressors with metabolism and gene expression, as it mediates a host of broad-ranging cell stress responses such as the heat shock and unfolded protein responses. 75 In this light it can be seen to integrate a matrix of ASD-associated risk factors. We identify the IP 3 R as a functional target in monogenic models of ASD, and we are currently exploring potential defects in IP 3 -mediated Ca 2+ signaling in ‘typical’ ASD patients without any identifiable underlying genetic cause. Ca 2+ screening in skin fibroblasts, which are routinely acquired as clinical specimens, may thus offer a promising technique in conjunction with behavioral testing for early detection of ASD, and potentially for high-throughput screening of novel therapeutic agents. | By identifying a key signaling defect within a specific membrane structure in all cells, University of California, Irvine researchers believe they have found both a possible reliable biomarker for diagnosing certain forms of autism and a potential therapeutic target. Dr. J. 
Jay Gargus, Ian Parker and colleagues at the UCI Center for Autism Research & Translation examined skin biopsies of patients with three very different genetic types of the disorder (fragile X syndrome and tuberous sclerosis 1 and 2). They discovered that a cellular calcium signaling process involving the inositol trisphosphate receptor was very much altered. This IP3R functional defect was located in the endoplasmic reticulum, which is among the specialized membrane compartments in cells called organelles, and may underpin cognitive impairments - and possibly digestive and immune problems - associated with autism. "We believe this finding will be another arrow in the quiver for early and accurate diagnoses of autism spectrum disorders," said Gargus, director of the Center for Autism Research & Translation and professor of pediatrics and physiology & biophysics. "Equally exciting, it also presents a target of a molecular class already well-established to be useful for drug discovery." Study results appear online in Translational Psychiatry, a Nature publication. Autism spectrum disorder is a range of complex neurodevelopmental disorders affecting 2 percent of U.S. children. The social and economic burden of ASD is enormous, currently estimated at more than $66 billion per year in the U.S. alone. Drug development has proven problematic due to the limited understanding of the underlying causes of ASD, as demonstrated by the recent failure of several much anticipated drug trials. There are also no current, reliable diagnostic biomarkers for ASD. Genetic research has identified hundreds of genes that are involved, which impedes diagnosis and, ultimately, drug development. There simply may be too many targets, each with too small an effect. Many of these genes associated with ASD, however, have been found to be part of the same signaling pathway, and multiple defects in this pathway may converge to produce a large functional change. 
The UCI scientists detected such a convergence in the IP3R calcium channel in an organelle called the endoplasmic reticulum. Organelles are membrane structures within cells with specialized cellular functions. According to Gargus, diseases of the organelles, such as the ER, are an emerging field in medicine, with several well-recognized neurological ailments linked to two other ones, the mitochondria and lysosomes. The IP3R controls the release of calcium from the ER. In the brain, calcium is used to communicate information within and between neurons, and it activates a host of other cell functions, including ones regulating learning and memory, neuronal excitability and neurotransmitter release - areas known to be dysfunctional in ASD. "We propose that the proper function of this channel and its signaling pathway is critical for normal performance of neurons and that this signaling pathway represents a key 'hub' in the pathogenesis of ASD," said Parker, a fellow of London's Royal Society and UCI professor of neurobiology & behavior, who studies cellular calcium signaling. To see if IP3R function is altered across the autism spectrum, clinical researchers at the Center for Autism & Neurodevelopmental Disorders - which is affiliated with the Center for Autism Research & Translation - are currently expanding the study and have begun to examine children with and without typical ASD for the same signaling abnormalities. These patients undergo complete behavioral diagnostic testing, and sophisticated EEG, sleep and biochemical studies are performed. This includes the sequencing of their entire genome. Also, skin cell samples are cultured and made available to lab-based researchers for functional assays. In the area of drug discovery, scientists at the Center for Autism Research & Translation continue to probe the IP3R channel, specifically how it regulates the level of neuron excitability. 
The brains of people who have autism show signs of hyperexcitability, which is also seen in epilepsy, a disorder increasingly found to be associated with ASD. Cells from individuals who have autism exhibit depressed levels of calcium signaling, and this might explain why these patients experience this hyperexcitability. By restoring the release of calcium from the IP3R, the researchers believe they can apply a "brake" on this activity. | www.nature.com/tp/journal/v5/n … full/tp2015123a.html |
Medicine | Chronic inflammation leads to imbalanced blood system and potentially cancer risk | Chronic interleukin-1 drives haematopoietic stem cells towards precocious myeloid differentiation at the expense of self-renewal, Nature Cell Biology, DOI: 10.1038/ncb3346 Journal information: Nature Cell Biology | http://dx.doi.org/10.1038/ncb3346 | https://medicalxpress.com/news/2016-04-chronic-inflammation-imbalanced-blood-potentially.html | Abstract Haematopoietic stem cells (HSCs) maintain lifelong blood production and increase blood cell numbers in response to chronic and acute injury. However, the mechanism(s) by which inflammatory insults are communicated to HSCs and their consequences for HSC activity remain largely unknown. Here, we demonstrate that interleukin-1 (IL-1), which functions as a key pro-inflammatory ‘emergency’ signal, directly accelerates cell division and myeloid differentiation of HSCs through precocious activation of a PU.1-dependent gene program. Although this effect is essential for rapid myeloid recovery following acute injury to the bone marrow, chronic IL-1 exposure restricts HSC lineage output, severely erodes HSC self-renewal capacity, and primes IL-1-exposed HSCs to fail massive replicative challenges such as transplantation. Importantly, these damaging effects are transient and fully reversible on IL-1 withdrawal. Our results identify a critical regulatory circuit that tailors HSC responses to acute needs, and is likely to underlie deregulated blood homeostasis in chronic inflammation conditions. Main All lineages of haematopoietic cells, including those of the immune system, arise from a rare population of self-renewing HSCs residing in the bone marrow (BM) of adult mammals 1 . 
Blood production by HSCs is regulated by the concerted action of cell-intrinsic transcription factors such as PU.1 and GATA-1, and cell-extrinsic determinants produced by the stromal and haematopoietic components of the BM niche, which together regulate HSC self-renewal and specify lineage commitment 2 , 3 . Although normally maintained in a largely quiescent or dormant state, most HSCs can be rapidly activated to proliferate and differentiate in response to acute needs such as regenerative challenges including myeloablation and transplantation, and physiological insults that induce an inflammatory state 4 , 5 , 6 . Inflammation is a critical physiological process that mediates host defence against invading pathogens, injury and other insults, and is characterized by rapid mobilization and overproduction of specialized immune cells, particularly myeloid cells 7 . Inflammation is communicated to the haematopoietic system, and HSCs in particular, either by direct sensing through Toll-like receptors (TLRs), or indirectly through a series of pro-inflammatory cytokines 8 , 9 , 10 . In particular, interferons (IFNs), both type-I (IFN-α/β) and type-II (IFN-γ), and tumour necrosis factor alpha (TNFα) directly impact HSC fate during an inflammatory response 11 , 12 , 13 , 14 and drive HSC specification during embryonic development 15 , 16 . Pro-inflammatory cytokines are therefore exciting new regulators of HSC function 17 , with much remaining to be understood regarding how inflammatory insults tailor blood production under homeostatic and disease conditions. Interleukin-1 (IL-1) was the first interleukin identified and is the founding member of a group of 11 cytokines (the IL-1 family), with a central role in responses to infections or sterile insults 18 , 19 . 
IL-1 is encoded by two related genes ( Il1a and Il1b ) with distinct regulation but similar biological activities 18 ; both products bind a broadly expressed surface receptor (IL-1R) and trigger downstream transcriptional responses through the adaptor protein MyD88 and a broad range of signalling pathways including NF-κB, p38 MAPK, JNK and AP-1 (ref. 20 ). IL-1 is a key ‘emergency’ signal that rapidly activates host defence and repair in many tissues, including the blood system, but also drives tissue dysfunction in the context of chronic inflammation and autoimmune diseases 7 , 19 . Acute IL-1 signalling is associated with increased myeloid cell production both in culture and in vivo in response to infection, irradiation or myeloablative chemotherapy 21 , 22 , 23 , 24 . Many of the inflammatory disease conditions associated with chronic IL-1 production such as rheumatoid arthritis, obesity and type-2 diabetes also feature severe haematological complications, including overproduction of tissue-damaging myeloid cells, loss of naive lymphoid cell production and chronic anaemia 25 , 26 , 27 . However, the mechanism by which IL-1 contributes to deregulated blood output in these conditions, and the functional consequences of both acute and chronic IL-1 exposure on HSC fate, are largely unknown. RESULTS IL-1 accelerates HSC differentiation To investigate IL-1 effects, we isolated HSCs (Lin − c-Kit + Sca-1 + Flk2 − CD48 − CD150 + ; Supplementary Fig. 1a ) from wild-type mice and monitored their expansion in liquid culture with or without (±) IL-1α or IL-1β (25 ng ml −1 ). Notably, HSCs cultured with IL-1 differentiated and expanded significantly faster than untreated HSCs over an 8-day period ( Fig. 1a ), which seemed to result from faster division rates as measured by carboxyfluorescein succinimidyl ester (CFSE) dilution assay after 60 h ( Fig. 1b ). 
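The CFSE dilution readout mentioned above rests on the dye being partitioned roughly in half at each cell division, so the mean number of divisions can be estimated from fluorescence relative to an undivided reference. A sketch under that standard assumption, with invented intensity values:

```python
import math

def divisions_from_cfse(mfi_sample, mfi_undivided):
    """Estimate mean division number from CFSE mean fluorescence
    intensity (MFI): the dye halves with each division, so
    divisions ~ log2(undivided MFI / sample MFI)."""
    return math.log2(mfi_undivided / mfi_sample)

# Hypothetical MFIs after 60 h in culture (arbitrary units, not measured data)
mfi_undivided = 8000.0   # undivided reference population
mfi_untreated = 2000.0   # would correspond to ~2 divisions
mfi_il1 = 1000.0         # ~3 divisions: faster division under IL-1

d_untreated = divisions_from_cfse(mfi_untreated, mfi_undivided)
d_il1 = divisions_from_cfse(mfi_il1, mfi_undivided)
```

A left-shifted CFSE histogram in the IL-1-treated group (as in Fig. 1b) therefore translates directly into a higher estimated division count over the same 60 h window.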
To confirm accelerated cell division in HSC cultures, we used an automated single-cell tracking approach to continuously monitor cell division over a 6-day period ( Fig. 1c ) 28 . Remarkably, although the timing of the first division (exit from quiescence) seemed relatively unaffected in IL-1-treated HSCs, the kinetics of the subsequent differentiation divisions were significantly compressed ( Fig. 1d, e ). This effect was specific to HSCs, as expansion, survival and proliferation were unchanged in IL-1-treated granulocyte/macrophage progenitors (GMPs: Lin − c-Kit + Sca-1 − CD34 + FcγR + ) and multipotent progenitors (MPPs), including myeloid-biased MPP2 (Lin − c-Kit + Sca-1 + Flk2 − CD48 + CD150 + ) and MPP3 (Lin − c-Kit + Sca-1 + Flk2 − CD48 + CD150 − ) or lymphoid-primed MPP4 (Lin − c-Kit + Sca-1 + Flk2 + ) 29 , 30 , which all express IL-1R at similar levels to HSCs ( Supplementary Fig. 1a–e ). These results indicate that IL-1 specifically targets HSCs and accelerates their division kinetics. Figure 1: IL-1 accelerates HSC differentiation along the myeloid lineage. ( a ) Representative expansion in liquid culture ( n = 3 biological replicates per group). ( b ) CFSE dilution assays after 60 h. Data represent one of two replicate experiments. Grey histogram shows −IL-1β HSCs at 24 h. ( c – e ) Continuous single-cell tracking experiments ( n = 47 and 51 HSCs per group): experimental design ( c ), single-cell pedigrees of median division times ( d ), and box plot quantification of division times ( e ). Results show median (lines) and 10–90th percentile (whiskers). ( f – i ) Colony-forming unit (CFU) assays in methylcellulose: experimental design ( f ), single-cell clonogenic assays ( n = 3 replicate experiments with 60 HSCs per group) ( g ), representative colony type (scale bar, 100 μm) and morphology (scale bar, 10 μm; arrow indicates a macrophage) ( h ), and replating (repl.) experiments ( i ). Data represent one of two replicate experiments.
Colonies were scored after 7 days. MkE, megakaryocyte/erythrocyte; M, macrophage; G, granulocyte; GM, granulocyte/macrophage; GMMkE, mix colony. ( j , k ) Myeloid differentiation in liquid culture ( n = 6 biological replicates per group): experimental design with representative FACS plots ( j ), and quantification of myeloid marker expression ( k ). Source data for a are shown in Supplementary Table 1 . Data are means ± s.d.; ∗ ∗ P ≤ 0.01, ∗ ∗ ∗ P ≤ 0.001. P values in a were determined by one-way ANOVA with Dunnett’s test, in g by paired Student’s t -test, and in e and k by Mann–Whitney U -test. Exact P values, number of replicates used to derive statistical data ( n ) and statistical tests used are shown in Supplementary Table 2 . To directly address the effects of IL-1 on HSC differentiation, we performed colony formation assays in methylcellulose ± IL-1β ( Fig. 1f ). Strikingly, IL-1-treated HSCs produced almost exclusively myeloid-committed granulocyte/macrophage (GM)-type colonies containing abundant macrophages, in contrast to untreated HSCs, which had a higher proportion of immature multilineage granulocyte/macrophage/megakaryocyte/erythrocyte (GMMkE)-type colonies containing mostly immature myeloblasts and mast cells ( Fig. 1g, h ). We also re-plated the progeny of these cultures once in methylcellulose without IL-1β, and observed near exhaustion of colony-forming ability in cells from IL-1-treated HSC cultures ( Fig. 1i ). Moreover, HSCs grown in liquid culture with IL-1β rapidly lost expression of the immaturity markers c-Kit and Sca-1, and gained expression of the myeloid differentiation markers Mac-1 and FcγR, resulting in greatly increased absolute numbers of c-Kit + progenitors and mature myeloid cells in IL-1-treated HSC cultures ( Fig. 1j, k and Supplementary Fig. 1f ). This pro-myeloid differentiation effect was diminished in MPPs and completely absent in GMPs ( Supplementary Fig. 2a–e ).
Collectively, these results demonstrate that IL-1 accelerates the production of mature myeloid cells by HSCs. Precocious activation of a PU.1 molecular circuit To understand the effects of IL-1 at the molecular level, we used quantitative PCR with reverse transcription (qRT–PCR) and a custom-made Fluidigm PCR array to analyse gene expression in HSCs cultured ± IL-1β for 12 or 24 h ( Fig. 2a and Supplementary Fig. 3a ). Remarkably, we observed a strong induction of the transcription factor PU.1 ( Spi1 ) and its target genes Csf2ra (GM-CSFR) and Csf1r (M-CSFR) in IL-1-treated HSCs ( Fig. 2b and Supplementary Fig. 3b ). In contrast, lymphoid and megakaryocyte/erythrocyte (MkE) lineage genes were largely unaffected, suggesting that IL-1 functions by activating, rather than suppressing, lineage-specific gene programs in HSCs 31 . Consistent with the effects of IL-1 on cell division, we also observed activation of the cell cycle machinery and decreased expression of the quiescence-enforcing cyclin D1 ( Ccnd1 ) and p57 ( Cdkn1c ) genes, alongside induction of p21 ( Cdkn1a ), another PU.1 target ( Fig. 2b ). We confirmed rapid upregulation of M-CSFR and two other PU.1-dependent myeloid surface markers, Mac-1 and CD18, in IL-1-treated HSCs ( Fig. 2c ), and found limited or absent activation of PU.1 and its target genes in IL-1-treated MPPs and GMPs ( Supplementary Fig. 3b ). Moreover, using HSCs isolated from PU.1–eYFP reporter mice, we confirmed fast upregulation of PU.1 activity on IL-1 exposure, with uniformly high PU.1 levels in IL-1-treated PU.1–eYFP HSCs before their first division in single-cell tracking experiments ( Fig. 2d, e and Supplementary Fig. 3c ). We also uncovered a small subpopulation of PU.1 hi HSCs in untreated cultures whose progeny showed division kinetics similar to IL-1-treated HSCs ( Supplementary Fig. 3d ). 
In particular, both had a delayed first division when compared with untreated PU.1 lo HSCs, consistent with the decreased BrdU incorporation observed after 24 h in IL-1-treated HSCs ( Supplementary Fig. 3e ). The subsequent divisions of untreated PU.1 hi HSCs were also accelerated, thus directly linking elevated PU.1 activity with the overall effect of IL-1 in accelerating division kinetics. This is in contrast to the recently reported function of steady-state PU.1 levels in limiting HSC cell cycle re-entry 32 and is likely to reflect differences in gene dosage. In fact, high PU.1 levels similarly delayed HSC first division while simultaneously priming their progeny to undergo accelerated proliferation through induction of myeloid lineage determinants and cell cycle activators including Cdk4 , Myc , Ccnb and Ccne1 ( Fig. 2b ). To directly demonstrate the importance of PU.1 for IL-1 effects, we isolated HSCs from PU.1 ΔURE hypomorphic mice lacking the PU.1 upstream regulatory element (URE) and expressing PU.1 at 10–20% normal levels 33 . Strikingly, myeloid differentiation, including Mac-1 expression, was severely attenuated in PU.1 ΔURE HSC cultures ( Fig. 2f ), confirming the requirement for PU.1 in IL-1-driven HSC differentiation. Moreover, to establish that PU.1 upregulation is sufficient to drive accelerated HSC differentiation, we overexpressed PU.1 in HSCs using a validated lentiviral vector 34 . Similar to IL-1-treated HSCs, PU.1-overexpressing HSCs showed accelerated gain of the myeloid markers Mac-1/FcγR relative to control HSCs ( Fig. 2g ). These results establish that IL-1 acts instructively and ‘primes’ HSCs to undergo accelerated myeloid differentiation through precocious activation of a PU.1-dependent molecular circuit. Figure 2: IL-1 induces precocious activation of a PU.1 gene program in HSCs. 
( a , b ) Gene expression analyses (using the Fluidigm platform) of lineage determinant and cell cycle genes: heatmap ( a ), and expression of individual genes ( n = 8 pools of 100 cells for each condition; bars are means) ( b ). Results are expressed as fold changes compared with levels in −IL-1β HSCs (set to 1). MkE: megakaryocyte/erythroid. ( c ) Expression of PU.1 targets ( n = 5 biological replicates per group; M-CSFR day 4, n = 7). Results are expressed as mean fluorescence intensity (MFI) levels. ( d ) Representative histogram and PU.1 levels in PU.1–eYFP HSCs ( n = 6 biological replicates per group; day 8, n = 4). ( e ) Representative images and PU.1 levels in individual PU.1–eYFP HSCs before first division ( n = 33–37 HSCs per group; scale bar, 10 μm). Results are expressed in arbitrary units (a.u.) and box plots show the median (lines) with the 10–90th percentile (whiskers). ( f ) Experimental design and myeloid marker expression in control (Ctrl) and PU.1 ΔURE HSCs ( n = 3 biological replicates per group). ( g ) Experimental design, representative FACS plots and myeloid marker expression in lentivirally transduced (GFP + ) Ctrl or PU.1-overexpressing HSCs ( n = 3 biological replicates per group for day 4; day 1 represents the mean of two biological replicates per group). Source data for d , f and g are shown in Supplementary Table 1 . Data are means ± s.d.; ∗ P ≤ 0.05, ∗ ∗ P ≤ 0.01, ∗ ∗ ∗ P ≤ 0.001. P values in b – e were determined by Mann–Whitney U -test, in f by one-way ANOVA with Dunnett’s test, and in g by paired Student’s t -test. Exact P values, number of replicates used to derive statistical data ( n ) and statistical tests used are shown in Supplementary Table 2 . Mechanism of PU.1 activation To gain insight into the mechanism of PU.1 activation by IL-1, we first addressed its dependency on IL-1R signalling. As expected, Il1r1 −/− HSCs did not show accelerated myeloid differentiation in the presence of IL-1β ( Fig.
3a ), and IL-1-treated Il1r1 −/− ::PU.1–eYFP HSCs failed to upregulate PU.1 ( Fig. 3b ). We then exposed PU.1–eYFP HSCs cultured for 12 h ± IL-1β (25 pg ml −1 , a lower dose titrated for sensitivity), to inhibitors of various signalling pathways acting downstream of IL-1R ( Fig. 3c, d and Supplementary Fig. 3f, g ) 20 . Interestingly, whereas blockade of p38, MEK and PKC partially decreased IL-1-induced PU.1 levels, inactivation of IKK almost entirely abolished PU.1 activity, consistent with the ability of its downstream target, NF-κB, to bind the PU.1 promoter 35 . On the other hand, PI(3)K and mTOR inhibition had no effect, and blockade of Src kinase, which drives PU.1 activation downstream of M-CSFR 36 , only partially inhibited IL-1 effects. As M-CSFR expression was increased in IL-1-treated HSCs, we also tested whether IL-1 could directly activate an M-CSF autocrine loop to induce PU.1. However, treatment with an anti-M-CSFR blocking antibody did not attenuate IL-1-mediated PU.1 activation in PU.1–eYFP HSCs, despite preventing M-CSF-mediated macrophage differentiation in BM cultures ( Fig. 3e and Supplementary Fig. 3h ). Furthermore, the effect of IL-1 required constant IL-1R signalling and was lost if IL-1 was washed out after 24 h ( Fig. 3f ), further arguing against a role for IL-1-induced secondary factors. These results demonstrate that direct and sustained IL-1 signalling, mediated largely through NF-κB pathway activation downstream of IL-1R, is required for PU.1 induction. Figure 3: PU.1 activation requires direct and sustained IL-1R signalling. ( a ) Experimental design and myeloid marker expression in Il1r1 −/− HSCs ( n = 3 biological replicates per group). ( b ) Experimental design and PU.1 levels in Il1r1 +/+ and Il1r1 −/− ::PU.1–eYFP HSCs ( n = 3 biological replicates per group). 
( c ) Experimental design, representative histograms and PU.1 levels in PU.1–eYFP HSCs treated with the indicated inhibitors ( n = 3 biological replicates per group for MEK, Src, PI(3)K; n = 4 for p38, PKC, mTOR; n = 7 for IKK). Data are expressed as log 2 fold change in PU.1–eYFP MFI levels relative to −IL-1β PU.1–eYFP HSCs. ( d ) Schematic of the signalling pathways downstream of IL-1R highlighting NF-κB and indicating the connection with M-CSFR signalling. ( e ) Experimental design, representative histograms and PU.1 levels in PU.1–eYFP HSCs treated with anti-M-CSFR blocking antibody (anti-MR) or isotype control (iso) ( n = 3 biological replicates per group). ( f ) Experimental design, representative FACS plots and myeloid marker expression in IL-1 wash-out (wo) experiments ( n = 3 biological replicates per group). Source data for a – c , e and f are shown in Supplementary Table 1 . Data are means ± s.d.; ∗ P ≤ 0.05, ∗ ∗ P ≤ 0.01, ∗ ∗ ∗ P ≤ 0.001. P values in a were determined by Mann–Whitney U -test, in b and e by paired Student’s t -test, and in c and f by one-way ANOVA with Tukey’s and Dunnett’s tests, respectively. Exact P values, number of replicates used to derive statistical data ( n ) and statistical tests used are shown in Supplementary Table 2 . Haematopoietic remodelling following IL-1 exposure To investigate the effects of IL-1 on haematopoiesis in vivo , we injected mice intraperitoneally with 0.5 μg IL-1β for 1 day (acute treatment) up to 20 consecutive days (chronic exposure; Fig. 4a ). As expected, IL-1-treated mice exhibited a rapid and sustained increase in circulating myeloid cells, with concomitant decreases in lymphoid cells and erythrocytes ( Fig. 4b, c and Supplementary Fig. 4a ).
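Group comparisons such as these ±IL-1β differences are evaluated throughout with Mann–Whitney U-tests (see figure legends). For the small, tie-free group sizes used here, the exact two-sided P value can be obtained by enumerating all relabellings of the pooled observations. A self-contained sketch with synthetic per-mouse values, not the paper's data:

```python
from itertools import combinations

def mann_whitney_u(x, y):
    """U statistic for group x versus y: count of (x_i > y_j) pairs,
    with half-credit for ties."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

def exact_two_sided_p(x, y):
    """Exact two-sided P by enumerating every way of relabelling the
    pooled observations (feasible for small groups, e.g. n = 5 per arm)."""
    pooled = list(x) + list(y)
    n = len(x)
    observed = mann_whitney_u(x, y)
    mean_u = n * len(y) / 2          # centre of the null U distribution
    count = total = 0
    for idx in combinations(range(len(pooled)), n):
        xs = [pooled[i] for i in idx]
        ys = [pooled[i] for i in range(len(pooled)) if i not in idx]
        u = mann_whitney_u(xs, ys)
        # two-sided: relabellings at least as extreme as observed
        if abs(u - mean_u) >= abs(observed - mean_u) - 1e-12:
            count += 1
        total += 1
    return count / total

# Synthetic per-mouse myeloid percentages (illustrative only).
ctrl = [22.0, 25.1, 23.4, 24.0, 21.8]
il1b = [41.2, 38.7, 44.0, 40.5, 39.9]
print(exact_two_sided_p(il1b, ctrl))  # fully separated groups: 2/252 ≈ 0.0079
```

For larger samples a normal approximation (or `scipy.stats.mannwhitneyu`) would replace the enumeration.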
In the BM, 1 day acute IL-1 treatment did not significantly alter the overall cellularity or lineage distribution of mature cell populations, save for a small but significant increase in Mac-1 + Gr-1 int pre-granulocytes/monocytes (preGrs; Supplementary Fig. 4b–d ). However, acute IL-1 treatment already resulted in a rapid MPP expansion and substantial erosion of common lymphoid progenitors (CLPs: Lin − Flk2 + IL-7R + c-Kit int Sca-1 int ; Supplementary Fig. 4c, d ), suggesting quick myeloid priming on IL-1 exposure. These changes were maintained and associated with a clear rebalancing of lineage output following 20 days of chronic IL-1 exposure, with significant expansion of Mac-1 + Gr-1 + granulocytes (Grs) and corresponding loss of B220 + CD19 + B cells ( Fig. 4d ). This was associated with a specific amplification of GMPs and myeloid-biased MPP2 and MPP3 subsets ( Fig. 4e ), suggesting direct activation of a myeloid differentiation axis at the level of HSCs 28 . Consistently, we observed a rapid and sustained increase in the metabolically activated MPP1 (Lin − c-Kit + Sca-1 + Flk2 − CD48 − CD150 + CD34 + ), which is produced by dormant long-term HSCs (HSC LT ; Lin − c-Kit + Sca-1 + Flk2 − CD48 − CD150 + CD34 − ) 29 , and solely accounted for the overall expansion of phenotypic HSCs observed following 20 days of chronic IL-1 exposure ( Fig. 4g and Supplementary Fig. 4e ). Importantly, HSC activation and BM remodelling was not observed in IL-1-treated Il1r1 −/− mice, confirming its dependency on IL-1R signalling ( Supplementary Fig. 4e ). Moreover, using a sensitive in vivo CFSE dilution assay 37 and 7 consecutive days of IL-1β treatment, we demonstrated that fewer IL-1-treated HSC LT remain in an undivided state compared with untreated HSC LT ( Fig. 4h, i and Supplementary Fig. 5a, b ). This confirms in vivo that IL-1 exposure directly increases HSC division rates, which is likely to fuel the MPP overproduction observed even after acute treatment. 
In addition, we showed persistent PU.1 activation in a substantial fraction of HSCs from PU.1–eYFP mice on both acute and chronic IL-1 treatment ( Fig. 4j ). We also found decreased Cebpa levels and increased Runx1 expression in HSCs exposed to IL-1 for 20 days ( Fig. 4k ), which is strikingly similar to the molecular changes observed in regenerating HSCs that overproduce myeloid-biased MPP2/3 as they re-build the myeloid lineage 30 . PU.1 was also activated in MPP4 from IL-1-treated PU.1–eYFP mice ( Supplementary Fig. 5c ), which is likely to reflect the myeloid lineage reprogramming of this lymphoid-primed MPP subset on IL-1 exposure. This is consistent with the observed loss of lymphoid output and previously reported inhibition of lymphopoiesis by IL-1 (ref. 38 ). Collectively, these results indicate that in vivo IL-1 exposure triggers a rapid PU.1-mediated myeloid differentiation program in HSCs, which amplifies myeloid cell production at the expense of the other blood lineages. Figure 4: IL-1 enforces HSC myeloid differentiation in vivo . ( a – c ) Peripheral blood (PB) parameters in mice injected ± IL-1β for 20 days: experimental design ( a ), percentage of myeloid (Mac-1 + Gr-1 + ) and lymphoid (B220 + and CD3 + ) cells over time ( n = 5 mice per group; ∗ versus −IL-1β;° versus day 0) ( b ), and complete blood counts after 20 days ( c ). My, myeloid; Ly, lymphoid; RBC, red blood cell; Plt, platelet. ( d – g ) Size of the indicated BM populations (see main text for definitions) in mice injected ± IL-1β for 20 days ( n = 18 mice per group). ( h , i ) In vivo CFSE dilution assay ( n = 4 −IL-1β and 7 +IL-1β mice per group): experimental design ( h ), and representative histograms and quantification of HSC LT (long-term HSC) division kinetics ( i ). 
( j ) Representative histograms, PU.1 levels and frequency of PU.1 + HSCs from PU.1–eYFP mice injected ± IL-1β for 1 or 20 days ( n = 4 −IL-1β and 5 +IL-1β mice per group for day 1; n = 5 −IL-1β and 3 +IL-1β mice per group for day 20). ( k ) Gene expression analyses (using the Fluidigm platform) of myeloid determinants in HSCs from mice injected ± IL-1β for 20 days ( n = 8 pools of 100 cells for each condition; bars are means). Results are expressed as fold changes compared with levels in −IL-1β HSCs (set to 1). Source data for i and j are shown in Supplementary Table 1 . Data are means ± s.d.; ∗ /° P ≤ 0.05, ∗ ∗ P ≤ 0.01, ∗ ∗ ∗ P ≤ 0.001. P values in b – k were determined by Mann–Whitney U -test and in b (relative to day 0) by one-way ANOVA with Dunnett’s test. Exact P values, number of replicates used to derive statistical data ( n ) and statistical tests used are shown in Supplementary Table 2 . Acute IL-1 stimulation contributes to myeloid recovery To investigate the importance of IL-1 for blood regeneration, we focused on 5-fluorouracil (5-FU)-mediated myeloablation, as we found that both IL-1α and IL-1β levels were significantly increased in BM plasma, but not in blood serum, whereas no other pro-inflammatory cytokines were induced following one injection of 150 mg kg −1 5-FU ( Fig. 5a and Supplementary Fig. 6a ). To identify the cellular source(s) of IL-1, we analysed Il1a and Il1b expression by qRT–PCR in mature BM myeloid and lymphoid cells, as well as in BM stromal populations 39 , specifically osteoblastic lineage cells (OBCs: Lin − CD45 − Sca-1 − CD51 + ), multipotent stromal cells (MSCs: Lin − CD45 − Sca-1 + CD51 + ) and endothelial cells (ECs: Lin − CD45 − Sca-1 + CD31 + ) ( Supplementary Fig. 6b ). At steady state, Il1a was expressed largely by BM CD4 + T cells, and Il1b by Grs ( Fig. 5b ).
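qRT–PCR fold changes of the kind reported here (control condition set to 1) are conventionally computed with the Livak 2^−ΔΔCt method: the target gene is first normalized to a reference gene within each condition, then expressed relative to the control condition. A sketch with hypothetical Ct values; the numbers and the reference gene are illustrative, not taken from the study:

```python
def fold_change_ddct(ct_gene_sample, ct_ref_sample, ct_gene_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt: normalize the target gene to a reference gene
    within each condition, then express relative to the control
    condition (which comes out at exactly 1.0)."""
    d_sample = ct_gene_sample - ct_ref_sample   # dCt, treated
    d_ctrl = ct_gene_ctrl - ct_ref_ctrl         # dCt, control
    return 2 ** -(d_sample - d_ctrl)            # 2^-ddCt

# Hypothetical Ct values: a target gene after treatment versus control,
# both normalized to a housekeeping gene detected at Ct = 18.
print(fold_change_ddct(24.0, 18.0, 27.0, 18.0))  # ddCt = -3 -> 8-fold induction
```

The method assumes near-100% amplification efficiency for both genes; efficiency-corrected variants exist but follow the same structure.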
However, on 5-FU treatment, Il1a was strongly induced in myeloid cells and ECs, but no increase in Il1b expression was detected, suggesting that it might be released by dying Grs on inflammasome activation 40 . Strikingly, myeloid recovery was significantly and specifically delayed in 5-FU-injected Il1r1 −/− mice ( Fig. 5c ), emphasizing the importance of IL-1R signalling for myeloid recovery following BM injury. However, HSCs isolated from Il1r1 +/+ and Il1r1 −/− mice 10 days post-5-FU treatment, and at the peak of IL-1 production, showed equivalent engraftment and long-term multilineage reconstitution on transplantation ( Fig. 5d–g ). This indicates that acute exposure to IL-1 and transient activation of pro-myeloid differentiation pathways required for optimal blood recovery does not affect HSC self-renewal activity. In addition, Il1r1 −/− mice had unaffected blood production as previously described 41 , and normal stem and progenitor BM compartments, aside from a specific decrease in myeloid-biased MPP3 that may represent impaired basal myeloid lineage priming in the absence of tonic IL-1 signalling ( Supplementary Fig. 6c, d ). Together, these results demonstrate that although IL-1 signalling is dispensable in HSCs at steady state, it becomes important in physiological ‘emergency’ conditions when acute IL-1 production rapidly activates HSC myeloid differentiation to enhance blood regeneration. They also identify ECs as one of the key producers of IL-1 following haematopoietic damage, consistent with a local delivery of IL-1 to quiescent HSCs lodged in their perivascular BM niches 42 . Figure 5: IL-1 accelerates myeloid regeneration following acute BM injury. ( a ) Experimental design and IL-1α/β levels following 5-FU treatment ( n = 6 mice per group, day 0; n = 5 mice per group, day 8–10; n = 4 mice per group, day 12; n = 3 mice per group, day 14). 
( b ) qRT–PCR analysis of Il1a/b expression in the indicated populations ( n = 8 pools of 100 OBCs, MSCs and ECs per group; n = 6 pools of 100 cells for all other populations). Results are expressed as fold changes compared with levels in B cells or d0 populations (each set to 0). ( c ) Experimental design and complete blood counts in 5-FU-treated Il1r1 +/+ and Il1r1 −/− mice ( n = 5 Il1r1 +/+ and 6 Il1r1 −/− mice per group). ( d – g ) Engraftment of Il1r1 +/+ and Il1r1 −/− HSCs isolated from mice 10 days post-5-FU treatment and transplanted into lethally irradiated (IR) recipients: experimental design ( d ), donor chimaerism ( e ), and donor myeloid (Mac-1 + Gr-1 + ) and lymphoid (B220 + or CD3 + ) lineage distribution in PB over time ( f ), and donor chimaerism in BM HSCs 16 weeks post-transplant ( g ). Source data for a and e – g are shown in Supplementary Table 1 . Data are means ± s.d.; ∗ P ≤ 0.05, ∗ ∗ P ≤ 0.01, ∗ ∗ ∗ P ≤ 0.001. P values in a and b were determined by one-way ANOVA with Dunnett’s and Tukey’s tests, respectively, and in c – g by Mann–Whitney U -test. Exact P values, number of replicates used to derive statistical data ( n ) and statistical tests used are shown in Supplementary Table 2 . Chronic IL-1 exposure impairs HSC function To better understand the consequence of chronic IL-1 exposure, we used HSCs isolated from mice injected ± IL-1β for 20 days. First, we confirmed that IL-1-treated HSCs had unaffected clonogenic efficiency in methylcellulose and normal survival as measured by cleaved caspase-3 (CC3) levels compared with naive HSCs isolated from PBS-treated mice ( Supplementary Fig. 7a ). We then transplanted both naive and IL-1-treated HSCs into sublethally irradiated congenic recipients that were subsequently injected ± IL-1β for another 30 days to track lineage reconstitution and BM HSC chimaerism ( Fig. 6a ). We observed accelerated donor blood output from naive HSCs exposed to IL-1 after transplantation ( Fig.
6b ), which is consistent with earlier literature describing enhanced myeloid regeneration in similar conditions 21 . In contrast, donor blood output rapidly eroded in mice transplanted with IL-1-treated HSCs on continued IL-1 exposure ( Fig. 6b and Supplementary Fig. 7b ). In both cases, IL-1 treatment following transplantation completely blocked donor lymphoid output and rebalanced blood production towards enhanced myelopoiesis ( Fig. 6c and Supplementary Fig. 7b ). However, chimaerism analyses uncovered a near complete exhaustion of donor HSCs in the BM of mice transplanted with IL-1-treated HSCs, even in recipients not subsequently injected with IL-1 ( Fig. 6d and Supplementary Fig. 7c ). To determine whether these effects were due to IL-1-mediated PU.1 activation, we transplanted lentivirus-transduced HSCs overexpressing PU.1 into sublethally irradiated recipient mice and tracked these mice for 30 days ( Fig. 6e ). Strikingly, compared with control-transduced HSCs, we observed rapid erosion of donor blood output and complete exhaustion of PU.1-overexpressing HSCs in the BM ( Fig. 6f, g and Supplementary Fig. 7d ), a behaviour similar to IL-1-exposed HSCs. Together, these short-term lineage-tracking results indicate that although IL-1 treatment always enhances myeloid cell production regardless of its duration, it also severely compromises HSC function and blood regeneration in conditions of chronic exposure, probably through hyperactivation of PU.1. Figure 6: Chronic IL-1 exposure forces myeloid cell production at the expense of HSC engraftment. 
( a – d ) Short-term tracking of HSCs from mice injected ± IL-1β for 20 days and transplanted into sublethally irradiated (sub-IR) recipients injected ± IL-1β for another 30 days ( n = 9 +IL-1β and 10 −IL-1β mice per group): experimental design ( a ), donor chimaerism in PB over time (° P < 0.05, °° P < 0.01 versus −IL-1β recipients) ( b ), lineage distribution in PB ( c ), and donor chimaerism in BM HSCs (circles, individual mice; black lines, mean values) 30 days post-transplantation ( d ). ( e – g ) Short-term tracking of lentivirus-transduced PU.1-overexpressing HSCs ( n = 4 Ctrl and 5 PU.1 mice per group): experimental design ( e ), donor chimaerism in PB over time ( f ), and donor chimaerism in BM HSCs (circles, individual mice; black lines, mean values) 30 days post-transplantation ( g ). Source data for f and g are shown in Supplementary Table 1 . Data are means ± s.d.; ∗ P ≤ 0.05, ∗ ∗ ∗ P ≤ 0.001. P values in b and d were determined by one-way ANOVA with Tukey’s test, in c and g by Mann–Whitney U -test, and in f by Student’s t -test. Exact P values, number of replicates used to derive statistical data ( n ) and statistical tests used are shown in Supplementary Table 2 . To confirm the functional impairment of HSCs chronically exposed to IL-1, we transplanted naive and IL-1-treated HSCs into lethally irradiated recipient mice to assess long-term engraftment and self-renewal activity after 4 months, and also performed limiting dilution analyses with unfractionated BM cells to address HSC function independently of surface markers ( Fig. 7a and Supplementary Fig. 8a ). In both cases, we observed significantly decreased donor chimaerism and reduced lymphoid output from IL-1-treated HSCs ( Fig. 7b, c and Supplementary Fig. 8b, c ), reflecting compromised self-renewal activity. To investigate whether long-term IL-1 treatment could lead to HSC exhaustion, we also injected mice ± IL-1β for a total of 70 days ( Fig. 7d ).
Remarkably, even after such a long exposure, the HSC pool remained numerically intact though with an elevated frequency and number of myeloid-primed CD41 + HSC LT 43 , 44 , 45 and expansion of myeloid-biased MPP2/3 ( Supplementary Fig. 8d, e ). We also found that 70-day IL-1-treated HSC LT had unaffected survival in methylcellulose ( Supplementary Fig. 8f ), but significantly impaired self-renewal activity following transplantation ( Fig. 7e, f ). These results demonstrate that chronic IL-1 exposure restricts HSC lineage output and severely compromises HSC regenerative function, thus priming IL-1-exposed HSCs to fail replicative challenges such as transplantation. Figure 7: Chronic IL-1 exposure impairs HSC self-renewal. ( a – c ) Long-term engraftment of HSCs from mice injected ± IL-1β for 20 days and transplanted into lethally irradiated (IR) recipients ( n = 10 mice per group): experimental design ( a ), donor chimaerism ( b ) and lineage distribution in PB over time ( c ). ( d – f ) Long-term engraftment of HSC LT from mice injected ± IL-1β for 70 days ( n = 9 −IL-1β and 10 +IL-1β mice per group): experimental design ( d ), donor chimaerism ( e ) and lineage distribution in PB over time ( f ). ( g – i ) Secondary transplantation (2° txpl) of HSCs re-isolated from primary transplanted mice (1° txpl, shown in a ) reconstituted with HSCs from mice injected ± IL-1β for 20 days ( n = 8 mice per group): experimental design ( g ), donor chimaerism ( h ) and lineage distribution in PB over time ( i ). Source data for b , c , e , f , h and i are shown in Supplementary Table 1 . Data are means ± s.d.; ∗ P ≤ 0.05, ∗ ∗ P ≤ 0.01, ∗ ∗ ∗ P ≤ 0.001. P values in b , e , h and i were determined by Mann–Whitney U -test, and in c and f by Student’s t -test. Exact P values, number of replicates used to derive statistical data ( n ) and statistical tests used are shown in Supplementary Table 2 . 
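Limiting dilution analyses such as the one referenced above are conventionally interpreted under a single-hit Poisson model: the fraction of non-engrafted recipients at cell dose d is exp(−f·d), where f is the repopulating-cell frequency. Dedicated tools (e.g. ELDA) fit this by maximum likelihood; a minimal least-squares sketch of the same model, using a synthetic dilution series rather than the study's data:

```python
import math

def fit_frequency(doses, n_negative, n_total):
    """Single-hit Poisson model: P(no engraftment | dose d) = exp(-f*d).
    Fit f by least squares of ln(fraction negative) against dose,
    through the origin (requires at least one negative per dose)."""
    num = den = 0.0
    for d, neg, tot in zip(doses, n_negative, n_total):
        y = math.log(neg / tot)   # ln fraction of non-engrafted recipients
        num += d * y
        den += d * d
    return -num / den             # slope of y = -f*d

# Hypothetical dilution series: cells per recipient, non-engrafted / total mice.
doses = [1e4, 5e4, 25e4]
negatives = [9, 6, 1]
totals = [10, 10, 10]
f = fit_frequency(doses, negatives, totals)
print(f"~1 repopulating unit per {1 / f:.0f} cells")
```

Maximum-likelihood fitting additionally weights each dose by its binomial variance and provides confidence intervals, but the frequency estimate follows the same exponential dose–response.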
IL-1 effects are reversible on withdrawal Last, we tested whether the damaging effects of chronic IL-1 exposure were retained by the HSC pool. We first re-transplanted donor HSCs collected from mice initially reconstituted with naive and IL-1-exposed HSCs (20 days or 70 days; Fig. 7g and Supplementary Fig. 8g ) into lethally irradiated recipient mice. Remarkably, we observed a complete recovery relative to naive controls in donor chimaerism and lineage output in HSCs derived from IL-1-treated donor mice, regardless of the length of the original IL-1 exposure ( Fig. 7h, i and Supplementary Fig. 8h, i ). These results suggest that the remaining portion of the IL-1-exposed HSC pool was not functionally compromised. To assess whether IL-1 effects were reversible under native conditions, we used mice that were injected ± IL-1β for 20 days and allowed to rest without further treatment for 8 weeks ( Fig. 8a ). After the rest period, blood parameters had fully normalized to untreated levels with a complete restoration of myeloid and B cell numbers ( Fig. 8b ). Moreover, most of the BM changes had also reverted to untreated levels, with normal numbers of HSCs and MPPs, including MPP1, and restoration of the CLP compartment ( Fig. 8c, d ). The only persisting differences were increased GMP and decreased MPP4 numbers, which may reflect residual myeloid priming effects of the initial IL-1 treatment. Most importantly, donor chimaerism and lymphoid output were indistinguishable from untreated HSCs, indicating full restoration of self-renewal activity ( Fig. 8e, f ). Taken together, these results demonstrate that the damaging effects of chronic IL-1 exposure on HSC regenerative functions are fully reversible on interruption of IL-1 exposure. Figure 8: IL-1 effects are reversible on withdrawal.
( a – d ) Analysis of mice injected ± IL-1β for 20 days and subsequently rested (R) for 8 weeks ( n = 5 mice per group): experimental design ( a ), PB parameters ( b ), progenitor cell numbers ( c ) and HSC LT and MPP1 numbers ( d ). ( e , f ) Long-term engraftment in lethally irradiated (IR) recipients of HSCs from mice injected ± IL-1β for 20 days and rested for 8 weeks ( n = 4 −IL-1β and 5 +IL-1β mice per group): donor chimaerism ( e ) and lineage distribution in PB over time ( f ). ( g ) Model for the effects of IL-1 on HSC function. At steady state, HSCs are essentially kept in a quiescent state that is geared towards maintenance of blood homeostasis. In this context, IL-1 is produced at low levels in the BM, primarily by granulocytes (Gr) and CD4 + T cells, and has minimal impact on HSC function. Following infection or injury, IL-1 is produced at elevated levels in the BM microenvironment, in particular by endothelial cells (EC) that form an essential component of the HSC niche, and thus can provide localized IL-1 signalling to HSCs. In this context, IL-1 functions as an ‘emergency’ signal that directly instructs HSCs towards myeloid differentiation, through activation of the NF-κB pathway and engagement of a PU.1-dependent myeloid gene program resulting in accelerated cell division and precocious differentiation into myeloid progenitors and ultimately mature myeloid cells. Although such a response is advantageous in the context of acute inflammation and blood regeneration, it is ultimately detrimental in situations of chronic exposure because IL-1-exposed HSCs lose their ability to differentiate into the lymphoid and erythroid lineages, and exhibit decreased self-renewal activity and regenerative potential in response to replicative challenges such as transplantation. 
However, these damaging consequences are fully reversible on IL-1 withdrawal, indicating that IL-1 effects are essentially transient and require continuous exposure to negatively impact HSC function. Source data for b – f are shown in Supplementary Table 1 . Data are means ± s.d.; ∗ P ≤ 0.05. All P values were determined by Mann–Whitney U -test. Exact P values, number of replicates used to derive statistical data ( n ) and statistical tests used are shown in Supplementary Table 2 . DISCUSSION Our results demonstrate that IL-1 directly regulates HSC fate and instructs rapid myeloid differentiation through precocious activation of a PU.1 gene program and exclusive production of myeloid cells in ‘emergency’ situations such as myeloablation and transplantation ( Fig. 8g ) 46 . These results imply that previous models, in which sustained low PU.1 levels initiate eventual myeloid commitment in progenitors 47 , do not apply in stress conditions. IL-1 has long been known to inhibit lymphopoiesis and erythropoiesis 38 , 48 , 49 , although, paradoxically, high PU.1 levels also play a key role in B cell development 47 . It is therefore likely that rapid PU.1 induction in HSCs by IL-1 drives myelopoiesis while preventing the establishment of other gene programs specifying lymphopoiesis and erythropoiesis. Interestingly, PU.1 activation proceeds through IKK, which is the primary activator of NF-κB, a known IL-1 target that is also downstream of TLRs 20 . As TLRs also induce HSC myeloid differentiation 50 , it is possible that PU.1 serves as a convergence point for these pro-inflammatory signals. Interestingly, the transcription factor C/EBPβ also regulates ‘emergency’ myeloid differentiation in response to IL-3 or GM-CSF in haematopoietic progenitors 51 , or to IFN-γ stimulation in HSCs 52 .
It will therefore be interesting to investigate whether activation of PU.1 and activation of C/EBPβ represent distinct or converging mechanisms accelerating HSC myeloid differentiation in response to inflammation. Our results also demonstrate that chronic IL-1 exposure significantly impairs HSC function. Notably, the numerical HSC pool is not depleted, consistent with previous results of long-term in vivo exposure to IFN-α 13 . This may reflect the recently described ability of the MPP compartment to ‘buffer’ myeloid demand independently of HSCs 53 , 54 , or the replenishment of an IL-1-responsive HSC subset from quiescent HSC LT similar to the IFN-responsive CD41 + HSCs recently involved in emergency megakaryopoiesis 55 . This latter interpretation is supported by our observation that a fraction of, rather than all, HSCs activate PU.1 following IL-1 exposure, and the partial, rather than complete, functional impairment of IL-1-exposed HSCs. In this context, neither CD34 nor CD41 expression seems to separate IL-1-responsive from IL-1-non-responsive HSC LT , and further analyses will be required to identify ways to separate these two functional subsets. Interestingly, the deleterious effects of chronic IL-1 exposure on HSC function are largely resolved following IL-1 withdrawal, suggesting that they are not permanently ‘imprinted’ onto the HSC pool through epigenetic or other means. As IL-1 receptor blockers including anakinra are highly efficacious treatments for a wide range of chronic inflammatory diseases featuring deregulated blood production 19 , restoration of HSC function could be a key benefit of IL-1 blockade. Moreover, it is likely that IL-1 could play a similar dual role, both beneficial and pathogenic, in a variety of tissues by directly reprogramming their stem cell populations. 
Collectively, our findings demonstrate that IL-1 acts as a double-edged sword for HSC function, promoting myeloid regeneration without functional cost to HSCs during acute need, but significantly impairing their self-renewal and lineage output following chronic exposure. Modulation of IL-1 signalling, particularly its duration, may therefore be an important approach for broadly improving stem cell health and tissue function in the context of chronic inflammation or physiological ageing. □ Methods Mice. Congenic wild-type C57Bl/6 mice of both genders were bred in house and used for these studies. Il1r1 −/− mice 40 and PU.1–eYFP mice (gift from C. Nerlov, Oxford University, UK) were on a pure C57Bl/6 background, and PU.1ΔURE mice 33 were on a mixed 129X1/SvJ × C57Bl/6 background. All experiments were performed in accordance with UCSF IACUC-approved protocols. In vivo assays. For in vivo IL-1 treatment, mice were injected intraperitoneally (i.p.) with 0.5 μg IL-1β (Peprotech) in 100 μl PBS/0.2% BSA or 100 μl PBS/0.2% BSA alone once daily for up to 70 days in primary mice and 30 days in transplanted mice. For myeloablation treatment, mice were injected once i.p. with 150 mg kg −1 5-fluorouracil (5-FU; Sigma-Aldrich) in PBS. For transplantation experiments, 8–12-week-old CD45.1 C57Bl/6-Boy/J recipient mice were sublethally irradiated (9 Gy, split dose 3 h apart) or lethally irradiated (11 Gy, split dose 3 h apart) using a Cs 137 source (J. L. Shepherd), and injected retro-orbitally with CD45.2 + donor HSCs either alone following sublethal irradiation or together with 3 × 10 5 Sca-1-depleted BM cells following lethal irradiation. For transplantation experiments from 5-FU-treated mice, 500 HSCs were injected per lethally irradiated recipient. For transplantation experiments from 20 days (with or without 8 week rest period) and 70 days IL-1-treated mice, 250 HSCs or HSC LT were injected per lethally irradiated recipient. 
For limiting dilution assays, 1 × 10 4 , 1 × 10 5 , or 1 × 10 6 unfractionated CD45.2 + donor BM cells were combined with 2 × 10 5 CD45.1 + competitor cells, and injected per lethally irradiated recipient. For secondary transplantations, 500 HSCs were injected per lethally irradiated recipient. For transplantation experiments in sublethally irradiated mice receiving ± IL-1β, 1,000 HSCs were injected per recipient. For in vivo CFSE dilution assays, 1.0–1.1 × 10 5 CFSE-labelled CD45.2 + LSK cells or 1.5 × 10 6 naive T cells were injected per non-irradiated CD45.1 + /CD45.2 + F1 recipient. Mice were analysed 21 days after transplantation and the zero-divisional cell fraction was determined according to the CFSE intensity of naive donor T cells as described previously 36 . Irradiated mice were maintained on antibiotic-containing water for at least 4 weeks following transplantation, and analysed for donor-derived chimaerism by regular bleeding. Peripheral blood (PB) was obtained from retro-orbital bleeding, and collected in 4 ml of ACK (150 mM NH 4 Cl/10 mM KHCO 3 ) containing 10 mM EDTA for flow cytometry analyses, or in EDTA-coated tubes (Becton Dickenson) for complete blood count analysis using a Hemavet 950 CBC analyser (Drew Scientific). HSC frequencies from limiting dilution assay experiments were calculated using ELDA software ( ) 56 . Engraftment was defined as ≥0.5% donor PB chimaerism at 16 weeks post-transplantation. Flow cytometry. BM stem and progenitor populations were analysed and/or isolated as described previously 30 . For cell sorting, BM cells were obtained by crushing leg, arm and pelvic bones from mice in Hanks’ buffered saline solution (HBSS) containing 2% heat-inactivated FBS. Erythrocytes and contaminating bone cells were removed by ACK lysis followed by centrifugation on a Ficoll gradient (Histopaque 1119, Sigma-Aldrich). 
BM was subsequently enriched for c-Kit + cells using c-Kit microbeads (Miltenyi Biotec, 130-091-224) and an AutoMACS cell separator (Miltenyi Biotec). For population analyses, BM cells were flushed from one tibia and one femur per mouse and cellularity was determined using a ViCell automated cell counter (Beckman-Coulter). For wild-type mice, cells were incubated with purified, unconjugated lineage antibodies against CD3 (BioLegend, 100202), CD4 (eBioscience, 16-0041-82), CD5 (eBioscience, 14-0051-85), CD8 (eBioscience, 16-0081-82), B220 (eBioscience, 16-0451-82), Ter119 (eBioscience, 16-5921-85), Mac-1 (eBioscience, 16-0112-85) and Gr-1 (eBioscience, 14-5931-85), followed by goat anti-rat-PE–Cy5 (Invitrogen, A10691), subsequently blocked with purified rat IgG (Sigma-Aldrich) and stained with c-Kit APC–eFluor780 (eBioscience, 47-1171-82), Sca-1–PB (BioLegend, 108120), Flk2–Bio (eBioscience, 13-1351-82), CD48–A647 (BioLegend, 103416), CD150–PE (BioLegend, 115904), FcγR–PerCP–eFluor710 (eBioscience, 46-0161-82), CD34–FITC (eBioscience, 11-0341-85) and either SA–PE–Cy7 (eBioscience, 25-4317-82) or SA–Qdot605 (Invitrogen, Q10101MP). For identification of CD41 + HSCs, a staining including Lin/PE–Cy5, c-Kit–APC–eFluor780, Sca-1–PB, CD48–A647, CD150–PE–Cy7 (BioLegend, 115913), CD34–Bio (BioLegend, 119304) followed with SA–Qdot605, CD41–FITC (eBioscience, 11-0411-82) and CD229–PE (BioLegend, 122905) was performed. For identification of CLP, a separate staining including Lin/PE–Cy5, c-Kit–APC–eFluor780, Sca-1–PB, Flk2–Bio/SA–Qdot605 and IL7R–PE–Cy7 (eBioscience, 25-1271-82) was performed. For PU.1–eYFP mice, two separate stains were performed, one including CD150–PE, CD48–A647 and Flk2–Bio/SA–PE–Cy7, and the second one FcγR–PerCP–eFluor710 and CD34–Bio followed with SA–Qdot605 alongside Lin/PE–Cy5, c-Kit–APC–eFluor780 and Sca-1–PB. 
For sorting HSCs from IL-1- or 5-FU-treated mice, CD34 and FcγR were excluded and Mac-1 was stained separately from the lineage markers using Mac-1–PE–Cy7 (eBioscience, 25-0112-82) to control for potential increases in expression caused by IL-1, and HSC purity was further verified by inclusion of ESAM–FITC (Biolegend, 136205). Stromal BM ECs, MSCs and OBCs were isolated from haematopoietic-depleted, collagenase-treated bone chips as described previously 40 using Lin/PE–Cy5, Sca-1–PB, CD45–APC–Cy7 (BD Biosciences, 557659), CD31–FITC (BD Biosciences 558738) and CD51–Bio (BD Biosciences 551380) followed by SA–APC (eBioscience 17-4317-82). BM granulocytes (Mac-1 + Gr-1 hi M-CSFR − ), pre-granulocytes (Mac-1 + Gr-1 int Ly6C lo ), monocytes (Mac-1 + Ly6C + M-CSFR + F4/80 − ) and macrophages (Mac-1 + Gr-1 lo M-CSFR − F4/80 + SSC int/lo ) were identified and isolated as described previously 57 using Mac-1–PE–Cy7, Gr-1–PB (eBioscience, 57-5931-82), Ly6C–PE–Cy5.5 (Biolegend, 128012), M-CSFR–Bio (eBioscience 13-1152-81) followed by SA–Qdot605, and F4/80–APC (eBioscience, 17-4801-82). BM B cells (B220 + CD19 + ), CD4 + T cells (CD4 + ) and CD8 + T cells (CD8 + ) were identified and isolated using B220–APC–Cy7 (eBioscience, 47-0452-82), CD19–PE (eBioscience, 12-0193-82), CD3–APC (eBioscience, 17-0032-82), CD4–FITC (BD Biosciences, 553729), and CD8–PE (Biolegend, 100908). For assessing myeloid differentiation in cultures, the same c-Kit–APC–eFluor780, Sca-1–PB, FcγR–PerCP–eFluor710 and Mac-1–FITC (eBioscience 11-0112-82) were used, plus either M-CSFR–Bio/SA–Qdot 605 or CD18–Bio (BD Biosciences, 557439) followed by SA–PE–Cy7, in separate stains. 
For transplantation experiments, donor and recipient cells were distinguished using CD45.1–FITC (eBioscience, 11-0454-85) and CD45.2–PE (eBioscience, 12-0453) with Mac-1–PE–Cy7, Gr-1–PB, CD3–APC, and B220–APC–Cy7 to assess lineage reconstitution in PB and BM, and CD45.1–FITC and CD45.2–PE–Cy7 (eBioscience, 25-0453-82) with c-Kit–eFluor780, Sca-1–PB, Flk2–Bio/SA–Qdot605, CD48–APC and CD150–PE to assess lineage reconstitution in HSCs. Stained cells were resuspended in HBSS/2% FBS and 1 μg ml −1 propidium iodide (PI) to exclude dead cells. For CFSE dilution assays, cells were stained with biotinylated lineage antibodies and enriched for Lin − cells using anti-biotin microbeads (Miltenyi Biotec, 130-090-485) on MACS cell separation columns (Miltenyi Biotec), followed by SA–PB (Invitrogen, S11222), c-Kit–PE–Cy5 (BioLegend, 105810) and Sca-1–APC–Cy7 (BioLegend, 108126). T cells were enriched from spleen cells with a CD4–Bio (eBioscience, 13-0041-82) antibody and anti-biotin microbeads on MACS columns and subsequently stained with SA–PE–Cy5 (BioLegend, 405205) and CD62L–PE (eBioscience, 12-0621-81). Sorted LSK cells and CD4 + CD62L + naive T cells were subsequently labelled with 2 μM CFSE (Molecular Probes) as described previously 34 . For analysis of CFSE-labelled donor cells following transplantation, cells were stained with biotinylated lineage antibodies and enriched as above, followed by SA–Qdot605, CD45.1–eFluor 450 (eBioscience, 48-0453-82), CD45.2–PerCP–Cy5.5 (eBioscience, 45-0454-82), c-Kit–PE–Cy7 (eBioscience, 25-1171-81), Sca-1–APC–Cy7 (BioLegend, 108126), FcγR–BV510 (BioLegend, 101333), CD150–PE and CD34–A660 (eBioscience, 50-0341-80). Dead cells were excluded using Zombie Aqua Dye (BioLegend, 423102). For intracellular Ki67/DAPI staining, unfractionated BM cells were stained with Lin/PE–Cy5, c-Kit–APC–eFluor780, Sca-1–PE–Cy7 (BioLegend, 108113), CD150–PE and CD48–A647 and HSCs identified without Flk2. 
Isolated cells were washed in PBS, fixed with Cytofix/Cytoperm (BD Biosciences) for 30 min at 4 °C, washed in Permwash, permeabilized with Cytoperm Plus for 10 min at room temperature, washed again in Permwash, and stained with anti-Ki67–FITC (eBioscience, 11-5698-80) in Permwash for 2 h at 4 °C. Cells were washed with Permwash and resuspended in PBS/3% FBS containing 1 μg ml −1 DAPI and incubated for 20 min before analysis. Cell isolation was performed on a FACSAria II or III (Becton Dickenson) using double sorting, and cell analyses were performed using a FACS LSR II (Becton Dickenson). Additional antibody information, including clones and dilutions, is listed in Supplementary Table 3 . Cell culture. All cultures were performed at 37 °C in a 5% CO 2 water jacket incubator (Thermo Scientific). Cells were grown in StemPro34 medium (Invitrogen) supplemented with penicillin (50 U ml −1 )/streptomycin (50 μg ml −1 ) and L -glutamine (2 mM), SCF (25 ng ml −1 ), Flt3L (25 ng ml −1 ), IL-11 (25 ng ml −1 ), IL-3 (10 ng ml −1 ), GM-CSF (10 ng ml −1 ), Epo (4 U ml −1 ) and Tpo (25 ng ml −1 ) (Peprotech). IL-1α or IL-1β (Peprotech) was added at 25 ng ml −1 and M-CSF (Peprotech) at 100 ng ml −1 where indicated. For expansion assays, cells (400 per well) were directly sorted into 96-well plates and cultured for up to 8 days. For surface marker tracking experiments, cells (2,500–5,000 per well) were directly sorted into 96-well plates and cultured for up to 8 days. Cytokines were refreshed every other day by removing ∼ 30% of the total well volume and replacing with fresh media, and cultures were split as needed. For cleaved caspase-3 and ATP activity assays, cells (400 per well) were directly sorted into 384-well white luminescence culture plates containing 40 μl of media. 
Either directly after sorting or after 12 h an equal volume of Caspase-Glo 3/7 or CellTiter Glo substrate (Promega) was added to wells and the assay was performed according to the manufacturer’s instructions. For pathway blockade assays, HSCs were cultured for 12 h ± 25 pg ml −1 IL-1β with either DMSO vehicle (1:1,000 dilution) or 10 μM PD-0325901 (MEK inhibitor; Shanghai Chemokine Co., CAS 391210-10-9), 10 μM rottlerin (PKC inhibitor; Tocris, 1610), 10 μM SB203580 (p38 inhibitor; Millipore, 559389), 10 μM BMS-345541 (IKK inhibitor; Sigma-Aldrich, B9935), 10 μM PP2 (Src inhibitor; Calbiochem, 529576), 50 μM LY294002 (PI(3)K inhibitor; Cell Signaling, 9901), and 200 μM rapamycin (mTOR inhibitor; Sigma-Aldrich, R8781). For M-CSFR blocking assays, 10 μg ml −1 purified anti-M-CSFR (Biolegend, 135501) or rat IgG2a isotype control (Biolegend, 400501) was added to the culture media. For colony formation assays, cells (100 per 1 ml per 3 cm dish, or 1 per 100 μl per well of a 96-well plate) were cultured in methylcellulose (Stem Cell Technologies, M3231) supplemented with the above cytokines and colonies were scored visually after 7 days. For serial replating, cells from primary methylcellulose cultures in 3 cm dishes were disaggregated and washed three times in PBS, and 1 × 10 4 cells were re-plated in fresh methylcellulose without IL-1β. For morphology analyses, 5 × 10 4 cells were spun directly onto glass slides using a Cytospin centrifuge (Beckman) and stained with Wright–Giemsa according to standard protocols. For BrdU incorporation assays, cells (5,000 per well) were directly sorted into 96-well plates and cultured for 24 h in the presence of 60 μM BrdU (Sigma). 
Cells were then fixed in 4% paraformaldehyde in PBS for 20 min at room temperature, washed twice in PBS/50 mM NH 4 Cl, permeabilized with 0.2% Triton-X in PBS, washed once in 0.2% PBS/3% BSA, incubated with anti-BrdU–APC (BD Biosciences, 51-23619L) for 30 min at room temperature, washed once more, resuspended in 0.2% PBS/3% BSA and analysed on a FACS LSR II as described above. For CFSE dilution assays, cells (at least 5,000 per point) were labelled with CFSE as previously described 58 and cultured for up to 60 h before analyses. Lentiviral transduction. Lentiviral transduction experiments were performed as previously described 59 . Filtered lentiviral supernatants were produced by the UCSF ViraCore facility from empty and PU.1-containing pCAD-GFP constructs 34 (gift from U. Stiedl, Albert Einstein Medical College, New York, USA) with titres ranging from 1 × 10 6 to 4.5 × 10 6 infectious viral particles per millilitre. Freshly isolated HSCs were rested in StemPro medium without cytokines for 2 h and subsequently spin-infected with a 1:5 to 1:20 dilution of viral supernatant at 37 °C for 90 min. Viral supernatants were removed and cells were cultured overnight (8–12 h) in medium + cytokines. Spin infection was repeated the next morning. GFP + transduced cells were analysed for myeloid surface marker expression starting 1 day after the last spinoculation. Continuous single-cell tracking. HSCs isolated from PU.1–eYFP mice (200 per chamber) were cultured within fibronectin-pre-coated silicon chamber inserts (IBIDI) placed inside a sealed 25 cm 2 tissue culture flask containing CO 2 -saturated StemPro medium supplemented as described above, and 1 μg ml −1 custom-conjugated Sca-1–A647 and FcγR–A555 antibodies for continuous live staining by time-lapse microscopy as described previously 28 . Briefly, culture flasks were imaged on a Zeiss CellObserver system (Zeiss) with a motorized stage and 37 °C heated culture chamber. 
Phase-contrast images were obtained using a 10× objective (Zeiss) and recorded by an AxioCamHRm camera controlled with self-written Timm’s Acquisition Tool (TAT) and Zeiss AxioVision 4.5 software. In addition, three fluorescence channels (YFP, A555 and APC) were acquired using mercury or xenon lamps. Phase-contrast images were acquired every 180 s, and fluorescence images for each channel every 240 min. Single-cell tracking analysis was performed using self-written Timm’s Tracking Tool (TTT) software described previously 28 . Individual cells were manually analysed and annotated at each time frame for relevant features including division, apoptosis and fluorescence, and data were subsequently stored as individual trees for each cell. HSCs were tracked for two subsequent generations (three total cell divisions). Rare apoptotic cells, or those with questionable identities or outlier division kinetics (that is, initial division shorter than 12 h or longer than 48 h), were excluded from the analyses. Cells undergoing endomitosis during the tracking period (that is, differentiating into megakaryocytes) were also excluded from the analyses. PU.1–eYFP reporter levels of bulk cells were automatically quantified from background-corrected fluorescence images at 0 and 15 h using self-written CAT software (T.S., unpublished). For individual tracked cells, PU.1–eYFP levels were quantified from fluorescence images of each tracked cell, acquired before the first division, using ImageJ software. Gene expression. For direct qRT–PCR analysis, 1 × 10 4 cells were cultured ± IL-1β for 12 h or isolated directly from IL-1-treated mice and resuspended into Trizol LS (Invitrogen). RNA was isolated according to the manufacturer’s instructions, treated with DNase I (Invitrogen) and reverse-transcribed using a Superscript III kit (Invitrogen). 
Two hundred cell-equivalents of cDNA per sample were run in triplicate in 384-well qRT–PCR plates (Applied Biosystems) on an ABI 7900HT Fast Real-time PCR System (Applied Biosystems) using 2× Sybr Green master mix (Applied Biosystems). Ct values were normalized to Gapdh and relative changes were calculated using the ΔΔCt method. For Fluidigm gene expression analyses, pools of 100 cells were directly sorted per well of a 96-well plate containing 5 μl of 2× CellsDirect reaction buffer (Invitrogen) and snap-frozen on dry ice until use. RNA was reverse-transcribed using Superscript III (Invitrogen) and subjected to 18 rounds of pre-amplification using a custom-designed set of target-specific primers as described previously 30 . Pre-amplification products were subsequently treated with Exonuclease I (New England Biolabs) to remove excess primers and diluted in DNA dilution buffer (Clontech). Pre-amplified cDNAs and custom-designed primer sets were loaded onto a Fluidigm 96.96 Dynamic Array IFC and run on a BioMark System (Fluidigm) using SsoFast Sybr Green for detection (Bio-Rad). Data were analysed using Fluidigm software and normalized to Gusb expression. For analysis of Il1a and Il1b expression, pools of 100 cells were directly sorted into 2× CellsDirect reaction buffer and pre-amplified as described above, and 1 μl of pre-amplified cDNA per sample was run in triplicate in 384-well qRT–PCR plates using KAPA Fast Sybr Green Master Mix (KAPA Biosystems) on the ABI 7900HT Fast Real-time PCR System. Ct values were normalized to Actb and relative changes were calculated using the ΔΔCt method. Primer information, including sequences and NCBI gene IDs, is included in Supplementary Table 4 . Cytokine analyses. For PB serum, blood was collected from euthanized mice through cardiac puncture, allowed to coagulate at room temperature for 30 min, and subsequently spun down at 12,000 g for 10 min to remove blood cells. 
For BM plasma, the four long bones (two femurs and two tibiae) of the same mice were flushed with 150–200 μl HBSS/2% FBS using a 0.3 cc insulin syringe with a 28g needle and spun at 500 g for 5 min to remove BM cells. Supernatants were further clarified by spinning down at 12,000 g for 10 min, and samples were subsequently stored at −20 °C until use. For cytokine measurement, 50 μl of 2×-diluted sample was analysed with a Luminex Cytokine Mouse 20-plex panel (Life Technologies) using a BioPlex instrument (Bio-Rad) according to the manufacturer’s instructions. M-CSF levels were analysed using an antibody sandwich ELISA kit (Raybiotech) according to the manufacturer’s instructions. Statistics and reproducibility. All experiments were performed in triplicate and repeated as indicated. N indicates the number of independent experiments performed. For in vivo experiments, sample sizes were predetermined on the basis of estimation of the minimum number of animals required to obtain biologically meaningful results (at least 80% power). For other experiments no statistical method was used to predetermine sample size, and replicate experiments were performed on the basis of the variability of results obtained, as well as experiment type. Data were expressed as mean ± standard deviation (s.d.). Statistical analyses were performed using Prism 5.0 software (GraphPad). Pairwise statistical significance was evaluated by two-tailed Mann–Whitney U -test or Student’s t -test. Statistical significance between multiple groups was evaluated by one-way ANOVA with Dunnett’s or Tukey’s test for multiple comparisons. The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment. P values ≤ 0.05 were considered statistically significant. Source data are provided in Supplementary Table 1 . Exact P values and statistical tests used are shown in Supplementary Table 2 . 
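The ΔΔCt relative-quantification step used in the qRT–PCR analyses above is a short arithmetic transform and can be sketched as follows. This is an illustrative sketch with made-up Ct values, not data from the study, and it assumes ~100% amplification efficiency (one doubling per cycle):

```python
def delta_delta_ct(ct_target_treated, ct_ref_treated,
                   ct_target_control, ct_ref_control):
    """Fold change of a target gene by the 2^(-ddCt) method.

    Each Ct is typically the mean of technical triplicates; the
    reference gene (e.g. Gapdh or Actb above) normalizes for input.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt, control sample
    dd_ct = d_ct_treated - d_ct_control                 # ddCt
    return 2 ** (-dd_ct)                                # relative expression

# Illustration: target reaches threshold 2 cycles earlier in treated
# cells after normalization -> ~4-fold upregulation
print(delta_delta_ct(22.0, 18.0, 24.0, 18.0))  # prints 4.0
```

A fold change of 1.0 indicates no change; values below 1.0 indicate downregulation relative to control.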
Change history 28 April 2016 In the version of this Article originally published online, in the Acknowledgements section, the NIH grant number 'K01 DK09831' should have read 'K01 DK098315'. This has been corrected in all versions of the Article. | A study published today in the journal Nature Cell Biology shows that chronic exposure to an inflammatory "emergency" signal, interleukin-1, causes blood-forming bone marrow stem cells to produce cells needed to fight infection and repair injury, but at the expense of their own ability to self-renew and maintain a healthy blood system. This results in overproduction of aggressive immune cells capable of severely damaging tissues. Elevated interleukin-1 (IL-1) accompanies the chronic inflammation associated with human conditions including obesity, diabetes and autoimmune disorders. The imbalance of blood system cell types can result in inefficient oxygen delivery, immunodeficiency, and could predispose the development of cancer. "Inflammation evolved to function for very short periods of time, marshaling resources to fight infections and repair damaged tissue. However, over long periods of time, these conditions become very toxic," says Eric M. Pietras, PhD, investigator at the University of Colorado Cancer Center and assistant professor at the CU School of Medicine Blood Cancer & BMT Program. Pietras performed the work as a postdoctoral researcher in the lab of Emmanuelle Passegué, PhD, professor at the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at the University of California San Francisco. IL-1 is a cytokine long understood to be an essential signal the immune system uses to recruit and activate inflammatory cells needed to protect from and repair acute occurrences of infection or injury. 
However, elevated levels of IL-1 are a feature of chronic inflammation, as is commonly seen in aging, and with a number of disease conditions including obesity and type 2 diabetes, which are associated with Western diet and lifestyle. "If you're working under a constant state of emergency, you become stressed and less effective. I think of blood stem cells in the same way," Pietras says. While blood-forming stem cells, also termed hematopoietic stem cells (or HSCs), are usually dormant in the bone marrow, "waking" occasionally to maintain proper blood levels in healthy individuals, Pietras and colleagues show that, "these cells are also exquisitely sensitive to changes in their environment and react accordingly." Specifically, HSCs are sensitive to the amount of IL-1 they encounter, and go to work creating "first responder" myeloid cells needed to fight what they recognize as a crisis of infection or injury. If the IL-1 signal doesn't end, HSCs continue making these cells but at the expense of their ability to regenerate themselves and correctly build the rest of the blood system. "They're receiving a signal telling them they need to keep building myeloid cells and as a result they don't make the other blood cells you need. You can end up with too few red blood cells, reducing the body's ability to deliver oxygen to cells. Or we see decreased production of new lymphoid cells, leaving the system potentially immunodeficient. These are all common features of chronically inflamed and even aged blood systems," Pietras says. Another major question was whether these effects are reversible, in other words, once an HSC has "learned" to overproduce myeloid cells, can it just as readily unlearn this function? The question has major implications for patient care, for example in the case of bone marrow stem cell transplant. 
For many years, bone marrow transplant has been used to treat leukemias by removing a patient's blood system and replacing it with that of a compatible donor. However, "Our results show that not only should we be looking for markers of blood system compatibility, but we may also want to explore whether a potential donor's stem cells have been exposed to inflammation and may not be as effective at rebuilding the patient's blood system," Pietras says. "Likewise, the presence of inflammation in the individual receiving the bone marrow could also be an important factor in how well the stem cells regenerate a new blood system once they are transplanted." Pietras also points out increased interest in "autologous" stem cell transplants to potentially treat autoimmune diseases and multiple myeloma, another cancer of the blood system. In this technique, a patient's healthy blood stem cells are removed and expanded. Components of the blood system responsible for the disease condition are killed and then the patient's original stem cells are reinfused and encouraged to regrow a new blood system. However, this approach would not be ideal if the original blood stem cells retained "injuries" that left them predisposed toward building a blood system that is imbalanced by the insult of chronic inflammation. To test the durability of the IL-1 insult to HSCs following chronic inflammation, Pietras treated mice for 20 days with IL-1 and then took it away for several weeks to see if the HSCs recovered. "Our data suggest that it is possible to turn back the clock and reverse the effects of chronic inflammation on blood stem cells, perhaps using therapies already available in the clinic to block inflammatory signals such as IL-1," Pietras says. "Of course, we don't yet know on a human scale how long it takes a stem cell to 'remember' these insults. It may be that after a longer period of exposure to IL-1, these changes become more fixed." 
Overall, the study demonstrates for the first time that blood stem cells adapt to meet what they recognize as the body's needs, and that chronic inflammation can act like a thumb on the scale, implying a need that does not really exist. "For decades we have recognized the importance of these bone marrow stem cells in dealing with crisis while also maintaining the stability of the blood system. Now we show that conditions in the rest of the body can have profound implications for how stem cells behave, both in the blood and likely in many other tissues as well," Pietras says. | 10.1038/ncb3346 |
Biology | Biologist's research could lead to more resilient crops | Ashot Papikian et al. Site-specific manipulation of Arabidopsis loci using CRISPR-Cas9 SunTag systems, Nature Communications (2019). DOI: 10.1038/s41467-019-08736-7 Javier Gallego-Bartolomé et al. Co-targeting RNA Polymerases IV and V Promotes Efficient De Novo DNA Methylation in Arabidopsis, Cell (2019). DOI: 10.1016/j.cell.2019.01.029 Journal information: Cell , Nature Communications | http://dx.doi.org/10.1038/s41467-019-08736-7 | https://phys.org/news/2019-02-biologist-resilient-crops.html | Abstract Understanding genomic functions requires site-specific manipulation of loci via efficient protein effector targeting systems. However, few approaches for targeted manipulation of the epigenome are available in plants. Here, we adapt the dCas9-SunTag system to engineer targeted gene activation and DNA methylation in Arabidopsis . We demonstrate that a dCas9-SunTag system utilizing the transcriptional activator VP64 drives robust and specific activation of several loci, including protein coding genes and transposable elements, in diverse chromatin contexts. In addition, we present a CRISPR-based methylation targeting system for plants, utilizing a SunTag system with the catalytic domain of the Nicotiana tabacum DRM methyltransferase, which efficiently targets DNA methylation to specific loci, including the FWA promoter, triggering a developmental phenotype, and the SUPERMAN promoter. These SunTag systems represent valuable tools for the site-specific manipulation of plant epigenomes. Introduction Gene transcription, and thus function, can be controlled in trans , via the binding of transcription factors to promoters, or in cis , via epigenetic modifications. Functional analyses of plant genomes have relied mainly on the indirect reactivation of genes or transposable elements (TEs) through the use of mutants 1 . 
However, the biological outcomes of these manipulations represent an aggregate view of gene expression changes. Examining epigenetic regulation of gene expression, such as by DNA methylation, faces similar issues, where indirect and widespread changes in epigenetic mutants complicate the exploration of locus-specific effects. Mutant analysis, hairpin-mediated targeting of small interfering RNAs (siRNAs), and zinc-finger-mediated targeting have been used to assess the function of genes and DNA methylation in plants 1 , 2 , 3 . For example, fusion of a zinc-finger protein to the RNA-directed DNA methylation effector SUVH9 enabled methylation of the Arabidopsis FWA promoter 3 . The fwa-4 ( fwa ) epiallele in Arabidopsis plants displays a loss of FWA promoter methylation, leading to FWA activation and a late flowering phenotype 3 . SUVH9-mediated de novo methylation of the FWA promoter in fwa plants restored FWA silencing and an early flowering phenotype 3 , indicating that promoter methylation was sufficient to regulate FWA expression. Although zinc-finger fusions are an effective tool, they are laborious to design, difficult to verify, and often display broad, off-target binding activity 4 . CRISPR-Cas approaches enable targeted manipulation of specific loci 5 . Synthetic transcriptional activators, for instance consisting of deactivated versions of Cas9 (dCas9) fused to transcriptional activation domains, can specifically activate genes in both plants and mammals 6 , 7 , 8 , 9 , 10 , 11 , 12 . Several other CRISPR-Cas9-based activation systems, such as the synergistic activation mediator (SAM) as well as a hybrid VP64-p65-Rta (VPR) activator, have been developed to further enhance dCas9-mediated transcriptional upregulation as well as to recruit multiple protein effectors 13 , 14 . The dCas9-SunTag-VP64 system is a potent transcriptional activator in mammalian cell lines 15 , 16 . 
This system consists of two modules: dCas9 fused to tandem GCN4 peptide repeats, and a single chain variable fragment (scFv) GCN4 antibody fused to superfolder-GFP (sfGFP) and VP64. Thus, multiple copies of the VP64 transcriptional activator associate with the GCN4 repeats and are recruited to a specific locus via dCas9/guide RNAs. This method has been adapted for site-specific DNA demethylation in mammals and plants, and for DNA methylation in mammals 17 , 18 , 19 . DNA methylation in plants exists in three different nucleotide contexts: CG, CHG, and CHH (where H = A, T, or C) 20 . Maintenance methylation is controlled by several pathways in Arabidopsis : CG methylation is maintained by DNA METHYLTRANSFERASE 1 (MET1), a homolog of DNMT1; CHG methylation is maintained by CHROMOMETHYLASE 3 (CMT3), a plant-specific methyltransferase; and much of the CHH methylation is maintained by DOMAINS REARRANGED METHYLTRANSFERASE 2 (DRM2), a homologue of DNMT3 methyltransferases, through the RNA-directed DNA methylation (RdDM) pathway. While DRM2 is responsible for CHH maintenance methylation in short euchromatic regions, short TEs, and the edges of long TEs, CHROMOMETHYLASE 2 (CMT2) is responsible for CHH methylation in pericentromeric heterochromatin and the bodies of long TEs 21 , 22 . RdDM is also responsible for de novo methylation of all three sequence contexts 20 , 23 . Here, we have developed CRISPR-Cas9-SunTag-based targeting systems to site-specifically and efficiently manipulate gene methylation and expression in plants. We modified the SunTag system to recruit multiple copies of a methylation effector or of VP64 to distinct loci. Using the previously characterized Nicotiana tabacum DRM methyltransferase catalytic domain as our methylation effector 24 , we found that SunTag NtDRMcd effectively targets methylation to specific loci.
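The three sequence contexts defined above (CG, CHG, and CHH, where H = A, T, or C) can be made concrete with a minimal Python sketch that classifies forward-strand cytosines. This is our own illustration, not code from the study; real methylation callers such as BS-Seeker2 also process the reverse strand.

```python
# Classify forward-strand cytosines of a DNA sequence into the three plant
# methylation contexts: CG, CHG, or CHH (H = A, T, or C).
# Illustrative sketch only; not part of the study's pipeline.

def cytosine_contexts(seq):
    """Return (position, context) for each classifiable forward-strand C."""
    seq = seq.upper()
    out = []
    for i, base in enumerate(seq):
        if base != "C":
            continue
        nxt = seq[i + 1] if i + 1 < len(seq) else None
        nxt2 = seq[i + 2] if i + 2 < len(seq) else None
        if nxt == "G":
            out.append((i, "CG"))
        elif nxt in ("A", "T", "C") and nxt2 == "G":
            out.append((i, "CHG"))
        elif nxt in ("A", "T", "C") and nxt2 in ("A", "T", "C"):
            out.append((i, "CHH"))
        # cytosines too close to the sequence end are left uncalled
    return out
```

For example, `cytosine_contexts("CGACAGACTT")` yields `[(0, "CG"), (3, "CHG"), (7, "CHH")]`.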
Importantly, at the FWA locus, this methylated state remains meiotically heritable through multiple generations in the absence of the targeting transgene. Results Targeted transcriptional activation of the FWA locus We previously adapted the SunTag system for site-specific DNA demethylation in plants by targeting the human TET1 catalytic domain to Arabidopsis loci 18 . To generate a transcriptional activator system, we used the Arabidopsis UBIQUITIN10 ( UBQ10 ) promoter to drive the expression of dCas9-10 × GCN4 and of scFv-sfGFP-VP64 (Supplementary Fig. 1a ). Additionally, we added an SV40-type NLS to the scFv module to ensure proper nuclear import in plants. Imaging of the roots of T2 transgenic Arabidopsis plants expressing the SunTag VP64 construct showed clear nuclear localization of the antibody module (Supplementary Fig. 1b). In addition, dCas9-10 × GCN4 was stably expressed in T2 Arabidopsis plants (Supplementary Fig. 1c). To test whether this system activates gene expression, we targeted the DNA methylated and silent FWA gene in Arabidopsis wild-type (Col-0) plants 25 . We observed ectopic activation of FWA in numerous T1 lines containing a single guide RNA (gRNA4) that targets FWA , but not in control lines that lack a guide (nog) or that lack VP64 (Supplementary Fig. 2a, b ). Strong activation of FWA was also observed in the next generation T2 plants (Supplementary Fig. 2b,c ). RNA-seq of T2 gRNA4 plants confirmed that FWA was robustly upregulated (Fig. 1a and Supplementary Fig. 2d ). In addition to gRNA4, we tested a guide (gRNA17) that targets a region further upstream in the promoter, ~170 base pairs upstream from gRNA4. We detected FWA upregulation with gRNA17, although to a lesser extent than with gRNA4, suggesting that gRNAs placed near the transcription start site may be more effective to manipulate gene expression, as previously suggested with the SunTag system in mammalian cell lines 16 (Supplementary Fig. 2e ). Fig. 
1 SunTag VP64-mediated FWA activation. a RNA-seq tracks depicting normalized reads at the FWA locus and flanking loci in 1 representative Col-0 replicate, 1 representative T2 SunTag VP64 nog-2 replicate, 1 representative fwa replicate, and 1 representative T2 SunTag VP64 g4 replicate for each of the 3 independent lines. The black triangle indicates the position of gRNA4. b ChIP-seq and WGBS tracks at the FWA promoter. The top track shows a ChIP peak corresponding to gRNA4-mediated SunTag recruitment. The position of gRNA4 is illustrated with a black bar. CG, CHG, and CHH methylation tracks for Col-0, T2 SunTag VP64 nog-3, and 2 independent T2 lines of SunTag VP64 g4. c RNA-seq tracks depicting normalized reads at the FWA locus and flanking loci in 1 representative Col-0 replicate, 1 representative T2 22aa SunTag VP64 nog-1 replicate, 1 representative fwa replicate, and 1 representative T2 22aa SunTag VP64 g4 + g17 replicate for each of the 3 independent lines. The black triangles indicate the positions of gRNA17 and gRNA4. d WGBS tracks for all 3 sequence contexts at the FWA promoter for Col-0 and 3 independent T1 lines of 22aa SunTag VP64 g4 + g17. The positions of gRNA17 and gRNA4 are illustrated with black bars. e WGBS tracks for all 3 sequence contexts at the FWA promoter for Col-0, and a T2 + and T2- plant from 22aa SunTag VP64 g4 + g17-3. f qRT-PCR analysis of FWA expression levels in fwa , Col-0, and 4 different T2 + or T2- plants from 22aa SunTag VP64 g4 + g17-3. Expression fold change relative to fwa is plotted. Error bars represent the mean ± s.e. of 2 technical replicates. Source data of Fig. 1f are provided as a Source Data file Full size image To comprehensively profile the specificity of SunTag VP64-mediated activation, we examined differentially expressed genes (DEGs) in the gRNA4 RNA-seq dataset. All three profiled lines displayed highly specific activation of FWA with very few DEGs compared to a no guide control line (Supplementary Fig. 3a ). 
To examine dCas9 binding at the FWA promoter, we performed ChIP-qPCR with T2 gRNA4 plants. We observed a strong enrichment of dCas9 at the FWA promoter compared to the control ACT7 locus, and as expected, no enrichment in Col-0 control plants (Supplementary Fig. 3b ). ChIP-seq showed highly specific binding of dCas9 to the FWA promoter, with only one major off-target site (Supplementary Fig. 3c ). This off-target site contains a PAM and 14 base pairs complementary to the gRNA sequence, spanning the previously reported seed region of the protospacer 26 . Therefore, SunTag VP64-mediated gene activation is highly specific, reflecting the specific binding properties of the Cas9/gRNA complex. To test whether VP64-mediated FWA activation affected promoter methylation, we performed whole-genome bisulfite sequencing (WGBS) of T2 gRNA4 plants. Compared to Col-0 and no guide controls, T2 gRNA4 lines showed reduced CG methylation within the promoter, whereas gene body methylation downstream of the target site, as well as genome-wide methylation levels, remained unaffected (Fig. 1b and Supplementary Fig. 4a, b, c ). Thus, targeted activation of silenced genes can reduce promoter methylation. Although T2 gRNA4 plants showed FWA upregulation, they did not display altered flowering time compared to controls. Therefore, we hypothesized that the levels of FWA mRNA in T2 gRNA4 plants might not be sufficient to affect flowering time. In order to increase FWA expression even further, we combined gRNA4 and gRNA17 (g4 + g17) in one construct to test whether simultaneously targeting two verified regions proximal to the promoter might further enhance activation. The 10 × GCN4 epitopes within the SunTag construct are separated by linkers of 5 amino acids (aa). To allow for maximum mobility of VP64, we also utilized 22aa linkers, as previously reported 17 . Furthermore, we added another SV40-type NLS within the coding sequence of dCas9-10 × GCN4.
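The off-target site described above (an NGG PAM plus 14 base pairs of complementarity spanning the seed region) suggests a simple search criterion. The sketch below is our own illustration of that idea; the function name, the seed-only comparison, and the exact threshold are assumptions, not the authors' pipeline.

```python
# Scan a sequence for candidate Cas9 off-target sites: an NGG PAM with a
# protospacer whose PAM-proximal "seed" (14 nt here, matching the reported
# off-target) is identical to the guide. Hypothetical helper, for illustration.

def seed_matches(genome, guide, seed_len=14):
    """Return start positions of protospacers whose seed matches the guide."""
    genome, guide = genome.upper(), guide.upper()
    hits = []
    # i indexes the "N" of a candidate NGG PAM, immediately 3' of the protospacer
    for i in range(len(guide), len(genome) - 2):
        if genome[i + 1:i + 3] != "GG":
            continue
        protospacer = genome[i - len(guide):i]
        # compare only the PAM-proximal seed region
        if protospacer[-seed_len:] == guide[-seed_len:]:
            hits.append(i - len(guide))
    return hits
```

Because only the seed is compared, PAM-distal mismatches are tolerated, mirroring how a site with 14 matching seed bases can still be bound despite an imperfect full-length match.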
qRT-PCR analysis of T1 plants indicated that multiple lines with gRNA4 + gRNA17 had FWA expression levels similar to fwa epiallele plants, resulting in a late flowering phenotype (Supplementary Fig. 5a ). RNA-seq in the T2 generation confirmed the upregulation of FWA expression, where, consistent with the qRT-PCR data, FWA transcript levels in line 3 were similar to those observed in fwa plants (Fig. 1c and Supplementary Fig. 5b ). Analysis of DEGs showed that as with gRNA4, constructs with gRNA4 + gRNA17 were highly specific in activating FWA , with few other genes affected (Supplementary Fig. 5c ). We next tested how FWA promoter methylation was affected after activation with gRNA4 + gRNA17. We performed WGBS of T1 plants and observed a reduction of methylation in lines 1 and 2. Furthermore, promoter methylation in line 3 was completely abolished, correlating with expression data where FWA overexpression was similar to levels observed in fwa plants (Fig. 1c, d and Supplementary Fig. 5a, b ). Thus, expression may be a key contributor that leads to this reduction in methylation. FWA gene body methylation remained unaffected in activated lines (Supplementary Fig. 6a ). In the T2 generation, plants from line 3 that retained the transgene (+) and those that had it segregated away (−) both retained the demethylated state (Fig. 1e and Supplementary Fig. 6b ). Consistently, we observed FWA expression levels similar to an fwa epiallele plant in both T2+ and T2− plants, correlating with the late flowering phenotype (Fig. 1f and Supplementary Fig. 6c ). Activation of loci in different chromatin contexts We next wanted to explore if other methylated loci, such as transposable elements (TEs), could be targeted for ectopic activation. We used two gRNAs to target Evadé ( EVD ), a member of the ATCOPIA93 family of LTR/COPIA transposable elements 27 . 
Compared to control lines, 5aa SunTag VP64 T1 lines displayed hundreds- to thousands-fold activation of the retrotransposon (Supplementary Fig. 7 ). We also confirmed the upregulation of EVD in three independent T2 lines (Supplementary Fig. 7 ). Next, we examined genome-wide effects by RNA-seq. The two gRNAs targeting the 5′ end of EVD had perfect matches to two different ATCOPIA93 loci, one in euchromatin corresponding to the EVD locus, and another in heterochromatin corresponding to the Attrapé ( ATR ) locus. Both loci were highly activated, indicating that SunTag VP64 can manipulate gene expression in distinct chromatin contexts (Fig. 2a, b and Supplementary Fig. 8a, b ). One TE of the same family, adjacent to ATR , was also upregulated (Fig. 2b ). This effect might reflect co-regulation of these two TE copies and/or the presence of regulatory regions at the 3′ end. We observed robust activation of EVD and ATR , and few DEGs, in three independent T2 lines compared to a no guide control line (Supplementary Fig. 8c ). Thus, SunTag VP64-mediated activation was highly specific. Fig. 2 Activation of a diverse set of target loci. a RNA-seq tracks depicting normalized reads at the EVD locus and flanking loci in 1 representative Col-0 replicate, 1 representative T2 SunTag VP64 nog-2 replicate, and 2 representative T2 SunTag VP64 EVD replicates for each of the 3 independent lines. The black triangles indicate the positions of the 2 gRNAs. b RNA-seq tracks depicting normalized reads at the ATR locus and flanking loci in 1 representative Col-0 replicate, 1 representative T2 SunTag VP64 nog-2 replicate, and 2 representative T2 SunTag VP64 EVD replicates for each of the 3 independent lines. c qRT-PCR analysis of CLV3 expression levels in 6 independent T1 lines of SunTag VP64 CLV3 and 2 independent Col-0 plants. Expression fold change relative to a line showing the lowest levels of upregulation (line 1) is plotted. Error bars represent the mean ± s.e.
of 2 technical replicates. d A Col-0 plant and a representative T1 line of SunTag VP64 CLV3 are shown. Scale bars represent a distance of 5 mm. Source data of Fig. 2c are provided as a Source Data file Full size image We performed WGBS to monitor methylation levels proximal to the gRNA targets within EVD and ATR . We observed a decrease in CG methylation, while genome-wide methylation levels remained unaffected, recapitulating the effects observed at the FWA promoter (Supplementary Fig. 9a –d and 10 ). The SunTag system can thus be used to determine the biological consequences of activating specific TEs or TE families, rather than evaluating stress experiments or mutants that globally upregulate the expression of TEs 21 , 22 , 28 , 29 . To test the generality of this activation system, we also targeted two unmethylated loci within the Arabidopsis genome. A zinc-finger-VP64 fusion has been shown to target and upregulate the expression of the floral development gene APETALA3 ( AP3 ) 30 . To test whether we could activate AP3 with our system, we designed two gRNAs to target the AP3 promoter. By qRT-PCR, we observed over 300-fold and 500-fold upregulation of AP3 in T1 and T2 plants, respectively, compared to controls (Supplementary Fig. 11a ). However, we did not observe the previously described AP3 overexpression phenotype 30 , perhaps due to tissue-specific differences in transgene expression. We also targeted the unmethylated stem cell regulator CLAVATA3 ( CLV3 ) with two gRNAs 31 . We observed strong upregulation of CLV3 in several T1 lines, some of which exhibited the previously reported wuschel mutant phenotype arising from CLV3 overexpression 31 , in which the generation and maintenance of stem cells in the meristem are perturbed; this phenotype persisted in T2 plants (Fig. 2c, d and Supplementary Fig. 11b ). Thus, SunTag VP64 can site-specifically activate genes with methylated or unmethylated promoters.
A SunTag-based methylation targeting system Several reports have shown that dCas9 can be adapted to site-specifically modify DNA methylation in mammalian cells 19 , 32 , 33 , 34 , 35 , 36 . However, no such system yet exists to target DNA methylation in plants. We replaced VP64 in our SunTag system with the Nicotiana tabacum DRM methyltransferase catalytic domain (NtDRMcd) to evaluate methylation targeting activity in vivo. We chose NtDRMcd because this fragment was previously shown to be well-folded, well-expressed, and could be crystalized 24 . We utilized the fwa background, which has lost FWA promoter methylation, and established T1 plants that co-express gRNA4 to target the FWA promoter and 5aa SunTag NtDRMcd. These T1 plants showed some establishment of CHH methylation at the FWA promoter by WGBS, but very limited CG and CHG methylation (Supplementary Fig. 12a ). The initial establishment of CHH methylation by NtDRMcd is consistent with the previously reported preference of tobacco DRM for non-CG methylation 37 . Ectopic methylation of the FWA promoter has been shown to generate early flowering plants 3 . However, we did not observe any flowering time differences between T1 plants and controls, likely due to the lack of CG methylation, which is required for FWA silencing 3 . Similarly, T2 plants showed slightly higher levels of CHH methylation but did not exhibit early flowering (Supplementary Fig. 12b ). To improve the efficiency of ectopic methylation targeting, we used our 22aa SunTag construct to accommodate optimum recruitment of NtDRMcd by mitigating the potential effects of steric hindrance. T1 22aa SunTag NtDRMcd gRNA4 transgenic lines showed enhanced FWA methylation compared to the 5aa T1 lines, including the establishment of CHH and CHG methylation, and minimal amounts of CG (Supplementary Fig. 12b ). However, these plants also did not display the early flowering phenotype, suggesting that FWA expression was not silenced. 
To further enhance targeted methylation, we utilized two additional gRNAs targeting the FWA promoter. Together, these three gRNAs spanned the wild-type methylation patch observed in Col-0 plants. WGBS and McrBC analysis of T1 plants expressing 22aa SunTag NtDRMcd with gRNA4, gRNA10, and gRNA18 (g4 + g10 + g18) revealed efficient methylation establishment in all three sequence contexts within the FWA promoter (Fig. 3a and Supplementary Fig. 12c ), which led to FWA silencing (Supplementary Fig. 13a ) and early flowering plants. T2 plants that retained the SunTag NtDRMcd transgene (T2 + ) displayed FWA promoter methylation in both lines we followed (Fig. 3b, c ). In line 2, we identified T2 plants lacking the transgene (T2-) that retained FWA promoter methylation, indicating that the targeted methylation was meiotically heritable (Fig. 3b ). In line 1, FWA promoter methylation was lost in most T2- plants, which led to the reactivation of FWA (Fig. 3c and Supplementary Fig. 13a ). Thus, the methylation established in T1 line 1 plants was insufficient to confer stable methylation and silencing in most T2- plants. In contrast, RNA-seq of line 2 T2 early flowering plants showed that FWA expression was silenced to wild-type levels in both T2+ and T2- plants (Fig. 3d and Supplementary Fig. 13b ), resulting in early flowering (Supplementary Fig. 13c ). Fig. 3 Targeted methylation and silencing of FWA in fwa epiallele plants. a WGBS tracks for all 3 sequence contexts are shown at the FWA locus for fwa , and 2 independent T1 lines of 22aa SunTag NtDRMcd g4 + g10 + g18. Black bars indicate the positions of the 3 gRNAs. b WGBS tracks for all 3 sequence contexts are shown at the FWA locus for fwa and T2 + and T2- plants from 22aa SunTag NtDRMcd g4 + g10 + g18-2. (+) and (−) indicate plants that have retained or segregated away the transgene, respectively.
c WGBS tracks for all 3 sequence contexts are shown at the FWA locus for fwa and T2+ and T2− plants from 22aa SunTag NtDRMcd g4 + g10 + g18-1. d RNA-seq tracks depicting normalized reads at the FWA locus and its flanking regions in 1 representative fwa replicate, 1 representative Col-0 replicate, and 2 representative T2 22aa SunTag NtDRMcd g4 + g10 + g18 replicates each for T2+ and T2− plants. Black triangles indicate the positions of the 3 gRNAs. e Late flowering fwa plants are shown alongside a segregating population of T3 22aa SunTag NtDRMcd g4 + g10 + g18 plants that show the early flowering phenotype Full size image We followed the T2 + early flowering plants of SunTag DRMcd g4 + g10 + g18 line 1 to the T3 generation. Both T3+ and T3- plants displayed robust FWA methylation and silencing (Supplementary Fig. 13d, e ), as well as the early flowering phenotype (Fig. 3e ). Thus, two generations in the presence of the SunTag NtDRMcd transgene were required for most plants to induce heritable methylation and silencing in this line. Furthermore, T4- plants derived from T3- plants continued to maintain FWA promoter methylation and the early flowering phenotype (Supplementary Fig. 13d, f ). Although more independent events need to be characterized for further studies of methylation heritability, these findings are in line with the known intergenerational silencing effects associated with hairpins and observed at FWA and other loci 27 , 38 , 39 , 40 . In addition to targeted DNA methylation, the scFv-sfGFP-NtDRMcd fusion induced background methylation throughout the genome, mainly in the CHH context, consistent with the reported preference of DRM 37 (Supplementary Figs. 14 and 15a,b ). We also observed chloroplast methylation in the presence of scFv-sfGFP-NtDRMcd (Supplementary Fig. 16a ). Neither the genome-wide CHH hypermethylation nor the chloroplast methylation was retained in plants that had segregated away the transgene (Supplementary Figs. 15a, b, and 16a ).
Moreover, we profiled the methylation levels of fwa plants expressing the SunTag NtDRMcd transgene without a guide. T1 plants expressing this transgene showed no targeted methylation at FWA (Supplementary Fig. 16b ), but again showed widespread background methylation that arises from the non-specific methyltransferase activity of NtDRMcd, as has been seen for targeting with mammalian methyltransferases 41 . To gain a better understanding of NtDRMcd off-target activity, we profiled genome-wide methylation levels in multiple generations of SunTag NtDRMcd g4 + g10 + g18 plants. Genome-wide methylation profiling of T1 plants showed hypermethylation throughout all chromosomes, mainly in the CHH context, with the CHG context being less affected and almost no effect on global CG methylation levels (Fig. 4a ). In the T2 + generation, SunTag DRMcd g4 + g10 + g18 line 1 showed excessive hypermethylation in the CHH and CHG contexts. In contrast, T2 + line 2 showed minimal amounts of hypermethylation, and only in the CHH context (Supplementary Fig. 17a ). These data emphasize that multiple lines should be evaluated in order to avoid those with excessive off-target hypermethylation. Importantly, segregating away the transgene reduced global hypermethylation to similar levels as the fwa control plants (Supplementary Fig. 17a ). These findings were reiterated in T3+/− and T4− plants (Supplementary Fig. 17b ). Fig. 4 Profiling the genome-wide effects of scFv-sfGFP-NtDRMcd activity. a Chromosome-wide metaplots of CG, CHG, and CHH methylation levels in fwa , and 2 independent T1 lines of 22aa SunTag NtDRMcd g4 + g10 + g18. Dashed vertical lines depict the boundaries of chromosomes 1–5. b WGBS tracks for all 3 sequence contexts at the FWA promoter for fwa and 3 independent T1 lines of 22aa SunTag NtDRMcd noNLS g4 + g10 + g18. Black bars indicate the positions of the 3 gRNAs. 
c Chromosome-wide metaplots of CG, CHG, and CHH methylation levels in fwa and 3 independent T1 lines of 22aa SunTag NtDRMcd noNLS g4 + g10 + g18 Full size image We hypothesized that the SV40-type NLS in the scFv-sfGFP-NtDRMcd fusion may contribute to high levels of NtDRMcd in the nucleus leading to off-target DNA methylation. To test this hypothesis, we removed the SV40-type NLS, reasoning that nuclear localization of the scFv-sfGFP-NtDRMcd fusion would mainly occur only upon binding to the dCas9-NLS-10 × GCN4 fusion. Indeed, WGBS analysis of three independent T1 lines revealed that the SunTag NtDRMcd construct lacking the SV40-type NLS induced methylation at FWA , leading to early flowering, with limited effects on the surrounding regions (Fig. 4b and Supplementary Fig. 17c ). Further, metaplots of genome-wide methylation showed that global CHH methylation levels in plants expressing the noNLS-SunTag NtDRMcd construct were similar to the fwa epiallele (Supplementary Fig. 17c and Fig. 4c ). However, the noNLS-SunTag NtDRMcd construct was still able to access and methylate chloroplast DNA (Supplementary Fig. 18a ). Thus, removal of the SV40-type NLS reduces nuclear off-target methylation of the SunTag NtDRMcd system. To test the generality of the SunTag NtDRMcd methylation targeting system, we used two gRNAs to target the floral development gene SUPERMAN ( SUP ) in the ecotype Landsberg erecta (L er ) 42 , 43 . We profiled four independent T1 lines and observed efficient establishment of non-CG methylation (Fig. 5a ), consistent with minimal CG sites in this region 42 . As with FWA , we observed off-target CHH methylation in the regions surrounding SUPERMAN (Supplementary Fig. 18b ). We next utilized our noNLS version of the SunTag NtDRMcd construct to avoid genome-wide hypermethylation. Methylation was successfully targeted to the SUPERMAN promoter (Fig. 
5b ), with the surrounding regions now showing methylation profiles similar to the L er control (Supplementary Fig. 18c ). Chromosome-wide metaplots showed that contrary to SunTag NtDRMcd, the noNLS version of the construct had global methylation levels similar to those observed in L er controls (Fig. 5c, d ). Fig. 5 SunTag-mediated methylation targeting of the SUPERMAN locus. a WGBS tracks for all 3 sequence contexts are shown at the SUP locus for L er and 4 independent T1 lines of 22aa SunTag NtDRMcd NLS SUP. Red bars indicate the positions of the 2 gRNAs targeting SUP . TSS transcription start site, TTS transcription termination site. b WGBS tracks for all 3 sequence contexts are shown at the SUP locus for L er and 3 independent T1 lines of 22aa SunTag NtDRMcd noNLS SUP. c Chromosome 3 metaplots of CG, CHG, and CHH methylation levels in 2 different L er plants and 4 independent T1 lines of 22aa SunTag NtDRMcd NLS SUP. d Chromosome 3 metaplots of CG, CHG, and CHH methylation levels in 2 different L er plants and 3 independent T1 lines of 22aa SunTag NtDRMcd noNLS SUP Full size image Discussion We have established SunTag systems to induce site-specific expression or methylation in Arabidopsis . SunTag VP64 is a highly efficient transcriptional activator that can be used to study the effects of overexpressing specific endogenous loci. In addition, conditional expression of VP64 could be used in the future to illuminate the tissue- and cell type-specific functions of genes. Compared to zinc fingers or TAL effectors fused to transcriptional activators, CRISPR-Cas9-based systems are more specific and efficient for multiplexing 44 , 45 , and simpler to engineer for new targets by simply designing new guide RNAs 5 , 46 . Because DNA methylated genes can be activated, this system provides an alternative to studying DNA methylation mutants, which can have indirect effects. 
We also discovered an interesting phenomenon in which the promoter methylation of FWA was decreased or abolished as a result of activation, and was also reduced at the 5′ ends of EVD and ATR (Fig. 1b, d, e and Supplementary Fig. 9 a–d). This observation suggests that gene expression plays an important role in reducing promoter methylation levels, through a mechanism that directly or indirectly perturbs methylation maintenance in promoter regions. Thus, gene activation and DNA methylation pathways are likely competing to regulate gene expression at promoters. Activation-mediated DNA demethylation further underscores that SunTag VP64 can be used to study the epigenetic regulation of methylated loci without altering global DNA methylation levels. SunTag NtDRMcd-mediated DNA methylation targeting could be a valuable tool in agriculture for creating meiotically heritable epialleles without changing the DNA sequence 47 . As one example, plant genes required to support pathogenic bacteria or viruses could be targeted for promoter methylation and silencing 48 . Pleiotropic effects of silencing could potentially be avoided by fine-tuning the levels of repression caused by different levels of targeted methylation. Thus, the SunTag system is a versatile tool for site-specific manipulation of the epigenome in plants, and may have broad applications as a biotechnology tool. Methods Plant material and selection The Columbia-0 (Col-0) ecotype of Arabidopsis thaliana was used in this study, along with the fwa-4 epiallele, which was isolated from a met1 segregating population 3 . The L er ecotype was used for targeting methylation to the SUPERMAN locus. To obtain transgenic lines, plants were transformed using the Agrobacterium floral dip method 49 .
Transgenic lines were obtained by selecting for hygromycin resistant plants on MS plates containing 0.9% Phytoagar (plantMedia), 1/2 Murashige and Skoog Basal Medium (MP Biomedicals, LLC), and 35 μg/mL Hygromycin B (Invitrogen). For microscopy, root tissue was used for obtaining confocal microscope images (Zeiss). Images were processed using Fiji 50 . SunTag cloning and design Constructs used for activation and methylation targeting were cloned into either the pMOA binary vector 51 or the pEG binary vector 52 . SunTag constructs were adapted from those described in Tanenbaum et al. in order to develop a plant specific SunTag system. Individual modules of the system in the constructs presented in this work were either PCR amplified from Addgene plasmid numbers 60903 and 60904 (gifts from Dr. Ron Vale, University of California, San Francisco) or were synthesized by GenScript. The SunTag system consists of two main modules: dCas9 + epitope (10×GCN4) tail and a single chain antibody (scFv) fused to sfGFP and either VP64 or NtDRMcd. The epitope tail has a defined number of repeats separated by linker regions. In this study we employed 10 repeats of the GCN4 epitope. dCas9 + epitope tail expression was driven by the Arabidopsis UBQ10 promoter that was followed by an omega translational enhancer. scFv module expression was also driven by the UBQ10 promoter in the antisense orientation relative to UBQ10 ::dCas9 + epitope tail. gRNAs were cloned downstream from the scFv module and expressed using the U6 promoter. In order to express multiple gRNAs, each was cloned in tandem along with their independent U6 promoters. All cloning reactions were performed using In-Fusion (Takara) and all necessary components of the SunTag system were in a single binary vector. 
In order to improve the DNA methylation targeting ability of NtDRMcd, we made a 22aa linker version of the dCas9 + epitope tail module in addition to the 5aa version, where the linker length refers to the spacing between epitope repeats. This was done to avoid the effects of steric hindrance, since NtDRMcd is larger than the VP64 fusion. The 22aa linker version of dCas9 + epitope tail was also used for targeting VP64 to FWA with gRNA4 + gRNA17. In addition to adding this linker, we added an extra SV40-type NLS in between the 1xHA tag and the flexible linker separating dCas9 and the epitope tail. An SV40-type NLS was also added to the scFv module for effective nuclear import in Arabidopsis . Plasmids related to this study have been deposited to Addgene for the scientific community (Plasmid numbers: 115480-115490, 117168, 119554, 119672, and 120249–120252). We designed numerous guides in the FWA promoter region. The designed guides were numbered sequentially, and specific ones were chosen for either activation or methylation targeting. The use of specific guides does not indicate that others did not work; for example, the use of gRNA4 does not indicate that gRNAs 1–3 did not work, as gRNA4 was chosen for its position in the FWA promoter. Western blotting To extract total protein, leaf tissue was boiled at 95 °C with 2 × SDS buffer for 5 min. Raw protein extracts were then run on 3-8% Tris-acetate gels (NuPAGE) at 180-200 V. An iBlot gel transfer instrument (Thermo Fisher) was then used to transfer proteins to a PVDF membrane. Ponceau staining was used for visualizing loading controls. Anti-HA-Peroxidase, High Affinity antibodies (Roche, catalog #12 013 819 001, clone BMG-3F10) were used for blotting. qRT-PCR analysis For all qRT-PCR experiments, RNA was first extracted from leaf tissue using the Direct-zol RNA MiniPrep kit (Zymo). 500 ng–1 μg of total RNA was then used for cDNA synthesis using the SuperScript III First-Strand Synthesis Supermix (Invitrogen).
qPCR analysis was then done using the iQ SYBR Green Supermix (Bio-Rad) to detect transcript expression levels. FWA transcripts were detected using oligos 5′-TTAGATCCAAAGGAGTATCAAAG-3′ and 5′-CTTTGGTACCAGCGGAGA-3′. EVD transcripts were detected using the previously reported oligos 5′-GATAGAGGAGATAGAAGATCTACAACTGG-3′ and 5′-CTCTATACTCCGATTCTGCACTCGAACA-3′ 27 . AP3 transcripts were detected using the previously reported oligos 5′-TTTGGACGAGCTTGACATTCAG-3′ and 5′-CGCGAACGAGTTTGAAAGTG-3′ 30 . CLV3 transcripts were detected using the previously reported oligos 5′-GTTCAAGGACTTTCCAACCGCAAGATGAT-3′ and 5′-CCTTCTCTGCTTCTCCATTTGCTCCAACC-3′ 53 . All transcripts were normalized to the housekeeping gene ISOPENTENYL PYROPHOSPHATE:DIMETHYLALLYL PYROPHOSPHATE ISOMERASE 2 ( IPP2 ). IPP2 transcripts were detected using oligos 5′-GTATGAGTTGCTTCTCCAGCAAAG-3′ and 5′-GAGGATGGCTGCAACAAGTGT-3′. McrBC assay To detect differences in methylation using the endonuclease McrBC (NEB), genomic DNA was first extracted using either a CTAB-based method or the DNeasy Plant Mini Kit (Qiagen). 100 ng of genomic DNA was then digested for 4 h at 37 °C. Water was added instead of McrBC for undigested control samples. After digestion, qPCR analysis was done using the iQ SYBR Green Supermix (Bio-Rad) to quantify differences in methylation. FWA promoter region sequences were detected using oligos 5′-TTGGGTTTAGTGTTTACTTG-3′ and 5′-GAATGTTGAATGGGATAAGGTA-3′. RNA-seq library preparation and analysis For transcriptome analyses, RNA was first extracted from leaf tissue using the Direct-zol RNA MiniPrep kit (Zymo). 1 μg of total RNA was used to prepare libraries for sequencing using the TruSeq Stranded mRNA-seq kit (Illumina). Libraries for SunTag VP64 targeting EVD were prepared using the TruSeq Stranded Total RNA with Ribo-Zero kit (Illumina). For sequencing of SunTag VP64 targeting FWA and SunTag NtDRMcd targeting FWA , we obtained single-end 50 bp reads. 
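The qRT-PCR and McrBC quantifications described above rest on standard qPCR arithmetic. The sketch below is a hedged assumption: the paper reports IPP2-normalized fold changes and digested-versus-undigested McrBC comparisons but does not spell out formulas, so the 2^-ddCt form and the Ct-shift methylation estimate are conventional interpretations, not the authors' stated method.

```python
# Standard qPCR arithmetic assumed (not spelled out in the text) behind the
# IPP2-normalized expression fold changes and the McrBC methylation assay.

def fold_change(ct_target, ct_ipp2, ref_ct_target, ref_ct_ipp2):
    """Relative expression by the 2^-ddCt method: the target Ct is
    normalized to the IPP2 housekeeping gene in both samples."""
    ddct = (ct_target - ct_ipp2) - (ref_ct_target - ref_ct_ipp2)
    return 2.0 ** -ddct

def mcrbc_methylation(ct_digested, ct_undigested):
    """Estimated methylated fraction: McrBC cuts methylated DNA, so the
    template lost to digestion (seen as a Ct increase) is methylated."""
    retained = 2.0 ** -(ct_digested - ct_undigested)
    return 1.0 - retained
```

Under this model, a one-cycle Ct shift after McrBC digestion corresponds to roughly half of the template being methylated.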
For sequencing of SunTag VP64 targeting EVD , we selected for larger fragment sizes and obtained paired-end 100 bp reads. Sequencing reads were first aligned to the TAIR10 gene annotation dataset using TopHat (2 mismatches allowed) 54 . We used the one max multihits option (-g 1) and --no-coverage-search. If reads did not map to genes, they were aligned to the TAIR10 genome. Read counts were obtained by implementing HTSeq 55 and subsequent differential expression analyses were done using DESeq (Bioconductor) and custom R scripts. ChIP T2 SunTag VP64 gRNA4 and Col-0 control plants were first grown on MS plates for 2 weeks, and 2 grams of tissue were then collected per sample. After grinding the tissue, samples were crosslinked in 1% formaldehyde, chromatin was extracted, and later sonicated using a Bioruptor Plus (Diagenode). Immunoprecipitations were performed using mouse monoclonal anti-HA.11 epitope tag antibodies (clone 16B12, Covance catalog #MMS-101R). Chromatin-protein complexes were isolated with a 1:1 mix of Protein A and Protein G Dynabeads (Invitrogen) for 3 h at 4 °C. Beads were washed with low salt buffer (2×), high salt buffer, LiCl buffer, and TE buffer, and complexes were eluted with elution buffer (2 × 20 min at 65 °C). DNA-protein complexes were reverse-crosslinked overnight at 65 °C followed by proteinase K treatment at 45 °C for 5 h. DNA was purified using phenol:chloroform, followed by NaOAc/EtOH precipitation along with GlycoBlue (Invitrogen) overnight at −20 °C. DNA was washed with 70% EtOH and resuspended with water. For ChIP-qPCR, the ACT7 locus was detected using the oligos 5′-AGCACGGATCGAATCACATA-3′ and 5′-CTCGCTGCTTCTCGAATCTT-3′. For detection of the FWA locus, oligos 5′-AAGAGTTATGGGCCGAAGC-3′ and 5′-CGCTCGTATGAATGTTGAATG-3′ were used. Libraries were prepared using the Ovation Ultralow kit (NuGen). ChIP-seq analysis was done by uniquely aligning single-end 50 bp reads to the TAIR10 genome using Bowtie 56 allowing two mismatches (-v 2). 
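As a small illustration of how ChIP-qPCR signals such as the FWA and ACT7 amplicons above are commonly quantified, the percent-of-input calculation can be sketched as follows. The input fraction and Ct values are hypothetical (the paper does not specify its quantification formula), and perfect per-cycle doubling is assumed.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """ChIP-qPCR signal expressed as a percentage of input chromatin.

    ct_input is measured on a small input aliquot (input_fraction of the
    chromatin used for the IP), so it is first adjusted to 100% input.
    """
    ct_input_100 = ct_input - math.log2(1.0 / input_fraction)  # e.g. 1% input: subtract log2(100)
    return 100.0 * 2.0 ** (ct_input_100 - ct_ip)

# Hypothetical: an IP Ct of 24 versus a 1% input Ct of 26
enrichment = percent_input(ct_ip=24.0, ct_input=26.0)
```

Comparing the percent-input values at FWA against a control locus such as ACT7 gives a relative enrichment, which is the quantity of interest for dCas9 binding.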
Subsequently, peaks were called using MACS2 57 with default parameters. We identified 3 peaks, including FWA , at FDR 5% and above five-fold enrichment. An off-target peak from within this set of 3 peaks was defined by the presence of a potential gRNA binding site in proximity to a called MACS2 peak. We identified one major off-target peak for gRNA4 on chromosome 4. WGBS library preparation and analysis For the preparation of WGBS libraries, genomic DNA was first extracted from leaves and inflorescence tissue (for L er samples) using the DNeasy Plant Mini Kit (Qiagen). 100 ng of DNA was then used for subsequent shearing using a Covaris S2 Focused Ultrasonicator. Libraries were then prepared using either the Ovation Ultralow Methyl-Seq kit (NuGen) in conjunction with the EpiTect Bisulfite Kit (Qiagen), or the Hyper Prep Kit (KAPA Biosystems) in conjunction with either the EZ DNA Methylation-Lightning Kit (Zymo) or the EpiTect Bisulfite Kit (Qiagen). Single-end 50 bp reads were then uniquely aligned to the TAIR10 genome using BS-Seeker2 58 . Methylation levels were then calculated for the CG, CHG, and CHH contexts. A filter was implemented to remove reads with three or more consecutively methylated cytosines in the CHH context, as previously described 59 . Metaplots of BS-seq data were generated with custom Python and R scripts. For methylation calculations over individual chromosomes, each chromosome was split into 100 kb bins. Methylation values were then calculated from these bins. Code availability Custom code/scripts used in this study are available upon request. Custom code/scripts used in this study for generating methylation metaplots have been deposited on GitHub ( ). Reporting summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Data availability The data supporting the findings of this study are available within the article and its Supplementary Information. 
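The CHH read filter and the per-bin methylation calculation described above can be sketched as follows. This is a simplified illustration of the logic (boolean per-site calls, weighted methylation over a bin), not the authors' pipeline.

```python
import numpy as np

def keep_read(chh_calls, max_run=3):
    """Drop a read carrying max_run or more consecutively methylated CHH sites,
    mirroring the filter for likely bisulfite-unconverted reads."""
    run = 0
    for methylated in chh_calls:  # booleans for successive CHH positions on one read
        run = run + 1 if methylated else 0
        if run >= max_run:
            return False
    return True

def weighted_methylation_level(meth_counts, total_counts):
    """Weighted methylation over a bin: methylated calls / total calls."""
    meth_counts = np.asarray(meth_counts, dtype=float)
    total_counts = np.asarray(total_counts, dtype=float)
    return meth_counts.sum() / total_counts.sum()
```

In practice the counts come from BS-Seeker2 per-cytosine output, summed within each context (CG, CHG, CHH) and each 100 kb bin.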
High throughput sequencing data has been deposited in the Gene Expression Omnibus (GEO) database and can be accessed with the accession number GSE125230. The source data of Figs. 1f and 2c, and Supplementary Figs. 1c, 2a, 2b, 2c, 2e, 5a, 7, 11a, 12c, 13a, and 13e are provided as a Source Data file. All data are available from the corresponding author upon reasonable request. A reporting summary for this Article is available as a Supplementary Information file. | UCLA biologist Steve Jacobsen's research has the potential to significantly improve crops. Jacobsen, who is a professor of molecular, cell and developmental biology, specializes in plant epigenetics—the study of how a gene's function can change without changes to the DNA sequence—and his research could lead to more resilient crops. "Epigenetic science has many applications, with one of the most promising areas being agriculture," said Jacobsen, an investigator of the Howard Hughes Medical Institute. Jacobsen is also scientific co-founder of the company Inari, which has licensed plant breeding patents he developed at UCLA. Inari is a plant-breeding company that equips crops to be more resilient to climate change, and is enhancing plant breeding by tapping natural genetic diversity. Decades of intensive breeding for desirable characteristics, such as higher yield or resistance to specific diseases, has increased our food supply, but has also led to genetic uniformity in many crops. In some cases, this means the loss of natural resistance to disease compared with their more genetically diverse wild relatives. This loss could leave our food supply vulnerable to future stressors, including those induced by climate change, at a time when the global population is projected to see considerable growth. Inari is working to discover and re-introduce these genes, so crops can exhibit natural resilience while meeting the nutritional demands of a growing worldwide population. 
The company has introduced the world's first seed foundry as part of its mission to revolutionize the seed industry. The licensing agreement provides Inari with new ways to improve plant performance by tapping natural genetic diversity, and provides access to technology that influences a plant's genes without altering its genetic code. "Discoveries that take place in our laboratories directly help solve global issues, and the fragility of the food system has been an issue of concern for some time now," said Roger Wakimoto, UCLA vice chancellor for research. "Through Inari, we're able to apply high-impact research and scientific techniques to the private sector and watch the benefits unfold." Inari is currently developing its first wave of commercial crop varieties, including corn, soy and wheat. "Collaboration and partnerships drive change that addresses the critical problems we face globally in agriculture," said Ponsi Trivisvavet, CEO of Inari. "Licensing this technology from UCLA provides us with a robust new approach that strengthens our efforts to create a winning food system." Jacobsen's epigenetics research, which was funded in part by the Bill & Melinda Gates Foundation, was published online Feb. 7 in the journal Cell, and published today in the journal Nature Communications. These research papers, along with earlier work from Jacobsen's laboratory, describe the inner workings of epigenetic pathways in plants, and describe tools that allow for "precise changes in gene expression through modulation of epigenetics," Jacobsen said. The Cell research describes, among other findings, various proteins in the plant Arabidopsis, and how they can be used to target DNA methylation. Jacobsen's research team explains in detail exactly how specific biological pathways work. 
(DNA methylation is a process by which a methyl group is added to DNA, and is important in regulating genes.) The Nature Communications paper describes the development of a system based on CRISPR—a powerful tool for editing DNA sequences at specific locations and modifying the functions of genes—to target methylation and gene silencing much more precisely than ever before, and describes a system for targeting the activation of genes using a CRISPR system. A paper by Jacobsen's laboratory published in the journal PNAS in 2018 describes another scientific tool, also based on a CRISPR system, to target the precise removal of DNA methylation at a gene, which causes the gene to be activated. | 10.1038/s41467-019-08736-7 |
Physics | Large-scale phase retrieval | Xuyang Chang et al, Large-scale phase retrieval, eLight (2021). DOI: 10.1186/s43593-021-00004-w | http://dx.doi.org/10.1186/s43593-021-00004-w | https://phys.org/news/2021-09-large-scale-phase.html | Abstract High-throughput computational imaging requires efficient processing algorithms to retrieve multi-dimensional and multi-scale information. In computational phase imaging, phase retrieval (PR) is required to reconstruct both amplitude and phase in complex space from intensity-only measurements. The existing PR algorithms suffer from the tradeoff among low computational complexity, robustness to measurement noise and strong generalization on different modalities. In this work, we report an efficient large-scale phase retrieval technique termed LPR . It extends the plug-and-play generalized-alternating-projection framework from real space to nonlinear complex space. The alternating projection solver and enhancing neural network are respectively derived to tackle the measurement formation and statistical prior regularization. This framework compensates the shortcomings of each operator, so as to realize high-fidelity phase retrieval with low computational complexity and strong generalization. We applied the technique to a series of computational phase imaging modalities including coherent diffraction imaging, coded diffraction pattern imaging, and Fourier ptychographic microscopy. Extensive simulations and experiments validate that the technique outperforms the existing PR algorithms with as much as 17dB enhancement on signal-to-noise ratio, and more than one order-of-magnitude increased running efficiency. Besides, we for the first time demonstrate ultra-large-scale phase retrieval at the 8K level ( 7680\times 4320 pixels) in minute-level time. 
1 Introduction Wide field of view and high resolution are both desirable for various imaging applications, such as medical imaging [ 1 , 2 , 3 , 4 ] and remote sensing [ 5 ], providing multi-dimensional and multi-scale target information. With the recent development of computational imaging, large-scale detection has been widely employed in a variety of computational imaging modalities [ 3 , 4 , 6 , 7 ]. These computational imaging techniques largely extend the spatial-bandwidth product (SBP) [ 8 ] of optical systems from million scale to billion scale. As an example, the SBP of the real-time, ultra-large-scale, high-resolution (RUSH) platform [ 4 ] and the Fourier ptychographic microscopy (FPM) [ 3 ] have reached as high as 10^8–10^9. Such a large amount of data poses a great challenge for post software processing. Therefore, large-scale processing algorithms with low computational complexity and high fidelity are of great significance for those imaging and perception applications in various dimensions [ 9 ]. In computational phase imaging, phase retrieval (PR) is required to reconstruct both amplitude and phase in complex space from intensity-only measurements. This problem originates from the limited response speed of photodetectors, which impedes direct acquisition of the light wavefront. 
Mathematically, the underlying goal of PR is to estimate an unknown complex-field signal from the intensity-only measurements of its complex-valued transformation, which is described as \begin{aligned} {I}=|{\varvec{A}} u|^2+\omega , \end{aligned} (1) where u is the underlying signal to be recovered \left( u \in {\mathbb {C}}^{n \times 1}\right) , I contains the intensity-only measurements \left( I \in {\mathbb {R}}^{m \times 1}\right) , {\varvec{A}} represents the measurement matrix \left( {\varvec{A}} \in {\mathbb {R}}^{m \times n} \text{ or } {\mathbb {C}}^{m \times n}\right) , and \omega stands for measurement noise. Phase retrieval has been widely applied in plenty of fields such as astronomy, crystallography, electron microscopy and optics [ 10 ]. It solves various nonlinear inverse problems in optical imaging, such as coherent diffraction imaging [ 11 ] (CDI), coded diffraction pattern imaging [ 12 ] (CDP), Fourier ptychographic microscopy [ 3 ] (FPM) and imaging through scattering media [ 13 ]. In the past few decades, different phase retrieval algorithms have been developed. Gerchberg and Saxton pioneered the earliest alternating projection (AP) algorithm in the 1970s [ 14 ], which was then extended by Fienup et al. with several variants [ 15 ]. Due to its strong generalization ability, AP has been widely employed in multiple phase imaging models. Nevertheless, it is sensitive to measurement noise, suffering from poor noise robustness. Afterwards, researchers introduced optimization into PR, deriving a series of semi-definite programming (SDP) based algorithms [ 16 , 17 ] and Wirtinger flow (WF) based algorithms [ 18 , 19 , 20 ]. 
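For concreteness, the classical alternating-projection scheme referenced above can be sketched for the Fourier-magnitude case. This is a minimal error-reduction loop with a known object support, not the authors' implementation.

```python
import numpy as np

def alternating_projection(magnitudes, support, n_iter=200, seed=0):
    """Gerchberg-Saxton / Fienup-style error reduction for I = |F(u)|^2.

    Alternates between (1) replacing the Fourier magnitude of the estimate
    with the measured one and (2) enforcing a real, non-negative object
    confined to the known support."""
    rng = np.random.default_rng(seed)
    x = rng.random(support.shape) * support          # random start inside the support
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitudes * np.exp(1j * np.angle(X))    # Fourier-magnitude projection
        x = np.real(np.fft.ifft2(X))
        x = np.clip(x, 0.0, None) * support          # object-domain projection
    return x
```

As the text notes, this projection scheme generalizes easily across imaging models but has no explicit noise model, which is why it degrades under measurement noise.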
These techniques enhance robustness to measurement noise, but they require high computational complexity and a high sampling rate, making them inapplicable for large-scale phase retrieval. Although the sparsity prior of natural images in transformed domains can be incorporated as an additional constraint to lower the sampling rate [ 21 , 22 ], it further increases computational complexity. Although these algorithms can theoretically employ patch-wise [ 23 ] and parallel strategies to deal with large-scale data, such a manner imposes a heavier memory load. In the last few years, the booming deep learning (DL) technique has also been introduced for phase retrieval [ 24 ]. Following the large-scale training framework, the DL strategy outperforms the above traditional PR techniques with higher fidelity. However, it generalizes poorly: each network suits only a specific model, such as holography [ 24 ] or FPM [ 25 ]. For different models and even different system parameters, the deep neural network needs to be retrained with new large-scale data sets. Recently, the prDeep technique [ 26 ] integrates iterative optimization and deep learning together, benefiting from their respective advantages. However, prDeep cannot recover complex-domain signals, leading to limited applications in practice. In sum, despite their different workflows, the above existing PR algorithms suffer from the tradeoff among low computational complexity, robustness to measurement noise and strong generalization, making them inapplicable for general large-scale phase retrieval. In this work, we report an efficient large-scale phase retrieval technique termed LPR , as sketched in Fig. 1 . It builds on the plug-and-play (PNP) [ 27 ] optimization framework, and extends the efficient generalized-alternating-projection (GAP) [ 9 , 28 , 29 ] strategy from real space to nonlinear complex space. 
The complex-field PNP-GAP scheme ensures strong generalization of LPR on various imaging modalities, and outperforms the conventional first-order PNP techniques (such as ADMM [ 27 ], ISTA [ 30 ] and FISTA [ 31 ] used in prDeep) with fewer auxiliary variables, lower computational complexity and faster convergence. As PNP-GAP decomposes reconstruction into separate sub-problems including measurement formation and statistical prior regularization [ 9 , 32 ], we further introduce an alternating projection solver and an enhancing neural network respectively to solve the two sub-problems. These two solvers compensate the shortcomings of each other, allowing the optimization to bypass the poor generalization of deep learning and the poor noise robustness of AP. As a result, LPR enables generalized large-scale phase retrieval with high fidelity and low computational complexity, making it a state-of-the-art method for various computational phase imaging applications. Fig. 1 The schematic of the reported LPR technique for large-scale phase retrieval. LPR decomposes the large-scale phase retrieval problem into two subproblems under the PNP-GAP framework, and introduces the efficient alternating projection (AP) and enhancing network solvers for alternating optimization. The workflow realizes robust phase retrieval with low computational complexity and strong generalization on different imaging modalities. We compared LPR with the existing PR algorithms on extensive simulations and experiments of different imaging modalities. The results validate that compared to the AP based PR algorithms, LPR is robust to measurement noise with as much as 17dB enhancement on signal-to-noise ratio. Compared with the optimization based PR algorithms, the running time is significantly reduced by more than one order of magnitude. 
Finally, we for the first time demonstrated ultra-large-scale phase retrieval at the 8K level ( 7680 \times 4320 pixels) in minute-level time, where most of the other PR algorithms failed due to unacceptably high computational complexity. 2 Results We applied LPR and the existing PR algorithms on both simulation and experiment data of three computational phase imaging modalities including CDI, CDP and FPM, to investigate their respective pros and cons. The competing algorithms for comparison include the alternating projection technique (AP) [ 14 , 15 ], the SDP based techniques (PhaseMax (PMAX) [ 33 ], PhaseLift (PLIFT) [ 16 ], PhaseLamp (PLAMP) [ 34 ]), the Wirtinger flow based techniques (Wirtinger Flow (WF) [ 18 ], Reweighted Wirtinger Flow (RWF) [ 35 ]), the amplitude flow based techniques [ 36 , 37 ] (AmpFlow (AF), Truncated AmpFlow (TAF), Reweighted AmpFlow (RAF)), Coordinate Descent (CD) [ 38 ], Kaczmarz (KAC) [ 39 ], prDeep [ 26 ] and the deep learning technique (DL) [ 24 ]. Most of these algorithms' parameters were tuned based on PhasePack [ 40 ] to achieve the best performance. Convergence is determined when the intensity difference of the reconstructed image between two successive iterations is smaller than a preset threshold. We employed the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) [ 41 ] and root mean squared error (RMSE) to quantify reconstruction quality. All the calculation was tested on a desktop PC with an Intel i7-9700 CPU, 16G RAM and an Nvidia GTX 1660s GPU. 2.1 Coherent diffraction imaging CDI is a representative non-interferometric phase imaging technique, and has been widely applied in physics, chemistry and biology due to its simple setup [ 10 ]. It illuminates a target using coherent plane waves, and records the intensity of the far-field diffraction pattern. 
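The evaluation metrics mentioned above are standard; for reference, PSNR and RMSE reduce to a few lines of Python (SSIM [ 41 ] is more involved and is omitted here):

```python
import numpy as np

def rmse(x, ref):
    """Root mean squared error between a reconstruction and its ground truth."""
    x, ref = np.asarray(x, dtype=float), np.asarray(ref, dtype=float)
    return np.sqrt(np.mean((x - ref) ** 2))

def psnr(x, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB for images on a [0, peak] scale."""
    return 10.0 * np.log10(peak ** 2 / rmse(x, ref) ** 2)
```

These definitions assume images normalized to a known peak value; the peak convention must match between compared methods for the dB figures to be comparable.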
By oversampling the diffracted light field and applying phase retrieval, both the target’s amplitude and phase information can be reconstructed. Mathematically, the measurement formation of CDI is \begin{aligned} I = |{\mathcal {F}}(u)|^{2}, \end{aligned} (2) where u denotes the target information, and {\mathcal {F}} represents the Fourier transformation that approximates the far-field diffraction. Following the above formation model, we employed a high-resolution image ( 1356 \times 2040 pixels) from the DIV2K [ 42 ] dataset and an onion cell image [ 43 ] as the latent real-domain signals to synthesize two groups of CDI measurements. Because the prDeep technique for comparison is only applicable in the real domain [ 26 ], we did not introduce phase into the latent signals. To guarantee a unique solution, CDI requires at least 4 times oversampling in the Fourier domain [ 44 ]. Correspondingly, we padded zeros around the image matrix to generate a 2712 \times 4080 image. We implemented the Fourier transform of the image and retained only its intensity as measurements. Additionally, to investigate the techniques’ robustness to measurement noise, we further added different levels of white Gaussian noise (WGN) to the measurements. Table 1 presents the quantitative reconstruction evaluation of different techniques. The results show that the CD and KAC methods failed with no convergence, because these techniques require a higher sampling ratio. The PLIFT and PLAMP methods do not work either, because they require matrix lifting and involve a higher dimensional matrix that runs out of memory in large-scale reconstruction (Additional file 1: Fig. S1 shows the memory requirements of different algorithms under different image sizes). The other methods except for prDeep obtain little improvement compared to the AP algorithm. 
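The measurement synthesis just described (zero padding to four-fold oversampling, intensity-only detection, additive WGN at a prescribed SNR) can be sketched as follows. The exact padding layout and noise scaling used in the paper may differ; this version centers the object and defines SNR on the signal variance.

```python
import numpy as np

def synthesize_cdi(img, snr_db, seed=0):
    """Simulate CDI measurements: pad to 2x per axis (4x oversampling in area),
    take |F(u)|^2, then add white Gaussian noise at the requested SNR."""
    h, w = img.shape
    u = np.zeros((2 * h, 2 * w))
    u[h // 2:h // 2 + h, w // 2:w // 2 + w] = img   # zeros padded around the object
    intensity = np.abs(np.fft.fft2(u)) ** 2
    rng = np.random.default_rng(seed)
    noise_var = intensity.var() / 10.0 ** (snr_db / 10.0)  # assumed SNR definition
    return intensity + rng.normal(0.0, np.sqrt(noise_var), intensity.shape)
```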
Specifically, the WF, AF and PMAX methods even degrade due to the limited sampling ratio and noise corruption. The reconstruction of prDeep is better than that of the conventional algorithms, but with only 2dB enhancement on PSNR, and almost no SSIM improvement compared to AP. In contrast, LPR produces significant enhancement on reconstruction quality, with as much as 6dB and 0.29 improvement on PSNR and SSIM, respectively. Due to limited space, the results of another set of simulation are presented in Additional file 1: Table S1 and Figs. S2 and S3, which coincide with the above quantitative results. Table 1 also presents the running time of these techniques. Because all the other algorithms used the result of AP as initialization, we recorded the excess time as the running time of these algorithms. From the results, we can see that prDeep consumes the most running time. LPR takes the same level of running time as the conventional algorithms, but with significantly improved reconstruction quality. Table 1 Quantitative comparison under the CDI modality We further compared these algorithms on experiment CDI data [ 45 ], to validate their effectiveness in practical applications. The imaging sample is the live glioblastoma cell line U-87 MG. The setup includes a HeNe laser (543 nm, 5 mW), a dual-pinhole aperture that consists of two 100 μm pinholes spaced 100 μm apart from edge to edge, a 35 mm objective lens and a CCD camera ( 1340 \times 1300, 16 bits). Although the in situ CDI modality used dual-pinhole illumination that is slightly different from the standard CDI, its reconstruction is still a phase retrieval task in essence. The sequential measurements contain far-field diffraction patterns of several moments in the cell fusion process. Because the conventional algorithms obtain little improvement compared to AP, and prDeep is not applicable for complex-field samples [ 26 ], we only present the reconstruction results of AP and LPR in Fig. 2 . 
The results show that there exist serious noise artifacts in the AP reconstruction, especially in the amplitude images. The cells are almost submerged by background noise at 0 and 135 min, and the contours and edges of cells cannot be clearly observed. In comparison, LPR produces high-fidelity results that effectively preserve fine details while attenuating measurement noise. The complete results of all the 48 moments are shown in Additional file 1: Figs. S4, S5, S6 and S7. Fig. 2 Comparison of experiment results under the CDI modality [ 45 ]. A dual-pinhole aperture is illuminated by coherent light. A live glioblastoma cell sample is imaged in a time series of diffraction patterns. The reconstructed results describe the fusion process of two glioblastoma cells forming a high-density area. The AP technique is sensitive to measurement noise, and produces unsatisfying results. The reported LPR technique is able to remove noise artifacts and preserve fine details with high fidelity. 2.2 Coded diffraction pattern imaging CDP [ 12 ] is a coded version of CDI, which introduces wavefront modulation to increase observation diversity. The strategy of multiple modulations and acquisitions effectively bypasses the oversampling limitation of the conventional CDI. Generally, the target light field is additionally modulated by a spatial light modulator (SLM), and the measurements after far-field Fraunhofer diffraction can be modeled as \begin{aligned} I=|{\mathcal {F}}(u\odot d)|^{2}, \end{aligned} (3) where d represents the modulation pattern, and \odot denotes the Hadamard product. We simulated CDP measurements with five phase modulations and with a single phase modulation, respectively. The modulation patterns d follow a Gaussian distribution [ 12 ]. We employed the same image as in CDI as the ground-truth signal (real domain), and added various levels of WGN to the measurements. 
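This measurement formation, K Gaussian modulation patterns each followed by far-field intensity detection, can be sketched as below. The i.i.d. complex-Gaussian masks are an illustrative choice consistent with the Gaussian-pattern assumption above.

```python
import numpy as np

def synthesize_cdp(u, n_patterns=5, seed=0):
    """Coded diffraction patterns: I_k = |F(u . d_k)|^2 with random Gaussian masks d_k."""
    rng = np.random.default_rng(seed)
    patterns, measurements = [], []
    for _ in range(n_patterns):
        d = (rng.standard_normal(u.shape)
             + 1j * rng.standard_normal(u.shape)) / np.sqrt(2)
        patterns.append(d)
        measurements.append(np.abs(np.fft.fft2(u * d)) ** 2)  # Hadamard product, then far field
    return measurements, patterns
```

Each extra pattern adds an independent set of magnitude constraints, which is what relaxes the oversampling requirement of plain CDI.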
Table 2 presents the quantitative evaluation of different techniques under the CDP modality (5 modulations). The results show that the Wirtinger flow based techniques (WF and RWF) failed because of insufficient measurements [ 18 ]. The PLIFT and PLAMP methods are still out of memory. The other conventional methods produce either little improvement or even worse reconstructions compared to AP. Although prDeep outperforms AP, it consumes around triple the running time with high computational complexity. In comparison, the reported LPR obtains the best reconstruction performance, with as much as 8.3dB improvement on PSNR and 0.61 on SSIM. Besides, it shares the same level of running time as AP, maintaining the highest efficiency among all the algorithms. The detailed visual comparison of different methods is presented in Additional file 1: Fig. S8. Table 2 Quantitative comparison under the CDP modality (5 modulations) To further demonstrate the strong reconstruction performance of LPR , we also compared these algorithms in the case of a limited sampling ratio with only a single modulation, as shown in Table 3 and Fig. 3 . Due to extremely insufficient measurements, most of the methods failed with either no convergence or poor reconstruction quality. Under heavy measurement noise, the target information is either buried or smoothed. In contrast, the reported LPR technique enables as much as 17dB enhancement on PSNR and 0.8 improvement on SSIM. As validated by the close-ups in Fig. 3 , LPR is able to retrieve fine details, even in the case of heavy measurement noise. Meantime, it effectively attenuates noise and artifacts, producing a smooth background. Table 3 Quantitative comparison under the CDP modality (single modulation) Fig. 3 Visual comparison under the CDP imaging modality (single modulation). In such a low sampling ratio with measurement noise, all the conventional algorithms produce low-contrast results. 
The prDeep technique also produces serious reconstruction artifacts. The reported LPR technique outperforms the other methods with much higher fidelity. To further illustrate the computational complexity of different techniques, we show the computation time as a function of image size in Additional file 1: Fig. S9. We can see that as the image size increases, LPR obtains a lower computational complexity than prDeep. 2.3 Fourier ptychographic microscopy FPM is a novel technique to increase an optical system’s bandwidth for wide-field and high-resolution imaging. It illuminates the target with coherent light at different incident angles, and acquires corresponding images that contain information of different sub-regions of the target’s spatial spectrum. Mathematically, the measurement formation model of FPM is \begin{aligned} I=\left| {\mathcal {F}}^{-1}[P \odot {\mathcal {F}}\{u \odot {\mathcal {S}}\}]\right| ^{2}, \end{aligned} (4) where {\mathcal {F}}^{-1} is the inverse Fourier transform, P denotes the system’s pupil function, and {\mathcal {S}} represents the wave function of the incident light. Following the formation model, we first implemented a simulation comparison with the following setup parameters: the wavelength is 625nm, the numerical aperture (NA) of the objective lens is 0.08, the height from the light source to the target is 84.8mm, and the distance between adjacent light sources is 4mm. The pixel size of the camera is 3.4 μm. Two microscopy images of blood cells [ 46 ] ( 2048 \times 2048 pixels) were employed as the latent high-resolution (HR) amplitude and phase, respectively. The size of the captured low-resolution (LR) images was one fourth of that of the HR images. 
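Eq. (4) can be sketched per illumination angle as below. Camera-plane downsampling and noise are omitted, and the frequency-shift bookkeeping is illustrative rather than matched to the paper's setup geometry.

```python
import numpy as np

def fpm_measurement(u, pupil, kx=0.0, ky=0.0):
    """One FPM low-resolution intensity image (Eq. 4): a tilted plane wave S
    shifts the object spectrum, the pupil P low-passes it, and the camera
    records the intensity of the filtered field."""
    h, w = u.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    s = np.exp(1j * 2.0 * np.pi * (kx * xx / w + ky * yy / h))   # incident wave S
    spectrum = np.fft.fftshift(np.fft.fft2(u * s))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(pupil * spectrum))) ** 2
```

Sweeping (kx, ky) over the LED positions tiles the target's spatial spectrum with overlapping pupil-sized patches, which is what FPM later stitches together in reconstruction.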
Figure 4 presents the reconstruction results of AP [ 3 ], WF [ 47 ], deep learning (DL) [ 24 ] and LPR . For the DL technique, we used the result of the AP algorithm as the network’s input, and the network output the enhanced reconstruction results. In the training process, we used 20,000 images (10,000 each for amplitude and phase) from the PASCAL Visual Object Classes dataset [ 48 ] and the DIV2K dataset [ 42 ], and trained the network individually for different noise levels. From the results, we can see that AP is sensitive to measurement noise. WF handles noise better, but it requires high computational complexity and a running time that is more than one order of magnitude longer. Although DL consumes the least inferring time and outperforms the AP and WF methods, its reconstruction quality is still worse than that of LPR in the presence of measurement noise. Compared with AP, LPR obtains nearly 10dB enhancement on PSNR (SNR = 10). Besides, it consumes the same order of running time as AP. The visual comparison also validates that LPR enables high-fidelity reconstruction of both amplitude and phase. Due to space limitation, we present the other two sets of simulation results in Additional file 1: Figs. S10 and S11. Fig. 4 Comparison of simulation results under the FPM modality. The left table presents quantitative comparison, while the right images show visual comparison. AP suffers from poor noise robustness. WF requires high computational complexity with a running time that is more than one order of magnitude longer. Although the deep learning technique consumes the least running time and outperforms the AP and WF methods, its reconstruction quality is still worse than that of LPR in the presence of measurement noise. In contrast, LPR produces the highest reconstruction quality with nearly 10dB enhancement on PSNR (SNR = 10) and consumes the same order of running time as AP. We also implemented the algorithms on experiment FPM measurements. 
The imaging sample is a blood smear stained by HEMA 3 Wright-Giemsa. The setup consists of a 15 \times 15 LED array, a 2\times 0.1 NA objective lens (Olympus), and a camera with 1.85 μm pixel size. The central wavelength of the LEDs is 632nm, and the lateral distance between adjacent LEDs is 4mm. The LED array is placed 80mm from the sample. We captured two sets of 225 LR images that correspond to the 15 \times 15 LEDs, respectively under 1ms and 0.25ms exposure time. The reconstructed results are presented in Fig. 5 , which shows that AP is seriously degraded under limited exposure. Only the cell nucleus can be observed in amplitude, and other details are lost. LPR produces state-of-the-art reconstruction performance. The measurement noise is effectively removed, and the cell structure and morphology details are clearly retrieved. Fig. 5 Comparison of experiment results under the FPM modality. The target is a red blood cell sample that is prepared on a microscope slide stained with the Hema 3 stain set (Wright-Giemsa). The limited exposure results in serious measurement noise, which directly flows into the reconstruction results of AP. The WF technique outperforms AP, but it still degrades a lot under a short exposure time (0.25ms). The reported LPR technique maintains strong robustness to measurement noise, and is able to retrieve clear cell structure and morphology details. 2.4 Ultra-large-scale phase retrieval In ultra-large-scale imaging applications such as 4K ( 4096 \times 2160 pixels) or 8K ( 7680 \times 4320 pixels), most reconstruction algorithms are not applicable due to either extremely large memory requirements or extremely long running time. Nevertheless, the reported LPR technique still works well in such applications. 
As a demonstration, we implemented a simulation of 8K-level CDP (5 modulations), using an 8K outer-space color image as the real-domain ground truth (released by NASA using the Hubble Telescope). Its spatial resolution is 7680 \times 4320 (each color channel), with in total 33.1 million pixels. We simulated intensity-only measurements individually for the different RGB channels, and the reconstruction was also implemented separately for each channel. Figure 6 presents the reconstruction results of AP and LPR, with the input SNR being 5 dB. The close-ups show that the result of AP is drowned out by measurement noise, leading to dimness and loss of target details. In comparison, LPR substantially outperforms AP with strong robustness. Both running times lie at the minute level. Another set of 8K reconstruction results is shown in Additional file 1 : Fig. S12. Fig. 6 The first demonstration of ultra-large-scale phase retrieval at the 8K level ( 7680 \times 4320 \times 3 pixels). The imaging modality is CDP with 5 modulations. At such a large scale, only the AP and the reported LPR techniques still work, while the other ones fail due to high computational complexity. The results validate that LPR significantly outperforms AP with effective noise removal and detail preservation. 3 Methods Following optimization theory, the phase retrieval task can be modeled as \begin{aligned} {\hat{u}}=\arg \min _{u} f(u)+\lambda g(u), \end{aligned} (5) where u denotes the target complex field to be recovered, f ( u ) is a data-fidelity term that ensures consistency between the reconstructed result and measurements, and g ( u ) is a regularizer that imposes certain statistical prior knowledge. Conventionally, Eq.
( 5 ) is solved with first-order proximal gradient methods such as ISTA and ADMM, whose gradient calculations are time-consuming in large-scale nonlinear tasks [ 32 ]. In this work, instead, we employ the efficient generalized-alternating-projection (GAP) strategy [ 32 ] to transform Eq. ( 5 ) into a form with fewer variables: \begin{aligned} \begin{array}{c} (u, v)={\text {argmin}}\ 1 / 2\Vert u-v\Vert _{2}^{2}+\lambda g(v) \\ \text{ s.t. } I=|A u|^{2}, \end{array} \end{aligned} (6) where v is an auxiliary variable balancing the data-fidelity term and prior regularization, A denotes the measurement matrix, and I represents the measurement. The difference between the conventional ADMM and GAP optimization is the constraint on the measurement [ 32 ]: ADMM minimizes \left\| I-|A u|^{2}\right\| , while GAP imposes the hard constraint I=|A u|^{2} . To tackle the large-scale phase retrieval task, we extend the efficient plug-and-play (PNP) optimization framework [ 27 ] from real space to nonlinear complex space. Fundamentally, PNP decomposes the optimization into two separate sub-problems, measurement formation and prior regularization, so as to incorporate inverse recovery solvers together with various image-enhancing solvers to improve reconstruction accuracy, providing high flexibility for different applications. Mathematically, Eq. ( 6 ) is decomposed into the following two sub-problems, which alternately update the two variables u and v .
Updating u : given v^{(k)} , u^{(k+1)} is updated via a Euclidean projection of v^{(k)} on the manifold I=|A u|^{2} as \begin{aligned} u^{(k+1)}=v^{(k)}+\lambda \cdot PR\left( I-|A v|^{2}\right) , \end{aligned} (7) where PR is the phase retrieval solver. Considering its great generalization ability across various imaging modalities and its low computational complexity, we employ the AP method as the PR solver. It alternates between the target and observation planes, allowing any available information about the variables to be incorporated, and imposes only a low sampling-rate requirement. Updating v : given u^{(k+1)} , v^{(k+1)} is updated by an image-enhancing solver EN as \begin{aligned} v^{(k+1)}=EN\left( u^{(k+1)}\right) . \end{aligned} (8) Although iterative image enhancement has made great progress in recent years with techniques such as non-local optimization and dictionary learning [ 49 ], these methods retain high computational complexity for large-scale reconstruction [ 50 ]. In this work, considering its state-of-the-art enhancement performance and its flexibility to tackle different noise levels, we employed the deep-learning-based FFDNET [ 51 ] to solve this sub-problem with high fidelity and self-adaptation. The neural network consists of a series of 3 \times 3 convolution layers. Each layer is composed of a specific combination of three types of operations: convolution, rectified linear units and batch normalization. The architecture provides a balanced tradeoff between noise suppression and detail fidelity. When an image is input into the network, it is first downsampled into several sub-blocks, which then flow through the network for quality enhancement. Finally, these optimized blocks are stitched together to the original size.
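The alternating structure of Eqs. (7) and (8) can be sketched end-to-end. The following is a minimal illustration rather than the authors' released code: it assumes a unitary Fourier measurement operator (so the PR step reduces to magnitude replacement on the measured intensities) and substitutes a simple smoothing step for the FFDNET enhancer EN.

```python
import numpy as np

def ap_projection(v, I):
    """PR step for a unitary FFT operator A: keep the phase of A v and
    impose the measured magnitudes sqrt(I)."""
    V = np.fft.fft2(v, norm="ortho")
    return np.fft.ifft2(np.sqrt(I) * np.exp(1j * np.angle(V)), norm="ortho")

def enhance(u, strength=0.25):
    """Stand-in for the enhancing solver EN (FFDNET in the paper):
    a 4-neighbour average blended with the input."""
    avg = sum(np.roll(u, s, axis=a) for a in (0, 1) for s in (-1, 1)) / 4.0
    return (1 - strength) * u + strength * avg

def pnp_gap(I, iters=50, tol=1e-6):
    """Alternate the data-fidelity update (u) and the prior update (v)."""
    v = np.fft.ifft2(np.sqrt(I), norm="ortho")   # crude zero-phase initialization
    for _ in range(iters):
        u = ap_projection(v, I)                  # update u: measurement constraint
        v_next = enhance(u)                      # update v: prior regularization
        if np.max(np.abs(v_next - v)) < tol:     # stop on small successive change
            return v_next
        v = v_next
    return v
```

In the actual LPR implementation, the PR solver is the AP method for the given imaging modality and EN is FFDNET; the loop structure, including the stopping test on successive iterates, is the same.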
This block-wise workflow gives the network great generalization ability across different image sizes. After initialization, the variables are updated alternately following Eqs. ( 7 ) and ( 8 ). When the intensity difference of the reconstructed image between two successive iterations is smaller than a given threshold, the iteration stops with convergence. Since both solvers PR and EN are highly efficient and flexible, the entire reconstruction maintains low computational complexity and great generalization. The complete LPR algorithm is summarized in Algorithm 1 (Additional file 1 ), and the demo code has been released at . 4 Conclusion and discussion In this work, we set out to tackle the large-scale phase retrieval problem, and reported a generalized LPR optimization technique with low computational complexity and strong robustness. It extends the efficient PNP-GAP framework from real space to nonlinear complex space, and incorporates an alternating projection solver and an enhancing neural network. As validated by extensive simulations and experiments on three different computational phase imaging modalities (CDI, CDP and FPM), LPR exhibits unique advantages in large-scale phase retrieval tasks with high fidelity and efficiency. The PNP framework has a theoretical guarantee of convergence for most real-domain tasks, such as denoising and deblurring [ 52 , 53 ]. However, to the best of our knowledge, there is no theoretical proof of PNP’s convergence in the complex domain. Further, there is also no theoretical guarantee of convergence for the alternating projection solver, which has been widely used for \sim 50 years [ 10 ]. Even so, the extensive experimental results of various imaging modalities in this work and other studies (e.g.
Fourier ptychographic microscopy [ 3 ], coherent diffraction imaging [ 11 ], ptychography [ 54 ], and coded diffraction patterns [ 12 ]) have validated that the PNP framework and the alternating-projection solver can successfully converge to a global minimum. The LPR technique can be further extended. First, it involves multiple algorithm parameters that are currently adjusted manually. We can introduce reinforcement learning [ 55 ] in future work to automatically adjust these parameters for best performance. Second, LPR is sensitive to initialization, especially under low sampling rates. The optimal spectral initialization technique [ 56 ] can be incorporated for stronger robustness. Third, the stagnation problem in blind ptychographic reconstruction [ 54 ] deserves further study under the reported framework; this would enable simultaneous recovery of both object and system parameters. Fourth, it is interesting to investigate the influence of employing other image-enhancing solvers such as super-resolution, deblurring and distortion-removal networks. This may offer new insights for phase retrieval with further boosted quality. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Change history 13 September 2021 Author Comment file (Author’s response to reviews) has been updated. | Wide field of view and high resolution are both desirable for imaging applications, providing multi-dimensional and multi-scale target information. With the recent development of phase imaging, large-scale detection has been widely employed in a variety of imaging modalities, which largely extends the spatial-bandwidth product (SBP) of optical systems from the million scale to the billion scale. Such a large amount of data poses a great challenge for post phase retrieval (PR) processing.
Therefore, large-scale PR techniques with low computational complexity and high fidelity are of great significance for imaging and perception applications across various dimensions. However, the existing PR algorithms suffer from a tradeoff among low computational complexity, robustness to measurement noise and strong generalization, making them inapplicable to general large-scale phase retrieval. In a newly published research article in eLight, a team of scientists led by Professor Jun Zhang from the Beijing Institute of Technology, China, has developed an efficient large-scale phase retrieval technique for realizing high-fidelity complex-domain phase imaging. They combine a conventional optimization algorithm with the deep learning technique and achieve low computational complexity, robustness to measurement noise and strong generalization. They compare the reported method with existing PR methods on three imaging modalities: coherent diffraction imaging (CDI), coded diffraction pattern imaging (CDP) and Fourier ptychographic microscopy (FPM). The results confirm that, compared with the alternating projection (AP) algorithm, the reported technique is robust to measurement noise, with as much as 17 dB enhancement in signal-to-noise ratio. Compared with the optimization-based algorithms, the running time is significantly reduced, by more than one order of magnitude. Besides, they demonstrate, for the first time, ultra-large-scale phase retrieval at the 8K level with minute-level running time. The reported PR technique builds on the plug-and-play (PNP) optimization framework, and extends the efficient generalized-alternating-projection (GAP) strategy from real space to nonlinear space.
These scientists summarize the characteristics of their technique: "The complex-field PNP-GAP scheme ensures strong generalization of our technique across various imaging modalities, and outperforms the conventional PNP techniques with fewer auxiliary variables, lower computational complexity and faster convergence." "Under the GAP framework, the phase retrieval problem is decomposed into two sub-problems. We introduced an alternating projection solver and an enhancing neural network, respectively, to solve the two sub-problems. These two solvers compensate for each other's shortcomings, allowing the optimization to bypass the poor generalization of deep learning and the poor noise robustness of AP." "Benefiting from the flexible optimization framework, our technique is able to incorporate the best solvers in the future to update itself. Besides, it is interesting to investigate the influence of employing other image-enhancing solvers such as super-resolution, deblurring and distortion-removal networks. This may offer new insights for phase retrieval with further boosted quality," the scientists write. | 10.1186/s43593-021-00004-w
Medicine | How gut neurons communicate with the brain to control thirst | Takako Ichiki et al, Sensory representation and detection mechanisms of gut osmolality change, Nature (2022). DOI: 10.1038/s41586-021-04359-5 Journal information: Nature | http://dx.doi.org/10.1038/s41586-021-04359-5 | https://medicalxpress.com/news/2022-01-gut-neurons-brain-thirst.html | Abstract Ingested food and water stimulate sensory systems in the oropharyngeal and gastrointestinal areas before absorption 1 , 2 . These sensory signals modulate brain appetite circuits in a feed-forward manner 3 , 4 , 5 . Emerging evidence suggests that osmolality sensing in the gut rapidly inhibits thirst neurons upon water intake. Nevertheless, it remains unclear how peripheral sensory neurons detect visceral osmolality changes, and how they modulate thirst. Here we use optical and electrical recording combined with genetic approaches to visualize osmolality responses from sensory ganglion neurons. Gut hypotonic stimuli activate a dedicated vagal population distinct from mechanical-, hypertonic- or nutrient-sensitive neurons. We demonstrate that hypotonic responses are mediated by vagal afferents innervating the hepatic portal area (HPA), through which most water and nutrients are absorbed. Eliminating sensory inputs from this area selectively abolished hypotonic but not mechanical responses in vagal neurons. Recording from forebrain thirst neurons and behavioural analyses show that HPA-derived osmolality signals are required for feed-forward thirst satiation and drinking termination. Notably, HPA-innervating vagal afferents do not sense osmolality itself. Instead, these responses are mediated partly by vasoactive intestinal peptide secreted after water ingestion. Together, our results reveal visceral hypoosmolality as an important vagal sensory modality, and that intestinal osmolality change is translated into hormonal signals to regulate thirst circuit activity through the HPA pathway. 
Main Communication between the periphery and the brain serves as a critical component of appetite induction and satiation. All major circuits for hunger, thirst and sodium appetite receive pre-absorptive feed-forward satiation signals after nutrient ingestion through multiple sensory systems 3 , 4 , 6 , 7 , 8 , 9 , 10 . Forebrain lamina terminalis (LT) is the interoceptive brain area that regulates thirst and fluid intake. Recent studies have shown that thirst neurons in the LT are rapidly inhibited by liquid gulping signals from the upper digestive tract and osmolality signals from the gut 6 , 11 , 12 , 13 . Gut osmolality responses in sensory ganglia Multiple visceral afferent pathways are involved in gut osmolality sensing. Dorsal root ganglion (DRG) neurons are sensitive to hypotonic stimuli through a transient receptor potential (TRP) channel, possibly contributing to blood pressure regulation 14 , 15 . Another line of studies using vagotomy has demonstrated that normal water intake is disrupted without the functioning vagus nerve 12 , 16 , 17 , 18 . Despite these functional implications, osmolality responses in peripheral sensory ganglia have not been directly examined and characterized in vivo. Moreover, because sensory afferents innervate multiple organs of the gut, specific sensory sites for osmolality detection remain elusive. Here we exploit in vivo optical recording from peripheral sensory ganglia and central thirst circuits to examine the representation of visceral hypotonic stimuli. We show that the vagal signalling through the HPA, a combined area of the hepatic portal vein (HPV) and liver hilus, has a critical role in transmitting osmolality information from gut to brain to modulate thirst. To examine visceral osmolality responses, we performed recording from vagal and DRG neurons that carry the majority of gut sensory information. First, we recorded vagus nerve activity by in vivo electrophysiology combined with intestinal perfusion (Fig. 
1a , Extended Data Fig. 1a ). Multiple stimuli activated the vagus nerve, including acidic solutions, liquid diet (Ensure) and mechanical distension 1 , 19 , 20 (Fig. 1b , Extended Data Fig. 1b ). Robust activation was also observed by intestinal water infusion. These responses were dose-dependent, as lower osmolality caused larger vagal activity (Fig. 1c ) but flow rate did not affect response amplitude (Fig. 1d , Extended Data Fig. 1c ). Next, we examined osmolality responses in the spinal pathway. We placed a gradient index (GRIN) lens on a thoracic DRG (T12 or T13), where gut signals are most abundantly represented (Extended Data Fig. 1d ). Calcium dynamics of individual neurons were then visualized in Slc17a6-cre ;Ai96 mice (Extended Data Fig. 1e ). While we observed calcium activity by intestinal infusion of water, nutrients or acid, these responses were sparse compared to robust tactile responses, and not precisely induced at the stimulus onset (Extended Data Fig. 1f ). Fig. 1: The vagus nerve responds to visceral osmolality changes. a , Diagram of electrophysiological recording of the vagus nerve. Electrical nerve activity was monitored during intestinal stimulation. b , Representative vagus nerve responses to intestinal infusion of isotonic saline, water, NaCl (1 M), acetic acid (4%), Ensure and intestinal distension (representative responses from 8 mice for saline and water, and from 6 mice for other stimuli). c , Dose dependence of osmolality responses. Representative integrated responses to 0–150 mM NaCl solutions (left) and quantification of responses (right) ( n = 6 mice). d , Water response at different flow rates. Faster flow rate induced shorter latency but did not affect peak response amplitude (representative responses from 5 mice). Blue shaded areas denote infusion periods. AUC, area under the curve. *P < 0.05, **P < 0.01, ***P < 0.001; Kruskal–Wallis test (with Dunn’s post test). Data are mean ± s.e.m. 
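Integrated responses of the kind quantified in Fig. 1c (area under the curve over the infusion window, after baseline subtraction) can be computed along the following lines; this is an illustrative analysis sketch with assumed window parameters, not the authors' code:

```python
import numpy as np

def response_auc(trace, t, stim_on, stim_off, baseline_s=10.0):
    """Baseline-subtracted area under a nerve-activity trace during an
    infusion window. trace and t are matching 1-D arrays; times in seconds."""
    trace, t = np.asarray(trace, float), np.asarray(t, float)
    # mean activity over the pre-stimulus baseline window
    base = trace[(t >= stim_on - baseline_s) & (t < stim_on)].mean()
    win = (t >= stim_on) & (t <= stim_off)
    y, tw = trace[win] - base, t[win]
    # trapezoidal integration of the baseline-subtracted trace
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(tw)))
```

Applied to each concentration in the 0–150 mM NaCl series, such AUC values yield the dose–response quantification described above.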
Source data Full size image Given the robust and time-locked responses in the vagus nerve, we next investigated how osmolality signals are represented by individual neurons. To achieve this, we genetically expressed GCaMP6s ( Slc17a6-cre ;Ai96 line) in vagal sensory neurons and performed imaging with confocal microscopy (Fig. 2a ). We found that a subset of gut-innervating vagal neurons were consistently activated upon intestinal water infusion (Extended Data Fig. 2a, b ). Hypo-osmotic, hyper-osmotic and glucose infusion for an extended period (2 min) activated large fractions of vagal neurons. Notably, nearly half of these neurons responded to multiple nutrients, which may reflect a combination of sensory nutrient detection and post-absorptive physiological changes as illustrated previously 19 , 21 (Fig. 2b ). To selectively visualize rapid sensory detection of osmolality, we used a short stimulation paradigm (20 s) (Extended Data Fig. 2c ). With this stimulation scheme, hypotonic responses were largely segregated from hypertonic, glucose and distension responses (Fig. 2c , Extended Data Fig. 2d ). Thus, acute water sensation in the gut activates a dedicated neural population in vagal ganglia. We note that water responses were exclusively observed via intestinal, but not gastric water infusion demonstrating site-specific gut-to-brain communication. Conversely, both intestinal and gastric mechanical distension activated vagal neurons (Fig. 2d, e , Extended Data Fig. 2e, f ). Fig. 2: Visceral hypotonic stimuli activate a dedicated vagal population. a , Diagram of confocal calcium imaging of individual vagal neurons in Slc17a6-cre ;Ai96 mice. b , Representative vagal responses (top) during the 2-min intestinal water (red), 500 mM NaCl (green), 300 mM glucose (blue) infusion and intestinal distension (yellow). Response heat map and Venn diagram for individual neural activity ( n = 6 mice). Stim, stimulation. c , Responses during shorter stimulation period (20 s). 
Individual neural responses are shown in a heat map and Venn diagram ( n = 7 mice). Note the minimal response overlap between different nutrients with shorter stimulation period. d , Vagal ganglia imaging with multiple organ stimulation. e , Calcium signals during water infusion and mechanical stimulation of the intestine and stomach. Responses of individual vagal neurons to intestinal (int.) and stomach stimulation (sto.) (left) ( n = 4 mice). Most water responses were induced by intestinal, but not by stomach, stimulation (right). Neurons that responded during the stimulus time window were quantified. Scale bar, 200 μm. Full size image How does the gut detect osmolality change? Despite recent transcriptomic studies 22 , 23 , 24 , osmolality-sensitive cell types remain unexplored. To fill this gap, we searched for neural populations involved in water responses. We screened genetic lines that label cell types that had not been previously functionally annotated to date. This analysis identified two genetic markers—tachykinin 1 ( Tac1 ) is expressed by six cell types, and neurotensin ( Nts ) is expressed by one cell type—in the vagal ganglia (Extended Data Fig. 3 ). Among these two populations, calcium responses to osmolality change were observed only in TAC1 + neurons. By contrast, NTS + neurons responded to intestinal distension, but not to hyper- or hypotonic stimuli (Fig. 3a , Extended Data Fig. 4a ). Moreover, chemogenetic stimulation of TAC1 + neurons selectively inhibited water intake under dehydration, but did not affect food intake under food-deprived states (Extended Data Fig. 4b, c ). These results suggest that TAC1 + vagal population contains osmolality-sensitive neurons. Our anatomical projection mapping shows that both TAC1 + and NTS + neurons project to mostly overlapping areas of the gut (Fig. 3b , Extended Data Fig. 5 ). 
However, a prominent difference was found in one area: only TAC1 + neurons send projections to the HPA, through which intestinal nutrients and water are absorbed (Fig. 3b , Extended Data Fig. 6 ). Fig. 3: Vagal sensory inputs from the HPA transmit gut-to-brain osmolality signals. a , Response heat maps of TAC1 + ( Tac1-cre ;Ai162D, n = 4 mice) and NTS + ( Nts-cre ;Ai96; n = 4 mice) neurons to intestinal infusion of water and 500 mM NaCl, and intestinal and gastric (gast.) distension; 25% and 12% of TAC1 + neurons responded to hypotonic and hypertonic stimuli, respectively. b , Whole-mount staining of innervation of SLC17A6 + , TAC1 + and NTS + vagal populations (representative images from three mice for each line). Sensory terminals were visualized by injecting AAV-Flex-tdTomato into vagal ganglia (red). Intraganglionic laminar endings (IGLEs) and mucosal endings (villi) in the intestine are shown. Enteric neurons and vascular smooth muscle cells of HPV were labelled with Fluoro-Gold (green) and α-smooth muscle actin antibody (αSMA) (white), respectively. TAC1 + neurons, but not NTS + neurons, innervate the HPA. Scale bars, 100 μm. c , The effect of hepatic branch loss-of-function (HVx) on acute water responses. Response heat maps for intestinal water infusion and distension ( Slc17a6-cre ;Ai96; n = 3 mice). Responding neurons after HVx are quantified as a percentage of those before HVx (less than 1% for water, 72% for distension). d , Functional imaging of HPA-innervating vagal neurons (left). HPA-innervating neurons were visualized with WGA 647 (right, Slc17a6-cre ;Ai96; n = 3 mice). 62% of water-responding vagal neurons were labelled by WGA tracing from HPA. 15.8% of total vagal neurons were labelled by WGA injection to HPA.
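Overlap fractions like those reported above (for example, the share of water-responding neurons labelled from the HPA) reduce to simple set arithmetic on per-neuron response masks; an illustrative sketch with an assumed z-score response criterion, not the authors' analysis code:

```python
import numpy as np

def responders(dff, stim_mask, z_thresh=3.0):
    """Boolean mask of neurons whose mean dF/F during the stimulus window
    exceeds baseline mean + z_thresh * baseline s.d.
    dff: (n_neurons, n_frames); stim_mask: boolean over frames."""
    base = dff[:, ~stim_mask]
    mu, sd = base.mean(axis=1), base.std(axis=1) + 1e-9
    return dff[:, stim_mask].mean(axis=1) > mu + z_thresh * sd

def overlap_fraction(resp_a, resp_b):
    """Fraction of stimulus-A responders that also respond to stimulus B."""
    return float((resp_a & resp_b).sum() / max(resp_a.sum(), 1))
```

The same per-neuron masks also generate the Venn-diagram summaries used to compare responses across stimuli.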
Full size image HPA transmits gut osmolality signals On the basis of these projection patterns and potential roles of the HPA in nutrient and water sensing 25 , 26 , we suspected that sensory signals from the HPA may have a role in gut osmolality detection. To directly test this possibility, we performed a branch-specific loss-of-function study using surgical and chemical denervation combined with in vivo imaging. Acute hypotonic responses were abolished after selective denervation of hepatic vagal branch (HVx), whereas responses to intestinal distension remained largely intact (Fig. 3c ). Consistent with previous reports, the responses to other nutrients were also blunted by HVx 27 , 28 (Extended Data Figs. 7 , 8 ). To further test whether hypotonic responses are indeed mediated through HPA-innervating vagal neurons, we performed three lines of experiments. First, we directly visualized neural activity in HPA-innervating neurons. We injected wheat germ agglutinin (WGA) 647 to the HPA or intestine, retrogradely labelled vagal neurons, and then performed GCaMP imaging from WGA-positive and -negative vagal neurons. Most (62%) hypotonic-selective neurons were found in HPA-innervating, but not intestine-innervating neurons (Fig. 3d , Extended Data Figs. 7 , 8 ). Second, because HVx is a non-reversible operation that may cause various side effects, we used local anaesthetics for temporal silencing of HPA-innervating nerves. Similar to the denervation results, we found a significant reduction of responses after lidocaine application. These responses partially recovered after the anaesthesia wore off, excluding the possibility that loss of water responses is owing to side effects of HVx (Extended Data Fig. 7d ). Finally, because the hepatic vagal branch also contains some innervation to the duodenum and pancreas, we applied selective loss-of-function to those branches. 
We found that hypotonic responses were unaffected after duodenum and pancreas denervation (Extended Data Fig. 7e ). Together, these results support our notion that acute water response in the gut is mainly mediated by HPA-innervating vagus nerve. HPA-derived signals quench thirst Previous studies have established that gut water detection rapidly transmits pre-absorptive satiation signals to the brain and inhibits water intake 6 , 12 , 13 . Given the sensory function of the HPA, we next investigated whether HPA-derived water signals mediate thirst satiation. To this end, we subjected mice to water restriction and measured water intake with or without HPA sensory inputs. Mice lacking HPA afferents exhibited significantly increased water consumption after 24 h of water deprivation (Fig. 4a , left, Extended Data Fig. 9a ). However, spontaneous daily water intake did not change regardless of vagal inputs (Fig. 4a , right, Extended Data Fig. 9b ). These results suggest that HPA denervation specifically affects thirst satiation under water-deprived states. By contrast, subdiaphragmatic vagotomy (SVx) produced largely variable and non-significant results on food and water intake owing to effects on internal organs (Extended Data Fig. 9c–e ). To further examine whether HVx affects thirst drive or satiation, we performed an operant conditioning assay. We found that HVx had no effect on lever presses to obtain water compared with intact controls: HPA-denervated and intact mice exhibited similar levels of appetitive behaviour towards water (Fig. 4b ). Therefore, increased water intake in HVx mice is not owing to increased thirst or appetitive drive, but is probably the result of blunted satiation. Fig. 4: HPA vagal afferents are required for pre-absorptive thirst satiation. a , Effect of HVx on water consumption. Water intake during a 20-min session was quantified after 24 h of water restriction in sham (grey) and HVx (red) mice ( n = 9 mice). 
Spontaneous daily water intake was not affected ( n = 9 mice). b , Measurement of appetitive behaviour in lever-press assay. The number of lever presses in water-restricted sham and HVx mice was quantified ( n = 5 mice). FR3, fixed ratio 3; PR3, progressive ratio 3. HVx did not affect appetitive behaviour. c , Diagram of optical recording from LT thirst neurons before and after HVx. Thirst neuron activity was recorded by injecting AAV-CaMKII-GCaMP6s into wild-type mice or AAV-Flex-GCaMP6s into Nos1-cre mice. d , Thirst neurons were acutely activated by intraperitoneal injection of hypertonic saline (2 M NaCl; n = 10 mice). e , Calcium dynamics and responses (AUC) of thirst neurons after fluid intake ( n = 10 mice). Water intake induced rapid and sustained inhibition in intact mice. After HVx, sustained osmolality-induced inhibition was significantly blunted (top left). Conversely, gulping-induced inhibition was unchanged (bottom left). Isotonic saline intake induced rapid inhibition of thirst neurons only (right). Blue shaded area indicates the drinking period (5 s). *P < 0.05, **P < 0.01; two-tailed Mann–Whitney U -test and two-tailed paired t -test. Data are mean ± s.e.m. Source data Full size image Thirst neurons receive two types of feed-forward satiation signals; rapid inhibition by liquid gulping and sustained inhibition by gut osmolality detection (Extended Data Fig. 9f ). The vagal pathway has been suggested to have a role in these satiation signals 12 , 18 . Indeed, chemogenetic activation of vagal ganglia neurons induced robust inhibition in thirst neurons (Extended Data Fig. 9g, h ). If the HPA is involved in osmolality-induced satiation, we expected that sustained inhibition of thirst neurons would be selectively abolished after HVx. To examine this idea, we virally expressed GCaMP in NOS1-positive thirst neurons in the subfornical organ (SFO). 
The activity of thirst neurons was then recorded through an implanted optic fibre with or without HPA inputs (Fig. 4c ). These neurons were robustly activated by osmotic challenge (intraperitoneal injection of NaCl solution) and rapidly suppressed upon fluid intake under dehydrated states (Fig. 4d, e ). With intact vagal function, sustained neural inhibition was observed after water intake whereas this osmolality-induced inhibition was significantly reduced after HVx (Fig. 4e , top). Importantly, gulping-induced rapid inhibition was unchanged by the same operation (Fig. 4e , bottom). These functional data demonstrate a selective role of HPA-innervating vagus nerve in osmolality-induced thirst modulation. VIP mediates gut osmolality sensing Finally, we investigated the mechanisms by which the HPA-innervating vagal afferents detect visceral osmolality change. Towards this goal, we developed a dual perfusion system that allows us to monitor vagal ganglion activity in response to water infusion into intestine or HPV in the same mice. We recorded vagal responses while stimulating different areas of the gut (Fig. 5a , left). No calcium response was observed when a physiologically relevant range of hypotonic stimuli were directly applied to the HPV, indicating that HPA vagal afferents are not sensitive to osmolality changes per se (Fig. 5a , Methods). Fig. 5: Visceral osmolality sensing recruits VIP–VIPR2 signalling in the HPA. a , Dual infusion to the intestine and HPV combined with in vivo imaging. Responses to hypotonic stimuli in the intestine and HPV ( Slc17a6-cre ;Ai96; n = 3 mice). HPV infusion of hypotonic (low-osm, 0.225%) saline did not induce calcium responses. b , Vagal responses to HPV hormonal infusion. Individual neural responses to different hormones ( Slc17a6-cre ;Ai96; n = 3 mice). 
c , Vagal responses to intestinal water, 500 mM NaCl or 300 mM glucose infusion and HPV infusion of VIP ( Slc17a6-cre ;Ai96; n = 3 mice for water, n = 2 mice for NaCl and glucose). Responses to HPV-infused VIP significantly overlapped with water (38%), but not NaCl or glucose infusion in the intestine. d , Dual-responding neurons to intestinal water stimulus and HPV VIP infusion were blocked by VIPR antagonist ([D-p-Cl-Phe 6 ,Leu 17 ]-VIP) applied in the HPV ( Slc17a6-cre ;Ai96; n = 11 neurons). e , qPCR analysis of VIPR1 and VIPR2 expression in the HPA or the rest of the liver (left) ( n = 6 mice). VIPR2 (green) expression was observed in the HPA but not the liver in Vipr2-cre ;Ai3 mice. Representative images from 3 mice are shown. Scale bar, 50 μm. f , Calcium responses in TAC1 + vagal neurons. The heat map indicates that TAC1 + neurons respond to both intestinal acute water stimulation and HPV infusion of VIP (10 and 22 neurons, respectively from Tac1-cre ;Ai162D mice; n = 4 mice). Shown are calcium dynamics from a representative neuron in response to intestinal water, HPV-VIP and HPV-saline infusion. g , Schematic depicting the thirst satiation signal through the vagal–HPA pathway. VIP and additional signals are involved in osmolality responses. **P < 0.01, ***P < 0.001 by two-way ANOVA (Šídák multiple comparisons). Data are presented as mean ± s.e.m. Source data Full size image Because intestinal osmolality and nutrient detection cause various hormone secretions 29 , 30 , we examined whether the hormonal signals are involved in HPA sensory functions. If this were the case, we would expect such a hormone to stimulate vagal neurons when infused into the HPV, and the hormonal response to be abolished after HVx. We found two categories of hormones that induced vagal activity. Peptide YY (PYY), cholecystokinin (CCK) and vasopressin (AVP) triggered robust responses, but the majority of responses remained after HVx (Fig. 5b , Extended Data Fig. 
10a ); by contrast, vasoactive intestinal peptide (VIP) and glucagon-like peptide-1 (GLP1) induced HPA-dependent responses—around 90% of vagal responses were abolished after HVx. If these hormones are involved in osmolality-induced vagal response through the HPA, their responses should overlap with the intestinal water response. Indeed, a considerable fraction (38%) of water-responding vagal neurons also responded to HPV infusion of VIP, but not GLP1 (Fig. 5c , Extended Data Fig. 10b, c ). Of note, VIP responses selectively overlapped with water responses, but not with NaCl or glucose responses (Fig. 5c ). These data support our idea that visceral hypoosmotic responses are, at least in part, mediated by VIP. Consistently, VIP levels in the HPV were significantly increased after spontaneous water intake (Extended Data Fig. 10d ). Moreover, VIP-sensitive hypotonic responses were abolished in the presence of VIP receptor antagonist (Fig. 5d , Extended Data Fig. 10e ), suggesting a role of VIP receptor (VIPR) in visceral osmolality sensing. We used quantitative PCR (qPCR) to demonstrate that the HPA, but not the rest of the liver, exhibited high expression of VIPR type 2 (VIPR2). Furthermore, our transgenic approach showed VIPR2 expression in the HPA in Vipr2-cre ;Ai3 mice (Fig. 5e , Extended Data Fig. 10f ). Thus, osmolality-induced VIP is probably mediated by VIPR2 in the HPA. We further examined osmolality and VIP responses in TAC1 + vagal neurons. Sixty per cent of TAC1 + neurons that responded to intestinal water infusion were also activated by HPV infusion of VIP (Fig. 5f ). These results provide functional evidence that a genetically defined vagal population is involved in gut osmolality sensing. Discussion Physiologically and behaviourally, the importance of gut osmolality sensing for thirst regulation has been recognized for decades. Recent studies have provided more insights into the neural and molecular mechanisms underlying water detection. 
Nevertheless, in vivo sensory responses to gut water stimuli are not fully understood. In this study, we characterize osmolality responses in sensory ganglia and demonstrate that the vagal pathway transmits gut-to-brain thirst satiation signals. We show that intestinal water stimuli induce hypoosmolality-specific and general osmolality and nutrient responses in vagal neurons—the osmolality responses are mediated exclusively by sensory afferents from the HPA. Moreover, we show that HPA-derived sensory signals contribute to feed-forward satiation and thirst-circuit modulation (Fig. 5g ). This study reveals the peripheral representation, signal transmission pathway and functional significance of gut hypoosmolality sensing. HPA-innervating sensory nerves detect visceral osmolality changes both directly and indirectly. We have demonstrated that the HPA-innervating vagus nerve is activated by hormonal signals such as VIP. Previous studies showed that DRG neuron terminals in the HPA are sensitive to hypotonic stimuli through TRPV4 14 . These spinal signals probably mediate pressor responses after water intake 15 . These findings, together with our results, suggest that the combinatorial action of osmolality and hormones through the HPA mediates different physiological effects. It is known that several hormones, including VIP, are secreted upon water intake 31 . How is VIP secreted from the gut after water intake? The main source of gut VIP is enteric neurons 32 , 33 . It is plausible that intestinal osmolality changes stimulate the enteric nervous system, leading to increased portal VIP levels 34 . Notably, our data show that VIP-independent pathway(s) are also involved in gut hypoosmolality sensing through the HPA (Fig. 5g ). These results illustrate the complexity of gut-to-brain signalling for fluid regulation. Further studies are necessary to address how VIP stimulates vagal afferents and how osmotic and hormonal signalling work together to regulate drinking behaviour.
Gut osmolality change induces distinct types of vagal responses: hypotonic-specific and general responses. HVx selectively eliminates hypotonic responses, suggesting that gut osmolality changes stimulate multiple vagal pathways. Notably, non-specific nutrient responses emerged with extended gut stimulation, as indicated previously 19 , 21 . We speculate that rapid hypotonic-specific responses represent sensory detection of osmolality change, whereas general responses may be a consequence of delayed physiological effects such as changes in blood pressure or heart rate. Further work is needed to investigate how different types of osmolality and nutrient responses contribute to appetite regulation and other physiological changes. Methods Animals All experimental procedures were carried out in accordance with US NIH guidelines for the care and use of laboratory animals and approved by the Caltech Institutional Animal Care and Use Committees (IACUC; protocol no. 1694-14). Mice used for data collection were both males and females, at least 7 weeks of age. Ai162D (JAX stock no. 031562), Ai3 (JAX stock no. 007903), Ai96 (JAX stock no. 028866), C57BL/6J (JAX stock no. 000664), Nos1-cre ( Nos1 cre ; JAX stock no. 017526), Nts-cre ( Nts cre ; JAX stock no. 017525), Vipr2-cre ( Vipr2 cre ; JAX stock no. 031332) mice were obtained from the Jackson Laboratory. The Tac1-cre ( Tac1 cre ) line was provided by D.J.A. Slc17a6 (Vglut2) -cre ( Slc17a6 cre ) mice were a gift from V. Gradinaru. Mice were housed in a temperature and humidity-controlled environment with 13:11 h light:dark cycle with ad libitum access to chow and water. The housing facility had a temperature of 21.7–23.8 °C and a humidity of 30–70%. Viral constructs The following adeno-associated viruses (AAVs) were purchased from Addgene: AAV1-Syn-Flex-GCaMP6s (100845-AAV1), 2.9 × 10 13 viral genomes per ml. AAV9-CaMKII-GCaMP6s (107790-AAV9), 2.1 × 10 13 viral genomes per ml. 
AAV5-hSyn-hM3D(Gq)-mCherry (50474-AAV5), 1.5 × 10 13 viral genomes per ml. AAV5-hSyn-DIO-hM3D(Gq)-mCherry (44361-AAV5), 8.2 × 10 12 viral genomes per ml. AAV9-Flex-tdTomato (28306-AAV9), 3.8 × 10 13 viral genomes per ml. AAV5-flex-taCasp3-TEVp, 4.2 × 10 12 viral genomes per ml was purchased from UNC. In vivo vagus nerve recordings Cervical vagus nerve recordings were conducted as previously described using Clampex 10.4.0.36 software 20 . In brief, mice were anaesthetized with pentobarbital (100 mg kg −1 body weight) and placed in a head-fixation apparatus. After making a midline neck incision, tracheotomy was performed, and muscles were separated to expose the vagus trunk. A high-impedance tungsten electrode (FHC) was used to record from the nerve bundle. A drop of halocarbon oil was applied to the surgical cavity for electrical insulation. Gastrointestinal infusion A skin incision was made along the abdominal midline. For the intestinal input port, tubing (60-011-04, HelixMark) was inserted into the proximal duodenum (within 0.5 cm from the sphincter) in the initial set of experiments (Figs. 2 , 3a , Extended Data Figs. 1 d, 2 a, b, 4a ). For the remaining experiments to test the HPA function, the input port was set at the jejunum (5 cm from the sphincter) to eliminate the duodenum contribution to osmolality responses through the hepatic branch. The exit port (60-011-09, HelixMark) was made in the distal intestine before the cecum. Surgical thread (K802H, Ethicon) was used to fasten and secure tubing sites. Intestinal distension was achieved by temporarily closing the exit port while infusing saline. For gastric infusion, a small incision was made in the fundus of the stomach for the input port. The output tubing was inserted through a duodenum incision into the stomach. Gastric distension was achieved by volume-controlled manual inflation of a surgically implanted latex balloon (73–3478, Harvard Apparatus).
Gastrointestinal contents were flushed with isotonic saline before experiments. Different fluid stimuli were delivered by switching solenoid valves with an Arduino. Stimuli used were deionized water, 1 M or 500 mM NaCl, 300 mM glucose, 4% acetic acid, and Ensure. All solutions were warmed to 37 °C. In vivo thoracic DRG imaging Sixteen-hour-fasted mice were anaesthetized with pentobarbital. A midline skin incision at the back was made, and tendons attached to the vertebrae were severed to expose the T8-L1 level of the spine. A small portion of the vertebrae was removed using Friedman-Pearson Rongeurs (16221-14, F.S.T.) to expose the T12 or T13 DRG. The vertebrae were fixed using custom stainless-steel bars and a plate with a hole for GRIN lens placement. The GRIN lens (100-000588, Inscopix) was placed on a target DRG for calcium imaging. GCaMP6 fluorescence was measured by confocal microscopy (TCS SP8, Leica) via Leica Application Suite X 3.5.19976.5 software, at a frame rate of 1 Hz. In vivo vagal ganglion imaging Vagal ganglion imaging was performed as previously described 19 , 21 . In brief, 16-h-fasted mice were anaesthetized with pentobarbital. We exposed the vagal ganglion by retracting the carotid artery, and the vagus nerve was transected superior to the jugular ganglion. The ganglion was then immobilized on a custom 5-mm-diameter glass coverslip station, and immediately immersed in silicon adhesive (KWIK-SIL, World Precision Instruments). Imaging was conducted with a Leica SP8 confocal microscope at a frame rate of 1 Hz. Less than 10 µW laser power was used to prevent tissue damage. Analysis of the imaging data Imaging data were analysed using modified MATLAB scripts based on CaImAn-MATLAB 35 . In brief, imaging frames collected from the same mouse were registered to correct for motion artifacts. A constrained non-negative matrix factorization (CNMF) algorithm was used to recognize individual cells and to extract the Ca 2+ activity traces.
CNMF outputs were manually inspected to remove neuropil or other non-neuronal signals. For calculating sensory responses, the 25 s (10 s for DRG imaging) before the stimulus onset was used as the baseline period. The responses are reported in units of baseline standard deviation ( σ ) as previously published 36 . The mean ( μ ) and standard deviation ( σ ) of F 0 ( t ) over the baseline period were computed, and normalized responses were calculated as F ( t ) = (F 0 ( t ) − μ )/ σ . Cells were defined as responsive to each stimulus if the average Δ F / F ( σ ) value during the stimulus period exceeded 2.5 s.d. Behavioural assays All behavioural assays were performed in a custom gustometer system (Dialog Instruments) or BioDAQ monitoring system (Research Diets). Mice were water-deprived for 24 h for training before the behavioural assays 37 . The number of water licks within 20 min was measured after 24-h water restriction. For food intake measurement, mice were food deprived for 24 h and acclimatized in the BioDAQ monitoring system 20 min before the experiments. The total food (chow) intake in 1 h was quantified. Mice were accustomed to BioDAQ cages for 12 h before measurement for 3 consecutive days. The average daily intake amount was calculated as spontaneous consumption. Lever pressing for water reward Lever-pressing experiments were done in an operant chamber equipped with two levers (active and inactive, Med Associates). Mice were water-deprived for 24 h and trained to press the active lever on a fixed ratio 1 (FR1) paradigm, followed by a fixed ratio 3 (FR3) schedule, to obtain a 1-s water reward. After training, mice were tested on an FR3 or PR3 schedule for 20 min under water-deprived states. The shutter was closed right after the first lick to limit satiety effects. SVx, selective HPA denervation and reversible local anaesthesia Mice were anaesthetized with a mixture of ketamine (100 mg kg −1 body weight) and xylazine (5 mg kg −1 body weight) intraperitoneally.
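The baseline z-scoring and 2.5-s.d. response criterion described under "Analysis of the imaging data" can be sketched in NumPy (synthetic traces and hypothetical array names, not the authors' MATLAB pipeline):

```python
import numpy as np

def classify_responses(traces, baseline_len, stim_window, threshold=2.5):
    """z-score each cell's trace against its pre-stimulus baseline.

    traces: (n_cells, n_frames) fluorescence matrix sampled at 1 Hz;
    baseline_len: frames before stimulus onset (25 s here, 10 s for DRG);
    stim_window: (start, stop) frame indices of the stimulus period.
    """
    baseline = traces[:, :baseline_len]
    mu = baseline.mean(axis=1, keepdims=True)
    sigma = baseline.std(axis=1, keepdims=True)
    z = (traces - mu) / sigma                   # F(t) = (F0(t) - mu) / sigma
    start, stop = stim_window
    mean_resp = z[:, start:stop].mean(axis=1)   # average response in s.d. units
    return mean_resp, mean_resp > threshold     # responsive if > 2.5 s.d.

# Toy example: one flat cell and one cell that jumps during the stimulus
rng = np.random.default_rng(0)
traces = rng.normal(0.0, 1.0, size=(2, 60))
traces[1, 25:40] += 10.0                        # strong stimulus response
resp, is_responsive = classify_responses(traces, baseline_len=25,
                                         stim_window=(25, 40))
```

The key design point is that responses are expressed in units of each cell's own baseline variability, so a fixed 2.5-s.d. cutoff is comparable across cells with different noise levels.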
Ketoprofen was subcutaneously administered at 5 mg kg −1 body weight and buprenorphine SR (1 mg kg −1 body weight) was applied prior to surgery. Mice were placed on their backs and the abdomen was incised along the midline. For the subdiaphragmatic vagotomy, anterior and posterior vagal trunks were exposed, and about a 0.5-mm fragment was dissected out. For the HVx, the hepatic branch was completely transected for the hepatic-branch-specific cut. For the HPA denervation or reversible inhibition, 85% phenol solution or 5% lidocaine was applied to the HPA using a paper point with the surrounding tissue covered by parafilm. For duodenum/pancreas denervation, phenol solution was applied to the gastro-duodenal branch in the same fashion. Sham-operated mice were subjected to all the surgical procedures except the nerve operation. Hepatic portal vein infusion After exposing the inferior mesenteric vein, a catheter (C10PU-MCA1459, Instech) loaded with heparin was inserted into the vein. The catheter was gently fixed with surgical thread circumferentially. For the hypotonic stimuli to the HPV (Fig. 5a ), previous studies 38 established that the portal blood flow of mice was around 2 ml min −1 , and the portal osmolality dropped 12 mOsm kg −1 after intestinal water infusion 39 . Based on these parameters, we calculated the flow rate to recapitulate the physiological conditions. Mice were infused with isotonic saline, 0.225% saline (for hypotonic stimuli), VIP (064-16, Phoenix Pharmaceuticals, 25 or 2.5 μg kg −1 ), GLP1 (4030663, Bachem, 8 μg kg −1 ), PYY (1618, R&D Systems, 8 μg kg −1 ), CCK (4033010, Bachem, 1.25 μg kg −1 ), or AVP (V9879, Sigma, 0.5 μg kg −1 ) for 30 s (100 μl min −1 ) using an infusion syringe pump (NE-300, New Era Pump Systems). A VIP receptor antagonist ([D-p-Cl-Phe 6 ,Leu 17 ]-VIP, Tocris Bioscience, 2.5 μg kg −1 ) was pre-infused through the HPV. 
During intestinal water infusion and HPV-VIP infusion, the same concentration of antagonist (2.5 μg kg −1 ) was co-infused. Surgery for photometry Surgery for photometry was performed as previously described 13 . In brief, mice were anaesthetized with a mixture of ketamine and xylazine solution. The mouse was then placed in a stereotaxic apparatus (SR-5M-HT, Narishige) on a heating pad at 37 °C. A total of 150–300 nl of viral constructs were injected using a microprocessor-controlled injection system (Nanoliter 2000, World Precision Instruments) at 100 nl min −1 . SFO virus injections were performed using the following stereotaxic coordinates; AP: −4030, ML: 0, DV: −2550. For photometry, a 400-μm diameter optic fibre (FT400UMT, Thorlabs) and a ceramic ferrule (CF440, Thorlabs) were glued to be implanted to SFO. Virus expression and fibre implant position was verified after data collection. Fibre photometry Bulk fluorescence signals were collected using fibre photometry as previously described 6 , 13 , 40 . In brief, GCaMP signals were extracted and subjected to a low-pass filter at 1.8 Hz. To obtain the fitted 405-nm signal, a linear function was used to scale up the 405-nm to 490-nm channel signal. The change in fluorescence intensity (Δ F / F ) was calculated as (raw 490 nm signal – fitted 405 nm signal)/(fitted 405 nm signal), and then time-binned by a factor of 2.5 times the sampling frequency and down-sampled to 1 Hz. For all photometry assays, mice were acclimatized for at least 15 min in the chamber before the stimuli were presented. Mouse licks were simultaneously recorded. The AUC was quantified by integrating the baseline-subtracted fluorescence signals for 10 min for the osmolality-induced inhibition, 20 s for the gulping-induced inhibition after the first bout. For chemogenetic experiments (Extended Data Fig. 10 ), AUC was calculated between 300 and 600 s after the CNO administration. 
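The ΔF/F computation in the fibre-photometry section can be sketched with synthetic signals; all signal names are hypothetical, and the 1.8-Hz low-pass filter, time-binning and down-sampling steps are omitted for brevity:

```python
import numpy as np

def photometry_dff(sig490, sig405):
    """Fit the 405-nm channel to the 490-nm channel with a linear function,
    then compute dF/F = (raw 490 - fitted 405) / (fitted 405)."""
    a, b = np.polyfit(sig405, sig490, 1)      # scale 405-nm onto 490-nm channel
    fitted405 = a * sig405 + b
    return (sig490 - fitted405) / fitted405

fs = 100                                       # Hz, hypothetical raw sampling rate
t = np.arange(0, 900, 1 / fs)
motion = 0.05 * np.sin(0.5 * t)                # artifact common to both channels
sig405 = 1.0 + motion
sig490 = 2.0 + 2.0 * motion + 0.3 * ((t >= 300) & (t < 600))  # signal at 300-600 s
dff = photometry_dff(sig490, sig405)

# AUC of the baseline-subtracted signal in the 300-600 s window, analogous to
# the quantification used for the chemogenetic assays
window = (t >= 300) & (t < 600)
auc = np.sum(dff[window] - dff[~window].mean()) / fs
```

Because the 405-nm channel carries movement and bleaching artifacts but little calcium-dependent signal, subtracting its fitted version isolates the GCaMP transient before the AUC is integrated.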
This time window was set to avoid injection-induced artifactual calcium signals. Chemogenetic manipulations For chemogenetic manipulation, clozapine- N -oxide (CNO) (Sigma, 2.5 mg kg −1 ) or vehicle (PBS) was administered intraperitoneally 20 min before the experiment. For Extended Data Fig. 10 , CNO was applied during the photometry recording. Vagal ganglion injection Vagal ganglion injection was performed as previously described 41 . Mice were anaesthetized with a mixture of ketamine and xylazine solution. After exposing the vagal ganglia as described above, 400 nl of virus containing 0.02 mg ml −1 Fast Green (F7252-5G, Sigma) was injected at a rate of 100 nl min −1 . For the tracing experiments using AAV9-FLEX-tdTomato, tissue collection was done at least 4 weeks after surgery. For the chemogenetic experiments, mice had at least two weeks of recovery before being subjected to behavioural assays. Retrograde tracing from visceral organs After anaesthesia, retrograde tracer (WGA555 or WGA647, Thermo Fisher, 5 mg ml −1 ) was injected at 100 nl min −1 . For injections into the stomach and intestine, a total of 2 μl of tracer was injected into the layer between the muscularis externa and serosa at multiple sites. For liver and pancreas injections, a total of 2 μl of tracer was injected into the organ at multiple sites. For the portal vein injection, 0.5–1 μl of tracer was injected into the vessel wall and connective tissue wrapping the portal vein. All histological and imaging experiments were performed within 2 weeks after surgery. Fluoro-Gold injection To label enteric neurons, 20 mg kg −1 of Fluoro-Gold (Fluorochrome) was injected intraperitoneally. Tissue collection for histology was done five days after Fluoro-Gold administration. Histology Mice were euthanized and perfused with PBS followed by 4% paraformaldehyde (PFA) pH 7.4. Tissue was extracted and post-fixed overnight at 4 °C in PFA.
For whole-mount gut staining, tissue was blocked (10% donkey serum, 0.2% Triton X-100 in PBS) overnight at 4 °C and incubated with primary antibodies overnight at 4 °C. The following primary antibodies were used: rat anti-mCherry (1:500; M11217, Invitrogen), goat anti-α-smooth muscle actin (1:800; NB300-978, Novus Biologicals), chicken anti-NeuN antibody (1:500; ABN91, Merck Millipore). After washing 3 times for 1 h with 0.1% PBST, the tissue was stained with secondary antibodies overnight at room temperature, washed 3 times for 30 min with 0.1% PBST, then stained with DAPI (2 µg mL −1 ) for 3 h at room temperature. The following secondary antibodies were used: Donkey anti-Rat Cy3 (1:500; 712-165-150, Jackson ImmunoResearch), Donkey anti-Goat 647 (1:500; 705-605-147, Jackson ImmunoResearch), Donkey anti-Chicken 647 (1:500; 703-605-155, Jackson ImmunoResearch). After three PBST washes, tissue was cleared in ScaleS 42 (a sorbitol-based optical clearing method) solution overnight at room temperature. Samples were mounted on slides using ScaleS as the mounting medium and imaged on a confocal microscope. Single-cell RNA sequencing data analysis Single-cell RNA sequencing data from vagal/glossopharyngeal sensory neurons 24 (GEO: GSE145216) were reanalysed in R (3.5.1) using Seurat (v.3.0.3.9019). The Phox2b + / Prdm12 − subset of the neurons was analysed to exclude neurons originating from the jugular ganglion and focus on sensory neurons originating from the nodose ganglion. The standard Seurat data analysis workflow was applied for quality control (cells with >25% mitochondrial reads excluded), normalization and clustering (1,000 variable genes, PCs = 21). RNAscope-based in situ hybridization In situ hybridization was performed with RNAscope Multiplex Fluorescent Assay (catalogue (cat.) no. 320850, Advanced Cell Diagnostics). Fixed frozen brains from Tac1 - cre mice were prepared following the manufacturer’s instructions.
In brief, 10 µm cryosectioned vagal ganglion slices were mounted on Superfrost Plus slides (Fisher Scientific, 22–037–246). The tissue sections were pre-treated with Target Retrieval solution (cat. no. 322001) and Protease III (cat. no. 322337). In the vagal ganglion, gene expression was visualized with Tac1 (cat. no. 410351) and Cre (cat. no. 474001) probes. Following target probe hybridization, the sections were treated with Hybridize Amp 1–4 and stained with DAPI. The sections were imaged with confocal microscopy and probe labelling was manually quantified. Plasma VIP concentration measurements After water deprivation, HPV blood or systemic blood from the retro-orbital sinus was collected into an EDTA-coated tube from wild-type mice with (or without; control) 10 or 20 min of water repletion. Plasma was then separated by centrifugation at 1,500 g for 20 min. Plasma VIP concentration was measured using an ELISA kit (EIAM-VIP-1, RayBiotech). Plasma osmolality measurements After water deprivation, sham or SVx mice were anaesthetized with pentobarbital. Trunk blood was collected 20 min after intestinal water infusion from the proximal duodenum (0.5 ml min −1 for 2 min). Plasma was then separated by centrifugation at 1,500 g for 20 min. Plasma osmolality was measured using a vapor pressure osmometer (Vapro 5520). Quantitative PCR RNA was obtained from the liver, HPA, duodenum, and colon with TRIzol (Invitrogen). Equal amounts of RNA were reverse transcribed to cDNA using SuperScript IV VILO Master Mix with ezDNase Enzyme (Invitrogen). The samples were run on a StepOne Real-Time PCR System (Applied Biosystems). Expression of target genes was determined using the 2^−ΔΔCt method. Hypoxanthine guanine phosphoribosyl transferase ( Hprt ) was amplified as an internal control for each sample. The following TaqMan Gene Expression Assay probes were used: Vipr1 (Mm00449214_m1), Vipr2 (Mm01238618_g1) and Hprt (Mm00446968_m1).
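The 2^−ΔΔCt quantification can be illustrated with made-up Ct values (not the paper's data); Hprt is the internal control and, for this sketch, liver is taken as the reference tissue:

```python
def relative_expression(ct_target, ct_hprt, ct_target_ref, ct_hprt_ref):
    """Livak 2^-ddCt fold change of a target gene versus a reference sample."""
    d_ct = ct_target - ct_hprt               # normalize to Hprt within sample
    d_ct_ref = ct_target_ref - ct_hprt_ref   # same normalization for reference
    dd_ct = d_ct - d_ct_ref
    return 2 ** (-dd_ct)                     # fold change relative to reference

# Hypothetical Ct values: Vipr2 amplifies ~3 cycles earlier in HPA than liver
fold = relative_expression(ct_target=26.0, ct_hprt=20.0,          # HPA sample
                           ct_target_ref=29.0, ct_hprt_ref=20.0)  # liver sample
print(fold)  # 8.0 -> ~8-fold higher Vipr2 expression in the HPA
```

Each cycle of earlier amplification corresponds to a doubling of template, which is why a 3-cycle difference yields an 8-fold change.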
Statistics and data collection Data were processed and analysed using Prism 8.0.2 & 9.0.2. No statistical methods were used to predetermine sample sizes. The sample sizes and statistically significant effects are reported in each figure or figure legend. The significance threshold was held at * P < 0.05, ** P < 0.01, *** P < 0.001. For data analysis, sample sizes are indicated in the main text and figure legends. No randomization was used for gastrointestinal infusion of different solutions. No blinding was used for data collection. However, more than two laboratory members blinded to group allocation successfully replicated data analyses. Sample sizes are similar to those of previous papers in the field 6 , 44 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Additional data that support the findings of this study are available from the corresponding author upon reasonable request. Source data are provided with this paper. Code availability The MATLAB code used to perform the imaging analysis is modified from the CaImAn code at , and is available at .
Oka, professor of biology, Chen Scholar, and Heritage Medical Research Institute Investigator; and his lab collaborated on the research with the lab of David Anderson, Seymour Benzer Professor of Biology and Howard Hughes Medical Institute Investigator. Anderson is the Tianqiao and Chrissy Chen Institute for Neuroscience Leadership Chair and director of the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech, of which Oka is also an affiliated faculty member. A paper describing the study appears in the journal Nature on January 26. Blood has high osmolality when our bodies are dehydrated, which triggers the feeling of thirst. But because of the time delay between when we feel quenched and when the body is fully rehydrated, the gut must sense osmolality changes before they happen in the bloodstream, and it must send this information to the brain. Once we ingest food and water, nutrients are absorbed from the intestine to the liver through the specialized blood vessel called the portal vein. During this absorption event, osmolality signals are also detected by sensory neurons in the gut. In the research, led by postdoctoral scholar Takako Ichiki and graduate student Tongtong Wang, the team examined how the gut communicates this information to the brain to indicate thirst or satiation. There are two major sensory pathways from the gut to the brain: the spinal (dorsal root ganglia, or DRG) and vagal pathways. In this study, Ichiki used genetically engineered mice to visualize neural activation patterns in these two pathways. She then systematically monitored DRG and vagal neurons in response to infusions of water, salt, or sugar into the mouse intestine that mimic normal nutrient ingestion. The team found that the vagal neurons, but not spinal neurons, are strongly activated upon osmolality changes in the gut. In fact, distinct subsets of neurons were active in response to the different liquids. 
The next question was: What part of the gut sends the osmolality information to the brain? The team examined the hepatic portal area (HPA), a major blood vessel running through the gut responsible for absorbing the vast majority of nutrients from the intestine and carrying them to the liver. They found that vagal nerves innervating the HPA indeed carry osmolality signals. Severing a specific vagal nerve branch to the HPA eliminated the ability of vagal neurons to respond to changes in osmolality. The team further investigated whether vagal nerves directly or indirectly sense osmolality changes in the gut. They found that, in response to osmolality changes in the intestine, one particular peptide, the vasoactive intestinal peptide, or VIP, is secreted into the portal vein, which in turn activates vagus nerves in the HPA area. This explains how the gut translates physical osmolality changes into hormonal signals that encode osmolality changes. "We have discovered the beginning of a pathway, the HPA-to-brain axis," says Oka, who is also a New York Stem Cell Foundation Investigator. "The details of all of the connections and molecular mechanisms are still to be determined." Additional future research will examine the connections between the vagal neurons in the body and the regions of the brain known to control thirst. In previous work, Oka lab researchers identified so-called thirst neurons within the brain's subfornical organ (SFO) region. When animals are thirsty, these neurons are highly active; drinking water rapidly calms them down. But the SFO thirst neurons are not directly connected to any gut neurons, so the team aims to figure out how changes in osmolality are communicated to the SFO thirst neurons. "There is still so much that we don't know about how the nervous system controls basic functions, like thirst and satiety," says Karen David, Ph.D., program director at the National Institute of Neurological Disorders and Stroke. 
"This study shows how approaches supported by the BRAIN Initiative are being used to uncover how brain circuits handle this important sensory information." The paper is titled "Sensory representation and detection mechanisms of gut osmolality change." | 10.1038/s41586-021-04359-5 |
Biology | Models show how global climate change will affect marine crustaceans in the future | Marianna V. P. Simões et al, Environmental matching reveals non-uniform range-shift patterns in benthic marine Crustacea, Climatic Change (2021). DOI: 10.1007/s10584-021-03240-8 Journal information: Climatic Change | http://dx.doi.org/10.1007/s10584-021-03240-8 | https://phys.org/news/2021-11-global-climate-affect-marine-crustaceans.html | Abstract Empirical and theoretical studies suggest that marine species respond to ocean warming by shifting ranges poleward and/or into deeper depths. However, future distributional patterns of deep-sea organisms, which comprise the largest ecosystem of Earth, remain poorly known. We explore potential horizontal range shifts of benthic shallow-water and deep-sea Crustacea due to climatic changes within the remainder of the century, and discuss the results in light of species-specific traits related to invasiveness. Using a maximum entropy approach, we estimated the direction and magnitude of distributional shifts for 94 species belonging to 12 orders of benthic marine crustaceans, projected to the years 2050 and 2100. Distance, direction, and species richness shifts between climate zones were estimated conservatively, by considering only areas suitable, non-extrapolative, and adjacent to the currently known distributions. Our hypothesis is that species will present poleward range-shifts, based on results of previous studies. Results reveal idiosyncratic and species-specific responses, with prevailing poleward shifts and a decline of species richness at mid-latitudes, while more frequent shifts between temperate to polar regions were recovered. Shallow-water species are expected to shift longer distances than deep-sea species. Net gain of suitability is slightly higher than the net loss for shallow-water species, while for deep-sea species, the net loss is higher than the gain in all scenarios. 
Our estimates can be viewed as a set of hypotheses for future analytical and empirical studies, and will be useful in planning and executing strategic interventions and developing conservation strategies. 1 Introduction Empirical and theoretical studies suggest that marine species respond to ocean warming by shifting ranges poleward and/or into deeper depths (Poloczanska et al. 2013 ; Saeedi et al. 2017 ; Pinsky et al. 2019 ; Lenoir et al. 2020 ; Chaudhary et al. 2021 ). Moreover, in association with human activities (e.g., aquaculture and maritime trade), changes in ocean temperatures have aided expansion and settlement of species into places that would otherwise be inaccessible. This leads to geographic shifts and unprecedented rates of non-native species settlement, representing a significant threat to ecosystem function (Molnar et al. 2008 ; Ojaveer et al. 2015 ; Chan et al. 2019 ). Deep-sea ecosystems, which have long been thought to be extremely stable in terms of physico-chemical conditions, have shown growing evidence that they also may experience abrupt changes and climate-driven temperature shifts (Danovaro et al. 2001 , 2004 ), putting the most extensive ecosystem on Earth and its largest reservoir of biomass in danger (Sweetman et al. 2017 ; Folkersen et al. 2018 ). However, studies on the potential effects of climatic changes and species range shifts on this marine ecosystem remain scarce, and the effects on ecosystem functioning and potential biodiversity gain or loss remain almost completely unknown (Danovaro et al. 2001 ; Hoegh-Guldberg and Poloczanska 2017 ). Crustaceans are one of the most dominant taxa of both shallow-water and deep-sea communities (Saeedi et al. 2019a , b ; Saeedi and Brandt 2020 ).
They are also among the most successful groups of aquatic alien invaders in the ocean, with life history, behavioral and physiological traits that favor a high rate of establishment and success as invasive species (Karatayev et al. 2009 ; Hänfling et al. 2011 ). Their impacts are often substantial due to the complex trophic role of many of these species, leading to cascading effects throughout invaded ecosystems, resulting in fast spread and costly management and/or eradication measures (Thresher and Kuris 2004 ). Therefore, the early detection of climate-driven range shifts and prompt implementation of appropriate management strategies are crucial (Fogarty et al. 2017 ). The application of environmental matching approaches, of which a key concept is the linkage of species distributions and environmental conditions, can assist in the exploration and forecasting of species range shifts in response to changing environments (Jiménez-Valverde et al. 2011 ; Hänfling et al. 2011 ). Insights gained from such analyses can help elucidate current patterns and provide a first approximation of distributional patterns of marine biodiversity due to anthropogenic climate change (Cheung et al. 2016 ; Robinson et al. 2017 ). Concerning marine crustacean fauna, so far, environmental matching has been seldom applied within the context of climate change and its potential impacts on future distribution patterns (Poloczanska et al. 2013 ; Melo-Merino et al. 2020 ; Lenoir et al. 2020 ). Limited by the high cost and logistical and technological limitations in sampling, the marine realm does not allow easy definition of the full spatial distribution of many marine species, and there are large taxonomic biases in the study of species/diversity spatial patterns (Reygondeau 2019 ). 
The few macroecological studies on marine crustaceans have focused on distributional patterns of economically relevant, well-studied species, and have been restricted to well-sampled geographical locations (i.e., the North Sea and other coasts of the Atlantic continental shelf; Neumann et al. 2013; Melo-Merino et al. 2020; Lenoir et al. 2020). Fortunately, in recent years, distributional, demographic, genetic and climatic data have become increasingly available digitally through global initiatives, such as the Ocean Biodiversity Information System (OBIS, ). As a result, unprecedented possibilities to search for general patterns and mechanisms in the global distribution of marine biodiversity have been created (Wüest et al. 2019). To fill the knowledge gap regarding future climatic change effects on the marine crustacean fauna, and to provide an initial assessment of climate-related horizontal distributional shifts for crustacean species, in this study we use a correlative maximum entropy approach. We estimate the direction and magnitude of potential climate-related range shifts of 94 shallow-water and deep-sea crustacean species representing 12 orders, to both 2050 and 2100, for two representative concentration pathway emission scenarios (RCP 2.6 and 8.5). In addition, we discuss our results in light of species-specific traits that could characterize invasiveness. We hypothesize that both shallow-water and deep-sea crustacean species will follow trends seen in other marine organisms by shifting their distribution ranges poleward. Our estimations can be viewed as a set of hypotheses for future analytical and empirical studies, and will be instrumental in planning and executing strategic interventions and developing conservation strategies. 2 Materials and methods 2.1 Species selection Species were selected to represent the diversity within Crustacea, and based on availability of biological information and occurrence data representing distributions.
In total, occurrence records for 94 globally distributed benthic crustacean species were assembled, 78 shallow-water and 16 deep-sea species. Benthic species were targeted in order to avoid problems related to the characterization of the three-dimensional structure of marine space, as benthic species are less affected by the disparity between environmental predictor values and the occurrence points recorded (Bentlage et al. 2013; Duffy and Chown 2017). To confirm their functional group as benthic species, we crosschecked species taxonomic classifications with information available at the World Register of Marine Species (WoRMS; ). To classify the species according to depth, we used the maximum depth associated with records in the examined databases. Species were categorized into two groups: shallow-water (0–500 m) and deep-sea (> 500 m), as 500 m is the depth at which seasonal variation in physical parameters (e.g., temperature) as well as the influence of sunlight becomes minimal, following the classification by the World Register of Deep-Sea Species (WoRDSS; ; Glover 2019) (Fig. 1). Fig. 1 Schematic figure representing the steps to create environmental models for shallow-water and deep-sea crustacean species. The main steps included: data capturing; ecological niche modeling; variability and uncertainty assessment. The R script used to perform all steps is publicly available at: . 2.2 Distribution records Occurrence records were obtained from the Ocean Biodiversity Information System (OBIS; ) and the Global Biodiversity Information Facility (GBIF; ), and complemented with occurrence data obtained from our previous deep-sea expeditions (Malyutina and Brandt 2013; Brandt and Malyutina 2015; Malyutina et al. 2018; Fig. 1). Records from before the year 2000 were excluded to avoid uncertainties related to geocoding errors. All species names were matched against WoRMS and synonyms reconciled.
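The depth-based species grouping described above (shallow-water ≤ 500 m, deep-sea > 500 m, based on the maximum recorded depth per species) amounts to a simple rule. The original processing was done in R; the sketch below is an illustrative Python translation, and all names are ours:

```python
def depth_group(max_depth_m: float) -> str:
    """Classify a species by the maximum depth of its records.

    Shallow-water: 0-500 m; deep-sea: > 500 m (WoRDSS convention
    cited in the text).
    """
    if max_depth_m < 0:
        raise ValueError("depth must be non-negative (metres below surface)")
    return "shallow-water" if max_depth_m <= 500 else "deep-sea"


# Hypothetical example: group species by the deepest record in their
# occurrence set, mirroring the use of maximum depth per species.
records = {"species_A": [12.0, 85.5, 430.0], "species_B": [510.0, 2300.0]}
groups = {sp: depth_group(max(depths)) for sp, depths in records.items()}
```

The boundary value of 500 m itself is assigned to the shallow-water group here, matching the stated ranges (0–500 m vs. > 500 m).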
Distribution records were visually inspected for suitability, and dubious records were corrected (e.g., reversed latitude and longitude fields) or removed, following the protocol in Cobos et al. (2018) (Fig. 1a). In total, we obtained 51,852 occurrence records, which were rarefied through spatial thinning using a 10-km distance to avoid problems derived from spatial autocorrelation, resulting in a final count of 12,885 records used to calibrate and create final models (for the number of occurrences used to calibrate each species model see Supporting Information Appendix S1, Table S1.1). To reduce sampling bias, the thinning distance was chosen considering the spatial resolution of the variables (~ 9.2 km at the equator), and the effect on geographic clustering and the effective number of remaining points after exploring shorter and longer distance alternatives (i.e., 5 km, 20 km, 50 km and 100 km). After thinning, we selected species with unbiased distributions, i.e. with a good representation of their known distribution, and with a minimum of 30 records, which is above the minimum number necessary to obtain models with good predictive power in ecological niche modeling implementations (Papeş and Gaubert 2007; Wisz et al. 2008; Shcheglovitova and Anderson 2013; Proosdij et al. 2016; Galante et al. 2018; Fig. 1). Then, the complete set of occurrences of each species was randomly split into two subsets: a training set (75%), used to create candidate models, and a testing set (25%), used to evaluate model accuracy. We used a random partition of the data as we did not have records that could be considered independent for testing. Partitioning data with spatial blocks has been suggested to potentially benefit model calibration when models need to be transferred (Muscarella et al. 2014).
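The spatial thinning above was done with the R package 'spThin'. As an illustrative sketch only, a greedy distance-based thinning in Python might look like the following; note that spThin's actual algorithm uses randomized record removal to maximize the number of retained points, while this greedy version simply keeps the first record of each cluster:

```python
import math


def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))


def thin(points, min_dist_km=10.0):
    """Greedily keep points that are at least min_dist_km from every kept point."""
    kept = []
    for p in points:
        if all(haversine_km(p, k) >= min_dist_km for k in kept):
            kept.append(p)
    return kept


# Two nearly coincident records (~1.3 km apart) collapse to one at the
# 10-km distance used in the study; the distant record survives.
occurrences = [(54.32, 10.12), (54.33, 10.13), (55.00, 11.00)]
thinned = thin(occurrences)
```

Shorter thinning distances (e.g., the 5-km alternative the authors explored) retain more points, which is the trade-off the text describes between geographic clustering and the effective number of remaining records.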
However, using spatial blocks could yield incomplete characterizations of species niches in our case, as records for many of the species in our study were distributed in regions with highly contrasting climatic conditions. After all filters were applied, 94 globally distributed crustacean species were assembled, 78 shallow-water and 16 deep-sea species, with counts of occurrences ranging from 30 to 800 records (see Supporting Information Appendix S1, Table S1.1). All data cleaning and manipulation steps were performed using the statistical software R 3.5.1 (R Core Team 2018) and the packages 'spThin' (Aiello-Lammens et al. 2015), 'raster' (Hijmans et al. 2015) and 'rgdal' (Bivand et al. 2015). 2.3 Environmental data Environmental data were obtained at 5 arcminutes (ca. 0.083°, ~ 9.2 km at the equator) spatial resolution, extracted from Bio-ORACLE ( ; Assis et al. 2018), including maximum-depth benthic layers, and bathymetry (i.e., depth) obtained from the General Bathymetric Chart of the Oceans (GEBCO; ). Limiting environmental factors for marine species include temperature, salinity, ice concentration and bathymetry (Belanger et al. 2012; Goldsmit et al. 2017). Thus, considering biological relevance and data availability, models included mean bottom temperature, mean salinity, mean ice thickness, mean current velocity, and bathymetry. For deep-sea species, ice thickness was not included due to the lack of ice in the deep sea. Further, to avoid overfitting and inflation of model accuracy due to an overly dimensional environmental space and collinearity among variables, we performed Pearson's correlation coefficient (r) analysis to examine the cross-correlation of the variables, wherein variables with pairwise correlations greater than 0.6 were excluded.
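The collinearity screen above (drop any variable whose pairwise Pearson r with an already-retained variable exceeds 0.6) can be sketched as follows. This is an illustrative Python version of the idea, not the authors' R code, and the priority ordering of variables is an assumption on our part:

```python
import math


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


def drop_collinear(layers, threshold=0.6):
    """layers: dict of variable name -> values sampled at the same cells.

    Keep variables in dict (priority) order, dropping any whose |r| with
    an already-kept variable exceeds the threshold.
    """
    kept = {}
    for name, values in layers.items():
        if all(abs(pearson_r(values, kv)) <= threshold for kv in kept.values()):
            kept[name] = values
    return list(kept)


# Hypothetical toy layers: a depth proxy perfectly correlated with
# temperature is dropped; weakly correlated salinity is retained.
layers = {
    "temperature": [1, 2, 3, 4, 5],
    "depth_proxy": [2, 4, 6, 8, 10],
    "salinity": [5, 1, 4, 2, 3],
}
kept = drop_collinear(layers)
```

Which of two correlated variables gets dropped depends on the ordering; in practice that choice is usually made on biological grounds, as the text's emphasis on temperature and salinity suggests.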
Finally, to test the statistical relevance of depth to the environmental models, we created two environmental sets: 'set 1', without depth, and 'set 2', including depth (Fig. 1b). Following Assis et al. (2018), environmental layers for present conditions were produced with climate data describing monthly averages for the period 2000–2014, and future layers were produced for 2040–2050 and 2090–2100 by averaging simulation results of three atmosphere–ocean general circulation models (AOGCMs: CCSM4, The Community Climate System Model 4; HadGEM2-ES, Hadley Centre Global Environmental Model 2; MIROC5, Model for Interdisciplinary Research on Climate 5), in an attempt to reduce the uncertainties among diverse AOGCMs (for more information on climatic layers see Assis et al. 2018). The future layers used to project the environmental models included scenarios for both years 2050 and 2100, and two representative concentration pathway scenarios (RCP): RCP 2.6, presenting a peak-and-decline scenario ending at very low greenhouse gas concentration levels by the end of the twenty-first century; and RCP 8.5, which shows high greenhouse gas concentration levels. For additional information on quality control of the climatic layers see Assis et al. (2018). 2.4 Ecological niche modeling Ecological niche models were generated using the maximum entropy algorithm (Maxent; Phillips et al. 2006). Most crustacean species, due to their planktonic larvae, are able to disperse passively across long distances with ocean currents (Cabezas et al. 2010, 2013; Slusarczyk et al. 2019); hence, calibration areas included a buffer of two decimal degrees from the occurrences of each species (Barve et al.
2011), representing areas that populations of the species could already have had the potential to reach given their dispersal capabilities, while remaining restricted to areas adjacent to regions with reported records (Peterson et al. 2014). Seeking to reduce model overfitting, we first calibrated models by creating 126 candidate models per species, with parameterizations resulting from the combinations of nine regularization multipliers (β: 0.1–1.0 at intervals of 0.2, 2–5 at intervals of 1), seven feature classes representing combinations of linear, quadratic and product responses, and two distinct sets of variables (without depth and including depth; Fig. 1c). Selected model parameterizations were the ones that resulted in significant models (partial ROC with E = 5%, 500 iterations, and 50% of data for bootstrapping), omission rates lower than a previously defined error rate (E = 5%; Anderson et al. 2003), and the lowest AICc value for each species (Akaike information criterion; Warren and Seifert 2011), in that order. The mean AUC ratio of each final model was also estimated, representing the mean of all ratios between partial ROC AUC values from each iteration and the AUC value representing a random model. The AUC ratio varies from 0 to 2: values greater than 1 indicate that model predictions are better than null expectations. These ratios are calculated using the partial ROC analysis (Peterson et al. 2008), and the p-values resulting from these analyses are calculated considering the number of times the iterated AUC ratios fall below 1. Omission rates and significance were measured using the testing data to validate models created with 75% of the data (training set); AICc was measured on models created with all data.
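The candidate-model grid and the ordered selection criteria above (significance first, then omission rate below E = 5%, then lowest AICc) can be sketched as below. This mirrors the workflow of the 'kuenm' R package the authors used, but in illustrative Python; the specific β values are one plausible reading of "0.1–1.0 at intervals of 0.2, 2–5 at intervals of 1", and the result dictionaries are hypothetical:

```python
from itertools import product

# Nine regularization multipliers, seven feature-class combinations
# (l = linear, q = quadratic, p = product), two predictor sets.
BETAS = [0.1, 0.3, 0.5, 0.7, 0.9, 2, 3, 4, 5]
FEATURES = ["l", "q", "p", "lq", "lp", "qp", "lqp"]
VAR_SETS = ["set1_no_depth", "set2_with_depth"]

# 9 x 7 x 2 = 126 candidate parameterizations per species.
candidates = list(product(BETAS, FEATURES, VAR_SETS))


def select_best(results, error_rate=0.05):
    """Apply the selection criteria in order: statistical significance
    (partial ROC), omission rate below E, then lowest AICc.

    results: list of dicts with keys 'significant', 'omission_rate', 'aicc'.
    """
    ok = [r for r in results if r["significant"] and r["omission_rate"] < error_rate]
    return min(ok, key=lambda r: r["aicc"]) if ok else None


# Hypothetical evaluation results for three candidates: model 2 fails
# the omission-rate criterion despite its lower AICc, so model 3 wins.
results = [
    {"id": 1, "significant": True, "omission_rate": 0.02, "aicc": 910.4},
    {"id": 2, "significant": True, "omission_rate": 0.09, "aicc": 899.1},
    {"id": 3, "significant": True, "omission_rate": 0.04, "aicc": 905.0},
]
best = select_best(results)
```

The ordering matters: AICc only breaks ties among models that already meet the significance and omission criteria, which is why a lower-AICc but over-omitting model is rejected.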
The chosen predictors and parameter settings were used to create final models with 10 replicates by bootstrap and logistic outputs, which were then transferred globally under current and future environmental scenarios (2050 and 2100). Final model projections were created allowing extrapolation and clamping in Maxent. Finally, loss and gain of suitable habitats were calculated by comparing the geographic projections of niche models under current and future scenarios. The comparisons were categorized as follows: (1) if current and future areas were not suitable, these areas were defined as stable non-suitability; (2) when current and future areas were suitable, these were defined as stable suitable areas; (3) when the current area was suitable and the future not suitable, loss of suitable areas was identified; and (4) when the current area was not suitable and the future was suitable, gains of suitable areas were identified. After classifying areas of non-suitability, stability, gain and loss for all species, we summed the overlapping areas of gains and of losses separately. To calculate the total area of gain and loss, binary maps were generated by considering only values above zero as areas of gain or loss. Then, the area was calculated in square kilometers using ArcGIS version 10.6. 2.4.1 Model uncertainty We measured model uncertainty based on the risk of strict extrapolation resulting from projections to non-analogous conditions, and the degree of variability in final model projections. Model variation was assessed and represented geographically considering the amount of variance in model predictions that came from replicates and emission scenarios (RCPs), following Cobos et al. (2019a). To assess the risks of strict extrapolation we used the mobility-oriented parity metric (MOP, Fig. 1e; Owens et al.
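The four-way comparison of current and future binary suitability maps described above maps directly onto a cell-by-cell lookup. A minimal Python sketch (grids and labels are ours; the study performed this on raster layers):

```python
def classify_change(current, future):
    """Compare binary suitability maps cell by cell.

    current, future: equal-shaped 0/1 grids (lists of rows).
    Returns a grid of labels matching the four categories in the text:
    (0,0) stable non-suitability, (1,1) stable suitability,
    (1,0) loss, (0,1) gain.
    """
    labels = {(0, 0): "stable_unsuitable", (1, 1): "stable_suitable",
              (1, 0): "loss", (0, 1): "gain"}
    return [[labels[(c, f)] for c, f in zip(crow, frow)]
            for crow, frow in zip(current, future)]


# Toy 2x3 grids: one cell stays suitable, one is lost, one is gained.
current = [[1, 1, 0],
           [0, 1, 0]]
future = [[1, 0, 1],
          [0, 1, 0]]
change = classify_change(current, future)
```

Total gain and loss areas then follow by counting "gain" and "loss" cells and multiplying by the cell area in km², as the text describes doing in ArcGIS.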
2013), which consists of measuring the similarity between the closest 5% of the environmental conditions of the calibration area and each environmental condition in the area of transference (Supporting Information Appendix S2, Fig. S2.1). Areas of projection with zero similarity to the calibration areas indicate higher uncertainty, as suitability in those regions derives from model extrapolation only, and caution is required when interpreting the likelihood of species presence in such areas (Alkishe et al. 2017). Binary maps of MOP results were generated considering only areas with zero similarity as strict extrapolation areas (Fig. 1e). Then, we excluded areas of extrapolation from the binary final models using the binary MOPs, creating final binary models with no extrapolation ('post-MOP projections'; Fig. 1f). All modeling processes and uncertainty and variability analyses were performed using the 'kuenm' package (Cobos et al. 2019b) in R 3.5.1 (R Core Team 2018). The R script used to perform all steps is available at: . 2.5 Patterns of distributional changes 2.5.1 Geographic centroids To assess how suitable areas for the species shifted in latitude owing to climate change, we first calculated the geographic centroid of the reduced suitable areas on post-MOP projections for current and future scenarios (Fig. 1f). For this, we created a concave hull based on species occurrences, with a 200-km distance, and masked suitable areas with the resulting polygons to ensure that only relevant areas were considered in calculations (i.e., avoiding suitable areas in the opposite hemisphere when the species was only found in the northern hemisphere, or areas of strict extrapolation and outside the buffered concave hull; Fig. 1f). 2.5.2 Centroid shift To calculate potential distances of centroid shift, we measured the distance in kilometers from the current to the future geographic centroid of suitable areas (henceforth, suitability centroids; Fig. 1f).
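The core of the MOP metric, comparing each projected environmental condition to the closest 5% of calibration conditions, and the flagging of strict extrapolation can be sketched as below. This is a simplified illustrative Python version (the 'kuenm' implementation operates on rasters and normalized distances); the toy calibration points are hypothetical:

```python
import math


def mop_distance(point, calibration, fraction=0.05):
    """Mean Euclidean distance from one environmental condition to its
    closest `fraction` of calibration conditions (MOP-style dissimilarity;
    larger values mean less similar to the calibration environment)."""
    dists = sorted(math.dist(point, c) for c in calibration)
    k = max(1, int(len(dists) * fraction))
    return sum(dists[:k]) / k


def strict_extrapolation(point, calibration):
    """A projected condition counts as strict extrapolation when it falls
    outside the calibration range of at least one variable, i.e. it has
    zero similarity to any calibration condition."""
    for i, v in enumerate(point):
        lo = min(c[i] for c in calibration)
        hi = max(c[i] for c in calibration)
        if not (lo <= v <= hi):
            return True
    return False


# Toy calibration conditions as (temperature, salinity) pairs.
calibration = [(10.0, 34.0), (12.0, 35.0), (8.0, 33.5), (11.0, 34.5)]
inside = (10.5, 34.2)    # analogous to calibration conditions
outside = (25.0, 34.0)   # temperature beyond the calibration range
```

Cells flagged as strict extrapolation are exactly those the study removed from the binary final models to obtain the 'post-MOP projections'.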
To assess direction (i.e., northern hemisphere, increase of latitude; southern hemisphere, decrease of latitude), we calculated the differences in latitude between current and future centroids, and to assess shifts between climate zones, we located the position of the centroid latitude at different time periods within the ranges of the climate zones (i.e., tropical zone: 0°–23.5°, subtropical zone: 23.5°–40°, temperate zone: 40°–60° and polar region: 60°–90°; Fig. 1f). The constraint of suitable areas and the estimation of suitability centroids and shifts were performed using the 'raster' (Hijmans et al. 2015) and 'ellipsenm' (Cobos and Osorio-Olvera 2019) R packages. A schematic figure of the entire modeling and post-modeling process can be found in Fig. 1. 2.6 Attributes characterizing invasiveness When available, we assembled information on reproduction and feeding mechanisms, which are features that, in combination with range shifts, can increase the potential of a crustacean species to become invasive (Hänfling et al. 2011). Here, we defined invasive species as colonizer species that can establish populations outside their native distributional ranges and that have the potential to spread and negatively affect native ecosystems or local human-mediated systems (Lockwood et al. 2007). These features are considered advantageous as a species maximizes the propagule pressure (i.e., 'introduction effort') and the range of resources (e.g., prey types) available to newly settled individuals (Hänfling et al. 2011). The information assembled was extracted from the literature and the WoRMS database, and supplied by the experts listed in the acknowledgments. Within reproduction, brooder and free-spawning modes (i.e., indirect reproduction) were considered, and within feeding mechanisms seven subcategories were considered: detritivores, herbivores, carnivores, parasites, scavengers, filter feeders, and omnivores.
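The direction assessment and climate-zone assignment described at the start of this section reduce to two small rules on centroid latitudes. An illustrative Python sketch (the zone boundaries are those given in the text; the original analysis used the 'raster' and 'ellipsenm' R packages):

```python
def climate_zone(lat):
    """Map a centroid latitude to the climate zones used in the text:
    tropical 0-23.5, subtropical 23.5-40, temperate 40-60, polar 60-90
    (absolute latitude, either hemisphere)."""
    a = abs(lat)
    if a <= 23.5:
        return "tropical"
    if a <= 40:
        return "subtropical"
    if a <= 60:
        return "temperate"
    return "polar"


def poleward_shift(current_lat, future_lat):
    """True when the suitability centroid moves toward the pole of its
    hemisphere: increasing latitude in the north, decreasing in the south."""
    if current_lat >= 0:
        return future_lat > current_lat
    return future_lat < current_lat


# A hypothetical northern-hemisphere centroid moving from 38 degrees N to
# 43 degrees N is a poleward shift crossing from subtropical to temperate.
example = (climate_zone(38.0), climate_zone(43.0), poleward_shift(38.0, 43.0))
```

The same rules reproduce the paper's convention that a southern-hemisphere centroid moving to more negative latitudes also counts as poleward.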
3 Results 3.1 Species selection and distributional records In total we assembled occurrence data for 78 shallow-water and 16 deep-sea crustacean species, covering ca. 0.3% of the subphylum's diversity (Giribet and Edgecombe 2012). The final dataset contained widely distributed species, but with higher species richness between latitudes 40° and 70°, and between longitudes 20° to −20° and −45° to −80° (Supporting Information Appendix S2, Figure S2.2). 3.2 Model statistics In total, 11,844 candidate models were tested; 3723 were statistically significant models meeting the omission rate criteria, 4194 were statistically significant models meeting the AIC criteria, and 328 were statistically significant while meeting both the 5% omission rate threshold and the AIC criteria (mean AUC ratio of 1.29; Supporting Information Appendix S1, Table S1.2). Mean AUC ratios were higher than 1.24 for most taxa, indicating performance better than random (Supporting Information Appendix S1, Table S1.3). For each species, the selected parameters were the ones that created models that performed significantly better than random expectations (p-value < 0.01) and met the 5% omission rate threshold (i.e., false negatives, leaving out known distributional area). For the parameters chosen for each species see Supporting Information Appendix S1, Table S1.3. 3.3 Model parameter settings Between the two environmental sets tested, the non-inclusion of depth (set 1) resulted in better models for ca. 29% of the species, represented by 19 shallow-water and 4 deep-sea species; while the inclusion of depth (set 2) was selected to create the final models of ca. 71% of the species, representing 59 shallow-water and 12 deep-sea species.
For models without depth (set 1), temperature and salinity were the most relevant variables for both the shallow-water and deep-sea groups (Supporting Information Appendix S1, Table S1.3; Appendix S2, Figure S2.3), with salinity only slightly more relevant than temperature for shallow-water species (median contribution: shallow-water: 24%; deep-sea: 30.6%). In models constructed with the inclusion of depth (set 2), depth was recovered as the most relevant variable for both the shallow-water and deep-sea groups, followed by temperature and salinity (Supporting Information Appendix S1, Table S1.3; Appendix S2, Figure S2.3). For 25% of the species, the statistically best final models were those with β = 0.2 (shallow-water: 20; deep-sea: 2), with β = 5 for only 5% of the species. The majority of best models contained linear-quadratic features (25%), then linear-quadratic-product (21%), closely followed by linear-product (17%), quadratic (11%), and linear (11%). 3.4 Patterns of climatic suitability Under current scenarios, the sum of binary maps of post-MOP projections across all shallow-water and deep-sea species indicated that the highest climatic suitability was concentrated between latitudes 70°N–45°N (Fig. 2a, c). General patterns of climatic suitability change for shallow-water species for 2050 showed a consistently high suitability north of 45°N that dipped within the equatorial region. For deep-sea species, there was also high suitability north of 45°N, followed by a dip and a uniform increase until −60°S, followed again by a dip (Fig. 2d; Supporting Information Appendix S1, Table S1.4, Appendix S2, Figure S2.4). Gains of suitable areas, resulting from differences between current scenarios and future projections, were mostly concentrated above 60°N (Supporting Information Appendix S1, Table S1.5, Appendix S2, Figure S2.5).
Losses of suitable areas for shallow-water species were higher above 60°N and steadily decreased until a minimum between −20°S and −30°S, while losses were mostly concentrated above 60°N for deep-sea species (Fig. 2; Supporting Information Appendix S1, Table S1.5, Appendix S2, Figure S2.6). An intensification of these patterns was observed for projections to 2100 for RCP 2.6 and RCP 8.5 (see vertical density graphs in Fig. 2; Supporting Information Appendix S2, Figure S2.4). The net gain of suitability was slightly higher than the net loss for shallow-water species, except for the projection 2100 RCP 8.5, while for deep-sea species the net loss was higher than the gain in all scenarios (for projected gains and losses for each species see Supporting Information Appendix S1, Table S1.5). Fig. 2 Environmental suitability under the current climate scenario, and predicted to 2050 under the RCP 2.6 and RCP 8.5 emission scenarios, for shallow-water and deep-sea crustacean fauna. a–b Show trends for shallow-water species, c–d for deep-sea species. Vertical graphs on the right side summarize the density of suitability trends observed under current and 2050 RCP 2.6, RCP 8.5 and 2100 RCP 2.6 and RCP 8.5 emission scenarios. 3.5 Distance and direction of potential range shifts Shallow-water species were expected to shift an average of 431 km between suitability centroids by 2050 RCP 2.6 (RCP 8.5, ca. 620 km), and ca. 435 km by 2100 RCP 2.6 (RCP 8.5, ca. 1300 km) (Fig. 3a). Deep-sea species were expected to shift ca. 90 km between suitability centroids by 2050 RCP 2.6 (RCP 8.5, ca. 110 km) and ca. 130 km by 2100 RCP 2.6 (RCP 8.5, ca. 180 km) (Fig. 3a). Among the species studied, eight shallow-water and three deep-sea species were expected to have a shift between suitability centroids at least two times higher than the average by 2050 RCP 2.6 (Fig.
3a), and six shallow-water and two deep-sea species were expected to shift between suitability centroids at least two times higher than the average by 2100 RCP 2.6 (Fig. 3a; for the distance of shifts projected for each species see Supporting Information Appendix S1, Table S1.6). Fig. 3 a Circle bar graph indicating distance and latitude increase or decrease of distributional centroid positions of 78 shallow-water and 16 deep-sea crustacean species projected for the year 2050 and two representative concentration pathway emission scenarios (RCPs), 2.6 and 8.5. Blue and grey colors represent the increase and decrease of latitudinal shift recovered for each species for 2050 RCP 2.6 and 8.5. Yellow stars indicate species (12 spp.) expected to present a distance shift above the average in 2050. Unit: kilometers. b–e Maps show suitability centroid shifts between climatic zones, projected for shallow-water species for b 2050 RCP 2.6 and RCP 8.5, c 2100 RCP 2.6 and RCP 8.5, and for deep-sea species d 2050 RCP 2.6 and RCP 8.5, e 2100 RCP 2.6 and RCP 8.5. Concerning the direction of potential distribution shifts, 82% of shallow-water species with a current distributional centroid in the northern hemisphere were projected to experience a northward shift in their environmental suitability centroids (i.e., an increase of centroid latitude) by 2050 RCP 2.6 (86% by RCP 8.5) and 2100 RCP 2.6 (90% by RCP 8.5). At the same time, all shallow-water species with distributional centroids in the southern hemisphere were projected to experience a southward shift in their environmental suitability centroids (i.e., a decrease of latitudinal centroid) in all scenarios. On the other hand, for the deep-sea species, 80% of species with a current distributional centroid in the northern hemisphere, and all species with a current distributional centroid in the southern hemisphere, showed a potential southward distributional shift of environmental suitability centroids in all scenarios (Fig.
3b–e; for plots showing the direction and magnitude of shifts projected for 2100 see Supporting Information Appendix S1, Table S1.6 and Appendix S2, Figure S2.7). Regarding shifts between climate zones (Fig. 3b–e), shallow-water species showed a projected net increase of species richness in polar regions and a decline of species in the temperate zones, with stability in the remaining zones. Deep-sea species showed no changes of species richness in climate zones through time (see Supporting Information Appendix S2, Figure S2.7). 3.6 Model projections and uncertainty Differences recovered between RCPs presented higher variation in the results of shallow-water than of deep-sea projections, with an average difference between RCPs of ca. 0.7% (SD = ± 13%) for 2050 and ca. 20% (SD = ± 58%) for 2100 for shallow-water species, and ca. 0.3% (SD = ± 1.9%) for 2050 and ca. 0% (SD = ± 4.4%) for 2100 for the deep sea. Mobility-oriented parity (MOP) results indicated that strict extrapolative areas for shallow-water models under the current scenario were concentrated above 70°N. In 2050 projections, extrapolation areas abruptly reduced above 70°N and increased toward the mid-Atlantic Ocean, Indian Ocean, and Oceania, intensifying in these regions by 2100 (Supporting Information Appendix S2, Figures S2.8a–e). For the deep-sea models, strict extrapolative areas were permanently recovered above 70°N (Supporting Information Appendix S2, Figure S2.8f–j). For both shallow-water and deep-sea species models, variability was higher above ca. 60°N, and when derived from replicates, in comparison with variability derived from different RCP scenarios (Supporting Information Appendix S2, Figure S2.9). 3.7 Attributes related to invasiveness Most decapods and amphipods had information available on feeding habits and reproduction (Supporting Information Appendix S1, Table S1.7).
We observed that the majority of species in our dataset had free-spawning reproduction (27 shallow-water species; eight deep-sea species), and were carnivorous (13 shallow-water species; eight deep-sea species), parasites (six shallow-water species, one deep-sea species), and/or omnivorous (five shallow-water species). Among the species with the highest projected shifts (i.e., two times above the average), for species with free-spawning reproduction there were four shallow-water decapods (Paralithodes camtschaticus (Tilesius, 1815), Dardanus gemmatus (H. Milne-Edwards, 1848), Strobopagurus gracilipes (H. Milne-Edwards, 1891), and Thor paschalis (Heller, 1862); see Supporting Information Appendix S1, Table S1.7). Species classified as omnivorous had projected shifts lower than the average estimated for shallow-water species (see Supporting Information Appendix S1, Table S1.7); among them were three amphipods (Byblis longicornis Sars, 1891, Rhachotropis aculeata (Lepechin, 1780) and Ischyrocerus latipes Krøyer, 1842), one species from the order Cumacea (Diastylis alaskensis Calman, 1912) and one species from Podocopida (Acetabulastoma arcticum Schornikov, 1970). 4 Discussion The use of environmental matching approaches, such as ecological niche modeling (ENM), has proven in recent years to be a powerful tool for conservation and has greatly increased our understanding of potential changes and impacts due to anthropogenic climate change (Evans 2012; Yates et al. 2018). However, the models used to estimate global change impacts on marine organisms have been considered limited, mostly focused on local studies, and varying substantially in complexity and underlying hypotheses (Planque et al. 2011). Additionally, ENMs of marine organisms have been constructed mostly for shallow-water species, while deep-sea species remain largely neglected and scarcely represented in the literature (Vierod et al. 2014).
As a consequence, based on patterns mostly observed in the study of shallow-water species, major losses of diversity and biomass in the tropics accompanied by gains of suitability in poleward regions are expected, while future distributional patterns of deep-sea species remain poorly known (Oliveira and Scheffers 2018). Despite representing only ca. 0.3% of the total diversity found within Crustacea (Giribet and Edgecombe 2012), our study is, to our knowledge, the largest effort to investigate the differential dynamics between shallow-water and deep-sea species on a global scale in the face of future climatic changes. We recovered idiosyncratic and species-specific responses, with the extent and direction of suitability shifts depending upon the niche characteristics of each species (VanDerWal et al. 2012), and areas lost and gained varying between taxa (Supporting Information Appendix S1, Table S1.5). Nevertheless, although the predicted responses were species-specific, the dominant signal recovered showed poleward range shifts, with declines of species richness at midlatitudes, shifting from the subtropics to the temperate zone and from the temperate zone to polar regions. These results are strongly supported by trends reported in previous studies, indicating poleward shifts from warmer waters into cooler waters (e.g., Poloczanska et al. 2013; Neumann et al. 2013; Lenoir et al. 2020). Further, a large predominance of north-westerly range shifts was also observed, as has been similarly recovered from long-term observations of benthic invertebrates in the North Sea (Hiddink et al. 2015). Such patterns have been speculated to be related to the ability of species to expand their distributional leading edge (at the cold, polar boundary) and disperse and settle into areas that were previously too cold to inhabit (Sunday et al. 2012; Hiddink et al. 2015). However, movements out of the tropics were not observed.
For shallow-water species in particular, long-distance range shifts were estimated (e.g., T. paschalis), but no substantial shifts reaching cooler climate zones were predicted. This could be linked to the higher thermal limit and narrow thermal breadth of tropical species, resulting in a lower acclimation response to temperature changes and making them more vulnerable to warming than their temperate counterparts (Vinagre et al. 2019). In our results, we also noticed a difference in the magnitude of distance shifts between shallow-water and deep-sea benthic crustaceans. While relatively larger potential shift patterns were recovered for shallow-water species (up to 431 km by 2050 RCP 2.6, and ca. 435 km by 2100 RCP 2.6), shorter distances were forecast for deep-sea species (ca. 90 km by 2050 RCP 2.6 and ca. 130 km by 2100 RCP 2.6). The values estimated for the shallow-water species were not so different from the average distances described in previous studies for marine species (i.e. 72 km/decade; Poloczanska et al. 2013). However, for deep-sea benthic invertebrates, such estimations remain poorly explored. The contrast in magnitude between groups is linked to differences in thermal habitats, due to the strong links of marine species to temperature gradients (Sunday et al. 2012; Yasuhara and Danovaro 2016; Chaudhary et al. 2021). Temperatures in the deep sea are inherently more stable across the range of latitudes and through time, and therefore less susceptible to superficial climatic changes; while in shallow waters, temperatures are warmer closer to the equator and decrease gradually toward the poles (Assis et al. 2018).
Extrapolation analysis (MOP), used to calculate the environmental similarity of a given pixel in a transfer time/region to those within the calibration region (Supporting Information Appendix S2, Figure S2.3), showed that extrapolation risks will decrease in the northernmost areas for shallow-water species, while for the deep-sea species these areas seemed to be permanently concentrated, and increasing, in the north above 70°N (Supporting Information Appendix S2, Figure S2.6). This reinforces that shallow waters at higher latitudes are increasing in temperature through time, while the deep sea shows higher stability and is therefore less prone to movement of species toward polar regions. The differences in the magnitude of distance shifts might also be related to the imbalance between the net gain and loss of climatically suitable areas observed in our results (i.e. net gain higher than loss for shallow-water species, and losses expected to be higher for deep-sea species; see Supporting Information Appendix S1, Table S1.5); while the availability of suitable areas could allow for longer range shifts, a lack of suitable areas could lead to shorter shifts. Previous studies corroborate the responses recovered here, with longer shifts recovered for shallow-water species (e.g., Poloczanska et al. 2016; Sunday et al. 2012; Morley et al. 2018), but relatively small to non-significant contractions of future horizontal environmental suitability for deep-sea benthic species (Guinotte and Davies 2014; Basher and Costello 2016; Costello et al. 2017). However, sampling effort in the ocean is not evenly distributed (Webb et al. 2010; Boltovskoy and Correa 2017; Menegotto and Rangel 2018; Saeedi et al. 2019a), leading to a large bias in under-explored areas, undescribed species and species known only from single records that are not amenable to biogeographic analyses.
Hence, such patterns might be influenced by the relative rarity of studies on marine species and/or their limited geographic scale, hindering the visualization of global patterns. For instance, our dataset presents a considerable latitudinal and bathymetric bias, as it includes mostly species with ranges in the Northern Hemisphere (see Supporting Information, Appendix S2 , Figure S2.2 ), and much less than half of the taxa examined are deep-sea species ( N = 16). Despite these biases, our results offer a robust hypothesis regarding future distribution patterns, and we speculate that equal sampling of Southern Hemisphere species would result in similar pattern estimations, based on results from MOP analysis, and on the patterns recovered for other benthic invertebrates (e.g., Hiddink et al. 2015 ). The selection of key environmental variables in which to explore the niche of species is a crucial step in model design (Melo-Merino et al. 2020 ). The inclusion of too many environmental dimensions can cause model overfitting (Anderson et al. 2003 ); thus, as good practice, various biological or statistical criteria are used to select the best set of environmental predictors. In models without depth ( set 1 ), temperature and salinity were the most relevant variables, with one only slightly more relevant than the other for shallow-water species. These results are not surprising given that temperature is a key environmental driver of performance (Cheung et al. 2010 ; Poloczanska et al., 2013 ; Saeedi et al., 2017 ), and that for animals in which osmoregulation is the main physiological mechanism maintaining hydromineral homeostasis, such as crustaceans, salinity plays a fundamental limiting role (Thabet et al. 2017 ). Moreover, in models constructed with the inclusion of depth (63 shallow-water species; 12 deep-sea species; see Appendix S1 Table S1.3 ), depth was recovered as most relevant for both groups, followed by temperature and salinity. 
This aligns well with the findings of other studies (e.g., Pearman et al., 2020 ; Gonzalez-Mirelis et al. 2021 ), which have indicated that depth along with temperature and salinity are relevant for defining benthic species distributions. It is relevant to note that, previous studies have highlighted the relevance of oxygenation, pH and food supply (or POC flux) in the maintenance of the ecosystem functioning (Sweetman et al. 2017 ). However, as marine organisms are directly influenced by their environment through temperature, oxygen, and food availability, other factors, such as carbonate chemistry, are also important but have been less often implicated in shifting species distributions (Pinsky et al. 2019 ). With the rearrangement of marine biota, invasion risks are also predicted to increase in the future (Cheung et al. 2016 ). Of all alien invasive species, crustaceans are often a significant group, as they are great colonizers, establishing outside their native distributional ranges with the potential to spread and affect both native ecosystems and local human-mediated systems (Engelkes and Mills 2011; Hänfling et al. 2011 ). Hence, uniting functional traits related to invasiveness to distribution model results could provide valuable insights and linkages between species distributions and their underlying biological mechanisms. Here, we focused on collecting features related to reproduction and feeding habits that could facilitate the process of colonization and settlement. Reproduction in particular is a critical step in the invasion process, while the successful establishment of invasive species may hinge on their ability to reproduce across wide-ranging environmental conditions. Among the species modeled in our study, amphipods and decapods — the most prominent examples of invasive crustacean orders (Hänfling et al. 2011 ) — were predominant and were among species with highest projected climate related range shifts. 
Within decapods, free-spawning reproduction with large numbers of eggs per season and planktonic larvae aids propagule pressure, facilitating the rate of geographical spread of non-native populations (Lockwood et al. 2007 ). This could be one of the traits facilitating the wide range of decapod species such as Thor paschalis , distributed within the Indo-West Pacific Ocean, ranging from the Red Sea, Arabian Gulf and Madagascar eastwards to Australia (Anker & De Grave 2016 ). The species’ exact distribution range remains unclear (Al-Kandari et al. 2020 ), and new records are often reported (e.g., Chace, 1997 ; Anker & De Grave 2016 ); in our results, a trend of range expansion is also recovered, making it one of the species with the highest projected range shifts (ca. 3715 km by 2050 RCP 2.6; distances of shift projected for each species are given in Supporting Information Appendix S1 , Table S1.7 ). The orders Cumacea (shrimp) and Podocopida (ostracods) follow in order of magnitude of projected range shifts in our study, and are also characterized by spawning, but eggs are brooded by females until they reach maturity (Watling 2005 ). This reproduction strategy might facilitate invasion success under non-favorable conditions, such as changes in seawater quality and environmental characteristics at spawning grounds, enabling offspring to survive under harsh environmental conditions (Hänfling et al. 2011 ). In particular, ostracods can possess different combinations of sexual reproduction, parthenogenesis, brood care, and egg-laying (Horne et al. 1998 ), favoring rapid spread into new environments. The order also shows flexibility in dispersal mechanisms, being passively dispersed by wind, birds, fish, water insects and drifting algae (Teeter 1973 ), with resistant double-walled eggs suited to withstand desiccation during dispersal (Teeter 1973 ), and individuals are usually small (0.5–2 mm; Xing et al. 2018 ). 
The combination of these features is deemed advantageous for widespread distribution and the colonization of new environments, including in association with the gain of climatic suitability recovered in our results, and could confer high rates of migration and establishment in new environments. For example, Rabilimis septentrionalis (Brady, 1866) shows one of the largest range shifts projected in the order (ca. 590 km by 2050 RCP 2.6; see Supporting Information Appendix S1 , Table S1.7 ), and this species is known to have a large physiological breadth, i.e., high tolerances to seasonal salinity and temperature variations (Carter 1988), which could favor range expansion and settlement into new environments. Regarding feeding habits, species in our study reflected well the diversity found within Crustacea, including omnivores, filter-feeders, scavengers, parasites and carnivores. Omnivorous species pose the greatest invasion threat, as they can feed at any trophic level in the invaded food web (van der Velde et al. 2009), thereby adding pressure to the system through predation of native species (e.g., Strecker et al. 2006 ). In fact, the large majority of Crustacea are omnivorous, occasionally behaving as predators (Karatayev et al. 2009 ; Hänfling et al. 2011 ), and many benthic invertebrates classified as deposit-feeders are often also omnivores. In our results, species with this particular feeding habit show great disparity in range shift estimates (i.e., 20 km to 3200 km; Supporting Information, Appendix S1 , Table S1.7 ). Among them, Neomysis rayii had the largest projected shift, ca. 3200 km by 2050 (RCP 2.6). Besides reproduction and feeding habits, several additional traits should be considered for alien species to become invasive, e.g. 
thermoregulation, dispersal capability, and life history traits, which, in concert with biotic interactions and evolutionary factors, can contribute to the success or failure of the establishment of a species in new environments (Hänfling et al. 2011 ). In particular, the complexity and broad-scale effects of climate change make it difficult to determine changes or distributional shifts a priori. Despite representing a small proportion of Crustacea diversity, the initial assessment presented here allowed the identification of general distributional patterns in the face of climatic changes and indicated features associated with invasiveness that could aid in the critical interpretation of ecological models. With the increase of anthropogenic disturbances such as fishing, mining, oil drilling, warming, and acidification in marine ecosystems, understanding the magnitude of changes in living marine ecosystems is essential for better management and conservation of their biological resources. 5 Conclusions Our study provides a comprehensive assessment of patterns of climate change impacts on benthic marine crustaceans at the global scale, and results can be used as initial hypotheses in the development of future theoretical and empirical studies. Our results revealed idiosyncratic and species-specific responses, with prevailing poleward shifts and decline of species richness at midlatitudes, while more frequent shifts between temperate and polar regions were recovered. Shallow-water species are expected to shift longer distances than deep-sea species. The net gain of suitability was slightly higher than the net loss for shallow-water species, while for deep-sea species, the net loss was higher than the gain under all scenarios, which will potentially change the composition of assemblages, leading to reorganization and potentially precipitating the extinction of some species. 
Our estimations can be viewed as a set of hypotheses for future analytical and empirical studies, and should play an instrumental role in the planning and execution of strategic interventions and in the development of conservation strategies. Availability of data and material All data generated or analyzed that support the findings of this study are available as part of the Supporting Information (Appendix S1 and S2 ); these data were derived from the following resources available in the public domain: (a) Ocean Biodiversity Information System (OBIS) ( ); (b) Global Biodiversity Information Facility (GBIF; ); (c) Bio-ORACLE ( ); (d) the General Bathymetric Chart of the Oceans (GEBCO; ). Code availability The R script used to perform all steps of the analysis is available at: . | Senckenberg scientists from Frankfurt and Müncheberg, together with a US-American colleague, have modeled the future distribution patterns of marine crustaceans for the years 2050 and 2100. In their study, published in the journal Climatic Change, they conclude that animals living in water depths above 500 meters will move northward as a result of climate change. In contrast, crustaceans found at depths below 500 meters will spread southward in the future. To this end, the team analyzed data from 94 crustacean species assuming two possible scenarios from the Intergovernmental Panel on Climate Change (IPCC) report—an increase in global mean ocean temperature by either one or by 4.8 degrees Celsius by 2100. The study is part of the Beneficial project regarding the biogeography of the Northwest Pacific fauna. The baseline study will help to estimate the extent of invasions of non-native species to the Arctic Ocean under rapid global climate change. Crustaceans are one of the most dominant animal groups in both shallow-water and deep-sea communities worldwide. 
Various studies suggest that crustaceans are shifting their ranges toward the poles or to greater depths as the oceans become warmer. "However, this assumption mainly applies to shallow-water organisms," explains Dr. Marianna Simões of the Senckenberg German Entomological Institute in Müncheberg. "On the other hand, it remains unexplored how global climate change affects the distribution of deep-sea communities and whether there is a difference here compared to the shallow-water fauna—even though the deep sea is the largest ecosystem on Earth." Simões, her Senckenberg colleagues Angelika Brandt and Hanieh Saeedi, and Marlon Cobos of the University of Kansas therefore analyzed more than 12,500 distribution records of 94 globally distributed, ground-dwelling crustacean species from 12 orders and developed climatic models predicting their distribution in 2050 and 2100. "We wanted to know how global climate change might influence the distribution and abundance of marine crustacean species within the remainder of the century. To do this, we used two Intergovernmental Panel on Climate Change (IPCC) scenarios for global mean ocean temperatures: An increase by one or by 4.8 degrees Celsius by 2050 and 2100, respectively," adds Simões. In addition to the possible horizontal distributional shifts of shallow-water and deep-sea benthic crustaceans, the researchers also discuss species-specific traits in their study that may lead to dispersal into new habitats—the so-called "invasion potential." Crustaceans such as Tecticeps leucophthalmus Gurjanova, 1936 are one of the most dominant groups of animals worldwide in shallow water as well as deep-sea communities. Credit: Anna Lavrenteva Considering all factors, according to the modeling, crustaceans living in shallow water could relocate an average of 431 kilometers away from their original habitats by 2050 with a temperature increase of one degree Celsius, and 620 kilometers with an increase of 4.8 degrees Celsius. 
Projecting the data to 2100, the estimated shifts are between 435 and 1,300 kilometers, respectively. "Deep-sea species, on the other hand, are not predicted to move as far. By 2050, we predict a distance of about 90 kilometers at one degree of warming, and 110 kilometers at 4.8 degrees. By 2100, this still amounts to a potential distribution shift of 130 and 180 kilometers, respectively," says Simões. There are exceptions here, too: For the shrimp species Thor paschalis (Heller, 1861), the researchers predict a horizontal distribution shift of 3715 kilometers for 2050, with a water temperature increase of only one degree. "What surprised us most, however, was the direction in which the crustaceans move," says Simões. "The species we studied that live above a depth of 500 meters migrate northward. Crustaceans below that level move to the south." A decrease in species richness in temperate climates and a potential increase in species richness in polar regions could be the result of these shifts. "However, not all species can adapt to changing environmental conditions," says Simões. "As anthropogenic disturbances such as fishing, marine mining, oil drilling, or warming and acidification of marine ecosystems increase, so does the pressure on deep-sea fauna. Although our study only covers about 0.3 percent of the total crustacean diversity, our results can help us to better understand the future magnitude of changes affecting deep-sea life, allowing us to implement appropriate conservation measures." 10.1007/s10584-021-03240-8 
Medicine | Mother's age, race, weight affect hormone concentrations in pregnancy, study finds | Emily S. Barrett et al, Predictors of Steroid Hormone Concentrations in Early Pregnancy: Results from a Multi-Center Cohort, Maternal and Child Health Journal (2019). DOI: 10.1007/s10995-018-02705-0 | http://dx.doi.org/10.1007/s10995-018-02705-0 | https://medicalxpress.com/news/2019-02-mother-age-weight-affect-hormone.html | Abstract Objectives To identify factors predicting maternal sex steroid hormone concentrations in early pregnancy. Methods The Infant Development and the Environment Study recruited healthy pregnant women from academic medical centers in four US cities. Gold standard liquid chromatography–tandem mass spectrometry was used to measure maternal sex steroid concentrations (total testosterone [TT], free testosterone [FT], estrone [E1], estradiol [E2], and estriol [E3] concentrations) in serum samples from 548 women carrying singletons (median = 11.7 weeks gestation). Women completed questionnaires on demographic and lifestyle characteristics. Results In multivariable linear regression analyses, hormone concentrations varied in relation to maternal age, body mass index (BMI), race, and parity. Older mothers had significantly lower levels of most hormones; for every year increase in maternal age, there was a 1–2% decrease in E1, E2, TT, and FT. By contrast, each unit increase in maternal BMI was associated with 1–2% lower estrogen (E1, E2, E3) levels, but 1–2% higher androgen (TT, FT) concentrations. Hormone concentrations were 4–18% lower among parous women, and for each year elapsed since last birth, TT and FT were 1–2% higher (no difference in estrogens). Androgen concentrations were 18–30% higher among Black women compared to women of other races. Fetal sex, maternal stress, and lifestyle factors (including alcohol and tobacco use) were not related to maternal steroid concentrations. 
Conclusions for Practice Maternal demographic factors predict sex steroid hormone concentrations during pregnancy, which is important given increasing evidence that the prenatal endocrine environment shapes future risk of chronic disease for both mother and offspring. Significance What is already known on this topic? Although numerous studies have examined variation in maternal hormones during pregnancy, most have had notable limitations. These limitations include small sample sizes, demographically homogeneous populations, and use of outdated or suboptimal analytic techniques for hormone measurement. What does this study add? Using gold-standard analytic techniques in a large, diverse sample, we demonstrate that maternal sex steroid profiles during pregnancy vary in relation to maternal age, body mass index (BMI), race, and parity. This work confirms and extends the previous literature on this topic. Introduction Health begins in utero, and there is tremendous interest in better understanding how early development contributes to our later health and disease risk. Early research relied upon size at birth as a crude proxy for the in utero environment, but the field has since expanded to examine a wide range of materno-feto-placental biomarkers that may confer—or protect against—future disease risk in the child. The prenatal hormonal milieu has been an area of particular interest given long-standing hypotheses that excess fetal exposure to estrogens and/or androgens may play a role in future risk of reproductive cancers (Schooling et al. 2016 ). Additionally, recent studies link putative biomarkers of the prenatal hormonal milieu, including anogenital distance and digit ratios, to adult outcomes including polycystic ovary syndrome (PCOS) (Wu et al. 2017 ), endometriosis (Mendiola et al. 2016 ), prostate cancer (Mendes et al. 2016 ), and semen quality (Mendiola et al. 
2011 ). Characterizing sex steroid concentrations during pregnancy may also yield important insights into the mother’s own future risk of disease including breast (Troisi et al. 2008 ; Arslan et al. 2006 ) and ovarian cancer (Chen et al. 2011 ). Given the important downstream health outcomes believed to be associated with the prenatal endocrine milieu, characterizing natural variation in hormone levels during pregnancy and identifying factors contributing to that variation is important. Indeed, since the 1980s, over a dozen studies have examined prenatal hormone levels in the context of maternal and infant characteristics. However, the study population, sample size, timing of sample collection, laboratory techniques, and steroid hormones measured have varied considerably. For example, although many believe that the endocrine environment during early pregnancy is arguably of greatest concern with respect to subsequent disease risk in the fetus (given the rapid cell differentiation, proliferation, and organogenesis that occurs during this period), timing of biospecimen collection for hormone measurement has ranged from early pregnancy through umbilical cord blood collection at delivery. The vast majority of these studies have relied upon immunoassays (including chemiluminescent-, electrochemiluminescent-, and radioimmunoassays) which are relatively cheap, easy, and quick to perform (Troisi et al. 2003 , 2008 ; Caanen et al. 2016 ; Jarvela et al. 2012 ; Bremme et al. 1990 ; Lagerstrom et al. 1990 ; Lagiou et al. 2014 ; O’Leary et al. 1991 ; Steier et al. 2002 ). However, in immunoassays, there is potential for the antibodies to cross-react with multiple hormones due to non-specific binding of steroids to the antibody (for example both dehydroepiandrosterone sulphate [DHEAS] and testosterone) (Middle 2007 ), as well as with synthetic steroids (Krasowski et al. 2014 ; Tejada et al. 1998 ). 
This non-specificity may account for the higher serum sex steroid levels in pregnant women found in some studies using immunoassay (e.g. Zhang et al. 2005 ) compared to studies using LC–MS/MS (Toriola et al. 2011 ). Perhaps the greatest concern is that immunoassay does not offer adequate sensitivity to measure hormones that are present in very low concentrations, such as testosterone in women (Rosner et al. 2007 ). Newer methods like liquid chromatography–tandem mass spectrometry (LC–MS/MS) are more expensive and labor intensive, but offer greater sensitivity and specificity for steroid measurement and are therefore the current gold standard (French 2016 ). To date, only one large study has examined maternal determinants of sex steroid concentrations in early pregnancy using the preferred LC–MS/MS method. Toriola et al. ( 2011 ) measured a panel of steroid hormones (testosterone, androstenedione, estradiol [E2], progesterone, and 17-hydroxyprogesterone) in 1343 women who provided samples for a large Finnish biorepository, finding that parity, smoking, maternal age, and fetal sex predicted maternal steroid levels measured at median 11 weeks gestation (Toriola et al. 2011 ). Here, we seek to extend this work in a more diverse, US pregnancy cohort. Our objective was to identify sociodemographic predictors of early pregnancy maternal serum sex steroid concentrations, including total testosterone (TT), free testosterone (FT), estrone (E1), estradiol (E2), and estriol (E3). Methods Study Participants and Study Overview The Infant Development and the Environment Study (TIDES) is a multi-center longitudinal cohort study that recruited women in their first trimester of pregnancy from 2010 to 2012. Women were recruited from four major US academic medical centers [University of California, San Francisco (UCSF), University of Minnesota (UMN), University of Rochester Medical Center (URMC), and Seattle Children’s Hospital/University of Washington (UW)]. 
The primary means of recruitment was through study personnel who attended obstetric clinics, approaching potentially eligible women who were awaiting their clinical appointments. Eligibility criteria included: less than 13 weeks pregnant, English-speaking, and no serious medical conditions or threats to the pregnancy. In each trimester, participants completed a questionnaire (in person or online) that included items on maternal demographics, general health, alcohol and tobacco use during pregnancy, and reproductive history. They were also asked whether any stressful life events had occurred during the pregnancy. Items were adapted from two validated questionnaires and queried whether women had experienced job loss, serious family illness or injury, death of a close family member, relationship difficulties with their partner, serious legal/financial issues, or any other major life event during the index pregnancy (Barrett et al. 2015 ). Participants provided a single blood sample during early pregnancy, which was generally timed to coincide with their first or second trimester clinical screening and stored at − 80 ℃ until analysis. Additional information on recruitment and prenatal visits is provided elsewhere (Barrett et al. 2014 ). Gestational dating, including gestational week at blood draw, was determined based on the first ultrasound in the medical record. When that was not available, the obstetrician’s estimate of the last menstrual period in the clinical record was used to calculate gestational week at blood draw. Hormone Assays Serum samples were sent overnight on dry ice to the Endocrine and Metabolic Research Laboratory at Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center, where all hormone assays were performed using standard, validated protocols, as described elsewhere (Sathyanarayana et al. 2017 ). 
LC–MS/MS was used to measure TT in serum using standard protocols with minor modifications that shortened the runtime and system parameters. Briefly, LC–MS/MS runs were conducted using a Shimadzu HPLC system (Columbia, MD) and an Applied Biosystems API5500 LC–MS/MS (Foster City, CA) equipped with a Turbo-Ion-Spray source that used positive mode. There was a linear response for calibration standards ranging from 2.0 ng/dL (0.069 nmol/L) to 2000 ng/dL (69.3 nmol/L) for testosterone. Quality control was performed on each assay run using spiked samples. Intra- and inter-run precision was less than 5% and the steroid spiked samples had an accuracy between 100 and 113% for testosterone. The limit of quantification for TT was 2 ng/dL (0.069 nmol/L). Equilibrium dialysis using labeled testosterone was used to measure FT, the unbound and biologically active form of testosterone (Qoubaitary et al. 2006 ). LC–MS/MS was also used to measure serum E1, E2, and E3 concentrations in all subjects. The Shimadzu HPLC system (Columbia, MD) was again used, this time with a triple quadrupole mass spectrometer (API5000 LC–MS/MS, Foster City, CA). The system was operated in the negative mode using multiple-reaction-monitoring in order to separate the estrogens on a column, with a gradient profile from 63 to 100% methanol. For both E1 and E2, the calibration curves were linear for the range of 2 to 2000 pg/mL, and for E3, 50 to 100 pg/mL. The lower limit of quantification was 2.0 pg/mL for E1 and E2, and 50 pg/mL for E3. The within-run precision (%CV) ranged from 2.6 to 5.6 for E1, 4.3 to 5.0 for E2, and 4.1 to 5.7 for E3. The between-run precision (%CV) ranged from 3.9 to 4.6 for E1, 4.6 to 5.2 for E2, and 5.2 to 8.7 for E3. The accuracy was 91.9 to 101.2 for E1, 93.9 to 100.3 for E2, and 87.2 to 104.3 for E3 respectively, spanning different estrogen concentrations. 
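The within-run and between-run precision figures quoted above are coefficients of variation across replicate quality-control measurements (%CV = 100 × SD / mean). A minimal sketch of that calculation, using made-up replicate E2 values rather than the laboratory's actual QC data:

```python
from statistics import mean, stdev

def percent_cv(replicates):
    """Coefficient of variation (%) = 100 * sample SD / mean, as used for assay precision."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Hypothetical replicate E2 measurements (pg/mL) of a single QC sample:
qc = [1440.0, 1395.0, 1460.0, 1425.0, 1380.0]
cv = percent_cv(qc)  # a within-run %CV in the low single digits
```

A %CV in the 2–6% range, as reported for E1, E2, and E3, indicates that replicate measurements of the same sample scatter only a few percent around their mean.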
Statistical Analysis Our main analyses considered a set of variables chosen a priori based on the existing literature on sex steroid hormones in pregnant and non-pregnant, cycling women (Troisi et al. 2003 , 2008 ; Arslan et al. 2006 ; Jarvela et al. 2012 ; Toriola et al. 2011 ). These variables included maternal age, maternal BMI, smoking during pregnancy (any/none), alcohol use during pregnancy (any/none), fetal sex, study center, race (Black/White/other), income, education, marital status, parity, age at menarche, stressful life events during pregnancy (any/none), and gestational age at blood draw. Univariate statistics were calculated for all variables of interest, including counts and percentages for categorical variables as well as summary statistics for continuous variables. As expected, hormone measurements were non-normal and were therefore log10-transformed. E3 values below the lower limit of quantification were assigned that value (20 pg/mL). Scatterplots were used to examine the relationships between the log-transformed hormone outcomes and all continuous predictors of interest. Two different model selection criteria, prediction sum of squares (PRESS) and Mallows’ Cp, were then used to narrow the list of potential variables of interest to a smaller set of predictors. Mallows’ Cp compares how well subset models make predictions compared to the full model, while PRESS uses leave-one-out cross-validation to measure how well fitted values predict the observed responses. For each hormone outcome, both model selection methods selected the same set of predictors (though the specific predictors varied slightly by hormone). Multivariable models were then fit to include the same standard set of selected predictors for all hormone outcomes (TT, FT, E1, E2, E3). The selected variables were maternal age, maternal BMI, fetal sex, study center, race, parity, and gestational age at blood draw. 
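Both criteria can be computed without literally refitting n leave-one-out models: PRESS follows from the identity e_i / (1 − h_ii) using the hat matrix, and Mallows' Cp compares a submodel's residual sum of squares against the full model's error variance. The paper's analyses were performed in R; the following is a rough Python illustration of the two criteria themselves (function and variable names are ours, not from the study):

```python
import numpy as np

def press_and_cp(X_full, y, cols):
    """PRESS and Mallows' Cp for the submodel using columns `cols` of X_full."""
    n = len(y)

    def fit(X):
        X1 = np.column_stack([np.ones(n), X])      # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T  # hat matrix
        return resid, np.diag(H), X1.shape[1]

    # Full model supplies the error variance estimate used by Cp.
    resid_full, _, p_full = fit(X_full)
    s2_full = resid_full @ resid_full / (n - p_full)

    resid, h, p_sub = fit(X_full[:, cols])
    press = np.sum((resid / (1.0 - h)) ** 2)       # leave-one-out identity
    cp = resid @ resid / s2_full - n + 2 * p_sub   # Mallows' Cp
    return press, cp
```

For a submodel that contains the truly relevant predictors, both PRESS and Cp come out smaller than for a submodel of irrelevant ones, which is the basis for using them to prune the candidate variable list.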
In secondary analyses limited to parous women, we additionally considered time since last live birth, having identified it as a predictor of sex steroid levels in non-pregnant, cycling women in previous work (Barrett et al. 2014 ). Finally, we back-transformed the estimates from all multi-variable models so that results could be reported as percent change in hormone concentrations, facilitating interpretation. Model diagnostics were run on all models and no issues were detected with regard to normality, linearity, constant variance, or multi-collinearity. However in models for E1, E2, and E3, the residual variance was larger at URMC than the other centers. This greater variance may reflect a greater racial/ethnic diversity at that site that is not fully captured by our race variable, but is unlikely to have much impact on inferences made about the model covariates. Analyses were performed in R version 3.3.2 and p-values < 0.05 were considered significant. In total, 591 women gave a blood sample in early pregnancy that was analyzed for sex steroid hormone levels (mean 11.2 weeks gestation). Two subjects, one who gave birth extremely preterm (25 weeks gestation) and another with an implausible gestational age at delivery (1 week) were excluded from subsequent analysis. The 37 women who had polycystic ovary syndrome (PCOS), a common endocrine disorder characterized by hyperandrogenemia, were also excluded. In addition, four women who were carrying multiples were excluded from analysis. After these exclusions, the final sample size was 548 women. Of these women, 41 were missing data on the outcome or on at least one predictor of interest, therefore the sample size for our main models varied from 507 to 513, depending on the specific hormone outcome. Results TIDES participants were, on average, 31.0 ± 5.4 years old with a pre-pregnancy BMI of 26.1 ± 6.1 kg/m 2 (Table 1 ). 
Most mothers were White (67.8%), well-educated (86.0% had at least some college), and 49.1% had an annual household income greater than $75,000. Over 80% of women were married or living as married and 44.9% were parous. Self-reported alcohol use and smoking during pregnancy were uncommon (4.3% and 7.5%, respectively). As expected, roughly half of mothers were carrying male fetuses (48.9%). Table 1 Characteristics of the TIDES study population. All hormone measurements were above the limits of quantification of the assays, with the exception of E3, for which 30.9% of values were at or below the limit. In general, samples with E3 below the LOQ were collected earlier in gestation (median 9.2 weeks; min–max: 6.4–13.0) than samples with E3 above the LOQ (median 12.2 weeks; min–max: 5–25.4). The median levels of TT, FT, E1, E2, and E3 were 63.0 ng/dL, 0.28 ng/dL, 708 pg/mL, 1440 pg/mL, and 92 pg/mL, respectively. Results of bivariate and multi-variable analyses refer to log-transformed hormone values. FT and TT were highly correlated with one another (r = 0.81), and TT (but not FT) was weakly correlated with E1 (r = 0.24) and E2 (r = 0.30). E1 and E2 were also highly correlated (r = 0.89), while correlations with E3 were more modest (r = 0.47 for E1; r = 0.55 for E2). All of these associations were significant at p < 0.001. In bivariate analyses (not shown), maternal age was inversely associated with FT (r = − 0.34) and TT (r = − 0.38), but not E1 (r = 0.03), E2 (r = − 0.04), or E3 (r = 0.09). Maternal BMI was positively associated with maternal androgens (FT: r = 0.32; TT: r = 0.24), but showed only weak inverse associations with estrogens (E1: r = − 0.18; E2: r = − 0.13; E3: r = − 0.06). Gestational age at blood draw was strongly associated with estrogens (E1: r = 0.54; E2: r = 0.58; E3: r = 0.79), but not androgens (FT: r = − 0.17; TT: r = 0.10). 
In multivariable models (including maternal age, maternal BMI, fetal sex, study center, race, parity, and gestational age at blood draw), we observed significant inverse associations between maternal age and most hormone levels (Table 2 ). Adjusting for other model covariates, for every year increase in maternal age, there was a 1.5% decrease in E1 (95% CI − 3.0%, − 0.07%), a 1.6% decrease in E2 (95% CI − 2.8%, − 0.4%), a 2.1% decrease in TT (95% CI − 3.0%, − 1.2%), and a 1.3% decrease in FT (95% CI − 2.2%, − 0.4%) (Table 2 ). This corresponds to a 6–10% decrease (depending on the particular hormone) with every 5-year increase in maternal age. Parous women had 12.6% lower E1 (95% CI − 23.5%, − 0.1%), 18.0% lower E2 (95% CI − 26.6%, − 8.4%), 13.8% lower TT (95% CI − 20.5%, − 6.5%) and 16.4% lower FT (95% CI − 23.1%, − 9.2%) concentrations than nulliparous women. With every unit increase in BMI, we observed increases in androgen levels [TT: 1.3% (95% CI 0.6%, 2.0%); FT: 2.2% (95% CI 1.5%, 2.9%)] but reductions in estrogen levels [E1: − 2.3% (95% CI − 3.4%, − 1.3%); E2: − 1.6% (95% CI − 2.5%, − 0.7%); E3: − 1.0% (95% CI − 1.8%, − 0.3%)]. This corresponds to 5–11% lower estrogen and 6–11% higher androgen concentrations per five-unit increase in BMI.

Table 2 Percent change (with 95% confidence intervals) in maternal hormone concentrations in relation to maternal characteristics in the TIDES study

Androgen concentrations also differed by race. Compared to Black women, White women had 29.5% lower TT (95% CI − 39.0%, − 18.5%) and 29.4% lower FT (95% CI − 39.1%, − 18.1%), while women of “Other” races had 18.9% lower TT (95% CI − 30.5%, − 5.3%) and 18.3% lower FT (95% CI − 30.2%, − 4.3%). E2 concentrations were 18.0% lower (95% CI − 32.7%, − 0.2%) among White women than Black women, but otherwise no significant differences in estrogens by race were noted.
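As a plain-arithmetic check, the per-year estimates above compound multiplicatively over a 5-year age difference; a minimal sketch using the rounded point estimates quoted in the text (confidence intervals ignored):

```python
def compound(pct_per_unit: float, units: float) -> float:
    """Compound a per-unit percent change over several units,
    e.g. per-year changes over a 5-year difference in maternal age."""
    return 100.0 * ((1.0 + pct_per_unit / 100.0) ** units - 1.0)

# Rounded per-year point estimates for maternal age from the text.
per_year = {"E1": -1.5, "E2": -1.6, "TT": -2.1, "FT": -1.3}
five_year = {h: round(compound(p, 5), 1) for h, p in per_year.items()}
print(five_year)  # decreases of roughly 6-10%, matching the range quoted
```

The same calculation with the per-unit BMI estimates reproduces the quoted 5–11% (estrogen) and 6–11% (androgen) changes per five BMI units.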
Estrogen concentrations were significantly higher (17–30%) with each increasing week of gestational age at the time of blood draw, but only modest differences in TT (2.6%, 95% CI 0.8%, 4.4%) and FT (− 2.8%, 95% CI − 4.5%, − 1.1%) were evident. There were significant differences in hormone concentrations by study center. For example, women at the UCSF center had significantly lower FT than women at the other centers, but higher E1 and E2 levels than women at the other centers (with the exception of the UW center). None of the hormones measured differed by fetal sex, and only BMI and gestational age at blood draw were associated with E3 concentrations. In secondary analyses limited to parous women only, results were largely similar (Supplemental Table 1). Hormone levels were higher with increasing time since last birth. In multivariable models, for each year elapsed since the last birth, there was 2.4% (95% CI 0.5%, 4.3%) higher TT, 2.2% (95% CI 0.3%, 4.3%) higher FT, 2.4% (95% CI − 0.9%, 5.8%) higher E1, and 2.3% (95% CI − 0.6%, 5.3%) higher E2.

Discussion

In a large, diverse US cohort, maternal sex steroid concentrations measured by LC/MS–MS during early pregnancy were significantly predicted by maternal age, BMI, parity, and race. Specifically, concentrations of all hormones (except for E3) were lower among older mothers, while BMI was associated with significantly higher androgen (FT and TT) concentrations, but lower estrogen (E1, E2, E3) concentrations. Both androgens and estrogens (with the exception of E3) were lower among parous women compared to women with no prior live birth. In secondary analyses limited to parous women, both androgen and estrogen concentrations were positively associated with the time elapsed since last live birth. In general, Black women tended to have higher androgen concentrations than women of other races.
No variation in maternal hormone concentrations was observed in relation to fetal sex, stressful life events during pregnancy, or lifestyle factors including smoking and alcohol use. Our findings of lower sex steroid levels in association with increasing maternal age are consistent with previous work (using both immunoassay and LC/MS–MS methods) examining serum hormone levels measured during early pregnancy (Troisi et al. 2008 ; Toriola et al. 2011 ) and in cord blood collected at birth (Troisi et al. 2003 ). During pregnancy, a large proportion of steroid production (particularly of estrogens and progesterone) occurs in the placenta and, surprisingly, very little research has examined variation in placental function (including hormone production) in relation to maternal age. Human chorionic gonadotropin (hCG) may be reduced in older mothers, possibly indicating reduced placental capacity for hormone production and synthesis (Haavaldsen et al. 2014 ). Better understanding how maternal age may impact placental function and steroidogenesis is an important future direction for research, given temporal trends towards delayed conception and increasing maternal age. Adjusting for covariates including age, parous women had significantly lower androgen concentrations (as well as non-significantly lower estrogen concentrations) than women with no prior birth, a result consistent with previous work in pregnant women (Troisi et al. 2008 ; Arslan et al. 2006 ; Toriola et al. 2011 ) and in non-pregnant, naturally cycling women (Barrett et al. 2014 ). These findings are robust to the specific hormone assay technique used (Troisi et al. 2008 ; Arslan et al. 2006 ; Toriola et al. 2011 ). Other work suggests that the endocrine impact of parity extends even further, such that compared to nulliparas, women who have had a full-term pregnancy may also have lower hCG, prolactin, alpha-fetoprotein, dehydroepiandrosterone (DHEA), and DHEAS (Arslan et al. 2006 ; Musey et al.
1987 ; Barkai et al. 1996 ; Haddow et al. 1995 ; Mooney et al. 1995 ; Wald and Watt 1996 ). No definitive mechanistic explanation for parity-related changes in endocrine profiles during gestation has been proffered thus far, although potential explanations include changes in placental size and activity, binding protein concentrations, receptor densities, and maternal metabolism (Arslan et al. 2006 ). Understanding these changes is important given that parity has been linked to reproductive cancer risk (Anderson et al. 2017 ; Rasmussen et al. 2017 ; Salazar-Martinez et al. 1999 ) with changes in lifetime hormone exposure hypothesized to be a key mediator (Arslan et al. 2006 ). In the current analysis, pre-pregnancy BMI was positively associated with androgens, but inversely associated with estrogens. Somewhat surprisingly, data on maternal body size is absent from a number of studies on the prenatal hormonal milieu (Toriola et al. 2011 ; Schock et al. 2016 ), but is important to consider given that nearly 70% of US women are overweight or obese, with Black and Latina/Hispanic women particularly affected (82% and 77%, respectively) (CDC 2016 ). Troisi and colleagues noted positive associations between measures of maternal body size (height, pre-pregnancy weight, or BMI) and androgen concentrations at delivery as well as in early pregnancy (Troisi et al. 2003 , 2008 ) and a recent study found positive associations between weight gain in pregnancy and T concentrations in third trimester serum as well as amniotic fluid prior to labor (Kunovac-Kallak et al. 2017 ). The inverse association between BMI and estrogens is consistent with the observation that hormones of fetoplacental origin tend to be lower in the circulation of heavier mothers due to dilution (de Graaf et al. 2000 ). 
Androgens, by contrast, are produced primarily by the maternal ovary and adrenal gland (as well as by the male fetus though they are likely aromatized before reaching maternal circulation) (Freeman et al. 2005 ; Manson et al. 2001 ). Studies in non-pregnant cycling women and peripubertal girls also indicate that being overweight or obese is associated with higher TT and FT concentrations (Barbieri et al. 2005 ; McCartney et al. 2007 ). One possible mechanism for this association is that insulin, which is elevated in overweight women, stimulates the ovarian theca cells to produce androgens while also lowering sex hormone binding globulin (SHBG) production, thereby resulting in higher FT (Barbieri et al. 1986 ; Poretsky et al. 1999 ). In our study, Black women had significantly higher androgen concentrations and non-significantly higher estrogen concentrations than women who were White or of other races. Although several of the larger studies on this topic have focused on predominantly White, Scandinavian populations (Toriola et al. 2011 ; Chen et al. 2010 ), evidence from more diverse US-based cohorts supports these findings. In an early study examining racial differences in the prenatal endocrine environment, first trimester TT was significantly higher (and E2 non-significantly higher) among 20 Black women compared to 20 White women (Henderson et al. 1988 ). In a second cohort (n = 86), TT and androstenedione concentrations (but not E2 or E3) were higher in blood collected at delivery in Black women compared to White women (Troisi et al. 2008 ). In a larger study of 300 (150 Black/150 White) nulliparous mothers carrying male fetuses, Black women had higher first and third trimester TT and FT concentrations as well as higher first trimester E2, adjusting for covariates (Zhang et al. 2005 ), which is consistent with the current findings. 
Examining racial differences in perinatal hormone exposure has been of great interest because of the dramatic differences between Whites and Blacks in the patterns and prevalence of reproductive cancers including testicular germ cell tumors (McGlynn et al. 2005 ), breast cancer (Danforth 2013 ), and prostate cancer (McGinley et al. 2016 ). Additional work has examined the prenatal hormonal milieu among racial and ethnic groups believed to be at lower risk of endocrine-mediated cancers, such as Hispanic and Asian women living in the US and elsewhere (Arslan et al. 2006 ; Lagiou et al. 2014 ), however our study did not have adequate representation of those and other racial and ethnic groups to provide further insight. In contrast to some previous work in which smoking was associated with higher androgen and estradiol levels during pregnancy in Finnish women (Toriola et al. 2011 ; Schock et al. 2016 ), smoking did not predict maternal hormone concentrations in our study and was therefore not included in final models. Relatively few women in TIDES reported any smoking during pregnancy (7% compared to 14% in Toriola et al. 2011 ), which may account for the inconsistent findings. It is also plausible that patterns of smoking during pregnancy differ quite dramatically cross-culturally such that simply measuring “any” smoking does not adequately capture variation in smoking behaviors, particularly given that the Finnish samples were collected starting in the 1980s whereas the TIDES samples were collected starting in 2010. Cross-cultural and/or temporal variation in the social stigma against smoking during pregnancy may contribute to these disparate findings as well and measurement of cotinine levels (a urinary metabolite of nicotine and biomarker for tobacco smoke exposure) would likely yield more accurate information on smoking habits than self-report. Similar issues may explain the contrast between Troisi et al. 
( 2008 )’s finding of an association between alcohol use and decreased E2 and our null findings (Troisi et al. 2008 ). The literature on differences in maternal sex steroids in relation to fetal sex is highly inconsistent. Amniotic fluid measurements indicate that starting in the late first trimester, the male fetus experiences much higher levels of testosterone (through his own testicular production); the low testosterone levels detected in amniotic fluid from female fetuses are presumed to be of adrenal origin (Kallak et al. 2017 ). In theory, therefore, given the much higher androgen production in male fetuses compared to females, maternal androgen concentrations might be expected to be higher in women carrying male fetuses. Some previous work supports these differences. For instance, in a previous cohort we found that in the third (but not the second) trimester, FT and TT were significantly higher in women carrying males than in women carrying females (Sathyanarayana et al. 2014 ) and a large Australian study found that TT and FT were higher in umbilical cord blood in males than in females at birth (Keelan et al. 2012 ). Notably, these studies (as well as the current study) all used LC/MS–MS to measure androgens, while a number of other studies that have not detected differences in androgen levels with respect to fetal sex have used less sensitive methods like immunoassay that are not recommended for the very low levels of androgens present in females (French 2016 ; van de Beek et al. 2004 ; Chen et al. 2010 ). Here, fetal sex did not predict maternal androgen concentrations, which is consistent with reports that in women carrying male fetuses, expression of placental aromatase (converting androgens to estrogens) is greater, preventing the virilization of mothers carrying male fetuses (Kragie 2002 ). Our study has several notable strengths. Importantly, hormone concentrations were measured using LC–MS/MS with FT evaluated using equilibrium dialysis.
Of the similar studies discussed herein, only a small fraction have used LC–MS/MS technology (Toriola et al. 2011 ), which is the gold standard for serum hormone measurement, while the remainder have used older and less sensitive immunoassay techniques (Troisi et al. 2003 , 2008 ; Arslan et al. 2006 ; Jarvela et al. 2012 ; Lagerstrom et al. 1990 ; Lagiou et al. 2014 ; O’Leary et al. 1991 ; Steier et al. 2002 ). This assay sensitivity is especially important for androgens, which are present in very low concentrations in women (French 2016 ). FT is of particular interest because in contrast to TT, which includes both bound and unbound testosterone, FT represents only the unbound, bioavailable proportion of hormone. Although TT concentrations can be quite high in pregnant women, concomitant increases in SHBG result in FT levels remaining relatively low until late pregnancy (Bammann et al. 1980 ). As epidemiologists extend work on prenatal hormones to examine associations with subsequent health outcomes, using gold standard measurement techniques to ensure the validity of results will be increasingly important. Another strength of our study was its recruitment of a large sample of healthy pregnant women. Some previous work on maternal endocrine profiles during pregnancy has been under-powered or has examined special populations, such as women with pre-eclampsia or PCOS. Of the related literature, to our knowledge only Toriola et al. ( 2011 ) (n = 1343) has had a sample size larger than the current study (Toriola et al. 2011 ). The relative diversity of our sample (recruited from four US cities) and our extensive questionnaires allowed us to examine race, income, stress, age at menarche and other potential predictors that could not be examined in previous work in more homogeneous cohorts reliant on birth registry data. 
Finally, sample collection in early pregnancy allowed us to assess the hormonal milieu at a particularly critical point during development when fetal organ systems are forming. Our study has several limitations of note as well. At present, research on prenatal hormones is hindered by our inability to determine the relative contributions of multiple hormone sources (mother, fetus, and placenta); such an advance would be useful to better understand the mechanisms underlying observed associations in this study. An exception to this is E3, which is produced almost exclusively by the placenta based on fetal adrenal precursors. Interestingly, we observed that only BMI and gestational age at blood draw predicted E3 concentrations, suggesting that there may be little variation in placental steroidogenesis in relation to maternal sociodemographic status. Our ability to detect associations may have been hindered by the large proportion of samples below the E3 LOQ (30%), due in part to the early gestational age at sample collection. Ultimately, quantifying fetal hormone exposure is of greatest interest in the context of the developmental origins of health and disease; however, given the logistical impossibilities of sampling fetal blood or amniotic fluid for research studies in healthy pregnancies, we are necessarily limited in our ability to characterize that environment. Our hormone assessments were based on a single blood draw collected opportunistically during early pregnancy; ideally, we would track trajectories of hormone concentrations over the course of an entire pregnancy (as some smaller studies have done), which might provide more insight into vulnerable periods for future maternal disease risk (O’Leary et al. 1991 ; Schock et al. 2016 ). Androgens can be converted to E1 and E2 through the enzyme aromatase; aromatase levels, which were unavailable in this study, could provide more insight into inter-individual variation in steroidogenesis.
Finally, our cohort, while more diverse than many of the other populations studied in similar work, was still predominantly White, of healthy weight, and higher socioeconomic status. Thus our ability to generalize to US women as a whole is limited. In conclusion, this research in a large, diverse, multi-center US pregnancy cohort adds to the body of work on predictors of maternal sex steroid concentrations measured by gold standard methods during pregnancy. Understanding how factors such as maternal age, BMI, race, and parity impact materno-feto-placental physiology is important given the extensive evidence that endocrine environment during pregnancy may have important implications for the future health of mother and child alike. | Hormone concentrations during early fetal development—that may affect the child's development and increase the mother's risk for breast and ovarian cancer years later—are significantly affected by maternal age, body mass index and race rather than lifestyle, according to a Rutgers study. The findings appear in Maternal and Child Health Journal. The researchers looked at the concentrations of estrogens and testosterone in 548 healthy women in the first trimester of pregnancy in relation to their lifestyle to better understand what drives elevated sex hormones during fetal development. "Hormones in early development play a key role in human health and disease risk. Since we are unable to directly measure hormones in fetuses as they are developing, the next best way is to study the mother's hormones since they can be transferred to the fetus," said Emily Barrett, an associate professor at Rutgers School of Public Health and a faculty member at Rutgers Environmental and Occupational Health Sciences Institute. 
Previous studies have suggested that excess fetal exposure to estrogens and androgens, which are male sex hormones such as testosterone, may play a role in future risk of reproductive cancers and other conditions such as polycystic ovary syndrome, endometriosis, prostate cancer and semen quality in the infant. In the new study, researchers recruited women during early pregnancy and collected a blood sample to measure hormones. The participants completed questionnaires with items on demographics as well as lifestyle factors, including their alcohol or tobacco use, and stressors in their lives. Most of the women were white, married, well-educated and had an average age of 31. Less than 5 percent drank alcohol and less than 8 percent smoked. The researchers found that older mothers and women who had previously given birth had lower estrogen and testosterone levels. They also found that heavier women had lower estrogen levels, but higher testosterone levels than leaner women. Confirming results from previous work, they found that black women had higher testosterone levels than women of other races, a difference that may help to explain health disparities in reproductive cancers and other hormone-sensitive diseases. According to the Centers for Disease Control and Prevention, black women and white women get breast cancer at about the same rate, but black women die from breast cancer at a higher rate than white women. The study found no variation in maternal hormone concentrations in relation to fetal sex, stressful life events during pregnancy or lifestyle factors such as smoking and alcohol use, suggesting that hormone concentrations were not influenced by maternal behaviors or the gender of the fetus.
"Characterizing sex steroid concentrations during pregnancy may yield important insights into the mother's own future risk of disease as high levels of exposure to estrogen have been shown to increase the risk for breast and ovarian cancer later on," said Barrett, whose research focuses on prenatal exposure to endocrine disruptors, agents which interfere with the normal activity of hormones in the body. | 10.1007/s10995-018-02705-0 |
Physics | New 'refrigerator' super-cools molecules to nanokelvin temperatures | Collisional cooling of ultracold molecules, Nature (2020). DOI: 10.1038/s41586-020-2141-z , nature.com/articles/s41586-020-2141-z Journal information: Nature | http://dx.doi.org/10.1038/s41586-020-2141-z | https://phys.org/news/2020-04-refrigerator-super-cools-molecules-nanokelvin-temperatures.html | Abstract Since the original work on Bose–Einstein condensation 1 , 2 , the use of quantum degenerate gases of atoms has enabled the quantum emulation of important systems in condensed matter and nuclear physics, as well as the study of many-body states that have no analogue in other fields of physics 3 . Ultracold molecules in the micro- and nanokelvin regimes are expected to bring powerful capabilities to quantum emulation 4 and quantum computing 5 , owing to their rich internal degrees of freedom compared to atoms, and to facilitate precision measurement and the study of quantum chemistry 6 . Quantum gases of ultracold atoms can be created using collision-based cooling schemes such as evaporative cooling, but thermalization and collisional cooling have not yet been realized for ultracold molecules. Other techniques, such as the use of supersonic jets and cryogenic buffer gases, have reached temperatures limited to above 10 millikelvin 7 , 8 . Here we show cooling of NaLi molecules to micro- and nanokelvin temperatures through collisions with ultracold Na atoms, with both molecules and atoms prepared in their stretched hyperfine spin states. We find a lower bound on the ratio of elastic to inelastic molecule–atom collisions that is greater than 50—large enough to support sustained collisional cooling. By employing two stages of evaporation, we increase the phase-space density of the molecules by a factor of 20, achieving temperatures as low as 220 nanokelvin. 
The favourable collisional properties of the Na–NaLi system could enable the creation of deeply quantum degenerate dipolar molecules and raise the possibility of using stretched spin states in the cooling of other molecules.

Main

The full potential of ultracold atoms was not realized until the advent of collision-based cooling methods such as evaporative and sympathetic cooling. Although atomic systems have recently been used to demonstrate laser cooling to quantum degeneracy, these schemes still require collisional thermalization 9 , 10 . Therefore, many efforts have been made over the past 15 years 11 to achieve collisional cooling of ultracold molecules. Buffer gas cooling 8 cannot achieve temperatures below 100 mK owing to the rapidly diminishing vapour pressure of buffer gases at such temperatures. Supersonic expansion 7 can produce temperatures around 100 mK. Controlled collisions in crossed molecular beams 12 can decrease the laboratory-frame velocity of particles while narrowing the velocity distribution. However, this technique has not been demonstrated below about 500 mK. Merged supersonic beams can be used to study collisions at energies equivalent to a temperature of 10 mK (ref. 13 ). Cooling below 100 mK calls for trapping molecules in magnetic or electrostatic traps and for good collisional properties (that is, a ratio of elastic to inelastic collisions much greater than 1). Such traps require preparing molecules in weak-field seeking states, which are never the absolute ground state, allowing inelastic state-changing collisions to eject the cold molecules from the trap. A variety of systems have been proposed for evaporative or sympathetic cooling of molecules 11 , 14 , 15 , 16 , 17 , 18 . So far, elastic collisions have been observed clearly in O 2 at temperatures below 1 K (ref. 19 ) and possibly in OH radicals around 10 mK (ref. 20 corrects an earlier report 21 ).
In the O 2 case, inelastic collisions prevent thermalization and collisional cooling. In recent years, the assembly of molecules from ultracold atoms 22 and the direct laser cooling of molecules 23 , 24 , 25 , 26 have both expanded to new molecules and temperature regimes. These techniques have achieved molecular systems at temperatures less than 100 mK, raising the challenge of collisional cooling in the micro- and nanokelvin regimes. Optical traps enable trapping of the absolute ground state, which removes the concern of state-changing collisions. However, collisional cooling in the absolute ground state of chemically stable systems has not yet been realized 27 . By contrast, chemically stable molecular species have shown anomalously high inelastic loss rates that preclude collisional cooling, possibly owing to collision complex formation 28 or interactions with optical trapping beams 29 . In this study we observe sympathetic cooling of rovibrational ground-state triplet 23 Na 6 Li molecules by 23 Na atoms, both of which are prepared in their upper stretched hyperfine spin states (that is, states with both nuclear and electronic spins aligned along the direction of the magnetic bias field). Although sympathetic cooling of one atomic species by another has been observed in various ultracold atomic mixtures 30 , triplet NaLi was considered unlikely to have sufficiently good collisional properties to support such cooling. NaLi has energetically allowed chemical reactions, even in the electronic ground state (that is, a singlet state), and the triplet state has an electronic excitation energy of 0.8 eV or 10,000 K. Furthermore, theoretical studies on various systems have explored the possibility of suppressing inelastic collisions and reactions by spin polarization, giving pessimistic predictions for triplet molecules and more favourable ones for doublet molecules 17 , 31 , 32 , 33 , 34 . 
Nonetheless, here we report clear thermalization and sympathetic cooling of triplet NaLi to 220 nK. We observe 20-fold increases of phase-space density (PSD), opening the possibility of collisional cooling to deep quantum degeneracy. Our experimental setup is summarized in Fig. 1 . Similarly to our previous work 35 , 36 , we produce about 3.0 × 10 4 NaLi molecules in the rovibrational ground state of the triplet potential using a mixture of Na and Li atoms (further details in Methods). We also prepare about 1.0 × 10 5 Na atoms in the upper stretched hyperfine state (in the low-field basis, | F , m F ⟩ = |2, 2 ⟩ , where F and m F are the hyperfine and magnetic quantum numbers, respectively) in a one-dimensional (1D) optical lattice formed by 1,596-nm light. Owing to the differential polarizability at 1,596 nm, molecules feel a deeper trapping potential than atoms. This results in a sudden increase of the potential energy as atoms associate and form molecules (see Fig. 2a ). Immediately after production, the effective temperature of the molecules is 2.80(6) μK and the temperature of Na atoms is 2.42(3) μK (all uncertainties are as defined in the legend of Fig. 2 ). As the molecules thermalize with the Na atoms and a hot fraction of atoms evaporates out of the trap, the temperatures of both particles settle to 2.23(6) μK (see Methods for molecular thermometry).

Fig. 1: Experimental setup. The Na atoms (yellow circles) and NaLi molecules (yellow and red circles on black sticks) are trapped in a 1D optical lattice formed by a 1,596-nm laser, which is retro-reflected. The magnetic field, which defines the quantization axis, is coaxial with the lattice beam. The free atoms that remain after the formation of the ground-state molecules are removed by resonant light in the radial direction ( y axis in the figure).
Stimulated Raman adiabatic passage (STIRAP) is performed with circularly polarized beams of two wavelengths (833 nm and 819 nm) that propagate along the axial direction ( z axis in the figure). σ + and σ − represent left-handed and right-handed circular polarization, respectively. The ground-state molecules are detected by absorption imaging along the axial direction of the atoms resulting from the dissociation of NaLi.

Fig. 2: Thermalization of Na and NaLi. a , The trapping potential of molecules (red solid line) is deeper than that of Na atoms (black dashed line). This allows us to evaporate Na atoms with negligible loss of molecules. b , c , The molecule number ( b ) and temperature ( c ) are measured at various hold times after a 100-ms-long exponential evaporation ramp to a trap power of 0.21 W (7 μK trap depth), followed by a 10-ms-long recompression to 1.2 W (40 μK trap depth; trap-depth values are for NaLi throughout). In the number plot ( b ) the red dashed line is an exponential loss fit and the blue solid line is a two-body loss fit (details in the main text). The dashed line in the temperature plot ( c ) is an exponential fit. No temperature drop occurs in the absence of Na atoms. The exponential fit of the molecule temperature agrees well with the measurement but should be regarded as an interpolation to determine the initial slope of the thermalization. For this measurement, the Na number was about 1.5 × 10 5 . Data values represent the average and error bars represent the standard error of the mean, estimated from the fitting and statistical errors of 3–8 measurements.

Although this initial settling of temperatures hints at sympathetic cooling, we are able to see much stronger effects by forced cooling and heating of Na atoms.
We evaporate Na atoms with almost no loss of molecules by taking advantage of the particles’ different polarizabilities: α NaLi / α Na = ( mω 2 ) NaLi /( mω 2 ) Na ≈ 2.6, where α i is the polarizability of particle i , ω is the angular frequency for oscillation in the trap and m is the particle mass. The curve in Fig. 2c shows the thermalization between molecules and atoms with a large initial temperature difference after an exponential evaporative cooling ramp followed by a recompression of the trap (see Fig. 2 legend). As we hold the particles in the trap, their temperatures approach each other. Owing to the large particle number ratio, N Na / N NaLi ≈ 7, the molecule temperature decreases by 0.68(9) μK after thermalization, whereas the temperature of Na atoms increases only by 70(30) nK. However, if the Na is removed immediately before the hold time, the molecule temperature remains fixed during the same period. As further evidence of thermalization, the sympathetic heating of molecules with hot atoms is shown in Fig. 3 . We first prepare the atom–molecule mixture fully thermalized at 1.5 μK, after a 10-ms-long exponential evaporative ramp, followed by a 200-ms-long recompression of the trap to the initial trap power of 1.5 W (50 μK trap depth). Then, we selectively heat the atoms to 2.07(5) μK by sinusoidally modulating the trap amplitude at twice the sodium trap frequency (2 ω Na = 2π × 920 Hz) with a modulation amplitude of 20% of the trap depth for 100 ms. After 200 ms, particles thermalize and the molecule temperature rises to the temperature of the heated Na atoms. When the trap amplitude is modulated in the same manner without the Na atoms, the temperature of molecules remains at 1.59(5) μK. 
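The asymmetry of the temperature changes during thermalization (the molecules cool by 0.68 μK while the Na bath warms by only tens of nK) follows from the particle-number ratio. The back-of-the-envelope sketch below assumes equal per-particle heat capacity for atoms and molecules (3kB each in a harmonic trap) and neglects evaporative losses; those assumptions are ours, not the paper's:

```python
# Energy balance: N_mol * dT_mol ≈ N_Na * dT_Na if both species carry
# the same per-particle heat capacity (3 k_B each in a harmonic trap).
N_Na_over_N_mol = 7.0   # particle-number ratio quoted in the text
dT_mol_uK = 0.68        # molecule temperature drop, microkelvin

dT_Na_nK = dT_mol_uK / N_Na_over_N_mol * 1000.0  # expected Na rise, nK
print(round(dT_Na_nK))  # ~97 nK, compatible with the measured 70(30) nK
```

The predicted rise agrees with the measured 70(30) nK within the quoted uncertainty.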
The heating process for sodium also induces centre-of-mass motion and breathing oscillations that cause the Na density to depend on time in the early stages of sympathetic heating, which is the reason for the delay in heating and for the non-exponential thermalization curve. Fig. 3: Sympathetic heating. After forced heating of Na atoms (see main text for details), the NaLi molecule temperature (red squares) rises and reaches that of Na atoms (black asterisk) as both particles thermalize. The temperature of molecules without hot atoms (blue circles) remains low. Values and error bars are estimated as in Fig. 2 from four measurements. To measure the rate of thermalization, we return to the simpler situation of cooling in Fig. 2. We fit the temperature to a simple exponential model, T(t) = (T₀ − T∞)exp(−Γ_th t) + T∞, where T₀ and T∞ represent the initial and infinite-time-limit temperature of molecules, respectively, and obtain the thermalization rate, Γ_th = 26(5) s⁻¹. Considering that thermalization requires about 3/ξ collisions (for Na–NaLi, ξ ≈ 1; see Methods), we obtain the average elastic collision rate per particle, Γ_el, from the measured thermalization rate: Γ_el ≈ (3/ξ)Γ_th = 80(14) s⁻¹. In the presence of Na atoms, the initial loss rate of molecules is Γ_inel = 1.29(6) s⁻¹, as obtained from a fit to the exponential loss model N(t) = N₀ exp(−Γ_inel t) (red dashed line in Fig. 2b). Comparing the average elastic collision rate to the total loss rate, we obtain the ratio of elastic to inelastic collisions for NaLi, γ ≳ Γ_el/Γ_inel = 62(12). Without Na atoms, the molecular loss follows a two-body loss model, N(t) = N₀/(βt + 1) (blue solid line in Fig. 2b), from which we obtain an initial loss rate of β = 1.02(6) s⁻¹.
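Both fits quoted above linearize, which makes them easy to reproduce: 1/N is linear in time for the two-body model and log N for the exponential model. A sketch with noise-free synthetic data (the rates Γ_th = 26 s⁻¹, Γ_inel = 1.29 s⁻¹ and β = 1.02 s⁻¹ are from the text; the initial number N0 is hypothetical):

```python
import math

def linfit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) \
        / sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

N0 = 3.0e4                        # hypothetical initial molecule number
t = [0.05 * i for i in range(40)] # hold times, s

# Two-body loss N(t) = N0/(beta*t + 1): 1/N = 1/N0 + (beta/N0)*t,
# so beta is recovered as slope/intercept of a straight-line fit.
beta_true = 1.02
inv_N = [(beta_true * ti + 1) / N0 for ti in t]
a, b = linfit(t, inv_N)
beta_fit = b / a                  # ~1.02 s^-1

# Exponential loss N(t) = N0*exp(-Gamma*t): log N is linear in t.
logN = [math.log(N0) - 1.29 * ti for ti in t]
_, slope = linfit(t, logN)
Gamma_fit = -slope                # ~1.29 s^-1

# Collision bookkeeping from the measured thermalization rate
m_Na, m_NaLi = 23.0, 29.0                       # amu
xi = 4 * m_Na * m_NaLi / (m_Na + m_NaLi) ** 2   # ~0.99 for Na-NaLi
Gamma_el = (3 / xi) * 26.0                      # ~80 s^-1
good_to_bad = Gamma_el / Gamma_fit              # ~62, all losses counted
net = Gamma_el / (Gamma_fit - beta_fit)         # ~300 after subtracting beta
```

The last two lines reproduce the two good-to-bad ratios discussed in the text: one against the total molecular loss rate, one against the Na-induced loss alone.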
By considering the difference between Γ_inel and β as the effective inelastic loss rate for collisions between Na and NaLi, we obtain the ratio of ‘good’ to ‘bad’ collisions, γ ≈ Γ_el/(Γ_inel − β) ≈ 300, with an uncertainty of about 40%. The Na–NaLi loss rate constant is (4.0 ± 1.3) × 10⁻¹³ cm³ s⁻¹; this is more than two orders of magnitude smaller than the universal loss model rate constant for Na–NaLi s-wave collisions 37, which is 1.7 × 10⁻¹⁰ cm³ s⁻¹ (see Methods). Whereas Na and NaLi in their upper stretched states form a stable mixture, as shown, preparing Na atoms in their lowest hyperfine state (|F, m_F⟩ = |1, 1⟩) gives a loss rate consistent with the universal loss model 35. We consider the two different evaporation ramps shown in Fig. 4. The molecule numbers, temperatures and PSDs resulting from these ramps are displayed in Fig. 5. For a single-species system with losses, optimal evaporation can be achieved by a single, continuous decrease of the trap depth. We achieve an increase in PSD by a factor of 7 by using a single stage of evaporation (Fig. 4a, red squares in Fig. 5). This increase is limited by the low initial Na density. In our system, thermalization is dominated by the Na density, whereas loss is dominated by the NaLi density. To overcome the low Na density, we shorten the initial evaporation and recompression cycle ((1) and (2) in Fig. 4b). This cools the Na atoms and quickly increases the density of the cold Na for fast thermalization. After the cold, dense Na efficiently pre-cools the molecules during a short hold of 30 ms in the tight trap (3), we apply the second evaporation ramp (4). In this double evaporation, we achieve a peak PSD of 2.0(2) × 10⁻² (blue circles in Fig. 5), which is 20 times higher than the initial PSD. Before the final recompression of the trap, the lowest temperature of the molecules is 220(20) nK. Fig. 4: Evaporation sequences. The initial power of the 1,596-nm trapping laser is 1.5 W.
At the end of evaporation we recompress the trap to the initial power to increase the thermalization rate and for a straightforward comparison of the NaLi density without re-scaling the trap volume. a, Single evaporation: (i) exponential forced evaporation (τ = 40 ms) for 100 ms with trap depth U(t) = A exp(−t/τ) + B, where A and B are determined from the trap depths at t = 0 and 100 ms; (ii) recompression for 500 ms; (iii) hold for 40 ms for complete thermalization. b, Double evaporation: (1) exponential forced evaporation (τ = 40 ms) for 100 ms; (2) recompression to the initial trap depth for 50 ms; (3) hold for 30 ms; (4) exponential evaporation (τ = 120 ms) for 300 ms; (5) recompression for 200 ms. Fig. 5: Increasing phase-space density. a–c, The number (a), temperature (b) and PSD (c) of NaLi molecules are plotted versus the power of the 1,596-nm trapping laser at the end of the forced exponential evaporation ((i) for single evaporation and (1) for double evaporation in Fig. 4). For the double evaporation, we ramp the power down to 0.06 W (2 μK trap depth) in (4), the lowest point that gives a molecular signal high enough for consistent temperature measurement. Single-stage evaporation to this low trap depth leads to loss of almost every molecule. The black dashed lines in b and c indicate the molecule temperature and PSD after STIRAP and removal of free atoms (shaded areas show ±1 error bar). This PSD corresponds to 2.6(1) × 10⁴ molecules. When Na atoms are not in the trap or when the evaporation is not performed to a sufficiently low trap depth, the PSD is lower than the initial value. This is mainly due to the molecular two-body loss. The solid curves are guides for the eye. Values and error bars are estimated as in Fig. 2 from 4–9 measurements. With Na present, more molecules survive the evaporation sequence as we evaporate Na atoms to a lower trap depth.
This is mainly due to suppressed two-molecule loss. The loss rate constant scales linearly with the temperature owing to the p -wave character of the collisions 37 . Evaporation to too low trap depth decreases the Na density, which makes the sympathetic cooling of molecules inefficient, and anti-evaporation (heating due to the preferential loss of the coldest molecules) dominates. This appears as a temperature increase below a trap power of 0.17 W (5.7 μK trap depth) for a single stage of evaporation, or 0.14 W (4.5 μK trap depth) for two-stage evaporation. Without Na, the ramp-down in trap depth, (i), is adiabatic to a trap power of around 0.3 W, as evidenced by the constant molecule numbers and temperatures. Below that trap power, the ramp-down starts becoming non-adiabatic. As a result, a hot fraction of molecules escape, and both the molecule number and the temperature start decreasing. We note that because fermionic molecules do not thermalize by themselves, this temperature should be regarded as the average kinetic energy of molecules that are not in thermal equilibrium. The favourable collisional properties of the spin-polarized Na–NaLi mixture in fully stretched hyperfine states result from strong suppression of electronic spin flips during collisions, which could otherwise lead to fast reactive or inelastic losses. In fully stretched states, direct spin exchange is forbidden and residual spin flips can occur only by weaker interactions. The main contribution to the weak spin flips is from dipolar relaxation due to direct coupling between the magnetic dipole moments of the spins of Na and NaLi. Studies in other spin-polarized systems 17 , 34 , 38 predict spin flips induced by anisotropic electrostatic interaction at short range and intramolecular spin–spin and spin–rotation couplings to be weaker at ultracold temperatures. Dipolar relaxation can be eliminated by using the strong-field-seeking stretched state, but this was not necessary in our work. 
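The forced-evaporation ramp of Fig. 4, U(t) = A exp(−t/τ) + B, is fully determined by its endpoint depths. A sketch, using τ = 40 ms and the 100-ms ramp duration from the text together with the 50 μK → 7 μK depths of the Fig. 2 example:

```python
import math

def ramp_coeffs(U_start, U_end, T, tau):
    """Solve U(0) = U_start and U(T) = U_end for U(t) = A*exp(-t/tau) + B."""
    A = (U_start - U_end) / (1.0 - math.exp(-T / tau))
    return A, U_start - A

tau, T = 0.040, 0.100                   # s, from the text
A, B = ramp_coeffs(50.0, 7.0, T, tau)   # uK endpoints, as in the Fig. 2 ramp

def U(t):
    return A * math.exp(-t / tau) + B   # trap depth during the ramp
```

Because τ is shorter than the ramp duration, the depth falls fastest at the start; B is the depth the ramp would asymptotically approach if extended.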
It is possible that the Na–NaLi system is favourable owing to the low reduced and molecular masses and the small relativistic effects, which result in a large rotational constant 32 , 39 , low density of states 28 and small spin–orbit coupling 40 , respectively. Further, ref. 31 has shown that intramolecular spin–spin coupling causes spin flips, leading to chemical reactions for triplet NH molecules, thwarting evaporative cooling. Our results show a clear need for further work to determine what other molecules—and what conditions of internal states and external fields—would be suitable for collisional cooling at ultracold temperatures. Technical upgrades to our apparatus should allow cooling into the quantum degenerate regime by increasing the initial Na density. The dominant loss mechanism— p -wave collisions between molecules—is expected to slow down compared to thermalization at lower temperatures, improving the efficiency of cooling. Further, the atomic and molecular states used in this work are magnetically trappable. Magnetic trapping would allow the use of much larger samples with perfect spin polarization. This would also eliminate the major concern of molecular loss induced by trapping light 29 , making NaLi an ideal system for studying quantum chemistry. Concerns about sticky collisions 28 , which can lead to a long-lived collision complex with high density of states, have led to pessimism over the prospects of achieving collisional cooling of ultracold molecules. This work demonstrates that sticky collisions do not limit NaLi, and probably other low-mass systems, suggesting a bright future for cooling light molecules. Methods Atomic sample preparation We produce an ultracold mixture of 23 Na and 6 Li in their upper stretched hyperfine states (in the \(|F,{m}_{F}\rangle \) basis, these states correspond to \(|2,2\rangle \) for Na and \(|3/2,3/2\rangle \) for Li) in an Ioffe–Pritchard magnetic trap at a bias field of 2 G. 
We cool the Na atoms using microwave-induced evaporation and sympathetically cool the Li atoms with the Na atoms 30 , 41 , 42 , 43 . Then, the mixture is transferred into the combination of two coaxial optical traps: a 1,064-nm optical dipole trap (30 μm waist, 10 W power) and a 1,596-nm 1D optical lattice (50 μm waist, 2 W power). After 0.6 s of forced evaporation in the 1,064-nm optical dipole trap, the trap is switched off. Then, in the 1,596-nm 1D lattice, the sample is transferred to the lowest-energy Zeeman states ( \(|1,\,1\rangle \) for Na and \(|1/2,1/2\rangle \) for Li) using a Landau–Zener magnetic field sweep. By controlling the microwave power for the Landau–Zener sweep, we intentionally leave a fraction of Na atoms in the upper stretched hyperfine state for later use as coolants of the NaLi molecules. Molecule formation and detection As previously demonstrated with other atomic species 22 , 35 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , NaLi ground-state molecules are created from Feshbach molecules in the least-bound vibrational state. In a 1,596-nm 1D optical lattice, NaLi Feshbach molecules form in the a 3 Σ + , v = 10, N = 2, m N = −2 state ( v and N are the vibrational and rotational molecular quantum numbers, respectively, and m N is the magnetic quantum number of N ) from the atoms in the lowest-energy Zeeman states by sweeping the magnetic field across an interspecies scattering resonance (Feshbach resonance) at 745 G (ref. 36 ). Then, about 98% of Feshbach molecules are transferred to the triplet ground state ( a 3 Σ + , v = 0, N = 0) by means of a 30-μs-long STIRAP sequence with an intermediate state ( v = 11, N = 1, m N = −1) in the c 3 Σ + excited potential. The use of 1,596-nm light for the trap improves the lifetime of the NaLi molecules by suppressing spontaneous photon scattering compared to the more common 1,064-nm trap. The 1D lattice provides strong axial confinement to overcome the anti-trapping produced by magnetic curvature 35 . 
The STIRAP light (upleg: 833 nm, 360.00381(1) THz; downleg: 819 nm, 366.23611(1) THz) 35 , 52 , 53 is obtained from two home-built external cavity diode lasers (ECDL) 54 locked to an ultralow-expansion cavity. The relative linewidth of the two ECDLs is less than 1 kHz. To attain sufficient optical power for the upleg transition, we use a 500-mW high-power diode laser injection-locked by 2 mW of ECDL light. After molecule formation, a resonant light pulse is applied for about 1 ms to remove the free atoms that were not converted to molecules. These atoms in the lowest-energy Zeeman states accelerate the loss of NaLi to roughly the universal loss model rate 35 . Sodium atoms in this state are optically pumped by the resonant removal light into a dark Zeeman state ( \(|F,\,{m}_{F}\rangle =|2,1\rangle \) ). Adding a repump beam to address this state removes the need to reduce the magnetic field (by instantaneously shutting off the current in the coils generating the 745-G field), which caused considerable heating and loss of molecules in our previous work 35 . We detect ground-state molecules using reverse STIRAP and a magnetic field sweep across the Feshbach resonance followed by resonant absorption imaging of the resulting free atoms. Thermometry of molecules To avoid issues with different atomic and molecular trapping potentials, we dissociate molecules in free space after turning off the trap, which allows accurate molecular thermometry. Given that the particle velocities are fixed after the trap is turned off, the density at some time of flight maps the velocity distribution of molecules in the trap. The molecule temperature is determined from a fit to such density profiles. Extra kinetic energy could be added in quadrature during the dissociation of Feshbach molecules into atomic continuum states 55 . We determine the parameters of Feshbach dissociation (that is, the rate of the magnetic field sweep) to minimize the released dissociation energy. 
All uncertainties quoted in this article are from statistical and fitting errors. The systematic uncertainty, which is mainly determined from the imaging magnification, is about 6% but has no effect on the measured thermalization rates because it gives a common multiplier to all temperature measurements. Na–NaLi s -wave elastic-scattering cross-section The elastic-scattering rate between Na and NaLi is given by Γ el = n ov σ el v rel where n ov is the overlap-averaged density of the mixture, σ el is the elastic-scattering cross-section and v rel is the mean relative velocity 56 : $$\begin{array}{l}\,{v}_{{\rm{rel}}}=\sqrt{\frac{8{k}_{{\rm{B}}}}{{\rm{\pi }}}\left(\frac{{T}_{{\rm{Na}}}}{{m}_{{\rm{Na}}}}+\frac{{T}_{{\rm{NaLi}}}}{{m}_{{\rm{NaLi}}}}\right)}\\ \,{n}_{{\rm{ov}}}=\frac{I}{\frac{{N}_{{\rm{Na}}}{N}_{{\rm{NaLi}}}}{{N}_{{\rm{Na}}}+{N}_{{\rm{NaLi}}}}}=\frac{{N}_{{\rm{Na}}}{N}_{{\rm{NaLi}}}{\bar{\omega }}_{{\rm{Na}}}^{3}\,{\left[\left(\frac{2{\rm{\pi }}{k}_{{\rm{B}}}{T}_{{\rm{Na}}}}{{m}_{{\rm{Na}}}}\right)\left(1+\frac{{\alpha }_{{\rm{Na}}}}{{\alpha }_{{\rm{NaLi}}}}\frac{{T}_{{\rm{NaLi}}}}{{T}_{{\rm{Na}}}}\right)\right]}^{-\frac{3}{2}}}{\frac{{N}_{{\rm{Na}}}{N}_{{\rm{NaLi}}}}{{N}_{{\rm{Na}}}+{N}_{{\rm{NaLi}}}}}\end{array}$$ (1) where m i , T i and α i represent the mass, temperature and polarizability, respectively, of particle i , \(I={\int }^{}{\rm{d}}V{n}_{{\rm{Na}}}{n}_{{\rm{NaLi}}}\) is the density overlap integral of the mixture in a harmonic trap, \({\bar{\omega }}_{{\rm{Na}}}={({\omega }_{x}{\omega }_{y}{\omega }_{z})}^{1/3}=2{\rm{\pi }}\times {(540\times 410\times 34,000)}^{1/3}\,{\rm{Hz}}\) is the geometric mean of the trap frequencies, and k B is the Boltzmann constant. The differential gravitational sag between molecules and atoms has a negligible effect on the overlap density. 
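The mean relative velocity in equation (1) is straightforward to evaluate, and inverting the low-energy limit σ_el ≈ 4πa² recovers the scattering length from the cross-section reported in this section (2.4 × 10⁻¹¹ cm²). The temperatures below are illustrative μK-scale values, not the fitted ones:

```python
import math

kB  = 1.380649e-23     # J/K
amu = 1.66053907e-27   # kg
a0  = 5.29177e-11      # m, Bohr radius
m_Na, m_NaLi = 23 * amu, 29 * amu

# Mean relative velocity from Eq. (1) at illustrative uK temperatures
T_Na, T_NaLi = 1.5e-6, 2.2e-6   # K
v_rel = math.sqrt(8 * kB / math.pi * (T_Na / m_Na + T_NaLi / m_NaLi))  # ~5 cm/s

# Invert the low-energy s-wave limit sigma_el ~ 4*pi*a^2
sigma_el = 2.4e-15              # m^2 (2.4e-11 cm^2 from the text)
a_scatt = math.sqrt(sigma_el / (4 * math.pi)) / a0   # ~261 Bohr radii
```

The ~261 a₀ result agrees with the quoted 263(24) a₀; the small difference comes from the effective-range and ka corrections retained in the full cross-section formula.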
The effective number of particles per lattice site, N (Na,NaLi) , is the total number of particles divided by the factor l eff / a , where a = λ /2 is the lattice spacing ( λ , wavelength of the trap laser) and l eff is the effective axial width (in the z direction; see Fig. 1 ) of the atomic density distribution across all lattice sites: $${l}_{{\rm{eff}}}=2\frac{{\int }_{0}^{\infty }z\exp (-{z}^{2}/2{\sigma }^{2}){\rm{d}}z}{{\int }_{0}^{\infty }\exp (-{z}^{2}/2{\sigma }^{2}){\rm{d}}z}=\sqrt{\frac{8}{{\rm{\pi }}}}\sigma $$ (2) where σ = 0.72(1) mm is the Gaussian width, obtained from a fit of the atomic density profile. In a mass-imbalanced system, the factor 3/ ξ quantifies the approximate average number of collisions per particle required for thermalization, where ξ = 4 m Na m NaLi /( m Na + m NaLi ) 2 (ref. 42 ). For the Na–NaLi mixture, this corresponds to approximately three collisions because the mass difference is small. Thus, the relation between the thermalization rate and the elastic-scattering rate is given by Γ th ≈ Γ el /(3/ ξ ). In our system, where the particle number is largely imbalanced (that is, N Na / N NaLi ≈ 7), we can write the thermalization rate as: $${{\Gamma }}_{{\rm{th}}}\approx \frac{(I/{N}_{{\rm{NaLi}}}){\sigma }_{{\rm{el}}}{v}_{{\rm{rel}}}}{3/\xi }$$ (3) where I / N NaLi is the average density of Na atoms seen by the NaLi molecules. Given the measured thermalization rate, we obtain the elastic-scattering cross-section between a molecule and an atom, σ el ≈ (3/ ξ )( N NaLi / v rel I ) Γ th = 2.4(5) × 10 −11 cm 2 , and the corresponding scattering length, a = 263(24) a 0 , where a 0 is the Bohr radius. In addition to the simple exponential fit, we perform a numerical simulation of the thermalization process based on the differential equation for Δ T = T NaLi − T Na (refs. 
42 , 43 , 56 ): $$\frac{1}{\Delta T}\frac{{\rm{d}}(\Delta T)}{{\rm{d}}t}=-{\varGamma }_{{\rm{th}}}=-\frac{\xi }{3}{n}_{{\rm{ov}}}{\sigma }_{{\rm{el}}}{v}_{{\rm{rel}}}$$ (4) In the numerical simulation, the temperature dependence of n ov , σ el and v rel is fully taken into account. The s -wave elastic-scattering cross-section between Na and NaLi is given by \({\sigma }_{{\rm{e}}{\rm{l}}}=\frac{4{\rm{\pi }}{a}^{2}}{[1-(1/2){k}^{2}{r}_{0}a{]}^{2}+{(ka)}^{2}}\) , where ħk = μv rel is the relative momentum ( μ is the reduced mass, ħ is the reduced Planck constant) and r 0 is the effective range calculated from the scattering length, a , and from \({C}_{6}^{{\rm{Na}}-{\rm{NaLi}}}\) (see Methods section ‘Na–NaLi loss rate constant from the universal loss model’). We iteratively solve equation ( 4 ) by varying a , and find the optimal solution that minimizes the weighted sum of squared residuals: \({{\sum }_{i}[(\Delta {T}_{m,i}-\Delta {T}_{n,i})/{\sigma }_{\Delta {T}_{m,i}}]}^{2}\) , where the subscript n denotes the numerical solution, m denotes the measurement and \({\sigma }_{\Delta {T}_{m,i}}\) is the uncertainty of the measurement at the i th time step, estimated as in Fig. 2. The numerical solution for a obtained in this way agrees with the result calculated from the simple exponential model described above within the uncertainty. Na–NaLi loss rate constant The Na–NaLi loss rate constant is defined as the differential loss rate, Γ inel − β , normalized by the density of Na overlapped with NaLi (that is, the overlap-averaged density of the mixture, n ov ), where Γ inel is the molecular loss rate with Na and β is without Na. As shown in the calculation of the s -wave elastic-scattering cross-section between Na and NaLi, n ov ≈ I / N NaLi , given the large imbalance of particle numbers. Thus, the Na–NaLi loss rate constant mentioned in the main text is calculated from ( Γ inel − β )/( I / N NaLi ). 
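A minimal Euler-step version of the equation (4) simulation, updating only v_rel with temperature (an assumption made for brevity: n_ov and σ_el are held fixed here, whereas the simulation described above recomputes all three quantities; the overlap density is a hypothetical value):

```python
import math

kB, amu = 1.380649e-23, 1.66053907e-27
m_Na, m_NaLi = 23 * amu, 29 * amu
xi = 4 * m_Na * m_NaLi / (m_Na + m_NaLi) ** 2   # ~0.99

n_ov     = 2.0e17     # m^-3, hypothetical overlap-averaged Na density
sigma_el = 2.4e-15    # m^2, from the text
T_Na     = 1.5e-6     # K; coolant held fixed (Na number is much larger)
T_NaLi, dt = 2.2e-6, 1.0e-4

for _ in range(2000):  # integrate d(dT)/dt = -(xi/3)*n_ov*sigma_el*v_rel*dT for 0.2 s
    v_rel = math.sqrt(8 * kB / math.pi * (T_Na / m_Na + T_NaLi / m_NaLi))
    gamma_th = (xi / 3) * n_ov * sigma_el * v_rel
    T_NaLi -= gamma_th * (T_NaLi - T_Na) * dt
```

With these numbers the thermalization rate starts near 8.6 s⁻¹ and slows slightly as the molecules cool, so ΔT shrinks by roughly a factor of five over the 0.2 s window.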
Na-NaLi loss rate constant from the universal loss model We compare the measured loss rate constant for Na and NaLi with the theoretical prediction based on the universal loss model 37 . When calculating the theoretical loss rate constant, we estimate the van der Waals coefficient between Na and NaLi, \({C}_{6}^{{\rm{Na}}-{\rm{NaLi}}}\) , as the sum of the C 6 coefficients for the atomic pairs that constitute the collision partners, using values from ref. 57 : \({C}_{6}^{{\rm{Na}}-{\rm{NaLi}}}={C}_{6}^{{\rm{Na}}-{\rm{Li}}}+{C}_{6}^{{\rm{Na}}-{\rm{Na}}}\) . Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. | For years, scientists have looked for ways to cool molecules down to ultracold temperatures, at which point the molecules should slow to a crawl, allowing scientists to precisely control their quantum behavior. This could enable researchers to use molecules as complex bits for quantum computing, tuning individual molecules like tiny knobs to carry out multiple streams of calculations at a time. While scientists have super-cooled atoms, doing the same for molecules, which are more complex in their behavior and structure, has proven to be a much bigger challenge. Now MIT physicists have found a way to cool molecules of sodium lithium down to 200 billionths of a Kelvin, just a hair above absolute zero. They did so by applying a technique called collisional cooling, in which they immersed molecules of cold sodium lithium in a cloud of even colder sodium atoms. The ultracold atoms acted as a refrigerant to cool the molecules even further. Collisional cooling is a standard technique used to cool down atoms using other, colder atoms. 
And for more than a decade, researchers have attempted to supercool a number of different molecules using collisional cooling, only to find that when molecules collided with atoms, they exchanged energy in such a way that the molecules were heated or destroyed in the process (so-called "bad" collisions). In their own experiments, the MIT researchers found that if sodium lithium molecules and sodium atoms were made to spin in the same way, they could avoid self-destructing, and instead engaged in "good" collisions, where the atoms took away the molecules' energy in the form of heat. The team used precise control of magnetic fields and an intricate system of lasers to choreograph the spin and the rotational motion of the molecules. As a result, the atom-molecule mixture had a high ratio of good-to-bad collisions and was cooled down from 2 microkelvins to 220 nanokelvins. "Collisional cooling has been the workhorse for cooling atoms," adds Nobel Prize laureate Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT. "I wasn't convinced that our scheme would work, but since we didn't know for sure, we had to try it. We know now that it works for cooling sodium lithium molecules. Whether it will work for other classes of molecules remains to be seen." Their findings, published in the journal Nature, mark the first time researchers have successfully used collisional cooling to cool molecules down to nanokelvin temperatures. Ketterle's coauthors on the paper are lead author Hyungmok Son, a graduate student in Harvard University's Department of Physics, along with MIT physics graduate student Juliana Park, and Alan Jamison, a professor of physics at the University of Waterloo and visiting scientist in MIT's Research Laboratory of Electronics.
Reaching ultralow temperatures In the past, scientists found that when they tried to cool molecules down to ultracold temperatures by surrounding them with even colder atoms, the particles collided such that the atoms imparted extra energy or rotation to the molecules, sending them flying out of the trap or self-destructing altogether by chemical reactions. The MIT researchers wondered whether molecules and atoms, having the same spin, could avoid this effect, and remain ultracold and stable as a result. They looked to test their idea with sodium lithium, a "diatomic" molecule that Ketterle's group experiments with regularly, consisting of one lithium and one sodium atom. "Sodium lithium molecules are quite different from other molecules people have tried," Jamison says. "Many folks expected those differences would make cooling even less likely to work. However, we had a feeling these differences could be an advantage instead of a detriment." The researchers fine-tuned a system of more than 20 laser beams and various magnetic fields to trap and cool atoms of sodium and lithium in a vacuum chamber, down to about 2 microkelvins—a temperature Son says is optimal for the atoms to bond together as sodium lithium molecules. Once the researchers were able to produce enough molecules, they shone laser beams of specific frequencies and polarizations to control the quantum state of the molecules and carefully tuned microwave fields to make atoms spin in the same way as the molecules. "Then we make the refrigerator colder and colder," says Son, referring to the sodium atoms that surround the cloud of the newly formed molecules. "We lower the power of the trapping laser, making the optical trap looser and looser, which brings the temperature of sodium atoms down, and further cools the molecules, to 200 billionths of a kelvin." The group observed that the molecules were able to remain at these ultracold temperatures for up to one second.
"In our world, a second is very long," Ketterle says. "What you want to do with these molecules is quantum computation and exploring new materials, which all can be done in small fractions of a second." If the team can get sodium lithium molecules to be about five times colder than what they have so far achieved, they will have reached a so-called quantum degenerate regime where individual molecules become indistinguishable and their collective behavior is controlled by quantum mechanics. Son and his colleagues have some ideas for how to achieve this, which will involve months of work in optimizing their setup, as well as acquiring a new laser to integrate into their setup. "Our work will lead to discussion in our community why collisional cooling has worked for us but not for others," Son says. "Perhaps we will soon have predictions how other molecules could be cooled in this way." | 10.1038/s41586-020-2141-z
Medicine | Obesity in pregnant moms linked to lag in their sons' development and IQ | Elizabeth M. Widen et al, Prepregnancy obesity is associated with cognitive outcomes in boys in a low-income, multiethnic birth cohort, BMC Pediatrics (2019). DOI: 10.1186/s12887-019-1853-4 Amy R. Nichols et al. Prepregnancy obesity is associated with lower psychomotor development scores in boys at age 3 in a low-income, minority birth cohort, Journal of Developmental Origins of Health and Disease (2019). DOI: 10.1017/S2040174419000412 | http://dx.doi.org/10.1186/s12887-019-1853-4 | https://medicalxpress.com/news/2019-12-obesity-pregnant-moms-linked-lag.html | Abstract Background Maternal obesity and high gestational weight gain (GWG) disproportionally affect low-income populations and may be associated with child neurodevelopment in a sex-specific manner. We examined sex-specific associations between prepregnancy BMI, GWG, and child neurodevelopment at age 7. Methods Data are from a prospective low-income cohort of African American and Dominican women ( n = 368; 44.8% male offspring) enrolled during the second half of pregnancy from 1998 to 2006. Neurodevelopment was measured using the Wechsler Intelligence Scale for Children (WISC-IV) at approximately child age 7. Linear regression estimated associations between prepregnancy BMI, GWG, and child outcomes, adjusting for race/ethnicity, marital status, gestational age at delivery, maternal education, maternal IQ and child age. Results Overweight affected 23.9% of mothers and obesity affected 22.6%. At age 7, full-scale IQ was higher among girls (99.7 ± 11.6) compared to boys (96.9 ± 13.3). Among boys, but not girls, prepregnancy overweight and obesity were associated with lower full-scale IQ scores [overweight β: − 7.1, 95% CI: (− 12.1, − 2.0); obesity β: − 5.7, 95% CI: (− 10.7, − 0.7)]. GWG was not associated with full-scale IQ in either sex. 
Conclusions Prepregnancy overweight and obesity were associated with lower IQ among boys, but not girls, at 7 years. These findings are important considering overweight and obesity prevalence and the long-term implications of early cognitive development. Background Low-income, urban children are at higher risk of not achieving their developmental potential [1, 2, 3]. Furthermore, low-income, multiethnic populations are disproportionally affected by adverse prenatal factors, such as excessive maternal adiposity and high gestational weight gain (GWG) [4, 5]. Prior studies suggest that prepregnancy body mass index (BMI) and/or GWG may be negatively associated with cognitive development in early and mid-childhood [6, 7, 8, 9, 10, 11, 12, 13, 14]; however, these associations have not been examined in a low-income, multiethnic urban population. Fetal development depends on maternal nutrition status, but the systemic inflammation, metabolic stress, and hormonal perturbations that accompany excess adiposity may adversely affect placental function and fetal development at critical phases [15, 16, 17]. While child sex is a determinant of behavior and cognition, and evidence suggests that boys and girls respond differently to adverse exposures (e.g., poverty, stress, prenatal lead exposure [18, 19]), the interplay among maternal BMI and/or GWG, child sex and cognitive development is poorly understood. We recently reported differences in associations of maternal prepregnancy BMI and child development by sex in our cohort at age 3; specifically we found that maternal obesity was associated with lower psychomotor development index scores in boys, but not girls [20]. Whether these sex-specific effects persist into mid-childhood remains unknown. Child growth and development are also shaped by environmental and socioeconomic factors, many of which are interrelated.
Although partially heritable, child cognition may be predicted by postnatal aspects of the home environment, such as parental nurturance or environmental stimulation [ 21 , 22 ]. A more nurturing environment has the potential to temper adverse effects from other key determinants, such as limited socioeconomic resources, environmental exposures and possibly maternal excess adiposity or GWG; however, this has not been evaluated [ 12 , 23 , 24 ]. Environmental toxicant exposures, including pesticides and air pollution, are associated with child neural development [ 25 , 26 , 27 , 28 , 29 ] and have been linked to weight and fat mass gain [ 30 , 31 , 32 , 33 , 34 , 35 ]. Because pregnancy includes shifts in adipose tissue depots [ 36 ], toxicant exposure levels in utero could potentially vary by prepregnancy BMI and GWG; but it is unclear if toxicants impact associations between BMI and/or GWG and child cognition. Therefore, among low-income African American and Dominican urban children participating in the Columbia Center for Children’s Environmental Health (CCCEH) Mothers and Newborns Study, we examined whether maternal prepregnancy BMI and GWG were related to neurodevelopment at child age 7 and if associations varied by child sex. We hypothesized that maternal obesity and greater GWG would be associated with lower IQ, and that associations would be stronger among boys. Moreover, we evaluated whether a more nurturing postnatal home environment changed directions of associations. We also conducted a sensitivity analysis to evaluate whether associations observed were moderated or confounded by prenatal exposure to chlorpyrifos (CPF) and polycyclic aromatic hydrocarbons (PAH), which were previously associated with decreased child IQ in our population [ 25 , 26 ]. Methods This analysis was conducted in a subset of a cohort designed to examine the role of environmental exposures on birth outcomes. 
Since 1997, the CCCEH Mothers and Newborns cohort (n = 727) has followed mother-child dyads from northern Manhattan and the South Bronx, previously described in detail [37]. From 1997 to 2006, Dominican and African American women with singleton gestations were enrolled from prenatal clinics at New York Presbyterian Medical Center and Harlem Hospital if they met eligibility criteria, including first prenatal visit < 20 weeks of gestation and no self-reported diabetes, hypertension, HIV, illicit drug use or smoking during pregnancy. An initial prenatal visit during the second or third trimester included maternal measurements and an interviewer-administered questionnaire. Self-reported prepregnancy weight, income, marital status, exposure to environmental tobacco smoke, and prenatal distress, including demoralization (i.e. psychological stress) [38], use of public assistance, and material hardship (self-report of challenges affording food, paying utilities) [39] were assessed. Self-reported height was obtained at the prenatal visit, and measured height was obtained at postnatal follow-up visits. Maternal height data checking and cleaning in this cohort was previously described in detail [40]. After delivery, medical records were abstracted to ascertain prenatal medical history, last measured weight prior to delivery and infant birth weight. Total GWG was calculated by subtracting the self-reported prepregnancy weight from the last measured weight prior to delivery. BMI category-specific gestational-age standardized weight gain Z-scores (GWG Z-scores) were calculated from total GWG, as previously described, for women with last measured prenatal weights within 4 weeks of delivery [41, 42]. Positive GWG Z-scores indicate that GWG is above average for a given gestational age, and negative Z-scores indicate that GWG is below average for a given gestational age.
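The weight-gain bookkeeping described above reduces to a subtraction and a chart lookup. A sketch (the chart values are hypothetical stand-ins for the BMI-category-specific, gestational-age-standardized references of refs. [41, 42]):

```python
# Total GWG: last measured weight before delivery minus self-reported
# prepregnancy weight (both in kg).
def total_gwg(last_weight, prepregnancy_weight):
    return last_weight - prepregnancy_weight

# Hypothetical reference chart for one BMI category:
# gestational week -> (mean cumulative gain in kg, SD)
CHART = {38: (12.5, 4.0), 39: (13.0, 4.1), 40: (13.5, 4.2)}

def gwg_z(gwg, gest_week):
    """Positive z: above-average gain for that gestational age."""
    mean, sd = CHART[gest_week]
    return (gwg - mean) / sd

z = gwg_z(total_gwg(81.0, 65.0), 40)   # 16 kg gained by 40 weeks -> z ~ +0.6
```

Standardizing by gestational age is the point of the z-score: a 16 kg gain means something different at 34 weeks than at 40.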
For tests of interaction, we used the GWG Z-score calculated from the normal-weight reference for all participants; for all other tests, BMI-category-specific Z-scores were used. Maternal intelligence was assessed with the Test of Nonverbal Intelligence (TONI, 2nd edition), a 15-min language-free measure of general intelligence, at child age 3 years during a follow-up visit at our testing center. During a home visit, at mean child age 3.6 years (range 1.1–6.3 years), a trained researcher conducted the 1-h unstructured Home Observation for Measurement of the Environment (HOME) Inventory to assess learning materials, language stimulation, academic stimulation, variety, and parental responsivity, modeling and acceptance [ 28 ]. At child age 7, the Wechsler Intelligence Scale for Children (WISC-IV) was administered by a trained bilingual research assistant. Ten WISC-IV subscales were used for this study [ 29 ]. Raw scores were converted into scaled scores, as previously described, and scaled scores were combined into composite scores assessing four cognitive indices: verbal comprehension, perceptual reasoning, working memory and processing speed. The composite scores were summed to yield a full-scale composite IQ score. Average expected performance on the WISC-IV is a score of 100 (with a standard deviation of 15), and intellectual disability is typically defined as a WISC-IV full-scale IQ score less than or equal to 70. This study was approved by the Institutional Review Board at Columbia University. Informed consent was obtained from all participating mothers and assent was obtained from the children at age 7. Analyses were conducted with Stata 14.0 (Stata-Corp, College Station, TX, USA) using an alpha of 0.05 and 0.1 for statistical tests of a priori hypotheses and interactions, respectively. A complete-case analysis was conducted. Baseline characteristics were compared using chi-square tests, t-tests, and Wilcoxon rank-sum tests.
ANOVA was used to compare mean characteristics across prepregnancy BMI categories by child sex. Standard BMI categories were used to allow for comparisons with other reports and with our findings at age 3 [ 20 ]. Multivariable linear regression was used to evaluate associations of 1) maternal prepregnancy BMI category and 2) prepregnancy BMI category and GWG Z-score [ 41 , 42 ] with child continuous WISC-IV full-scale IQ and index-specific scores. Potential confounders and effect modifiers were identified by causal diagrams and literature review. Potential effect modifiers of the associations between prepregnancy BMI and child outcome included child sex and GWG. First, we evaluated whether associations between prepregnancy BMI category and child IQ varied by child sex on the additive scale by including an interaction term between BMI and sex. We observed effect modification by sex, so all subsequent models were sex-stratified. Then, we included interaction terms between GWG and prepregnancy BMI category to examine effect modification by GWG Z-score on the additive scale. Potential confounders included maternal race/ethnicity (Dominican or African American), marital status (yes/no, married or cohabitating), education (≥high school vs. <high school), age (continuous), parity (nulliparous vs. parous), maternal IQ (continuous), demoralization (total score > 1.55, representing the highest quartile of demoralization in the sample) and hardship (yes/no, defined as at least 1 unmet basic need: going without food, shelter, utilities or clothing at least once during pregnancy). Potential confounders were retained in the model if they changed the beta coefficient for BMI category by > 10%. The final adjustment set included maternal race/ethnicity, marital status, education and maternal IQ, plus child gestational age at delivery (weeks) and age at testing (months) to reduce variance in the outcome.
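The modelling strategy described above can be sketched on synthetic data: fit an interaction term between BMI and sex on the additive scale, stratify by sex, and apply the >10% change-in-estimate rule for confounder retention. All variable names, effect sizes and the confounding structure below are illustrative assumptions, not the study's dataset.

```python
# Minimal sketch of the regression strategy described in the text,
# on synthetic (assumed) data, using plain least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 400
obese = rng.integers(0, 2, n)          # prepregnancy obesity (1 = yes)
boy = rng.integers(0, 2, n)            # child sex (1 = boy)
maternal_iq = 100 - 3.0 * obese + rng.normal(0, 10, n)  # confounded with BMI
iq = 100 - 6.0 * obese * boy + 0.3 * (maternal_iq - 100) + rng.normal(0, 8, n)

def ols(y, *cols):
    """Ordinary least squares with an intercept; returns the coefficients."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# (1) interaction between BMI and sex on the additive scale
interaction_coef = ols(iq, obese, boy, obese * boy, maternal_iq)[3]

# (2) sex-stratified estimates in boys, crude vs. adjusted
m = boy == 1
crude = ols(iq[m], obese[m])[1]
adjusted = ols(iq[m], obese[m], maternal_iq[m])[1]

# (3) retain the confounder if it shifts the BMI coefficient by > 10%
change = abs((adjusted - crude) / crude)
print(f"interaction={interaction_coef:.1f} crude={crude:.1f} adjusted={adjusted:.1f}")
```

In this simulation the interaction and the stratified BMI coefficients come out negative, mirroring the qualitative pattern reported for boys; the change-in-estimate comparison illustrates the retention rule only.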
We investigated the postnatal HOME score (continuous) by adding this factor to the primary model and examining the change in beta coefficients. An additional sensitivity analysis examined whether inclusion of the common environmental toxicants chlorpyrifos (CPF) and polycyclic aromatic hydrocarbons (PAH), collected as part of the original study design, modified or confounded associations (see Additional file 1 for details). Despite strategies designed to improve retention in this longitudinal study [ 43 ], a number of participants lacked outcome data due to loss to follow-up by child age 7. To address this, inverse probability weighting (IPW) was used to assess effects of attrition, as previously conducted in this cohort [ 44 ]. Separately for boys and girls, a logistic regression model was fit with baseline data, including maternal prepregnancy BMI, parity, age, race/ethnicity, education and hardship, predicting successful retention, from which a predicted probability was estimated; the inverse of this probability was used as a sampling weight in the re-analysis of the linear models. Results From the original cohort ( n = 727), complete data were available on 368 dyads (Fig. 1 ). Baseline characteristics were similar between included and excluded dyads (data not shown); however, compared to those not included, the relative proportion of African American dyads was higher (41.3 vs. 28.4%) and that of Dominican dyads was lower (58.7 vs. 71.6%). Fig. 1 Participant flow diagram Among all mothers, average total GWG was 16.5 ± 7.4 kg (mean ± SD) and the average GWG Z-score was 0.16 ± 3.6. Table 1 shows baseline characteristics and child measures by sex. At child age 7, full-scale IQ and working memory scores were higher among girls compared to boys. Unadjusted mean values for WISC-IV scores by prepregnancy BMI and child sex are outlined in Fig. 2 .
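The attrition analysis described above (a logistic model predicting retention from baseline data, with the inverse predicted probability used as a sampling weight) can be sketched as follows. The covariate, retention model and outcome are simulated assumptions, not the cohort's data.

```python
# Sketch of inverse probability weighting (IPW) for attrition, on
# simulated data: retention depends on a baseline covariate, so the
# naive retained-sample mean is biased and IPW re-weights it back.
import numpy as np

rng = np.random.default_rng(1)
n = 700
x = rng.normal(0, 1, n)                      # baseline covariate (centred)
y = 100 + 2.0 * x + rng.normal(0, 5, n)      # outcome of interest
p_ret = 1 / (1 + np.exp(-(0.5 + 0.8 * x)))   # retention probability rises with x
retained = rng.random(n) < p_ret

def fit_logistic(X, t, iters=25):
    """Newton-Raphson logistic regression; X must include an intercept column."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        b += np.linalg.solve(X.T @ ((p * (1 - p))[:, None] * X), X.T @ (t - p))
    return b

X = np.column_stack([np.ones(n), x])
beta = fit_logistic(X, retained.astype(float))
p_hat = 1 / (1 + np.exp(-X @ beta))
w = 1 / p_hat[retained]                      # sampling weights for re-analysis

naive = y[retained].mean()
weighted = np.average(y[retained], weights=w)
print(f"naive={naive:.2f} IPW={weighted:.2f}")
```

The weighted estimate recovers the full-population mean that the retained subsample alone would misstate; in the study the same weights enter the linear models rather than a simple mean.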
In boys, perceptual reasoning, full-scale IQ, and processing speed scores varied by prepregnancy BMI category, with higher scores found among boys born to women with normal prepregnancy BMI values (see Additional file 1 : Table S1). Scores did not vary by prepregnancy BMI in girls. Table 1 Participant demographics and outcome values by child sex ( n = 368) Fig. 2 Mean values (mean ± SD) for child WISC-IV scores at age 7 by sex and prepregnancy BMI category. *Indicates that scores vary across prepregnancy BMI categories. In our multivariable models, the association between prepregnancy BMI and child cognitive outcomes varied by sex. Specifically, the interaction p -values between prepregnancy overweight or obesity and infant sex were 0.06 and 0.09, respectively, for full-scale IQ. This suggests that associations between prepregnancy BMI and child IQ were different among girls compared to boys. As full-scale IQ is a composite score reflecting four cognitive indices, we sex-stratified subsequent full-scale and index-specific models. Among boys in our initial multivariable models with and without adjustment for GWG (Table 2 – Models 1 & 2 ), maternal overweight and obesity were associated with lower full-scale IQ and perceptual reasoning scores. Only maternal overweight was associated with lower processing speed scores, and only maternal obesity was associated with lower verbal comprehension scores. Among girls, prepregnancy BMI category was not associated with full-scale IQ or any of the four indices in models with and without adjustment for GWG (Table 2 – Models 3 & 4 ). No interactions between prepregnancy BMI category and GWG were observed within the sex-stratified models. When GWG Z-scores were included as covariates in the sex-stratified models, GWG was not associated with cognitive outcomes in boys. Among girls, however, an inverse association between GWG and perceptual reasoning was observed.
Table 2 Adjusted associations between maternal prepregnancy BMI, pregnancy weight gain and child cognitive test scores in boys and girls at age 7, Columbia Center for Children’s Environmental Health, enrolled from 1998 to 2006 Full size table In the primary models with additional adjustment for postnatal HOME score (see Additional file 1 : Tables S2 & S3 - Model 1), we found that the HOME score impacted several associations, some by > 10%. In boys, BMI category beta coefficients for full-scale IQ and verbal comprehension were attenuated after adjustment for the HOME score. The effect size on full-scale IQ among boys for maternal obesity compared to normal weight was − 6.5 without HOME adjustment, and − 5.7 with HOME adjustment (a 12% difference). Furthermore, the association between maternal obesity and lower verbal comprehension scores in boys no longer existed after adjustment for the HOME score. Calculation of IPW for retention at child age 7 showed that African American race was associated with follow-up for girls, but not boys, while other factors were not associated with retention (data not shown). Weighting the data did not appreciably alter associations for prepregnancy BMI category models, or for models with both prepregnancy BMI category and GWG (see Additional file 1 : Tables S1 & S2 – Models 2 & 3). In toxicant sensitivity analyses, we found no evidence of effect measure modification or confounding of prepregnancy BMI by high CPF or PAH [ 45 ] (see Additional file 1 ). Discussion In this longitudinal cohort of low-income, urban, African American and Dominican maternal-child dyads, we found sex-specific associations in mid-childhood between maternal prepregnancy BMI and child cognitive outcomes in boys, and a limited association between GWG and child perceptual reasoning in girls. 
Specifically, in boys, maternal overweight and obesity were associated with lower full-scale IQ and perceptual reasoning scores at child age 7, while maternal overweight was associated with lower processing speed scores. Maternal obesity was also associated with lower verbal comprehension scores, although this deficit was attenuated when HOME score was added to the model. The effect sizes for maternal prepregnancy overweight or obesity in boys, when compared to normal weight women, ranged from 4.6 to almost 9 points lower in mid-childhood. Among girls, we observed no association between maternal prepregnancy BMI and cognitive test scores in mid-childhood, but found that gestational-age-standardized GWG was inversely associated with perceptual reasoning after adjustment for the HOME score. These sex-specific associations observed for effects of maternal excess adiposity on mid-childhood cognitive test scores are intriguing and have not been previously reported. As childhood IQ predicts education level, socioeconomic status and professional success [ 46 ], a deficit up to 9 points may be individually meaningful and have implications on a population level. The biological rationale for these observed sex differences in mid-childhood, as well as the biological underpinnings of the links between prepregnancy body size, GWG and child cognitive development, are not fully understood and may be interrelated. Some of the biological pathways linking maternal overweight/obesity and high GWG to fetal and child brain development, structure and function include inflammatory or hormonal perturbations [ 47 , 48 , 49 , 50 , 51 , 52 ] and differential dietary or nutrient exposures (e.g., high-fat diet, suboptimal nutrient intakes) [ 48 , 53 ]. Consistent with previous evidence suggesting that boys are differentially affected by adverse exposures [ 18 ], the boys in our study appear to be negatively affected by maternal overweight or obesity compared to girls. 
Alternatively, the girls could also be adversely affected by maternal overweight or obesity, but in this low-socioeconomic context where boys appear to be more vulnerable to adverse exposures [ 54 ] and girls appear to be more responsive or resilient [ 55 ], adverse effects on girls’ developmental trajectories may be attenuated by age 7. The mechanisms underlying these sex-specific findings are unknown, but investigations of explanatory biochemical and molecular changes are ongoing, particularly in the placenta. The placenta mediates fetal programming through regulation of fetal growth and development, and evidence points to sexual dimorphism in placental functioning associated with maternal adiposity [ 56 ]. Women with greater adiposity experience greater placental inflammation [ 57 , 58 ], oxidative and nitrative stress [ 17 , 59 ] and placental dysfunction compared to women within the normal BMI range [ 17 , 60 , 61 ]. A growing body of animal and human evidence indicates that placental function [ 62 ], responsivity [ 63 ] and endocrine and neurochemical responses [ 64 ], determined by global genome expression and regulation [ 65 , 66 , 67 , 68 ], the epigenome [ 62 , 69 ] and response to maternal inflammation and diet [ 70 , 71 , 72 ], affect the growing fetus in a sex-specific manner as early as conception [ 62 ]. Additionally, males and females develop at different rates in utero [ 73 ], and a faster growing fetus has greater exposure to prenatal insults that may partly explain why males are at increased risk for developing adverse pregnancy outcomes [ 62 , 74 ]. It is challenging to compare our findings to other reports because neurodevelopmental sex differences for maternal pregnancy weight-related exposures have previously not been explored in a low-income, urban population, and further, previous studies examined a wide range of cognitive functions assessed over varying periods of follow-up [ 75 ]. 
However, our findings in boys are consistent with most previous studies in similarly aged children (5-8y) reporting significant associations for prepregnancy BMI alone [ 6 , 11 , 13 , 76 , 77 ] or prepregnancy BMI and GWG [ 8 , 10 , 12 , 78 ]. In girls, we found no associations for prepregnancy BMI and observed an unexpected inverse association for GWG with perceptual reasoning scores when models were adjusted for the HOME score. These findings are less consistent with previous reports where associations for GWG were also observed, but only among women with higher prepregnancy weight or BMI [ 8 , 14 , 79 ]. The quality of the home environment and parenting practices in childhood are important contributors to child neurodevelopment, and the role of a stimulating and nurturing environment on associations may vary by child sex [ 24 , 28 ]. We do not believe that any previous similar study evaluated whether the postnatal home environment impacted associations. In separate studies, Farah et al. in children ages 4 and 8 and Horton and Kahn et al. in children at 7 years found that parental nurturance predicted child working memory; additionally, Horton and Kahn et al. found that boys benefited more than female counterparts from a nurturing home environment [ 22 ]. In building our models, we found that a stimulating and nurturing postnatal home environment attenuated associations between prepregnancy BMI and child cognitive scores in some models. This suggests that the home environment may be on the causal pathway, as posited by Farah and in animal models [ 22 ], or a positive confounder between maternal pregnancy weight-related factors and child cognition. Therefore, supporting a healthy home environment during pregnancy and thereafter may be an important area for future investigation and intervention. In our sensitivity analysis, the toxicants CPF and PAH did not modify or confound associations between prepregnancy BMI and child cognitive test scores. 
While our exposure assessment in cord blood may capture a significant period of exposure near the end of pregnancy (e.g., PAH DNA adducts have an estimated half-life of 3–4 months), this does not reflect the entire course of pregnancy, or the early pregnancy period where the adverse effects of environmental exposures or high prepregnancy BMI and associated inflammation may be stronger [ 45 ]. These findings add to the growing evidence that maternal adiposity affects offspring cognition in middle childhood, but there are limitations to this work. First, our sample size may have been underpowered to detect effect measure modification, especially after sex-stratification. Second, as with most studies in this area, we used self-reported prepregnancy weight to calculate prepregnancy BMI, which potentially biased findings [ 80 ]; however, we conducted data cleaning on women with longitudinal prenatal weight data and excluded highly implausible values. We had too few women with severe obesity (BMI > 40 kg/m 2 ) to evaluate obesity subgroups. This cohort was predominately enrolled in late pregnancy and included women with relatively healthy pregnancies who did not report diabetes or other medical conditions; however, we were unable to account for preeclampsia, gestational diabetes or other conditions in our analyses since these were not abstracted in the original study design. Although there was attrition, we conducted IPW analyses to assess whether attrition biased our findings and the results were essentially unchanged. The strengths of this study include our ability to account for many factors in our analyses, including maternal IQ, the postnatal home environment and, in a subset, urban environmental toxicant exposures. We also used gestational-age-standardized GWG Z-scores to examine GWG, which allowed for assessment of associations independent of gestational age at delivery. 
Conclusions In summary, we found that prepregnancy overweight and obesity were associated with lower IQ scores in boys at 7 years of age, but not in girls, an association that was partially attenuated by adjustment for the home environment. These sex-specific associations may reflect differences in the intrauterine environment or potentially the postnatal environment, but the mechanisms are currently not well understood. These findings are important in light of the high prevalence of maternal overweight and obesity, and the longer-term implications of early cognitive development. Availability of data and materials The data for the current study were used under a limited-use data use agreement between Columbia University and the University of Texas at Austin. The data that support the findings of this study are available from Columbia University, but restrictions apply to the availability of these data because of the need to maintain participant confidentiality. Data are available upon request and review by the Columbia Center for Children’s Environmental Health through an institutional data use agreement. Abbreviations BMI: Body mass index; CCCEH: Columbia Center for Children’s Environmental Health; CPF: Chlorpyrifos; GWG: Gestational weight gain; HOME: Home Observation for Measurement of the Environment; IPW: Inverse probability weighting; IQ: Intelligence quotient; PAH: Polycyclic aromatic hydrocarbon; TONI: Test of Nonverbal Intelligence
The team studied 368 mothers and their children, all from similar economic circumstances and neighborhoods, during pregnancy and when the children were 3 and 7 years of age. At age 3, the researchers measured the children's motor skills and found that maternal obesity during pregnancy was strongly associated with lower motor skills in boys. At age 7, they again measured the children and found that the boys whose mothers were overweight or obese in pregnancy had scores 5 or more points lower on full-scale IQ tests, compared with boys whose mothers had been at a normal weight. No effect was found in the girls. "What's striking is, even using different age-appropriate developmental assessments, we found these associations in both early and middle childhood, meaning these effects persist over time," said Elizabeth Widen, assistant professor of nutritional sciences at UT Austin. "These findings aren't meant to shame or scare anyone. We are just beginning to understand some of these interactions between mothers' weight and the health of their babies." It isn't clear why obesity in pregnancy would affect a child later, though previous research has found links between a mother's diet and cognitive development, such as higher IQ scores in kids whose mothers have more of certain fatty acids found in fish. Widen said that dietary and behavioral differences may be driving factors, or fetal development may be affected by some of the things that tend to happen in the bodies of people with too much extra weight, such as inflammation, metabolic stress, hormonal disruptions and high amounts of insulin and glucose. The researchers controlled for several factors in their analysis, including race and ethnicity, marital status, the mother's education and IQ, as well as whether the children were born prematurely or exposed to environmental irritants such as air pollution. What the pregnant mothers ate or whether they breastfed were not included in the analysis. 
The team also examined and accounted for the nurturing environment in a child's home in early childhood, looking at how parents interacted with their children and whether the child was provided with books and toys. A nurturing home environment was found to lessen the negative effects of obesity. "The effect on IQ was smaller in nurturing home environments, but it was still there," Widen said. This is not the first study to find that boys appear to be more vulnerable in utero. A 2018 study found lower performance IQ in boys, but not girls, whose mothers were exposed to lead, and a 2019 study suggested boys whose moms had fluoride in pregnancy scored lower on an IQ assessment. Because childhood IQ is a predictor of education level, socioeconomic status and professional success later in life, the researchers said there is potential for effects to last into adulthood. Widen advised women who are obese or overweight when they become pregnant to eat a well-balanced diet that is rich in fruits and vegetables, take a prenatal vitamin, stay active and make sure to get enough fatty acids such as the kind found in fish oil. Giving children a nurturing home environment also matters, as does seeing a doctor regularly, including during pregnancy to discuss weight gain. "Work with your doctor and talk about what is appropriate for your circumstances," Widen said. The families involved in the research were participating in the urban birth cohort study in New York City led by the Columbia Center for Children's Environmental Health. The study on IQ at age 7 was published today in BMC Pediatrics with co-authors Amy Nichols and Sara Dube of UT Austin; Linda Kahn of New York University; and Pam Factor-Litvak, Beverly Insel, Lori Hoepner, Virginia Rauh, Frederica Perera and Andrew Rundle of Columbia University. The same team, absent Dube and Kahn, were involved in a paper about the children at age 3, published in September in the Journal of Developmental Origins of Health and Disease. 
| 10.1186/s12887-019-1853-4 |
Physics | A new and game-changing magnetoresistance | "Unidirectional spin Hall magnetoresistance in ferromagnet/normal metal bilayers." Nature Physics (2015) DOI: 10.1038/nphys3356 Journal information: Nature Physics | http://dx.doi.org/10.1038/nphys3356 | https://phys.org/news/2015-06-game-changing-magnetoresistance.html | Abstract Magnetoresistive effects are usually invariant on inversion of the magnetization direction. In non-centrosymmetric conductors, however, nonlinear resistive terms can give rise to a current dependence that is quadratic in the applied voltage and linear in the magnetization. Here we demonstrate that such conditions are realized in simple bilayer metal films where the spin–orbit interaction and spin-dependent scattering couple the current-induced spin accumulation to the electrical conductivity. We show that the longitudinal resistance of Ta|Co and Pt|Co bilayers changes when reversing the polarity of the current or the sign of the magnetization. This unidirectional magnetoresistance scales linearly with current density and has opposite sign in Ta and Pt, which we associate with the modification of the interface scattering potential induced by the spin Hall effect in these materials. Our results suggest a route to control the resistance and detect magnetization switching in spintronic devices using a two-terminal geometry, which applies also to heterostructures including topological insulators. Main The effects of the magnetization on the electric conductivity of metals have been studied for a long time 1 , providing understanding of fundamental phenomena associated with electron transport and magnetism as well as a multitude of applications in sensor technology. The anisotropic magnetoresistance (AMR)—the change of the resistance of a material on rotation of the magnetization—is a prominent manifestation of spin–orbit coupling and spin-dependent conductivity in bulk ferromagnets 2 , 3 . 
In thin-film heterostructures, the additional possibility of orienting the magnetization of stacked ferromagnetic layers parallel or antiparallel to each other gives rise to the celebrated giant magnetoresistance (GMR) effect 4 , 5 , which has played a major role in all modern developments of spintronics. Together with the early spin-injection experiments 6 , 7 , the study of GMR revealed how non-equilibrium spin accumulation at the interface between ferromagnetic (FM) and normal metal (NM) conductors governs the propagation of spin currents 8 , 9 , 10 , 11 and, ultimately, the conductivity of multilayer systems 10 , 12 . Recently, it has been shown that significant spin accumulation at a FM/NM interface can be achieved using a current-in-plane (CIP) geometry owing to the spin Hall effect (SHE) in the NM (ref. 13 ). When the FM is a metal and the NM is a heavy element such as Pt or Ta, the spin accumulation is strong enough to induce magnetization reversal of nanometre-thick FM layers at current densities of the order of j = 10 7 –10 8 A cm −2 (refs 14 , 15 , 16 ). When the FM is an insulator, such as yttrium iron garnet, the SHE causes an unusual magnetoresistance associated with the back-flow of a spin current into the NM when the spin accumulation is aligned with the magnetization of the FM, which increases the conductivity of the NM due to the inverse SHE (refs 17 , 18 , 19 , 20 ). This so-called spin Hall magnetoresistance (SMR) is characterized by R y < R z ≍ R x , where R i is the resistance measured when the magnetization ( M ) is saturated parallel to i = x , y , z , and differs from the conventional AMR in polycrystalline samples, for which R y ≍ R z < R x (ref. 3 ). In this work, we report on a magnetoresistance effect occurring in FM/NM bilayers with the NM possessing a large SHE.
The effect combines features that are typical of the current-in-plane (CIP) GMR and SHE, whereby the spin accumulation induced by the SHE in the NM replaces one of the FM polarizers of a typical GMR device. Differently from GMR, however, this effect introduces a nonlinear dependence of the resistance on the current, which gives it unique unidirectional properties: the resistivity changes when reversing either the sign of the magnetization or the polarity of the current, increasing (decreasing) when the SHE-induced non-equilibrium magnetization at the FM/NM interface is oriented parallel (antiparallel) to the magnetization of the FM, as illustrated in Fig. 1a, b . We associate this phenomenon with the modulation of the FM/NM interface resistance due to the SHE-induced spin accumulation, which gives rise to a nonlinear contribution to the longitudinal conductivity that scales proportionally with the current density and has opposite sign in Pt and Ta. Contrary to the linear magnetoresistive effects, including the AMR, GMR, and SMR described above, which are even with respect to the inversion of either the current or magnetization owing to the time reversal symmetry embodied in the Onsager’s reciprocity relations, here we observe R ( j , M ) = − R ( −j , M ) = − R ( j , −M ), providing a unidirectional contribution to the magnetoresistance in simple bilayer systems. Figure 1: Illustration of the unidirectional spin Hall magnetoresistance effect and sample layout. a , Parallel alignment of the SHE-induced non-equilibrium magnetization at the FM/NM interface with the magnetization of the FM increases the resistivity of the bilayer. b , Antiparallel alignment decreases the resistivity. The arrows in a , b indicate the direction of the spin magnetic moment. c , Scanning electron micrograph of a sample and schematics of the longitudinal resistance measurements. 
Sample layout The samples studied in this work are Pt (1–9 nm)|Co (2.5 nm) and Ta (1–9 nm)|Co (2.5 nm) films with spontaneous in-plane magnetization, capped by 2 nm of AlO x and patterned in the shape of Hall bars of nominal length l = 20–50 μm, width w = 4–10 μm, and l / w = 4, as shown in Fig. 1c . Additional control experiments were carried out on single Co, Ta and Pt films, and Ta (1, 6 nm)|Cu (2, 4, 6 nm)|Co (2.5 nm) trilayers. To characterize the magnetic and electrical properties of these layers we performed harmonic measurements of the longitudinal resistance ( R , see Supplementary Information ) and transverse Hall resistance ( R H ; refs 16 , 21 , 22 , 23 ) as a function of a vector magnetic field defined by the polar and azimuthal coordinates θ B and ϕ B . The measurements were carried out at room temperature by injecting an a.c. current of frequency ω /2π = 10 Hz and simultaneously recording the first ( R ω ) and second harmonic resistance ( R 2 ω ) between the contacts shown in Fig. 1c while rotating the sample in a uniform magnetic field of 1.7 T. Here, R ω represents the linear response of the samples to the current—that is, the conventional resistance. To include the different magnetoresistive angular dependencies in a single expression we write this term as R ω = R z + ( R x − R z )sin 2 θ cos 2 ϕ + ( R y − R z )sin 2 θ sin 2 ϕ (1), where θ and ϕ are the polar and azimuthal angles of M , as schematized in Fig. 2a . R 2 ω describes resistance contributions that vary quadratically with the applied voltage and includes the current-induced changes of resistivity that are the main focus of this work. Figure 2: Linear and nonlinear magnetoresistance. a , Geometry of the measurements. b , c , First-harmonic resistance of Ta (6 nm)|Co (2.5 nm) ( b ) and Pt (6 nm)|Co (2.5 nm) ( c ), measured with a current density of j = 10 7 A cm −2 . d , e , Second-harmonic resistance measured simultaneously with b , c . The dimensions of the Hall bars are l = 50 μm and w = 10 μm.
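The harmonic detection scheme described above can be illustrated numerically: a current-dependent resistance R(I) = R0 + R1·I makes the voltage V = R(I)·I carry a second-harmonic component proportional to the nonlinear term. The values of I0, R0 and R1 below are hypothetical, not the measured ones.

```python
# Numerical sketch of lock-in style first/second harmonic detection.
# Since sin^2(wt) = (1 - cos 2wt)/2, the nonlinear term R1*I shows up
# as a cos(2wt) voltage of amplitude -R1*I0^2/2.
import numpy as np

f = 10.0                          # Hz, as in the experiment
I0, R0, R1 = 1e-3, 100.0, 5.0     # A, Ohm, Ohm/A (illustrative values)
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
I = I0 * np.sin(2 * np.pi * f * t)
V = (R0 + R1 * I) * I

def harmonic(sig, t, freq, ref_fn):
    """Lock-in style projection of `sig` onto sin/cos at `freq`."""
    return 2.0 * np.mean(sig * ref_fn(2 * np.pi * freq * t))

R_w = harmonic(V, t, f, np.sin) / I0        # first harmonic -> R0
R_2w = harmonic(V, t, 2 * f, np.cos) / I0   # second harmonic -> -R1*I0/2
print(f"R_w = {R_w:.3f} Ohm, R_2w = {R_2w:.6f} Ohm")
```

The recovered R_w reproduces the linear resistance while R_2w isolates the current-proportional correction, which is the quantity the text analyses.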
Magnetoresistance measurements Figure 2b, c shows the resistance of Ta (6 nm)|Co (2.5 nm) and Pt (6 nm)|Co (2.5 nm) layers during rotation of the applied field in the xy , zx and zy planes. We observe a sizeable magnetoresistance (MR) in all three orthogonal planes and R x > R z > R y for both samples, in agreement with previous measurements on Pt|Co films 24 , 25 . The resistivity of Ta (6 nm)|Co (2.5 nm) [Pt (6 nm)|Co (2.5 nm)] is 108.9 (36.8) μΩ cm and the MR ratios are ( R x − R z )/ R z = 0.09% [0.05%] and ( R z − R y )/ R z = 0.12% [0.53%], showing significant SMR-like behaviour, a factor of 10 to 100 larger than in Ta|YIG and Pt|YIG (refs 17 , 18 ). The solid lines represent fits to the MR using equation (1) and θ simultaneously measured via the anomalous Hall resistance (see Supplementary Information ). In addition to the linear resistance, we measure an unexpected nonlinear resistance, R 2 ω , which has a different angular dependence from that of R ω and opposite sign in Pt and Ta, as shown in Fig. 2d, e . By fitting the curves with respect to the angles θ and ϕ (solid lines), we find that R 2 ω ∼ sin θ sin ϕ ∼ M y . In the following, we discuss the types of nonlinear effects that can give rise to such a symmetry. Spin–orbit torques and thermoelectric contributions First, we consider oscillations of the magnetization due to the current-induced spin–orbit torques (SOTs; refs 16 , 21 , 22 , 23 , 26 ). As the SOTs are proportional to the current, a.c. oscillations of the magnetization can introduce second-order contributions to R 2 ω due to the first-order MR described by equation (1). However, as shown in detail in the Supplementary Information , the SOT-induced signal is not compatible with the angular dependence of R 2 ω . Both the field-like and antidamping-like SOT (as well as the torque due to the Oersted field) vanish for M ‖ y , where | R 2 ω | is maximum.
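The sin θ sin ϕ angular dependence extracted from the fits can be illustrated with a least-squares sketch on synthetic angle sweeps; the amplitude and noise level are assumptions, not the measured Ta|Co or Pt|Co values.

```python
# Least-squares sketch of the sin(theta)*sin(phi) ~ M_y angular
# dependence of the second-harmonic resistance (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
theta = np.radians(rng.uniform(0, 180, 200))   # polar angle of M
phi = np.radians(rng.uniform(0, 360, 200))     # azimuthal angle of M
A_true = -7.0e-3                               # Ohm (hypothetical amplitude)
r2w = A_true * np.sin(theta) * np.sin(phi) + rng.normal(0, 5e-4, 200)

basis = np.sin(theta) * np.sin(phi)            # one-parameter model ~ M_y
A_fit = (basis @ r2w) / (basis @ basis)        # closed-form least squares
print(f"A_fit = {A_fit:.2e} Ohm")
```

A single basis function suffices here because the claimed symmetry is one-dimensional; in practice the measured curves would also be tested against the other angular harmonics discussed in the text.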
Moreover, the field-like SOT is small in 2.5-nm-thick Co layers 23 , whereas the antidamping SOT can only induce variations of R 2 ω in the zx plane with maxima and minima close to θ B = 0° and 180°, which we observe to be small and more pronounced in Pt|Co relative to Ta|Co ( Fig. 2d, e ). Second, we analyse the influence of thermal gradients ( ∇ T ) and related thermoelectric effects. The anomalous Nernst effect (ANE) and spin Seebeck effect (SSE; refs 27 , 28 ), both inducing a longitudinal voltage proportional to j 2 ( M × ∇ T ), can give rise to a similar angular dependence as observed for R 2 ω when ∇ T is perpendicular to the layers (see Supplementary Information ). Here, we find that thermoelectric voltages are negligible in Pt|Co, in agreement with the very small thermal gradients reported for this system 23 . In Ta|Co, on the other hand, the much larger resistivity of Ta relative to Co results in a higher current flowing in the Co layer and a positive ∇ T . In such a case, the second harmonic signal of thermal origin, R 2 ω ∇ T , can be simply estimated from its transverse (Hall) counterpart scaled by the geometric factor l / w when the magnetization is tilted in the x direction, and subtracted from the raw R 2 ω signal. Accordingly, we find that R 2 ω ∇ T = 5 mΩ in Ta (6 nm)|Co (2.5 nm), which accounts for only about 50% of the total R 2 ω reported in Fig. 2 . The same procedure applied to Pt|Co gives R 2 ω ∇ T of the order of 5% of the total R 2 ω , whereas in the control samples lacking a heavy metal we find only a signal of thermal (ANE) origin. We conclude that there is an additional magnetoresistive effect in the Pt|Co and Ta|Co bilayers that cannot be accounted for by either current-induced magnetization dynamics or thermoelectric voltages.
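The subtraction just described amounts to simple arithmetic. A minimal sketch, with illustrative numbers chosen to match the Ta|Co magnitudes quoted above (the transverse thermal signal is an assumed input, not a published value):

```python
# Geometric rescaling of the thermal second-harmonic background, as described in
# the text; all numbers are illustrative.
l, w = 50.0, 10.0            # Hall-bar length and width (um), so l/w = 5
R2w_raw = 10.0e-3            # total second-harmonic resistance (ohm)
R2w_hall_thermal = 1.0e-3    # transverse thermal signal with M tilted along x (ohm, assumed)

# The ANE/SSE voltage scales with the conductor length, so the longitudinal
# thermal part is the transverse one multiplied by l/w:
R2w_thermal = (l / w) * R2w_hall_thermal   # 5 mOhm, about 50% of the raw signal
R2w_usmr = R2w_raw - R2w_thermal           # the remainder is attributed to the USMR
```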
Unidirectional spin Hall magnetoresistance
The symmetry as well as the opposite sign of the nonlinear resistance in Ta|Co and Pt|Co suggests that it is related to the scalar product of the magnetization with the SHE-induced spin accumulation at the FM/NM interface, giving rise to a chiral MR contribution R 2 ω USMR ∼ j × M . This relation describes the general features expected from a unidirectional magnetoresistance driven by the spin Hall effect (USMR). We note that this MR contribution depends on the current direction and that the resistance of the bilayer increases when the direction of the majority spins in the FM and the spin accumulation vector are parallel to each other, and decreases when they are antiparallel. This may seem counterintuitive at first sight, considering that the conductivity of Co is larger for the majority spins. However, as we will discuss later, this behaviour is consistent with the theory of GMR in FM|NM|FM heterostructures 10 , 29 , 30 when only a single FM/NM interface is retained and the SHE is taken into account. To investigate further the USMR we have measured R 2 ω as a function of an external magnetic field applied parallel to y and of the current amplitude. Figure 3 shows that R 2 ω is constant as a function of field for Ta|Co ( Fig. 3a ) as well as for Pt|Co ( Fig. 3b ) and reverses sign upon switching the magnetization from the y to the − y direction. In the Pt|Co case we observe also two spikes, which we attribute to the magnetization breaking into domains at low field and giving rise to dynamic effects on the domain walls 16 . Note that the magnetization of Pt|Co is not fully saturated below 0.65 T due to the large perpendicular magnetic anisotropy of this system, differently from Ta|Co ( Supplementary Information ). Figure 3c shows the current dependence of R 2 ω USMR = R 2 ω − R 2 ω ∇ T ( R 2 ω USMR ≍ R 2 ω for Pt|Co) obtained by taking the average of the data measured at fields larger than ±1 T.
R 2 ω USMR is linear with the injected current density and converges to zero within the error bar of the linear fit (black lines). Figure 3: Field and current dependence of the nonlinear magnetoresistance. a , b , R 2 ω of Ta (6 nm)|Co (2.5 nm) ( a ) and Pt (6 nm)|Co (2.5 nm) ( b ) recorded during a field sweep along y with a current density of j = 1.2 × 10 7 A cm −2 . c , Current dependence of R 2 ω USMR . The solid lines are fits to the data. The slope gives the amplitude of the USMR, which is 5.5 mΩ and 1.25 mΩ per 10 7 A cm −2 for these samples, respectively. The thermal contribution R 2 ω ∇ T has been subtracted from the Ta|Co data. The dimensions of the Hall bars are l = 50 μm and w = 10 μm. To verify the role of the interfacial spin accumulation due to the SHE we examined the dependence of the USMR on the thickness of the NM. Figure 4a, b shows the absolute change of sheet resistance Δ R USMR = R 2 ω USMR (± M , ± j ) − R 2 ω USMR (± M , ∓ j ) and the relative change of resistivity Δ R USMR / R measured at constant current density as a function of the Ta and Pt thickness. Both curves exhibit qualitatively similar behaviour: an initial sharp increase below 2–3 nm and a gradual decrease as the NM layer becomes thicker. We note that the USMR signal is almost absent in Ta (1 nm)|Co, contrary to Pt (1 nm)|Co, which we attribute to the oxidation of Ta when deposited on SiO 2 and its consequent poor contribution to electrical conduction. The initial increase of the USMR is consistent with the increase of the spin accumulation at the FM/NM interface as the thickness of the NM becomes larger than the spin diffusion length, which is of the order of 1.5 nm in both Ta and Pt (refs 18 , 31 ). Moreover, we observe that the decline of the signal in the thicker samples is stronger in Pt|Co than in Ta|Co.
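The linearity check of Fig. 3c can be sketched as a straight-line fit whose intercept should vanish within error; the synthetic data below use the Ta|Co slope quoted in the caption (5.5 mΩ per 10^7 A cm^−2) and are noise-free for clarity.

```python
import numpy as np

# R_2w^USMR is odd in both j and M, so reversing either one flips its sign and
# a fit versus current density should have zero intercept.  Synthetic data only.
slope = 5.5e-3 / 1e7                        # ohm per (A cm^-2), from Fig. 3c
j = np.array([0.3, 0.6, 0.9, 1.2]) * 1e7    # current densities (A cm^-2)
R2w = slope * j                             # thermal part already subtracted

fit_slope, intercept = np.polyfit(j, R2w, 1)

# Unidirectional resistance change between reversed current (or M) directions:
dR_usmr = R2w[-1] - (-R2w[-1])              # = 2 * R_2w^USMR at j = 1.2e7 A cm^-2
```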
This behaviour is consistent with Pt gradually dominating the conduction due to its low resistivity, and a smaller proportion of the current experiencing interface scattering in Pt|Co. Conversely, the high resistivity of Ta shunts the current towards the Co side, increasing the relative proportion of the current affected by scattering at the Ta/Co interface. Figure 4: USMR as a function of NM thickness. a , Sheet resistance Δ R USMR as a function of Ta (squares) and Pt (circles) thickness measured at constant current density j = 10 7 A cm −2 . The Co layer is 2.5 nm thick in all samples. b , Normalized resistance Δ R USMR / R . The solid lines are fits to the data according to the model described in the text. As an additional check to validate these arguments we have performed measurements on single Ta (6 nm), Pt (6 nm) and Co (8 nm) layers as well as on Ta (1, 6 nm)|Cu (2, 4, 6 nm)|Co (2.5 nm) trilayers, all capped by 2 nm AlO x . The USMR is absent in the Ta, Pt and Co single layers, which also excludes any self-induced magnetoresistive effect 32 and proves the essential role of the FM/NM interface. On the other hand, we find a sizeable USMR when a Cu spacer is inserted between Ta and Co, which excludes proximity-induced magnetization as a possible cause for the USMR (see Supplementary Information ).
Discussion
On the basis of the analysis presented above, we conclude that the current-induced spin accumulation creates an additional spin-dependent interface resistance that adds to or subtracts from the total resistance depending on the sign of the cross-product j × M . Given the in-plane geometry, the interpretation of this effect requires a Boltzmann equation approach to model the spin- and momentum-dependent reflection and transmission of electrons at the FM/NM interface, equivalent to extending the theory of CIP-GMR (refs 29 , 30 ) beyond first order and including the SHE.
However, a qualitative understanding of the USMR can be derived also by introducing a two-current series resistor model and an interface resistance term proportional to the SHE-induced shift of the electrochemical potential between the FM and NM layers. The latter can be calculated using a one-dimensional drift-diffusion approach 9 , 10 , 17 . We consider two separate conduction channels for the majority (spin ↑ ) and minority (spin ↓ ) electrons. As in bulk FM, scattering at the interface is spin-dependent due to the unequal density of majority and minority states near the Fermi level, which, in most cases, leads to a larger resistance for minority electrons relative to majority electrons: r ↓ > r ↑ . This resistance difference is at the heart of the GMR effect, both in the CIP (refs 29 , 30 , 33 ) and the current-perpendicular-to-plane (CPP) geometry 10 , 34 . Furthermore, when an electric current flows from a FM to a NM or vice versa, another resistance term appears due to the conductivity mismatch between majority and minority electrons on opposite sides of the junction, which results in spin accumulation (refs 8 , 9 ). This so-called ‘spin-coupled interface resistance’ plays a role in CPP-GMR as well as in local and non-local spin-injection devices 7 , 10 , 35 , whereas in the CIP geometry it is usually neglected because there is no net charge flow across the interface and the spin accumulation is assumed to be zero.
If we take the SHE into account, however, the transverse spin current flowing between the NM and the FM induces a splitting of the spin-dependent electrochemical potentials μ ↑ and μ ↓ and a net interfacial spin accumulation μ s = μ ↑ − μ ↓ , given by equation (2), where μ s N 0 = 2 eρ N λ N θ SH j is the bare spin accumulation due to the SHE that would occur in a single, infinitely thick NM layer, θ SH the spin Hall angle of the NM, ρ N, F and λ N, F are the resistivity and spin diffusion length of the NM and FM, respectively, and r b = ( r ↑ + r ↓ )/4 is the interface resistance 10 . Moreover, the same effect induces a shift Δ μ = μ N − μ F of the electrochemical potential μ N, F = ( μ N, F ↑ + μ N, F ↓ )/2 of the NM relative to the FM, given by equation (3), where γ = ( r ↓ − r ↑ )/( r ↑ + r ↓ ); the remaining quantities entering equation (3) are defined in the Supplementary Information . Figure 5a, b shows a graphical representation of μ s and Δ μ N ; details about the derivation of equations (2) and (3) are given in the Supplementary Information . A key point is that Δ μ N depends on the product Pθ SH j , as does the USMR, and is linked with the spin-dependent scattering potential that gives rise to different transmission coefficients for majority and minority electrons at the FM/NM interface 36 . We can thus draw the following qualitative interpretation of the USMR: when the non-equilibrium magnetization induced by the SHE and the magnetization of the FM are parallel to each other (δ m ‖ M ), the transmission of ↑ ( ↓ ) electrons across the interface is reduced (enhanced) by the accumulation of majority electrons at the FM/NM boundary, due to the conductivity mismatch of ↑ and ↓ spins in the two materials. Likewise, when δ m ‖ − M , the transmission of ↓ ( ↑ ) electrons across the interface is reduced (enhanced) since minority electrons accumulate at the FM/NM boundary. The overall effect is a modulation of the interface resistance of the ↑ and ↓ spin channels by a nonlinear term ± r s , as schematized in Fig. 5c .
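To put a scale on the bare spin accumulation, μ s N0 = 2 eρ N λ N θ SH j can be evaluated with the Pt parameters quoted in this paper; the spin Hall angle θ SH = 0.1 is an assumed literature-scale value, not a result of this work. Expressed as a voltage (μ/e), the product ρλθj is already in volts.

```python
# Order-of-magnitude estimate of the bare SHE spin accumulation for Pt.
# rho and lambda are values quoted in the text; theta_SH = 0.1 is an assumption.
rho_Pt = 34.1e-8      # Ohm m   (34.1 uOhm cm, single-layer Pt from Methods)
lam_Pt = 1.1e-9       # m       (spin diffusion length from the thickness fit)
theta_SH = 0.1        # assumed spin Hall angle of Pt (not from this paper)
j = 1e11              # A m^-2  (= 1e7 A cm^-2, the typical experimental value)

# mu_s^N0 / e = 2 * rho * lambda * theta_SH * j  -> a few microvolts (micro-eV)
mu_s_over_e = 2 * rho_Pt * lam_Pt * theta_SH * j
```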
This two-current ( ↑ and ↓ ) series resistor model leads to a resistance difference between the two configurations, given by 2 r s γ , where r s is assumed proportional to Δ μ N . Figure 5: Modulation of the spin accumulation, spin-dependent electrochemical potential, and interface resistance by the SHE. a , b , Profile of the electrochemical potential of majority ( μ ↑ , blue lines) and minority ( μ ↓ , red lines) electrons in the proximity of the FM/NM interface for positive ( a ) and negative ( b ) current. The electrochemical potential of the NM shifts relative to that of the FM, as indicated by the green arrow. The direction of the magnetization is in both panels. In our notation, the majority spins are oriented antiparallel to M and δm, the non-equilibrium magnetization induced by the SHE, has opposite sign relative to μ s . Reversing M is equivalent to exchanging μ ↑ and μ ↓ and inverting Δ μ . The sign of the SHE corresponds to Pt|Co; the parameters used to calculate μ ↑ and μ ↓ are given in the Supplementary Information . c , Two-current series resistor model of the USMR corresponding to a (top, higher resistance) and b (bottom, lower resistance). Note that the resistances r ↑ and r ↓ may be generalized to include also the bulk spin-dependent resistances of the FM layer. Accordingly, using realistic values of r b , ρ N and ρ F for Ta|Co and Pt|Co, we fit the dependence of the USMR on current and NM thickness to the following phenomenological expression (see Supplementary Information ): Δ R USMR = A tanh( t N /2 λ N )/(1 + R FI / R N ) 2 , where A is a parameter proportional to P μ s N 0 representing the amplitude of the effect, R FI is the effective resistance of the FM and interface regions, and R N = ρ N l /( wt N ) is the resistance of the NM. The denominator accounts for the decreased fraction of electrons that scatter at the interface as the thickness of the NM increases.
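The phenomenological expression above can be evaluated directly to reproduce the non-monotonic thickness dependence of Fig. 4 (sharp rise below 2–3 nm, then decay). A sketch, with λ set to the Pt value obtained from the fits and A, R_FI as illustrative parameters rather than the published ones:

```python
import numpy as np

# dR_USMR(t_N) = A * tanh(t_N/(2*lam)) / (1 + R_FI/R_N)**2,  R_N = rho_N*l/(w*t_N).
# The tanh factor builds up the spin accumulation over the spin diffusion length;
# the denominator models current shunting into a thick NM layer.
# A and R_FI are illustrative parameters, not the published fit values.
def dR_usmr(t_N, A, lam, R_FI, rho_N, l, w):
    R_N = rho_N * l / (w * t_N)          # NM resistance for this thickness
    return A * np.tanh(t_N / (2 * lam)) / (1 + R_FI / R_N) ** 2

t = np.linspace(0.5e-9, 9e-9, 200)       # NM thickness range (m)
y = dR_usmr(t, A=1.0, lam=1.1e-9, R_FI=300.0,
            rho_N=34.1e-8, l=50e-6, w=10e-6)

t_peak = t[np.argmax(y)]                 # maximum at a few nm, as in Fig. 4
```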
Similarly, we obtain Δ R USMR / R = ( A / R )tanh( t N /2 λ N )/(1 + R FI / R N ). As shown in Fig. 4 , these simple expressions fit Δ R USMR and Δ R USMR / R remarkably well, providing also values of λ Pt = 1.1 nm and λ Ta = 1.4 nm that are in agreement with previous measurements 18 , 31 . Our model thus captures the essential features of the USMR, namely its sign, angular dependence, and proportionality to the current. Detailed calculations including realistic scattering parameters within a nonlinearized Boltzmann approach including the SHE (ref. 37 ) should be able to account for quantitative aspects of the USMR in different materials. We stress also that the USMR is not uniquely linked to the SHE but may arise also due to other sources of non-equilibrium spin accumulation, such as the Rashba effect at FM/NM interfaces and topological insulators 37 , 38 , 39 , 40 , as well as the anomalous Hall effect in FM.
Conclusions
The existence of a nonlinear magnetoresistive term proportional to j × M has both fundamental and practical implications. Identifying which symmetries survive the breakdown of the Onsager relationships in the nonlinear regime is central to the understanding of electron transport phenomena, particularly in mesoscopic and magnetic conductors, where such effects can also have thermoelectric and magneto-optical counterparts 41 , 42 . In this respect, the USMR shows that the longitudinal conductivity has an antisymmetric Hall-like component that has so far remained unnoticed. We expect such a component to be a general feature of non-centrosymmetric magnetic systems with strong spin–orbit coupling. We note also that the USMR differs from the nonlinear MR observed in chiral conductors, such as metal helices 42 and molecular crystals 43 , which is proportional to j ⋅ M .
In the field of spintronics, nonlinear interactions between spin and charge are emerging as a tool to detect spin currents 44 and thermoelectric 45 effects, as well as magnetization reversal in dual spin valves 46 . Although the USMR is only a small fraction of the total resistance, its relative amplitude is of the same order of magnitude as the spin transresistance measured in non-local metal spin valves 7 , 35 , which is a method of choice for the investigation of spin currents. The thermoelectric counterpart of the USMR, related to the spin Nernst effect, may be used to detect heat-induced spin accumulation by modulation of the magnetization rather than an electric current. We note that the electric field created by the USMR is of the order of 2 V m −1 per 10 7 A cm −2 , which is comparable to the ANE (ref. 23 ) and three orders of magnitude larger than the typical electric fields due to the SSE (refs 27 , 28 ). In terms of applications, the USMR may be used to add 360° directional sensitivity to AMR sensors, which are widely employed for position, magnetic field, and current sensing, and already include built-in modulation circuitry for accurate resistance measurements. Most interestingly, the USMR shows that it is possible to realize two-terminal spintronic devices where switching is performed by SOTs (refs 14 , 15 ) and reading by a resistance measurement. Such a scheme involves only one FM layer and minimum patterning effort. Finally, we believe that there is substantial room for improving the amplitude of the USMR to levels closer to the AMR, either by material or heterostructure engineering. In particular, the USMR could increase significantly in magnetic topological insulator structures due to the very large spin accumulation and reduced bulk conductance reported for these systems 47 , 48 . Note added in proof : After submission of this manuscript, Olejník et al. reported a similar effect in a ferromagnetic/paramagnetic GaMnAs bilayer 49 . 
This observation nicely confirms our findings in a different material system. Further, it shows that the USMR increases by orders of magnitude in conductors where the charge carrier density is small compared to metals.
Methods
Sample preparation. The Pt (1–9 nm)|Co (2.5 nm)|AlO x (2 nm) and Ta (1–9 nm)|Co (2.5 nm)|AlO x (2 nm) layers were grown by d.c. magnetron sputtering on thermally oxidized Si wafers. The deposition rates were 0.185 nm s −1 for Pt, 0.067 nm s −1 for Ta, 0.052 nm s −1 for Co, and 0.077 nm s −1 for Al. The deposition pressure was 2 mtorr and the power was 50 W for all targets. The Al capping layers were oxidized in situ by a 7 mtorr O 2 plasma at 10 W for 35 s. The layers were subsequently patterned into six-terminal Hall bars by means of standard optical lithography and Ar milling procedures. The Hall bar dimensions are w for the current line width, w /2 for the Hall branch width, with l = 4 w being the distance between two Hall branches, where w varies between 4 and 10 μm.
Characterization. All layers possess spontaneous isotropic in-plane magnetization. To determine the saturation magnetization of Co we have performed anomalous Hall effect measurements on an 8-nm-thick Co reference sample with B ext ‖ z . The field required to fully saturate M out-of-plane is about 1.5 T, which, assuming that perpendicular magnetic anisotropy is negligible in this layer, is close to the μ 0 M s expected of Co. Similar measurements on Ta (6 nm)|Co (2.5 nm) and Pt (6 nm)|Co (2.5 nm) layers give saturation fields of 1.45 T and 0.8 T, respectively. This is attributed to the small (large) perpendicular interface anisotropy contribution of the Ta/Co (Pt/Co) interface, reducing the field required to saturate the magnetization out-of-plane. Four-point resistivity measurements on single Co (8 nm), Ta (6 nm) and Pt (6 nm) layers yield ρ Co = 25.3 μΩ cm, ρ Ta = 237 μΩ cm and ρ Pt = 34.1 μΩ cm, in line with expectations for Pt and Co thin films, and the β-phase of Ta.
The magnetoresistance and Hall voltage measurements were performed at room temperature using an a.c. current I = I 0 sin ωt , where ω /2π = 10 Hz, generated by a Keithley-6221 constant current source. For the data reported in Fig. 2 the peak amplitude of the injected a.c. current was set to 8.5 mA, corresponding to a nominal current density of j = 10 7 A cm −2 . In other measurements with different device size and thickness, the current was adapted to have the same current density. The longitudinal and transverse voltages were recorded simultaneously by means of a 24-bit resolution National Instruments PXI-4462 dynamic signal analyser, dwelling 10 s at each angle position in a uniform external field of 1.7 T. The rotation of the sample was provided by a motorized stage with a precision of 0.02°. The acquired voltages were fast Fourier transformed to extract the first and second harmonic voltage signals V ω and V 2 ω . The corresponding resistances are given by R ω = V ω / I 0 and R 2 ω = V 2 ω / I 0 (peak values). | More than 150 years ago, William Thomson, later Lord Kelvin, discovered the magnetoresistive effect. Today, this finding enables sensors to measure the rotational speed of a car wheel, and is also used in compass navigation and robot control systems. ETH material scientists have now found a new kind of magnetoresistance that promises further insight into basic research and could one day be used for practical applications. The discovery by Pietro Gambardella, Professor for Magnetism and Interface Physics at ETH Zurich, and his team has just been published in the British scientific journal Nature Physics. However, advance publication of their work a few months ago on a science platform caused a stir among experts and has motivated other researchers to attempt similar experiments. The magnetoresistance of a conductive material normally remains the same when an electric current changes direction. 
"However, a new magnetoresistive effect that we discovered changed when the electron flow was reversed," Gambardella explains. "This is very unusual in metals." According to physical principles, these microscopic processes should remain independent of the direction of electrons in a metallic conductor. This new discovery is even more surprising considering that the first magnetoresistance effect was observed more than 150 years ago. Resistance measurements are "among the simplest that you can do", according to Gambardella. In the laboratory at ETH Zurich, old-style electromagnets are placed next to modern superconductors and laser instruments. One of the magnets comes from Gambardella's earlier workplace at EPFL in Lausanne; he rescued it from the trash before it was carted off for disposal. The researchers clamped samples manufactured in the laboratory in the magnets and measured the electrical resistance at different magnetisation. Sensors and read heads for hard drives Wiring of a sample for magnetoresistance measurements. Credit: Avci, Mendil and Gambardella / ETH Zurich Electrical resistance is a fundamental property of conductive materials. It indicates how well an electric current can be sent through a wire and is dependent on the magnetisation of the material. In 1856, William Thomson (later Lord Kelvin) discovered that an iron plate's electrical resistance changed depending on the direction in which the magnetisation ran. This anisotropic magnetoresistive effect (AMR) is used today in a variety of sensors; for example, it is used in cars to measure the wheel speed and the pedal and sitting position, and to control the power steering. AMR sensors are also used in cameras and in engineering and medical technology. They can even be found on Mars; the robots have more than 100 AMR sensors to control the moving parts. 
AMR sensors are based on thin magnetic layers in which the resistance varies by a few percent depending on whether the current is perpendicular or parallel relative to the magnetisation. French physicist Albert Fert and his German colleague Peter Grünberg discovered a much larger resistance difference of 50 percent in thin-film structures made up of alternating layers of magnetic and non-magnetic materials. For this Giant Magnetoresistive Effect (GMR), the two researchers received the Nobel Prize in Physics in 2007. The GMR effect is used today in, inter alia, magnetic field sensors and to read data on computer hard drives. Bilayers of heavy metal and ferromagnet A GMR structure typically consists of two films of ferromagnetic material, such as iron or cobalt, separated by a non-magnetic layer of copper. If the magnetisation of the iron or cobalt layer is parallel, the structure's electric resistance is small. If the orientation is anti-parallel, resistance is greater. "We restricted ourselves to two instead of three layers," explains Gambardella. The researchers' structure consisted of a thin layer of heavy metal, such as platinum or tantalum, with a thin layer of iron or cobalt above. "The very clever idea for this experiment came from one of our students, Can Onur Avci," says the ETH professor. With their two–layer samples, the researchers measured a normal AMR effect; as expected, the resistance changed according to the magnetisation of the ferromagnets. Surprisingly, however, they also found a new magnetoresistive effect in which the resistance depended on the direction of the electron flow. The samples' strange behaviour can be explained by the electrons' magnetic moment – the spin. In heavy metals, electrons with opposite spin are deflected in different directions. Electrons with the same spin direction therefore accumulate at the interface of the platinum or tantalum. 
But if a layer of iron or cobalt is placed on the heavy metal, the total electrical resistance depends on how the accumulated spin and the magnetisation of the ferromagnetic material are aligned. Small effect, big hopes The opposite deflection of the electrons with the opposite spin is called the Spin Hall effect, which is why the ETH researchers dubbed the newly discovered magnetoresistance, which depends on the direction of electron flow, Unidirectional Spin Hall Magnetoresistance. This phenomenon is similar to GMR, according to Gambardella. However, it not only involves a material's property, but is directly proportional to the amount of current flowing into the material. Nevertheless, ETH researchers warn that expectations should not be set too high: "Our effect is very interesting for basic research, but can't yet be used in a practical way." The fractional differences in resistance are much too small. One day, however, customised materials may be produced in which the new magnetoresistance could be used. The researchers are thinking of semiconductors or topological insulators in which the electrons are confined at the interface of different material components and may present amplified magnetoresistances. Spintronics is the name of the field of research that flourished after the discovery of the GMR effect. Advances in this area have applications for the storage and processing of digital information, for example by storing and processing information not only based on charges but also on the spin. | 10.1038/nphys3356 |
Medicine | Brain 'relay' also key to holding thoughts in mind | Schmitt LI, Wimmer RD, Nakajima M, Happ M, Mofakham S, Halassa MM. Thalamic amplification of cortical connectivity sustains attentional control. Nature, May 3, 2017. DOI: 10.1038/nature22073. Bolkan SS, Stujenske JM, Parnaudeau S, Spellman TJ, Rauffenbart C, Abbas AI, Harris AZ, Gordon JA, Kellendonk C. Thalamic projections sustain prefrontal activity during working memory maintenance. Nature Neuroscience, May 3, 2017. DOI: 10.1038/nn.4568. Guo ZV, Inagaki HK, Daie K, Druckmann S, Gerfen CR, Svoboda K. Maintenance of persistent activity in a frontal thalamocortical loop. Nature, May 3, 2017. DOI: 10.1038/nature22324. Journal information: Nature Neuroscience, Nature | http://dx.doi.org/10.1038/nature22073 | https://medicalxpress.com/news/2017-05-brain-relay-key-thoughts-mind.html | Abstract Although interactions between the thalamus and cortex are critical for cognitive function 1 , 2 , 3 , the exact contribution of the thalamus to these interactions remains unclear. Recent studies have shown diverse connectivity patterns across the thalamus 4 , 5 , but whether this diversity translates to thalamic functions beyond relaying information to or between cortical regions 6 is unknown. Here we show, by investigating the representation of two rules used to guide attention in the mouse prefrontal cortex (PFC), that the mediodorsal thalamus sustains these representations without relaying categorical information. Specifically, mediodorsal input amplifies local PFC connectivity, enabling rule-specific neural sequences to emerge and thereby maintain rule representations. Consistent with this notion, broadly enhancing PFC excitability diminishes rule specificity and behavioural performance, whereas enhancing mediodorsal excitability improves both. Overall, our results define a previously unknown principle in neuroscience: thalamic control of functional cortical connectivity.
This function, which is dissociable from categorical information relay, indicates that the thalamus has a much broader role in cognition than previously thought.
Main
We recorded PFC ensembles in a two-alternative forced choice (2AFC) task, in which a freely behaving mouse selected between conflicting visual and auditory stimuli based on whether one of two rules was presented, and where rule presentation was varied on a trial-by-trial basis ( Fig. 1a ). Broadband white noise was informative of trial availability, prompting trial initiation by snout-protrusion into a centrally located port. Subsequently, either low-pass (10 kHz) or high-pass (11 kHz) noise was presented for 100 ms to signal the task rules (rule 1 (low pass), attend to vision; rule 2 (high pass), attend to audition), and after a brief delay where the head position was stably maintained, the animal selected between spatially conflicting visual and auditory stimuli to receive a reward. Animals achieved a balanced and robust performance across modalities ( Extended Data Fig. 1a ) and no stereotypical behaviour indicating a modality or location preference was observed during the delay. These findings suggest that animals were capable of holding the task rule ‘in mind’ and using it to map onto sensory targets. Figure 1: Task-specific sequential PFC activity maintains rule representation. a , Schematic of task design. b , Example peri-stimulus time histogram (PSTH) and rasters for neurons tuned to either attend to vision (red) or attend to audition (blue) rules. c , Examples of tuning peaks across multiple sessions. d , Task-variable information indicates that tuned neurons ( n = 512 neurons from four mice) reflect rule information (top, green), but not movement (top, grey), whereas untuned neurons do not reflect rule information ( n = 2,727, bottom). AU, arbitrary units. e , Example spike-time cross-correlation between two neurons (50-μs bins), indicating a putative monosynaptic connection.
f , Putative monosynaptic connections in same rule tuned pairs showed a significantly larger average peak. Vertical ticks indicate peak times. Opp. rule, opposite rule. g , Cumulative plot showing cross-correlation values for each pair. Kolmogorov–Smirnov test. h , Same rule tuned pairs with putative monosynaptic connections had overlapping tuning peaks. i , Raster and PSTH examples showing diminished tuning during optogenetic activation of inhibitory neurons (blue shading indicates laser on). j , Quantification of laser effects on peak sizes ( n = 94 neurons, three mice; example in Extended Data Fig. 3j ). k , l , Temporally limited optogenetic manipulations indicate that later tuning depends on earlier activity. Blue line, laser on; green line, mean; green shading, 95% confidence interval (CI); grey shading, 95% CI for the baseline. Given that prelimbic cortex 7 activity during the delay (referred to as prefrontal cortex (PFC)) is necessary for task performance 8 , we directly interrogated its neural substrates ( Extended Data Fig. 1b–d ). We found that certain PFC neurons signalled the task rule by an increase in spiking for a brief moment during the delay ( Fig. 1b ). This temporal sparseness is analogous to that observed in the primate dorsolateral PFC during task rule encoding and/or maintenance 9 . The majority of such neurons signalled only one rule, but a minority (17%) signalled both rules at different temporal offsets ( Extended Data Fig. 1e, f ). Most tuned neurons exhibited regular spiking waveforms (regular spiking, 82% of total tuned neurons; 19% of all recorded regular-spiking neurons) consistent with these neurons being pyramidal neurons.
Only a minority were fast spiking (fast-spiking neurons, 18% of total tuned neurons; 11% of all fast-spiking neurons; proportion of tuned fast-spiking out of total tuned neurons versus proportion of tuned regular-spiking neurons out of total tuned neurons, P = 0.001, binomial test; Extended Data Fig. 1g and Supplementary Note 1 ). Tuned peaks tiled the entire delay ( Fig. 1c ), and tuning was independent of delay length ( Extended Data Fig. 1h ). Linear decoding 10 showed that tuned neurons encoded task rule but not movement ( Fig. 1d and Supplementary Discussion 1 ). Rule information was exclusively represented by neurons we identified as tuned, because spike trains derived from the remaining neurons (untuned) did not contain rule information (or movement, Fig. 1d (bottom) and Extended Data Fig. 1i ). As such, distinct PFC populations represent the two task rules used for sensory selection. The conclusion that these population codes reflect sensory selection rules, rather than rules directing the selection or avoidance of one sensory target, is supported by multiple findings. First, performance on trials with single-target modality presentation was identical to performance with sensory conflict ( Extended Data Fig. 2a ). Second, in sessions with sufficient errors (more than 20 trials), PFC neurons that were appropriately tuned to one rule displayed identical activity during error trials of the opposite trial type, indicating that inappropriate rule encoding was the major source of errors in this task ( Extended Data Fig. 2b, c ). To fully test this idea, we developed a four-alternative forced choice (4AFC) task ( Extended Data Fig. 2d ) in which animals separately indicated their choice for target modality type as well as its spatial location, thereby distinguishing errors that are related to rule encoding (executive) from those related to target stimulus perception (sensory) ( Extended Data Fig. 2e ). 
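The linear population decoding mentioned above can be sketched with synthetic data; this is a plain least-squares readout of rule identity from delay-period spike counts, illustrating the idea rather than reproducing the authors' decoder.

```python
import numpy as np

# Synthetic 'tuned population': each neuron's delay-period spike count shifts up
# or down depending on the trial's rule; a linear readout then recovers the rule.
rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 40
rule = rng.integers(0, 2, n_trials)              # 0: attend to vision, 1: attend to audition
tuning = rng.normal(0.0, 1.0, n_neurons)         # per-neuron rule preference (arbitrary)
X = rng.poisson(5.0, (n_trials, n_neurons)) + 2.0 * np.outer(2 * rule - 1, tuning)

Xb = np.hstack([X, np.ones((n_trials, 1))])      # append a bias column
w, *_ = np.linalg.lstsq(Xb, 2.0 * rule - 1.0, rcond=None)
accuracy = np.mean((Xb @ w > 0) == (rule == 1))  # training accuracy of the readout
```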
The largest source of 4AFC errors was executive ( Extended Data Fig. 2f ), consistent with our interpretation that similar misattribution of task rules explains most of the errors in the 2AFC task ( Supplementary Discussion 2 ). When multiple cells encoding the same rule were simultaneously recorded ( Extended Data Fig. 3a ), their consistent temporal order indicated that a neural sequence maintained the relevant categorical representation over time 11 , 12 , 13 , 14 , 15 . Consistent with this, we found that many PFC neuronal pairs exhibited robust short-latency cross-correlations, indicating their co-modulation at synaptic timescales 16 , 17 . This co-modulation was related to similarity in both categorical and temporal tuning ( Extended Data Fig. 3b–f ). Analysis at a higher temporal resolution, which is required for inferring putative monosynaptic connections 18 , 19 (example in Fig. 1e ), showed significantly higher probability and strength among same-rule encoding pairs ( Fig. 1f, g ) and was only observed among pairs with overlapping temporal fields ( Fig. 1h ). As such, our data are consistent with a synaptic chain mechanism for categorical rule representation in the PFC. This inferred relationship between connectivity and coding is reminiscent of the one directly demonstrated in the mouse primary visual cortex, where neurons with similar orientation tuning are more likely to be synaptically connected 20 . To causally test this synaptic chain model, we used local optogenetic activation of inhibitory cortical neurons 21 to produce temporally precise chain disruption ( Extended data Fig. 3g ). For each mouse, we used the minimum light intensity that, when delivered specifically during the delay, was sufficient to render it incapable of appropriate sensory selection, as has been shown previously 8 . 
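The short-latency cross-correlation analysis used above to infer putative monosynaptic connections can be sketched as a spike-time cross-correlogram. Both spike trains below, the roughly 2-ms synaptic latency and all rates are synthetic assumptions for illustration:

```python
import numpy as np

def cross_correlogram(ref, target, bin_ms=0.5, window_ms=10.0):
    """Histogram of target-spike lags relative to every reference spike,
    within +/- window_ms. Returns bin centres and counts."""
    edges = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
    lags = (target[None, :] - ref[:, None]).ravel()
    lags = lags[np.abs(lags) <= window_ms]
    counts, _ = np.histogram(lags, edges)
    return edges[:-1] + bin_ms / 2, counts

rng = np.random.default_rng(1)
ref = np.sort(rng.uniform(0, 10_000, 1000))            # reference spike times (ms)
synaptic = ref + rng.normal(2.0, 0.3, ref.size)        # spikes evoked at ~2 ms latency
target = np.sort(np.concatenate([synaptic, rng.uniform(0, 10_000, 1000)]))
centres, counts = cross_correlogram(ref, target)
peak_lag = centres[np.argmax(counts)]                  # peak falls near +2 ms
```

A sharp, asymmetric peak a few milliseconds after the reference spikes is the signature taken to suggest a putative monosynaptic connection; the paper additionally applied significance criteria from refs 18, 19 that this sketch omits.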
Under those conditions, we found evidence for driving fast-spiking neurons, but overall, regular-spiking neurons were generally only slightly inhibited ( Extended Data Fig. 3h, i ). Nonetheless, laser delivery over the entire delay resulted in diminished rule tuning ( Fig. 1i, j and Extended Data Figs 3j , 4a ). Temporally limited manipulations revealed that early PFC manipulation diminished late task rule representation, even when the rule presentation period itself was spared ( Fig. 1k, l and Extended Data Fig. 4b–d ). While synaptic PFC chains are probably necessary for sustaining rule representation, they are not sufficient, as presenting the two rule-associated cues outside of the task did not generate PFC tuning ( Extended Data Fig. 4e, f ). This indicated that additional factors are required for PFC populations to represent task rules. Given previous work that has shown a notable role for the mediodorsal thalamus in executive function 2 , 22 , and its heavy reciprocal connectivity with the PFC 23 , we investigated whether its interaction with the PFC was a factor. Bilateral optogenetic suppression of the mediodorsal thalamus during the delay rendered mice incapable of appropriate sensory selection in the 2AFC task ( Fig. 2a ). Similar suppression in the 4AFC task resulted in identical error patterns to those resulting from PFC suppression (executive errors; Fig. 2b and Extended Data Fig. 5a, b ), and ones that were distinct from those resulting from visual thalamic suppression (lateral geniculate nucleus (LGN); Fig. 2b and Extended Data Fig. 5c ). Consistent with this behavioural dissociation, LGN suppression did not affect PFC rule-tuning ( Extended Data Fig. 6a ), whereas suppression of the mediodorsal thalamus diminished rule maintenance ( Fig. 2c–g and Extended Data Fig. 6b–e ) while largely sparing rule initiation (first 100 ms, during rule presentation; Fig. 2d, e ).
Suppression of the mediodorsal thalamus limited to the latter half of the delay was less effective at eliminating population coding and behaviour compared to an equivalent period of local PFC suppression ( Extended Data Fig. 6e ). These differences indicate that mediodorsal activity may not be required for initial rule encoding, and that the mediodorsal thalamus may be recruited by the PFC to sustain rule representation in a manner that outlasts mediodorsal neuronal spiking. Consistent with this, optogenetic PFC suppression in 100-ms bins across the delay resulted in identical behavioural effects throughout, whereas corresponding mediodorsal suppression resulted in a weaker effect at the earliest and latest bins ( Fig. 2h ). Notably, mediodorsal dependency was linked to delay length ( Fig. 2i ). Figure 2: Categorically-free mediodorsal activity is required for PFC rule representation and task performance. a , Delay-limited PFC or mediodorsal (MD) inhibition diminishes task performance. (four sessions, n = 4 mice each). b , Delay-limited PFC or mediodorsal inhibition in the 4AFC task selectively increases executive errors, whereas LGN suppression selectively increases sensory ones ( n = 4 mice per group). c , Raster and PSTH examples of PFC rule tuning with mediodorsal suppression (shading denotes laser). d , Population quantification of data shown in c as in Fig. 1j ( n = 58 neurons). e , f , Temporally limited mediodorsal suppression effects on population tuning ( e , n = 43 neurons; f , n = 46 neurons; three mice with four sessions per condition). g , Comparison of behavioural performance with full and temporally limited mediodorsal suppression ( n = 3 mice with four sessions each). h , Effect of a short (100 ms) suppression of the PFC or mediodorsal thalamus across the delay on performance. i , Effect of mediodorsal suppression in short delay (20 ms) trials. j , Schematic for mediodorsal recordings. 
k , Raster and PSTH example of a mediodorsal neuron showing similar peaks in both trial types. l , Example PSTHs of five mediodorsal neurons showing consistent lack of rule specificity. m , Linear decoding fails to reveal rule information in mediodorsal neurons with peaks (PFC, n = 604 neurons, six mice; mediodorsal thalamus, n = 156 neurons, three mice). n – p , Mediodorsal peak elimination by PFC suppression (full, n = 47 neurons; early, n = 34 neurons; middle, n = 36 neurons; two mice with 4–5 sessions each). q , Effects of mediodorsal (yellow) and PFC (blue) suppression on the first 50 ms of PFC tuning. Mediodorsal suppression produces a small effect on early PFC peaks ( n = 101 neurons, three mice) relative to local PFC suppression ( n = 146 neurons, three mice), whereas PFC suppression strongly reduced early mediodorsal tuning (green, n = 81 neurons, two mice). Supp., suppression; rec., recording. r , Early rule information is preserved with mediodorsal, but not PFC, suppression (decoding as in Fig. 1 ) (mediodorsal thalamus, n = 101 neurons, three mice; PFC, n = 146 neurons, three mice). Shading indicates 95% CI. Wilcoxon rank-sum test was used for all comparisons. Data are presented as mean ± s.e.m.

To understand how mediodorsal neurons sustained PFC representations, we recorded their spiking during the task ( Fig. 2j ). Certain mediodorsal neurons displayed temporally limited enhanced spiking during the task delay, but were non-selective to rule as the vast majority showed identical activity for both the attend to vision and attend to audition trials ( Fig. 2k, l , Extended Data Fig. 7a and Supplementary Note 2 ). As such, it was not surprising that this mediodorsal population was uninformative to the task rule ( Fig. 2m and Extended Data Fig. 7b–e ). Notably, peaks were only encountered in the lateral mediodorsal thalamus ( Extended Data Fig.
7f ), consistent with their reciprocal connectivity pattern with the PFC 24 ( Extended Data Fig. 7g, h ). In fact, 58% of neurons recorded in the lateral mediodorsal thalamus showed rule non-selective peaks dependent on PFC activity ( Fig. 2n–p ). However, in contrast to the impact of mediodorsal inactivation on PFC tuning, the very first mediodorsal peaks were eliminated by PFC suppression ( Fig. 2q, r ). Overall, these data indicate that mediodorsal engagement in the delay is not to relay categorical rule information, but rather to sustain existing cortical representations. Consistent with this, neither overall spike rates nor tuning peaks were changed in error trials ( Extended Data Fig. 7i, j ), confirming the conclusions that most errors in this task are due to rule misattribution and that this information is generated cortically. To our knowledge, this is the first direct mechanistic demonstration of a non-relay function of the thalamus. To gain mechanistic insight into how the mediodorsal thalamus sustains categorical cortical representations, we performed multi-site recordings within the mediodorsal thalamus–PFC loop. We found that mediodorsal neurons increased spiking upon task engagement, but that only inhibitory cortical (fast-spiking) neurons and not excitatory (regular-spiking) neurons showed a similar increase ( Fig. 3a ). This was also true for changes seen in the task delay; mediodorsal and cortical fast-spiking neurons showed additional spiking enhancement, whereas regular-spiking neuronal spike rates remained relatively unaltered ( Fig. 3a ). Next, we investigated what could maintain spike rates of regular-spiking neurons constant across these conditions, but increase spiking of fast-spiking neurons. Because PFC neurons generate functional sequences following rule presentation, we suspected that their local connectivity might be enhanced by mediodorsal inputs, balancing increased inhibition. 
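One way to test such a connectivity hypothesis is Granger causality on binned spike trains: a directed measure of whether one neuron's past improves prediction of another's future beyond that neuron's own past. The following is a minimal numpy-only sketch with entirely synthetic data, not the paper's multivariate network implementation:

```python
import numpy as np

def granger_causality(x, y, order=5):
    """Granger causality y -> x on binned activity: log ratio of residual
    variances between a restricted autoregressive model (x's own past only)
    and a full model (x's and y's past). Illustrative proxy for directed
    functional connectivity."""
    target = x[order:]
    X_own = np.column_stack([x[order - l: len(x) - l] for l in range(1, order + 1)])
    X_oth = np.column_stack([y[order - l: len(y) - l] for l in range(1, order + 1)])

    def resid_var(design):
        A = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ beta)

    return float(np.log(resid_var(X_own) / resid_var(np.column_stack([X_own, X_oth]))))

# Hypothetical demo: y drives x with a one-bin lag; x does not drive y.
rng = np.random.default_rng(2)
T = 5000
y = rng.poisson(2.0, T).astype(float)
x = np.zeros(T)
x[1:] = 0.5 * y[:-1] + rng.normal(0.0, 0.5, T - 1)
gc_y_to_x = granger_causality(x, y)   # large: y's past predicts x
gc_x_to_y = granger_causality(y, x)   # near zero: x's past adds nothing
```

The asymmetry between the two directions is what makes the measure useful as a proxy for directed functional connectivity among simultaneously recorded neurons.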
Consistent with this hypothesis, Granger causality of cortical regular-spiking neuronal network spike trains, a proxy for functional connectivity 25 , 26 , increased upon task engagement and further increased during the task delay ( Fig. 3b ). These findings suggested that the firing rates of regular-spiking neurons became more dependent on local regular-spiking neuronal connections, and by extension mediodorsal inputs, as the animal engaged in the task. Consistent with this, optogenetic suppression of mediodorsal activity resulted in reduced spiking of regular-spiking neurons during, but not outside, the task ( Fig. 3c and Extended Data Fig. 8a-c ). Broadly enhancing mediodorsal excitability through stabilized step function opsin (SSFO) activation 27 selectively increased spiking of fast-spiking neurons and increased functional connectivity of regular-spiking neurons, without significantly changing spike rates of these putative excitatory neurons ( Fig. 3c and Extended Data Fig. 8d–f ). This modulatory mediodorsal-to-PFC input is very different from the driving PFC to mediodorsal thalamus input we observe after SSFO-mediated activation of the PFC ( Fig. 3d and Extended Data Fig. 8g–i ), and is consistent with the theoretical prediction that strong (driver–driver) thalamocortical loops are not implemented in the brain 28 . To further test whether mediodorsal inputs enhance functional cortical connectivity, we examined the effect of mediodorsal activation on intra-cortical evoked responses ( Fig. 3e ). In this context, a non-driving mediodorsal input amplified an intracortical evoked response ( Fig. 3f ). The supralinear nature of this amplification is evident by comparing the observed functional connection to the one predicted based on the arithmetic sum of its individual components (thalamic and cortical; Fig. 3g ). Notably, these results were different from the ones observed in the thalamocortical loop containing the LGN and primary visual cortex (V1) ( Fig. 
3h ); LGN inputs drove robust spiking in ipsilateral V1 and combining this driving input with contralateral V1 stimulation led to sublinear responses ( Fig. 3i, j and Extended Data Fig. 9a ).

Figure 3: Mediodorsal input amplifies local PFC connectivity. a , Mediodorsal and cortical fast-spiking (FS), but not regular-spiking (RS), neurons show increased firing rates upon task engagement and further increase during the delay (normalized to values outside of the task; PFC, six mice, regular-spiking neurons, 516 cells, fast-spiking neurons, 213 cells; mediodorsal thalamus, three mice, 196 neurons). Grey shading indicates 95% CI of null distribution. b , Regular-spiking neuronal network connectivity, assessed by Granger causality of spike trains (normalized to values outside of the task, n = 6 mice, 43 sessions; median, 13 neurons per session). c , Suppressing the mediodorsal thalamus reduces firing rates of cortical fast-spiking and regular-spiking neurons during task delay (regular-spiking neurons, n = 245; fast-spiking neurons, n = 114; n = 2 mice), as well as regular-spiking neuronal connectivity ( n = 2 mice, 19 sessions; median 13 neurons per session). Enhancing mediodorsal excitability increases the firing rates of cortical fast-spiking neurons and connectivity among cortical regular-spiking neurons, but not neuronal firing rates of regular-spiking neurons (regular-spiking neurons, n = 303, fast-spiking neurons, n = 131; n = 2 mice; regular-spiking connectivity ( n = 17 sessions; median, 18 neurons per session)). d , Spike transfer function (Methods) of the PFC to mediodorsal thalamus is significantly higher compared to transfer of the mediodorsal thalamus to PFC ( n = 17 sessions; median, 11 PFC and 10 mediodorsal neurons per session for PFC to mediodorsal thalamus; median, 18 PFC and 15 mediodorsal neurons per session for mediodorsal thalamus to PFC; 2 mice per condition). Error bars are 95% CI estimated across sessions.
e , Experimental setup for testing the effect of mediodorsal activation on local intra-PFC connectivity. f , Example regular-spiking neuron responses (normalized PSTH, mean ± s.e.m.) to mediodorsal activation alone, intra-PFC activation alone or the combination. g , Comparison of the observed combined response with the arithmetic sum of its individual components shows supra-linearity ( P < 10 −15 , signed-rank test). h , i , As in e , f , but for SSFO-mediated activation of LGN and recordings from V1. j , Combined stimulation results in a sub-linearity ( P < 10 −4 , signed-rank test).

To formalize our notion of this non-relay thalamic function, we built a data-driven spiking PFC–mediodorsal thalamus neural model ( Extended Data Fig. 9b–f and Methods). Progressively increasing mediodorsal excitability to mimic task engagement enhanced correlated spiking among model PFC neurons, but resulted in rule-specific neural sequences only when co-tuned ‘starter’ neurons were synchronized by a common input ( Extended Data Fig. 9g, h ). The theoretical prediction that a combination of mediodorsal amplification of cortical connectivity with a specific input that drives initial synchrony is sufficient to generate a sequential categorical representation was experimentally validated ( Extended Data Fig. 9i, j ). Our model shows that enhancing mediodorsal excitability increases PFC rule information content by improving tuning of individual cortical neurons, and by recruiting previously untuned ones ( Extended Data Fig. 9i ). This enhancement comes in contrast to rule information reduction when PFC excitability itself is increased because of chain cross-talk ( Extended Data Fig. 9j ). Consistent with the model, we found that SSFO-mediated enhancement of mediodorsal excitability led to increased PFC rule information ( Fig. 4a–f ), while enhancing PFC excitability diminished it ( Fig. 4b–f ).
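The linearity comparison of Fig. 3g — the observed combined response versus the arithmetic sum of the individually evoked thalamic and cortical components — can be sketched with entirely hypothetical response values; only the logic of the comparison, not the numbers, reflects the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 200

# Hypothetical normalized evoked responses for each regular-spiking neuron.
thalamic_alone = rng.gamma(2.0, 0.05, n_neurons)   # weak, modulatory input
cortical_alone = rng.gamma(2.0, 0.50, n_neurons)   # intracortical drive

# Simulated amplification: the combined response exceeds the linear prediction.
combined = 1.4 * (thalamic_alone + cortical_alone) + rng.normal(0.0, 0.05, n_neurons)

linear_prediction = thalamic_alone + cortical_alone        # arithmetic sum
supralinearity_index = combined - linear_prediction
frac_supralinear = float(np.mean(supralinearity_index > 0))
```

In the paper the observed-versus-predicted difference was evaluated with a signed-rank test across neurons; here the fraction of neurons exceeding the linear prediction serves the same purpose.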
Changes in the neural code were also reflected in behaviour, as enhancing PFC excitability substantially diminished task performance, whereas boosting mediodorsal excitability improved it, with individual mice consistently performing at levels not typically observed in our cohorts without manipulation (26% reduction in lapse rate, Fig. 4h, i ). Such enhancement was not found when the primary auditory thalamus (medial geniculate body (MGB)) was activated in the context of auditory discrimination ( Extended Data Fig. 10 ), highlighting the utility of the SSFO approach as a diagnostic test for categorical information encoded within neural circuits and the idea that, in contrast to the MGB, the mediodorsal thalamus does not relay such information during attentional control. Even mediodorsal neurons that apparently signalled one rule showed emergence of a peak in the opposite trial type with local SSFO activation, confirming that this information is not used for task performance ( Extended Data Fig. 9k, l ).

Figure 4: Enhancing mediodorsal excitability strengthens PFC rule representation and improves performance. a , Optogenetic mediodorsal thalamus activation causes tuning of previously untuned PFC neurons in the 2AFC delay period. b , Optogenetic PFC activation generates inappropriate PFC tuning peaks (shading indicates laser on). c , d , Quantification of examples in a , b (binomial test). e , Quantification of existing peak size change. Data are mean ± 95% CI. f , g , Quantification of rule information within the PFC following mediodorsal or PFC activation ( n = 2 mice). h , i , Opposing performance effects of the two manipulations ( n = 16 sessions each from four mice). Shading indicates 95% confidence intervals (see Methods). Behavioural data are presented as session averages and compared using Wilcoxon rank-sum test.
Overall, our study describes how a thalamic circuit amplifies local cortical connectivity to sustain attentional control. This function may be general to cognitive processes that require extended cortical representations over time ( Supplementary Discussion 4, 5 ). Because other thalamic circuits are thought to enhance connectivity between cortical areas 29 , 30 , the precise engagement of the thalamus in a cognitive process may determine which cortical regions process information locally and which are engaged when processing spans multiple cortical nodes. Future studies aimed at testing this idea will undoubtedly provide a new way of thinking about cognition, where the thalamus forms a functional backbone that sustains, coordinates and switches distributed cortical computations.

Methods

Animals

A total of 43 male mice were used in this study and the figure contribution of each mouse is summarized in Supplementary Table 1 . All mice tested were between 2 and 12 months of age. C57BL/6J mice were purchased from Taconic Biosciences and VGAT–channelrhodopsin-2 (ChR2) mice were obtained from the Jackson laboratories. VGAT-cre mice were backcrossed to C57BL/6J mice for at least six generations. All mice were kept on a 12 h–12 h light–dark cycle. All animal experiments were performed according to the guidelines of the US National Institutes of Health and the Institutional Animal Care and Use Committee at the New York University Langone Medical Center.

Behavioural training and testing

Behavioural setup. Behavioural training and testing took place in gridded floor-mounted, custom-built enclosures made of sheet metal covered with a thin layer of antistatic coating for electrical insulation (dimensions in cm: length, 15.2; width, 12.7; height, 24). All enclosures contained custom-designed operant ports, each of which was equipped with an IR LED/IR phototransistor pair (Digikey) for nose-poke detection.
Trial initiation was achieved through an ‘initiation port’ mounted on the grid floor 6 cm away from the ‘response ports’ located at the front of the chamber. Task rule cues and auditory sweeps were presented with millisecond precision through a ceiling-mounted speaker controlled by an RX8 Multi I/O processing system (Tucker-Davis Technologies). Visual stimuli were presented by two dimmable, white-light-emitting diodes (Mouser) mounted on each side of the initiation port and controlled by an Arduino Mega microcontroller (Ivrea). For the 2AFC and 4AFC tasks, two and four response ports were mounted at the angled front wall 7.5 or 5 cm apart, respectively. Response ports were separated by 1-cm divider walls and each was capable of delivering a milk reward (10 μl evaporated milk delivered by a single syringe pump (New Era Pump Systems)) when a correct response was performed. For the auditory Go/No-go task environment, response and reward ports were dissociated, with the reward port placed directly underneath the response port. In the 4AFC, the two outermost ports were assigned for ‘select auditory’ responses, whereas the two innermost ports were assigned for ‘select visual’ responses. Access to all response ports was restricted by vertical sliding gates which were controlled by a servo motor (Tower Hobbies). The TDT RX8 sound production system (Tucker Davis Technologies) was triggered through MATLAB (MathWorks), interfacing with custom-written software running on an Arduino Mega (Ivrea) for trial logic control.

Training. Prior to training, all mice were food restricted to and maintained at 85–90% of their ad libitum body weight.

2AFC. Training was largely similar to our previously described approach 8 . First, 10 μl of evaporated milk (reward) was delivered randomly to each reward port for shaping and reward habituation. Making response ports accessible signalled reward availability.
Illumination of the LED at the spatially congruent side was used to establish the association with visual targets on half of the trials. On the other half, association was established with the auditory targets where an upsweep (10 to 14 kHz, 500 ms) indicated a left and a downsweep (16 to 12 kHz, 500 ms) indicated a right reward. An individual trial was terminated 20 s after reward collection, and a new trial became available 5 s later. Second, mice learned to poke in order to receive reward. All other parameters remained constant. An incorrect poke had no negative consequence. By the end of this training phase, all mice collected at least 20 rewards per 30-min session. Third, mice were trained to initiate trials. Initially, mice had to briefly (50 ms) break the infrared beam in the initiation port to trigger target stimulus presentation and render reward ports accessible. Trial rule (attend to vision or attend to audition) was indicated by 10-kHz low-pass filtered white noise (vision) or 11 kHz high-pass filtered white noise (audition) sound cues. Stimuli were presented in blocks of six trials consisting of single-modality stimulus presentation (no conflict). An incorrect response immediately rendered the response port inaccessible. Rewards were available for 15 s following correct poking, followed by a 5 s inter-trial interval (ITI). Incorrect poking was punished with a time-out, which consisted of a 30 s ITI. During an ITI, mice could not initiate new trials. Fourth, conflict trials were introduced, in which auditory and visual targets were co-presented indicating reward at opposing locations. Four different trial types were presented in repeating blocks: (1) three auditory-only trials; (2) three visual-only trials; (3) six conflict trials with auditory target; and (4) six conflict trials with visual target. 
The time that mice had to break the IR barrier in the initiation port was continuously increased over the course of this training stage (1–2 weeks) until it reached 0.5 s. At the same time, duration of the target stimuli was successively shortened to a final duration of 0.1 s. Once mice performed successfully on conflict trials, single-modality trials were removed and block length was reduced to three trials. Fifth, during the final stage of training, trial availability and task rule were dissociated. Broadband white noise indicated trial availability, which prompted a mouse to initiate a trial. Upon successful initiation, the white noise was immediately replaced by either low-pass or high-pass filtered noise for 0.1 s to indicate the rule. This was followed by a delay period (variable, but for most experiments it was 0.4 s) before target stimuli presentation. All block structure was removed and trial type was randomized. Particular steps were taken throughout the training and testing periods to ensure that mice used the rules for sensory selection (see Supplementary Discussion 2 ).

4AFC. The first two training steps were identical to the 2AFC training, except that auditory stimuli consisted of tone clouds (interleaved pure tones (50 ms per tone over 200 ms, 36 tones total) spanning a frequency range of 1–15 kHz) directed to the left or right ear of the mouse to indicate the side of reward delivery. In the third stage, mice were trained to recognize the difference between visual and auditory response port positions. Initially, only two reward ports were available while access to the response ports associated with the non-target modality was restricted. All other parameters were as previously described in the 2AFC. Once mice successfully oriented to both target types (about two weeks), all four response ports were made available for subsequent training.
Choosing a response port of the wrong modality was punished by a brief air puff delivered directly to the response port. Mice remained on this paradigm until they reached a performance criterion of 70% accuracy on both modalities. During the fourth training stage, sensory conflict trials were introduced using the same parameters as in the 2AFC. Trial types and locations were randomized (spatial conflict was also random). Responses were scored as correct or one of three different error types (see confusion matrix in Extended Data Fig. 2e ).

Auditory Go/No-go. A total of four mice were trained. A pair of electrostatic speakers (Tucker Davis Technologies) producing the auditory stimuli were placed outside of the training apparatus and sound stimuli were conveyed by cylindrical tubes to apertures located at either side of the initiation port, allowing stereotypical delivery of stimuli across trials. Trial availability was indicated by a light positioned at the top of the box and trial initiation required a 200-ms continuous interruption of the IR beam in the initiation port to ensure that the animal's head was properly positioned to hear the stimuli. Following trial initiation, a second port (the response port) was opened and a pure tone stimulus was played. A 20-kHz tone signalled a ‘Go’ response, whereas frequencies above or below 20 kHz signalled a ‘No-go’ response. The pure tone stimuli were presented for 300 ms before response time, and were pseudo-randomly varied on a trial-by-trial basis, with trials divided between the Go stimulus (approximately 40% of trials) and two No-go stimuli (16 and 24 kHz, approximately 30% of trials per frequency). After stimulus presentation, the response port was made accessible for a 3-s period. In Go trials, correct poking within the trial period (hit) rendered the reward port accessible, and reward was subsequently delivered upon poking.
For a ‘miss’, in which the mouse failed to poke within the 3-s period, the reward port remained inaccessible. For a ‘correct rejection’, which involved withholding a response when No-go stimuli were played, the reward port was made accessible at the end of the 3-s period. For a ‘false alarm’, which involved a poke in the response port on a No-go trial, the reward port remained inaccessible and the next trial was delayed by a 15-s time-out, as opposed to the regular 10-s inter-trial interval.

Testing. For electrophysiological recordings and experiments with optical manipulation, testing conditions were equivalent to the final stage of training. The first cohort of PFC recordings involving ‘manipulation-free mice’ included three C57BL/6 wild-type mice and one VGAT-cre mouse. The VGAT-cre mouse in this cohort, which was also used for experiments involving PFC manipulations, was initially run for an equivalent number of laser-free sessions as the three wild-type mice before any manipulation. This design was used to confirm equivalence in electrophysiological findings across genotypes, and to strengthen the overall conclusions drawn by using transgenic animals. Equivalence across genotypes can be readily appreciated by comparing the four principal component analysis (PCA) plots in Extended Data Fig. 1j . For laser sessions, laser pulses of either blue (473 nm for ChR2 activation) or yellow (560 nm for eNpHR3.0 activation) light at an intensity of 4–5 mW (measured at the tip of the optic fibres) were delivered pseudo-randomly on 50% of the trials. During most optogenetic experiments, laser stimulation occurred during the whole delay period (500 ms) of the task. For temporal-specific manipulations concurrent with electrophysiological recordings ( Figs 1k, l , 2e, f and Extended Data Figs 4 , 6 ), laser pulses were delivered for 250 ms either during the first half, after 100 ms (following cue presentation), or the latter half of the delay period.
In the high-resolution optogenetic inactivation experiment ( Fig. 2h ), laser pulses were 100 ms long, dividing the 500-ms delay period equally into five periods. During a session, only one condition was tested. For stabilized step function opsin (SSFO, hChR2(C128S/D156A)) experiments ( Figs 3 , 4 and Extended Data Fig. 8 ), a 50-ms pulse of blue (473 nm, 4 mW intensity) light at the beginning of the delay period was delivered to activate the opsins and a 50-ms pulse of red (603 nm, 8 mW intensity) light to terminate activation at the end of the delay period. Similarly, for MGB manipulations ( Extended Data Fig. 10 ), SSFO was activated by a 50-ms pulse of blue (473 nm, 4 mW intensity) light before stimulus delivery and its activity was terminated by a 50-ms pulse of red (603 nm, 8 mW intensity) at stimulus offset. An Omicron-Laserage lighthub system (Dudenhofen) was used for all optogenetic manipulations.

Behavioural analysis

For all experiments with optogenetic manipulations, only sessions where baseline performance was ≥65% correct were included in the analysis. For all behavioural testing, single-mouse statistics were initially used to evaluate significance and effect size, followed by statistical comparisons across sessions. Performance on the auditory Go/No-go task was assessed on the basis of the number of correct responses to Go stimuli (hit rate) relative to No-go stimuli (false alarm rate) and was considered sufficient if the overall discrimination index ( d′ = Z(hit) − Z(false alarm) ) was greater than 2 for the baseline condition. In cases where multiple groups were compared, a Kruskal–Wallis one-way analysis of variance (ANOVA) was used to assess variance across groups, followed by post hoc testing. For pairwise comparisons a Wilcoxon rank-sum test was used. Data are presented as mean ± s.e.m. and significance levels were set to P < 0.05.

Virus injections

Injections were performed using a quintessential stereotactic injector (QSI, Stoelting).
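The Go/No-go discrimination index described in the behavioural analysis above ( d′ = Z(hit) − Z(false alarm) ) can be computed directly from the standard normal inverse CDF; the session rates below are hypothetical:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = Z(hit rate) - Z(false alarm rate), where Z is the inverse CDF
    of the standard normal distribution."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical session: 90% hits on Go trials, 10% false alarms on No-go trials.
dp = d_prime(0.90, 0.10)   # well above the d' > 2 inclusion criterion
```

Rates of exactly 0 or 1 would make the inverse CDF undefined, so in practice they are clipped (e.g. with a 1/(2N) correction) before computing d′.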
All viruses were obtained through UNC Chapel Hill, virus-vector core. For PFC manipulation during electrophysiological recordings, 200 nl of AAV2-hSyn-DIO-ChR2 was injected bilaterally into the PFC of VGAT-cre mice. Bilateral injections of AAV1-hSyn-eNpHR3.0-eYFP (300 nl) were used for mediodorsal thalamus and LGN manipulations. For SSFO experiments, AAV1-CamKIIa-SSFO-GFP was injected bilaterally either into the PFC (200 nl) or the mediodorsal thalamus (400 nl). To test the effect of mediodorsal activation on functional cortical connectivity, we injected the mediodorsal thalamus with AAV1-CamKIIa-SSFO-GFP (400 nl) ipsilateral and the PFC with AAV1-hSyn-ChR2-eYFP (200 nl) contralateral to the recording site. Following virus injection, animals were allowed to recover for at least two weeks for virus expression to take place before the start of behavioural testing or tissue collection.

Optic-fibre implants for behavioural experiments

Mice were deeply anaesthetized using 1% isoflurane. For each mouse, up to three pairs of optic fibres (Doric Lenses) were used in behavioural optogenetic experiments and stereotactically inserted at the following coordinates (in mm from Bregma): PFC, AP 2.6, ML ± 0.25, DV −1.25; mediodorsal thalamus, AP −1.4, ML ± 0.6, DV −1.5; LGN, AP −2.2, ML 2.15, DV 2.6. Up to three stainless-steel screws were used to anchor the implant to the skull and everything was bonded together with dental cement. Mice were allowed to recover with ad libitum access to food and water for one week, after which they were brought back to food regulation and behavioural training resumed. A 473-nm laser was used for ChR2 activation, whereas eNpHR3.0 activation was achieved with a laser with a wavelength of 561 nm. Laser intensities were adjusted to be 4–5 mW measured at the tip of the optic fibre, which was generally the minimum intensity required to produce behavioural effects.
Multi-electrode array construction and implantation

Custom multi-electrode array scaffolds (drive bodies) were designed using 3D CAD software (SolidWorks) and printed in Accura 55 plastic (American Precision Prototyping) as described previously 21. Prior to implantation, each array scaffold was loaded with 12–18 independently movable microdrives carrying 12.5-μm nichrome (California Fine Wire Company) stereotrodes or tetrodes. Electrodes were pinned to custom-designed, 96-channel electrode interface boards (EIB, Sunstone Circuits) along with a common reference wire (A-M Systems). For combined optogenetic manipulations and electrophysiological recordings of the PFC, optic fibres delivering the light beam laterally (45° angled tips) were embedded adjacent to the electrodes (Extended Data Fig. 3g). In the case of combined optogenetic PFC manipulations with mediodorsal recordings, the optic fibre was placed away from the electrodes at the appropriate spatial offset. For combined unilateral multi-site recordings of PFC and mediodorsal thalamus (four mice) with SSFO manipulations, two targeting arrays (0.5 × 0.5 mm for PFC and 0.5 × 0.35 mm for mediodorsal) were separated by 3.2 mm in the AP axis. For SSFO manipulations, optic fibres delivering a lateral light beam were implanted directly next to the array targeting either PFC or mediodorsal thalamus. To test the effect of mediodorsal activation on functional cortical connectivity, a single electrode array was targeted to the PFC unilaterally, whereas a 400-μm core optic fibre (Doric Lenses) was targeted to the contralateral PFC. In addition, a 200-μm core optic fibre was placed 2.8 mm behind the electrode array for activating SSFO in the ipsilateral mediodorsal thalamus. Similarly, to interrogate the same question in a sensory thalamocortical circuit, an electrode array was implanted unilaterally into V1 and an additional 400-μm core optic fibre (Doric Lenses) was targeted to the contralateral V1.
In addition, a 200-μm core optic fibre was placed 0.5 mm anterior to the electrode array for activating SSFO in the ipsilateral LGN. During implantation, mice were deeply anaesthetized with 1% isoflurane and mounted on a stereotaxic frame. A craniotomy was drilled centred at AP 2 mm, ML 0.6 mm for PFC recordings (approximately 1 × 2.5 mm), at AP −3 mm, ML 2.5 mm for V1 (1.5 × 1.5 mm) or at AP −1 mm, ML 1.2 mm for mediodorsal recordings (approximately 2 × 2 mm). The dura was carefully removed and the drive implant was lowered into the craniotomy using a stereotaxic arm until stereotrode tips touched the cortical surface. Surgilube (Savage Laboratories) was applied around electrodes to guard against fixation through dental cement. Stainless-steel screws were implanted into the skull to provide electrical and mechanical stability and the entire array was secured to the skull using dental cement.

Electrophysiological recordings

Signals from stereotrodes (cortical recordings) or tetrodes (thalamic recordings) were acquired using a Neuralynx multiplexing digital recording system (Neuralynx) through a combination of 32- and 64-channel digital multiplexing headstages plugged into the 96-channel EIB of the implant. Signals from each electrode were amplified, filtered between 0.1 Hz and 9 kHz and digitized at 30 kHz. For thalamic recordings, tetrodes were lowered from the cortex into the mediodorsal thalamus over the course of 1–2 weeks, where recording depths ranged from −2.8 to −3.2 mm DV. For PFC recordings, adjustments accounted for the change of depth of PFC across the AP axis. Thus, in anterior regions, unit recordings were obtained −1.2 to −1.7 mm DV, whereas for more posterior recordings electrodes were lowered −2 to −2.4 mm DV. Following acquisition, spike sorting was performed offline on the basis of the relative spike amplitude and energy within electrode pairs using the MClust toolbox.
Units were divided into fast spiking and regular spiking on the basis of the waveform characteristics as previously described 21. In brief, the peak to trough time was measured in all spike waveforms, and showed a distinct bimodal distribution (Hartigan's dip test, P < 10^−5). These distributions separated at 210 μs, and cells with peak to trough times above this threshold were considered regular-spiking neurons and those with peak to trough times below this threshold were considered fast-spiking cells (Extended Data Fig. 1g). The majority of cells (2,727) in PFC recordings were categorized as regular spiking, whereas approximately one-third (909) were categorized as fast spiking.

Histology

For histological verification of electrode position, drive-implanted mice were lightly anaesthetized using isoflurane and small electrolytic lesions were generated by passing current (10 μA for 20 s) through the electrodes. All mice were then deeply anaesthetized and transcardially perfused using phosphate-buffered saline (PBS) followed by 4% paraformaldehyde. Brains were dissected and postfixed overnight at 4 °C. Brain sections (50 μm) were cut using a vibratome (Leica) and fluorescent images were obtained on a confocal microscope (LSM800, Zeiss). Confocal images are shown as maximal projections of 10 confocal planes, 20 μm thick.

Analysis of firing rate

For all PFC and mediodorsal neurons, changes in firing rate associated with task performance were assessed using peri-stimulus time histograms (PSTHs). PSTHs were computed using a 10-ms bin width for individual neurons in each recording session 4, convolved with a Gaussian kernel (25 ms full-width at half-maximum) to create a spike density function (SDF) 31, 32, which was then converted to a z score by subtracting the mean firing rate in the baseline (500 ms before event onset) and dividing by the variance over the same period.
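The PSTH-to-SDF pipeline above can be sketched as follows. This is an illustrative Python version (the original analyses were in MATLAB); all names are assumptions, and the z-score helper divides by the standard deviation (the conventional z-score), whereas the text states the variance.

```python
import math

BIN_MS = 10    # histogram bin width (ms)
FWHM_MS = 25   # Gaussian smoothing kernel FWHM (ms)
SIGMA_BINS = FWHM_MS / (2 * math.sqrt(2 * math.log(2))) / BIN_MS

def spike_density(trials, t0, t1):
    """PSTH (10-ms bins) convolved with a 25-ms-FWHM Gaussian.

    trials: list of per-trial spike-time lists (ms); window [t0, t1).
    Returns the smoothed mean rate per bin in spikes/s.
    """
    nbins = (t1 - t0) // BIN_MS
    counts = [0.0] * nbins
    for spikes in trials:
        for t in spikes:
            if t0 <= t < t1:
                counts[int((t - t0) // BIN_MS)] += 1
    rate = [c / len(trials) / (BIN_MS / 1000.0) for c in counts]
    # truncated Gaussian kernel, normalized to unit area
    half = int(3 * SIGMA_BINS) + 1
    kern = [math.exp(-0.5 * (k / SIGMA_BINS) ** 2)
            for k in range(-half, half + 1)]
    total = sum(kern)
    kern = [k / total for k in kern]
    sdf = []
    for i in range(nbins):
        acc = 0.0
        for j, w in enumerate(kern):
            k = i + j - half
            if 0 <= k < nbins:
                acc += w * rate[k]
        sdf.append(acc)
    return sdf

def zscore(sdf, baseline):
    """z-score an SDF against its 500-ms pre-event baseline bins."""
    mu = sum(baseline) / len(baseline)
    var = sum((b - mu) ** 2 for b in baseline) / len(baseline)
    sd = math.sqrt(var) or 1.0   # guard against a flat baseline
    return [(v - mu) / sd for v in sdf]
```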
For comparison of overall firing rates across conditions, trial number and window size were matched between groups. Homogeneity of variance for firing rates across conditions was determined using the Fligner–Killeen test for homoscedasticity 33. For comparisons of multiple groups, a Kruskal–Wallis one-way ANOVA was used to assess variance across groups before pairwise comparisons.

Identification of peaks in task-modulated neurons

A total of 3,444 single units were recorded within the PFC and 974 single units were recorded in the mediodorsal thalamus across animals. Overall assessment of firing rates during the task delay period showed that individual regular-spiking PFC neurons did not exhibit sustained increases in spiking relative to baseline (population shown in Extended Data Fig. 1) and a comparison of variance homoscedasticity (Fligner–Killeen test) did not reveal changes in variance. In a subset of cells, however, a brief enhancement of spike-timing consistency at a defined moment in the delay period was observed (Fig. 1b). To formally identify these neurons we used the following steps. First, periods of increased consistency in spike timing across trials were identified using a matching-minimization algorithm 34. This approach was used to determine the best moments of spike-time alignment across trials (candidate tuning peaks). The number of these candidate tuning peaks (n) was based on firing rate values during the delay period for each neuron. n was obtained by minimizing the following equation, where n_k is the number of observed spikes in a trial k. As such, the initial (and maximum) number of candidate peaks is equal to the median number of spikes observed across trials. With an initial number of candidate peaks in hand, their times were subsequently estimated. These times were initially placed randomly within the delay window, and iteratively adjusted to obtain the set of final candidate peak times.
The result of this iterative process was the solution to the following equation, where the set of final candidate peak times S is obtained by iteratively minimizing the temporal distance between candidate peak times (in each iteration) C and the observed spike times across trials S_k on the basis of a penalty associated with increased temporal distance, computed across all trials k. In the first step, temporal adjustment for each candidate peak time was based on finding the local minimum of the temporal distance function d² (as described in ref. 34), after which spikes were adjusted by linear interpolation. In brief, neighbouring spike times across trials were sorted by their temporal offset to a given candidate peak time, and their linear fit was computed. Each candidate peak time was then moved to the midpoint of that fitted line, to achieve a local minimum. In a second step, cost minimization was jointly computed for all putative peaks using the Lagrange multiplier solution to the global minimization equation 34 and intervals between peak times were adjusted on the basis of this global minimum. Both the local and the global minimization steps were iterated until the spike-time variance, defined as the sum of the squared distances between spikes across trials, converged and a set of final candidate peak times was determined. Next, to identify genuine tuning peaks, we applied two further conditions. First, for 75% of the trials, at least one spike was required to fall within ±25 ms of each final candidate peak time. This conservative threshold was based on the median firing rates observed during the delay (around 10 Hz), predicting that inter-trial spike distances would be greater than 50 ms if spikes were randomly distributed, making it highly improbable to fulfil this condition by chance. Second, these candidate peaks needed to have z-score values of >1.5 (equivalent to a one-sided test of significance) to be considered genuine tuning peaks.
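The two acceptance criteria for a candidate peak can be expressed compactly. This is an illustrative Python sketch (the original code was MATLAB); the function and parameter names are assumptions.

```python
def is_genuine_peak(peak_t, trials, z_at_peak,
                    window=25.0, frac=0.75, z_thresh=1.5):
    """Apply the two acceptance criteria for a candidate tuning peak:
    (1) at least `frac` of trials contain a spike within +/- `window` ms
    of the candidate peak time, and (2) the z-scored rate at the peak
    exceeds `z_thresh`.

    peak_t: candidate peak time (ms); trials: per-trial spike-time lists.
    """
    hits = sum(
        1 for spikes in trials
        if any(abs(t - peak_t) <= window for t in spikes)
    )
    return hits / len(trials) >= frac and z_at_peak > z_thresh
```

A unit with at least one accepted peak would then be flagged as task-modulated.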
The z score of spiking across trials during the delay was computed relative to the pre-delay 500-ms baseline (10-ms binning, convolved with a 25-ms full-width at half-maximum Gaussian kernel). Obtaining a genuine tuning peak identified a unit as task-modulated, which was subsequently used for most analyses in this study. The vast majority of units showed only a single tuning peak using this method. Independent validation of this method's validity is discussed in Supplementary Discussion 1.

Principal component and linear regression analysis of population code

To estimate the extent to which task-modulated units differentially encode task rules, a PCA was first performed as described previously 10. Next, linear regression was applied to define the two orthogonal, task-related axes of rule type and movement direction. These analyses were performed on neural z-score time-series, separately for each comparison (trials separated by rule type or movement direction). In brief, a data matrix X of size N_unit × (N_condition × T) was constructed, in which columns corresponded to the z-scored population response vectors for a given task rule or movement direction at a particular time (T) within the 1-s window following task initiation. This window size was chosen to provide sufficient samples for analysis, but only the delay-period data were examined for this study. The contribution of each principal component to the population response across time was quantified by projecting the trial-type-specific z-score time-series (for example, attend-to-vision rule) onto individual principal components and computing the variance. The first principal component was used for all subsequent analyses, as subsequent principal components were found to be uninformative in the initial analysis.
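The leading-component extraction and projection described above can be sketched without any linear-algebra library by power iteration on the unit-by-unit covariance. This is an illustrative Python version (the original used MATLAB); function names are assumptions.

```python
import math

def first_pc(X, iters=500):
    """Leading principal component of a units-by-samples data matrix X
    (rows are units; columns are condition x time samples), found by
    power iteration on the unit-by-unit covariance matrix.
    """
    n, m = len(X), len(X[0])
    centred = []
    for row in X:
        mu = sum(row) / m
        centred.append([v - mu for v in row])
    cov = [[sum(centred[i][k] * centred[j][k] for k in range(m)) / (m - 1)
            for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def project(X, pc):
    """Project the population response onto the component, giving one
    time-series per trial type when applied to its z-score matrix."""
    return [sum(X[i][t] * pc[i] for i in range(len(X)))
            for t in range(len(X[0]))]
```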
Multi-variable linear regression was applied to determine the contribution of task rule and subsequent movement to principal component divergence across time for the corresponding trial-type comparisons. Specifically, linear analysis related the response of unit i at time t to a linear combination of these two task variables using the following equation, where r_i,t(k) is the z-score response for a neuron in trial set (k) for each task variable: movement and rule. The regression coefficients (β) were used to describe the extent to which z-score time-series variation in the firing rate of the unit at a given time point describes a particular task variable. This analysis was generally only applied to correct responses. Regression coefficients were then used to identify dimensions in state space corresponding to variance across neural response data for the two task variables. Vectors of these coefficients across z-score time-series matrices separated by trial types (for example, rule 1 versus rule 2) were projected onto subspaces spanned by the previously identified principal component. We next constructed task-variable axes using QR-decomposition to identify principal component separation associated with each task variable (v). To identify movement along these axes for each population response, their associated z-score time-series were projected onto these axes across time as follows, where X_c is the population vector for trial type c. This projection resulted in two time-series vectors p_v,c for each task variable that compared movement across trial types (rule 1 versus rule 2; right versus left) on their corresponding axes. The difference between these two time-series was used as the main metric for information (task rule or movement) in this study.
For evaluating rule information in error trials when their number permitted analysis (>20 error trials; based on empirical assessment of minimum trial numbers required for principal component divergence), trial-type axes obtained from correct trials were multiplied by −1 to reverse directionality. The significance bounds for all time-series were obtained using random subsampling and bootstrapping (around 60% of total neurons per bootstrap, 200 replications). The 95% confidence bounds at each time point were then estimated on the basis of the resulting distribution. To test the inference that rule information was related to tuning peaks, task-modulated spike times were randomly jittered by 500 ms and the PCA repeated. This resulted in loss of rule-information-related principal component divergence, validating our inference.

Peak strength analysis

To obtain a quantitative estimate of peak fidelity across multiple trials, an internal neural synchrony measurement 35 was modified for short-term synchrony, which was associated with identified peaks. This approach was applied to spike trains associated with differing task conditions and responses. Each spike within the train was convolved with a Gaussian kernel with a 9-ms half width. Trials were then summed and divided by the kernel peak size and trial number, giving a maximum value (for perfect alignment) of one at any point. Convolution vector values around the tuning peak in the baseline condition were compared to the value within the same time window in the other condition.

Cross-correlation analysis

To compute cross-correlation histograms (cross-correlograms), the MATLAB function 'crosscorr' was applied to whole-session spike trains from pairs of cells. Continuous traces at a 1-kHz sampling rate were first generated on the basis of the spike times, with times at which spikes occurred set to one and all other times set to zero.
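The 1-kHz binarization and the raw correlogram can be sketched as follows. This is an illustrative Python stand-in (the original used MATLAB's crosscorr, which additionally normalizes the output); function names are assumptions.

```python
def binarize(spike_times_ms, duration_ms):
    """1-kHz continuous trace: 1 in each 1-ms bin containing a spike,
    0 elsewhere."""
    trace = [0] * duration_ms
    for t in spike_times_ms:
        if 0 <= t < duration_ms:
            trace[int(t)] = 1
    return trace

def crosscorr(x, y, max_lag=50):
    """Unnormalized cross-correlogram of two binary trains for lags
    -max_lag..+max_lag ms: counts of coincidences at each lag."""
    n = len(x)
    out = []
    for lag in range(-max_lag, max_lag + 1):
        lo, hi = max(0, -lag), min(n, n - lag)
        out.append(sum(x[t] * y[t + lag] for t in range(lo, hi)))
    return out
```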
Crosscorr was then applied to trains from all possible cell pairs, using a maximum lag time of ±50 ms. The significance of a cross-correlogram was determined by randomly jittering all spike times independently and re-computing the cross-correlogram. Jitter values were drawn from a Gaussian distribution centred at zero with a s.d. of 3 ms. This process was repeated 100 times for each pair, and if the observed peak cleared the 95% confidence bounds of all shuffled sets, the pair was determined to have a significant cross-correlation. Pairs of cells were grouped as follows: the control group was composed of pairs in which only the first cell was rule-tuned. The test group was composed of pairs in which both cells were tuned. This test group was further broken down into two subgroups: one in which both cells responded to the same rule and one in which the cells responded to different rules. Within these groups, co-modulation was defined as the number of significant cross-correlograms divided by the total number of cross-correlograms. After overall group comparison using a χ2 test, proportion differences were statistically evaluated in a post hoc pairwise fashion using binomial proportion tests. To examine the effect of tuning to the same rule on co-modulation strength, the distributions of cross-correlogram peak heights were also compared for the groups of pairs described above. An empirical CDF (cumulative distribution function) was constructed using the peak heights of each group, and these distributions were compared using a signed-rank test. Finally, the relationship between cross-correlogram peak height and inter-alignment time was explored. The inter-alignment times among neuronal pairs tuned to the same rule were calculated by taking the difference in spike alignment times of each pair. To more effectively assess putative monosynaptic connections, the significant cross-correlograms between tuned pairs were also re-computed at a 50-μs resolution.
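The jitter-based significance test described above can be sketched as follows. This is an illustrative Python version (the original used MATLAB); the helper names are assumptions, and the helpers are defined inside the block so it is self-contained.

```python
import random

def _binarize(times, duration):
    # 1-kHz binary trace of spike times (ms)
    trace = [0] * duration
    for t in times:
        if 0 <= t < duration:
            trace[int(t)] = 1
    return trace

def _peak(x_times, y_times, duration, max_lag=50):
    # largest coincidence count across lags -max_lag..+max_lag ms
    x, y = _binarize(x_times, duration), _binarize(y_times, duration)
    n = len(x)
    return max(
        sum(x[t] * y[t + lag]
            for t in range(max(0, -lag), min(n, n - lag)))
        for lag in range(-max_lag, max_lag + 1)
    )

def jitter_significant(x_times, y_times, duration, reps=100, sd=3.0):
    """Jitter test: every spike time is independently displaced by
    Gaussian noise (s.d. 3 ms) and the correlogram peak re-computed;
    the pair is called significant if the observed peak clears the
    95th percentile of the shuffled peaks."""
    observed = _peak(x_times, y_times, duration)
    null = sorted(
        _peak([t + random.gauss(0, sd) for t in x_times],
              [t + random.gauss(0, sd) for t in y_times],
              duration)
        for _ in range(reps)
    )
    return observed > null[int(0.95 * reps)]
```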
Significance thresholding at this resolution was repeated by determining whether a sequence of two or more successive bins of the adjusted trace exceeding two standard deviations of the overall trace occurred within 10 ms of the centre bin 19. Cross-correlograms containing such outliers were further characterized on the basis of their peak times. Those with peaks at 300 μs or later were categorized as putative monosynaptic connections 18, 19. Among these putative connections, the pairs were split into two groups: those that were tuned to the same rule, and those that were tuned to opposite rules. To compare peak strength, spike probability was estimated by subtracting a shuffled distribution of spike times with the same average firing rate as the postsynaptic neuron and dividing by the number of spikes in the presynaptic neuron 17. The distributions of the resulting peak strengths among same-rule and opposite-rule putative monosynaptic connections were compared using the Kolmogorov–Smirnov test. Finally, the peak strengths of these pairs were plotted against their inter-alignment time. As in the above analysis, only same-rule pairs were included.

Nonlinear decoding analysis

To further assess the degree of rule representation in the PFC and mediodorsal thalamus, we applied two population decoding approaches, the maximum correlation coefficient (MCC) and Poisson naive Bayes (PNB) classifiers, as implemented in the neural decoding toolbox 36. These analyses were applied to all tuned neurons recorded from either structure, each of which was pooled into a pseudo-population for each structure (n = 604 neurons in the PFC and n = 156 neurons in the mediodorsal thalamus). For MCC decoding, firing-rate response profiles in individual correct trials associated with each rule were preprocessed by converting them to a z score using the mean and variance in the corresponding trial to prevent baseline spike-rate differences from affecting classification 37.
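The MCC template-matching step can be illustrated in a few lines. This is an illustrative Python sketch (the paper used the neural decoding toolbox in MATLAB); the function names are assumptions.

```python
import math

def _corr(a, b):
    # Pearson correlation of two equal-length vectors
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def mcc_classify(train, test_vec):
    """Maximum-correlation-coefficient decoding: average the training
    population vectors of each class into a mean response template,
    then assign the test vector to the class whose template it
    correlates with best.

    train: dict mapping class label -> list of population vectors.
    """
    templates = {
        label: [sum(col) / len(col) for col in zip(*vecs)]
        for label, vecs in train.items()
    }
    return max(templates,
               key=lambda label: _corr(templates[label], test_vec))
```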
For PNB classification, neuron spiking activity was modelled as a Poisson random variable with each neuron's activity assumed to be independent. Trial-specific z scores (MCC) or spike counts (PNB) from these pseudo-populations were then repeatedly and randomly subsampled (200 resampling runs) and divided into training and test subsets (six training and two test trials per recording session across n = 360 PFC and n = 116 mediodorsal sessions). For each subsampling, the classifier was trained using the training subset to produce a predictive mean response template for each rule (i). Templates were constructed separately for 100-ms overlapping windows across the trace (step size = 20 ms) and classifiers were trained for each template. The windowed classifiers allowed us to estimate the temporal evolution of information in the population. In the cross-validation step, these templates were used to predict the class for each test trial in the test set (x*) by maximizing the correlation decision function in the case of MCC or the log-likelihood decision function in the case of the PNB classifier 38. Finally, we estimated the predictive strength of population activity at each time point, that is, the extent to which activity in that time bin predicts the trial type, as the average of the correct predictions in the test set. To determine the variability of this estimate, a bootstrapping procedure was applied in which 25% of neurons were subsampled from the overall population and the same procedure was repeated (50 resampling runs). The resulting traces were used to estimate the 95% confidence intervals of the initial estimate from the full population.

Granger causality analysis

To determine the degree of causal connectivity in the ensemble of recorded neurons within the PFC or their counterpart in our simulated network, we used the Wiener–Granger vector autoregressive (VAR) causality analysis as implemented in the multivariate Granger causality toolbox (MVGC) 25.
Spike-train data from each recorded or simulated neuron within a session were converted to a continuous signal by binning in 1-ms increments 39, 40 and convolving the resulting signal with a Gaussian filter (half width 5 ms). For all neurons in individual sessions, this analysis used 500-ms segments either within the delay period (delay) or just before it (task engagement), along with an equal number of randomly selected segments recorded outside of the behavioural environment (out of task). For assessment of laser effects, a matched number of correct trials in the laser and non-laser conditions were compared for each recording session across neurons. To improve stationarity in the signal, segments were adjusted by subtracting the mean and dividing by the s.d. of each segment 39, 41, and stationarity was checked by determining whether the spectral radius of the estimated full model was less than one 25. All models met this stationarity criterion. Model order was estimated empirically for each subset using Bayesian information criteria, after which VAR model parameters were determined for the selected model order. On the basis of the resulting parameters, time-domain conditional Granger causality measurements were calculated for each cell pair across all trials. Causal density for a given condition in each session was taken as the mean pairwise-conditional causality 25.

Connectivity assay

To assess the effect of changes in thalamic excitability on cortical connection strength, we measured intra-cortical responses evoked by ChR2-mediated activation of the contralateral cortex for V1/LGN (94 neurons in two mice) and PFC/mediodorsal thalamus (96 neurons in three mice). Responses to either cortical stimulation alone (10-ms ChR2 activation of the contralateral cortex), thalamic activation alone (500-ms SSFO activation in ipsilateral LGN or mediodorsal thalamus) or the combination were recorded in V1 and PFC (100 interleaved trials per condition).
For the combined condition, thalamic activation preceded cortical stimulation by 100 ms.

Computational modelling

Network structure and dynamics. We constructed a model that consisted of excitatory (regular-spiking) and inhibitory (fast-spiking) PFC neurons as well as mediodorsal neurons. Within the PFC, regular-spiking cells formed subnetworks representing each task rule, consisting of multiple interconnected chains. Neurons in each of these chains were locally connected to their nearest neighbour within the chain as well as to other chains within the same subnetwork. While neurons representing different rules were connected, connections were made stronger within each subnetwork (for example, among neurons representing the same rule) on the basis of our cross-correlation experimental data. Regular-spiking neurons of either rule sent overlapping projections to mediodorsal neurons and received reciprocal inputs from the mediodorsal thalamus. Mediodorsal inputs were modulatory with a longer time constant than for the PFC (1 ms versus 10 ms), and resulted in increased spiking of fast-spiking neurons (direct synaptic drive, w = 0.6) while providing an amplifying input (factor, 1.6×) to connections between regular-spiking neurons (regardless of rule tuning). During rule encoding, the arrival of input attributed to one rule simultaneously activated the starter neuron (first neuron in a chain) in chains encoding that rule, engaging mediodorsal neurons and enhancing their firing through synaptic convergence. In turn, mediodorsal neurons enabled signal propagation that was specific to that rule by amplifying currently active regular-spiking neuronal connections, while preventing irrelevant synchrony elsewhere through augmented inhibition.

Spiking neuron model. We employed the leaky integrate-and-fire (LIF) model to simulate both of the network paradigms described above.
LIF is a simplified spiking neuron model that is frequently used to mathematically model the electrical activity of neurons. The evolution of the membrane voltage of neuron j follows the LIF equation C dV_j/dt = −α(V_j − E) + I_ext + I_j^syn, where C is the membrane capacitance, V_j is the jth neuron's membrane voltage, α is the leak conductance (α = 0.95), I_ext is an externally applied current with amplitude taken independently for each neuron from a uniform distribution (μ = 0.825, s.d. = 0.25 for PFC and mediodorsal neurons) and I_j^syn is the synaptic input to cell j, defined as follows, where ω_ij represents the strength of the connection between presynaptic cell i and the postsynaptic neuron j, A_ij is the connectivity matrix that denotes the connectivity map, τ is the spike duration (1 ms in our simulations) and H(t) is a Heaviside function that is zero for negative values (t < τ) and one for positive values (t > τ). In this model the voltage across the cell membrane grows, and after it reaches a certain threshold (V_th = 1) the cell fires an action potential and its membrane potential is reset to the reset voltage. Here, the resting potential (E) and reset potential V_reset are set to zero. The neuron enters a refractory period (T_ref = 1.5 ms) immediately after it reaches the threshold (V = V_th) and spikes. To integrate the LIF equation, we used the Euler method with a step size of Δt = 0.01 ms. To reproduce the spontaneous activity of the network, we introduced noise that arrives randomly at each cell with a predefined probability (f_N = 10 Hz).

Statistical analysis

For each statistical analysis provided in the manuscript, an appropriate statistical comparison was performed. For large sample sets, the Kolmogorov–Smirnov normality test was first performed on the data to determine whether parametric or non-parametric tests were required.
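The single-neuron LIF dynamics described in the computational model above can be simulated with a minimal Euler loop. This is an illustrative Python sketch (the original model was in MATLAB); C = 1 is an assumed value, since no capacitance is stated, and the constant input current stands in for the external plus synaptic drive.

```python
# Parameters from the model description; C is an assumed unit value.
DT, C = 0.01, 1.0            # Euler step (ms), membrane capacitance
V_TH, V_RESET, E = 1.0, 0.0, 0.0
ALPHA, T_REF = 0.95, 1.5     # leak conductance, refractory period (ms)

def simulate_lif(i_input, t_max_ms):
    """Euler integration of one LIF neuron driven by a constant input:
    C dV/dt = -alpha*(V - E) + I. Returns the spike times in ms."""
    v, t_last = 0.0, -1e9
    spikes = []
    for step in range(int(t_max_ms / DT)):
        t = step * DT
        if t - t_last < T_REF:          # refractory: hold at reset
            v = V_RESET
            continue
        v += (DT / C) * (-ALPHA * (v - E) + i_input)
        if v >= V_TH:                   # threshold crossing -> spike
            spikes.append(t)
            v = V_RESET
            t_last = t
    return spikes
```

A constant drive below alpha*V_th (the rheobase, 0.95 here) settles below threshold and never fires, while a stronger drive fires regularly with inter-spike intervals bounded below by the refractory period.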
Variance testing for analysis involving comparisons of firing rates under differing behavioural conditions and following optogenetic manipulations was done using the Fligner–Killeen test of variance homoscedasticity. For small sample sizes ( n < 5) non-parametric tests were used by default. Two different approaches were used to calculate the required sample size. For studies in which sufficient information on response variables could be estimated, power analyses were performed to determine the number of mice needed. For sample size estimation in which effect size could be estimated, the sample number needed was estimated using power analysis in MATLAB (sampsizepwr) with a β of 0.7 (70%). For studies in which the behavioural effect of the manipulation could not be prespecified, including optogenetic experiments, we used a sequential stopping rule 42 . This method enables null-hypothesis tests to be performed in sequential stages, by analysing the data at several experimental points using non-parametric pairwise testing. In these cases, the experiment initially uses a small cohort of mice which are tested over multiple behavioural sessions. If the P value for the trial comparison across mice falls below 0.05, the effect is considered significant and the cohort size is not increased. If the P value is greater than 0.36 following four sessions that met criteria, the investigator stopped the experiment and retained the null hypothesis. Using this strategy, the required number of animals was determined to be between three and five animals per cohort across testing conditions. For multiple comparisons, a non-parametric ANOVA (Kruskal–Wallis H -test) was performed followed by pairwise post hoc analysis. All post hoc pairwise comparisons were two-sided. No randomization or investigator blinding was done for experiments involving electrophysiology. Blinding was used for experiments involving SSFO and behaviour (mediodorsal versus PFC). 
Code availability

All computer code used for analysis and simulation in this study was implemented in MATLAB computing software (MathWorks). Code will be made freely available to any party upon request. Requests should be directed to the corresponding author.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request. | Long assumed to be a mere "relay," an often-overlooked egg-like structure in the middle of the brain also turns out to play a pivotal role in tuning up thinking circuitry. A trio of studies in mice funded by the National Institutes of Health (NIH) are revealing that the thalamus sustains the ability to distinguish categories and hold thoughts in mind. By manipulating activity of thalamus neurons, scientists were able to control an animal's ability to remember how to find a reward. In the future, the thalamus might even become a target for interventions to reduce cognitive deficits in psychiatric disorders such as schizophrenia, researchers say. "If the brain works like an orchestra, our results suggest the thalamus may be its conductor," explained Michael Halassa, M.D., Ph.D., of New York University (NYU) Langone Medical Center, a BRAINS Award grantee of the NIH's National Institute of Mental Health (NIMH), and also a grantee of the National Institute of Neurological Disorders and Stroke (NINDS). "It helps ensembles play in-sync by boosting their functional connectivity." Three independent teams of investigators led by Halassa, Joshua Gordon, M.D., Ph.D., formerly of Columbia University, New York City, now NIMH director, in collaboration with Christoph Kellendonk, Ph.D.
of Columbia, and Karel Svoboda, Ph.D., at Howard Hughes Medical Institute Janelia Research Campus, Ashburn, Virginia, in collaboration with Charles Gerfen, Ph.D., of the NIMH Intramural Research Program, report on the newfound role for the thalamus online May 3, 2017 in the journals Nature and Nature Neuroscience. The prevailing notion of the thalamus as a relay was based on its connections with parts of the brain that process inputs from the senses. But the thalamus has many connections with other parts of the brain that have yet to be explored, say the researchers.

Halassa and colleagues studied communications between neurons in the mediodorsal thalamus and the prefrontal cortex in mice. Credit: Michael Halassa, M.D., Ph.D., NYU Langone Medical Center

All three groups investigated a circuit that connects the mid/upper (mediodorsal) thalamus with the prefrontal cortex (PFC), the brain's thinking and decision making center. Brain imaging studies have detected decreased connectivity in this circuit in patients with schizophrenia, who often experience working memory problems. Dr. Halassa and colleagues found that neurons in the thalamus and PFC appear to talk back and forth with each other. They monitored neural activity in mice performing a task that required them to hold in mind information about categories, so that they could act on cues indicating which of two doors hid a milk reward. Optogenetically suppressing neuronal activity in the thalamus blocked the mice's ability to choose the correct door, while optogenetically stimulating thalamus neural activity improved the animals' performance on the working memory task. This confirmed a previously known role for the structure, extending it to the specialized tasks Halassa and colleagues used and demonstrating for the first time a specific role in the maintenance of information in working memory. What kind of information was the thalamus helping to maintain?
The researchers found sets of neurons in the PFC that held in memory the specific category of information required in order to choose the correct door. They determined that the thalamus did not (at least in this case) relay such specific category information, but instead broadly provided amplification that was crucial in sustaining memory of the category in the PFC. It accomplished this by boosting the synchronous activity, or functional connectivity, of these sets of PFC neurons. "Our study may have uncovered the key circuit elements underlying how the brain represents categories," suggested Dr. Halassa. Dr. Gordon and colleagues saw similar results when they tested how the same circuit controlled a mouse's ability to find milk in a maze. The animals had to remember whether they had turned left or right to get their reward prior to a brief delay - and do the opposite. Also using optogenetics, the study teased apart differing roles for subgroups of PFC neurons and interactions with the brain's memory hub, the hippocampus. Thalamus inputs to the PFC sustained the maintenance of working memory by stabilizing activity there during the delay. "Top-down" signals from the PFC back to the thalamus supported memory retrieval and taking action. Consistent with previous findings, inputs from the hippocampus were required to encode in PFC neurons the location of the reward - analogous to the correct door in the Halassa experiment. "Strikingly, we found two separate populations of neurons in the PFC. One encoded for spatial location and required hippocampal input; the other was active during memory maintenance and required thalamic input," noted Dr. Gordon. "Our findings should have translational relevance, particularly to schizophrenia. Further study of how this circuit might go awry and cause working memory deficits holds promise for improved diagnosis and more targeted therapeutic approaches." In their study, the Janelia team and Dr.
Gerfen similarly showed that the thalamus plays a crucial role in sustaining short-term memory, by cooperating with the cortex through bi-directional interactions. Mice needed to remember where to move after a delay of seconds, to gather a reward. In this case, the thalamus was found to be in conversation with a part of the motor cortex during planning of those movements. Neuronal electrical monitoring revealed activity in both structures, indicating that they together sustain information held in the cortex that predicted in which direction the animal would subsequently move. Optogenetic probing revealed that the conversation was bidirectional, with cortex activity dependent on thalamus and vice versa. "Our results show that cortex circuits alone can't sustain the neural activity required to prepare for movement," explained Dr. Gerfen. "It also requires reciprocal participation across multiple brain areas, including the thalamus as a critical hub in the circuit." | 10.1038/nature22073 |
Medicine | Molecular switch controlling immune suppression may help turn up immunotherapies | Megan M. Kaneda et al. PI3Kγ is a molecular switch that controls immune suppression, Nature (2016). DOI: 10.1038/nature19834 Journal information: Nature , Cancer Discovery | http://dx.doi.org/10.1038/nature19834 | https://medicalxpress.com/news/2016-09-molecular-immune-suppression-immunotherapies.html | Abstract Macrophages play critical, but opposite, roles in acute and chronic inflammation and cancer 1 , 2 , 3 , 4 , 5 . In response to pathogens or injury, inflammatory macrophages express cytokines that stimulate cytotoxic T cells, whereas macrophages in neoplastic and parasitic diseases express anti-inflammatory cytokines that induce immune suppression and may promote resistance to T cell checkpoint inhibitors 1 , 2 , 3 , 4 , 5 , 6 , 7 . Here we show that macrophage PI 3-kinase γ controls a critical switch between immune stimulation and suppression during inflammation and cancer. PI3Kγ signalling through Akt and mTor inhibits NFκB activation while stimulating C/EBPβ activation, thereby inducing a transcriptional program that promotes immune suppression during inflammation and tumour growth. By contrast, selective inactivation of macrophage PI3Kγ stimulates and prolongs NFκB activation and inhibits C/EBPβ activation, thus promoting an immunostimulatory transcriptional program that restores CD8 + T cell activation and cytotoxicity. PI3Kγ inhibition synergizes with checkpoint inhibitor therapy to promote tumour regression and increased survival in mouse models of cancer. In addition, PI3Kγ-directed, anti-inflammatory gene expression can predict survival probability in cancer patients. Our work thus demonstrates that therapeutic targeting of intracellular signalling pathways that regulate the switch between macrophage polarization states can control immune suppression in cancer and other disorders.
Main We investigated the association between immune responses and survival in primary tumours from human papilloma virus (HPV) + ( n = 97) and HPV − ( n = 423) head and neck squamous cell carcinoma (HNSCC) cohorts from the Cancer Genome Atlas 8 , 9 . High expression levels of pro-inflammatory mRNAs IL12A , IL12B , IFNG and CD8A were associated with increased survival in HPV + but not HPV − cohorts, whereas high expression of IL6 was negatively associated with survival ( Extended Data Fig. 1a–e ). HPV + patients with this favourable immune expression profile ( n = 35) had 97% survival at 3 years compared with 57% survival for patients without this profile ( n = 62) ( Fig. 1a ). Similar associations were observed in lung adenocarcinoma and gastric carcinoma patients ( Extended Data Fig. 1f, g ). These results suggest that therapeutic approaches that stimulate pro-inflammatory gene expression might enhance cancer patient survival. Figure 1: PI3Kγ promotes immune suppression. a , Multivariate immune response mRNA signature in HPV + HNSCC patients. n = 97 biological replicates; P = 0.0001; log-rank test. b , mRNA expression of genes involved in the immune response in Pik3cg −/− or wild-type (WT) peritoneal macrophages. n = 3 biological replicates; * P = 0.01 ; t -test. c , Tumour volumes in wild-type, Pik3cg −/− and PI3Kγ (PI3Kγi)-inhibitor-treated mice ( n = 15 biological replicates) P < 0.05, one-sided ANOVA with Tukey’s post-hoc test. Arrow shows time of initiation of daily treatment with the PI3Kγ inhibitor. d , Tumour weight ( P = 0.000001) and number of lung metastases ( P = 0.007) in mice with PyMT spontaneous breast carcinoma (scale bar, 200 μm) in wild-type ( n = 21 biological replicates) and Pik3cg −/− ( n = 8 biological replicates) backgrounds ( t -test). e , MHCII expression in wild-type or Pik3cg −/− TAMs. n = 3 biological replicates; P = 0.01 for both lung and breast tissue, as analysed by t -test. 
f , Fold change of mRNA expression in tumours and tumour-derived CD11b + cells from Pik3cg −/− and PI3Kγ-inhibitor-treated mice. n = 5 biological replicates; P < 0.01; t -test. g , Heat map of median-centred mRNA expression of genes involved in the immune response in tumours from wild-type and Pik3cg −/− mice. n = 3 biological replicates; P = 0.01; t -test. All experiments were performed two or more times. b , d , e , Data are shown as mean ± s.e.m. We hypothesized that macrophage signalling pathways, such as those regulated by the class IB isoform PI3Kγ, might control the switch between immune stimulation and suppression in inflammation and cancer. PI3Kγ is abundantly expressed in myeloid cells but not in cancer cells 10 , 11 , 12 , 13 ( Extended Data Fig. 1h ) and promotes myeloid cell trafficking during inflammation and cancer 12 , 13 , 14 , 15 , 16 . Mice lacking PI3Kγ ( Pik3cg −/− ) had exaggerated, macrophage-mediated pro-inflammatory responses upon exposure to pathogenic stimuli ( Fig. 1b and Extended Data Fig. 1i–k ), suggesting that PI3Kγ inhibits macrophage inflammatory responses and might also do so in the tumour microenvironment. Mice lacking PI3Kγ and mice that were treated with PI3Kγ antagonists (TG100-115 (ref. 13 ) or IPI-549 (ref. 17 )) exhibited significantly ( P < 0.05, one-way ANOVA with Tukey’s post-hoc test) suppressed growth of implanted HPV + (MEER, mouse HPV + HNSCC cell line) and HPV − (SCCVII, mouse HPV − HNSCC cell line) HNSCC, lung carcinoma (LLC), and breast carcinoma (PyMT) tumours ( Fig. 1c and Extended Data Fig. 1l ). PI3Kγ inhibition did not directly affect the growth or survival of tumour cells, which do not express the kinase 13 , 14 , 15 , 16 ( Extended Data Fig. 1h, m ).
PI3Kγ inhibition suppressed long-term growth and metastasis of spontaneous breast tumours, extended survival of mice with orthotopic breast tumours and enhanced the sensitivity of tumours to the nucleoside analogue gemcitabine ( Fig. 1d and Extended Data Fig. 1n–q ). Although PI3Kγ inhibition did not affect the accumulation of CD11b + –Gr1 − –F4/80 + tumour-associated macrophages (TAMs) in tumours ( Extended Data Fig. 2a–f ), it enhanced the expression of major histocompatibility complex class II (MHCII) and pro-inflammatory cytokines and inhibited the expression of immune-suppressive factors in tumours and TAMs, indicating that PI3Kγ controls the TAM switch between immune suppression and immune stimulation ( Fig. 1e–g and Extended Data Figs 2g–j , 3a–f ). To determine whether PI3Kγ directly regulates macrophage polarization, we analysed mRNA and protein expression in primary mouse macrophages stimulated in vitro with basal medium containing colony stimulating factor 1 (CSF-1) or in pro-inflammatory (IFNγ, lipopolysaccharide (LPS) and CSF-1) or anti-inflammatory (interleukin-4 (IL4) and CSF-1) conditions. Pro-inflammatory stimuli upregulated macrophage expression of innate immune proteins, cytokines and cell-surface receptors, whereas anti-inflammatory stimuli induced expression of immunosuppressive factors similar to those expressed in TAMs 1 ( Extended Data Fig. 4a–c ). Genes and proteins associated with immune activation, antigen presentation and T cell activation were upregulated in Pik3cg −/− and PI3Kγ-inhibitor-treated macrophages ( Fig. 2a–c and Extended Data Figs 4c–f , 5a–g ). By contrast, genes associated with immune suppression and chemoattraction were inhibited ( Fig. 2a–c and Extended Data Figs 4c–f , 5a–g ). These results confirm that PI3Kγ controls a macrophage switch between immune stimulation and suppression. Figure 2: PI3Kγ regulates NFκB and C/EBPβ during macrophage polarization. 
a , Heat map of mean fold-change in gene expression of Pik3cg −/− versus wild-type macrophages stimulated in mCSF, IFNγ/LPS or IL4 conditions ( n = 3 biological replicates) for genes that are significantly differently expressed ( P = 0.00001) in these two genotypes. b , mRNA expression of indicated genes in wild-type or Pik3cg −/− mice. n = 3 biological replicates; * P = 0.01; t -test. c , Expression levels of indicated proteins in wild-type and Pik3cg −/− mice. Data are mean ± s.e.m.; n = 4 biological replicates; * P = 0.01 or ** P = 0.001; t -test. d , e , p65 RelA ( d ) and C/EBPβ ( e ) DNA-binding activity in wild-type and Pik3cg −/− macrophages. n = 4 biological replicates; * P = 0.01 ( d ) and * P = 0.04 ( e ), t -test. f , g , Immunoblotting of pRelA and RelA, pC/EBPβ and C/EBPβ, and pAkt and Akt in LPS- ( f ) and IL4-stimulated ( g ) wild-type and Pik3cg −/− macrophages. h , Immunoblotting of IRAK1, pIKKβ and IKKβ, IκBα, pTBK1 and TBK1, PI3Kγ and actin in wild-type and Pik3cg −/− macrophages. i , mRNA expression in IKKβ-inhibitor-treated macrophages. n = 3 biological replicates; * P = 0.01 or ** P = 0.001; t -test. All experiments were performed two or more times. Data are mean ± s.e.m. To investigate how PI3Kγ regulates macrophage immune responses, we analysed the DNA-binding activities of NFκB p65–RelA and the CCAAT/enhancer binding protein C/EBPβ in wild-type and Pik3cg -null macrophages, as NFκB promotes expression of inflammatory cytokines 18 , whereas C/EBPβ promotes expression of the immunosuppressive factor arginase 1 (Arg1, refs 19 , 20 ). PI3Kγ ablation rapidly and sustainably stimulated RelA DNA-binding activity in macrophages ( Fig. 2d and Extended Data Fig. 5h ). By contrast, PI3Kγ ablation suppressed DNA-binding activity of C/EBPβ ( Fig. 2e ).
Consistent with these findings, PI3Kγ inhibition stimulated and sustained p65–RelA phosphorylation and simultaneously inhibited C/EBPβ and Akt phosphorylation ( Fig. 2f, g and Extended Data Fig. 6a, b ). Subsequently, we examined the effect of PI3Kγ inhibition on the stability and phosphorylation of proteins that activate NFκB, including the TLR4-associated adaptor protein IRAK-1 and inhibitory κ B kinase β, IKKβ, which promotes degradation of IκBα, with subsequent release of NFκB from an inhibitory IκB–NFκB complex 18 . PI3Kγ deletion enhanced phosphorylation of IKKβ and TBK1 and degradation of IRAK-1 and IκBα in LPS-stimulated Pik3cg −/− macrophages ( Fig. 2h ). As an IKKβ inhibitor suppressed the inflammatory phenotype observed in Pik3cg −/− macrophages ( Fig. 2i ), these results indicated that PI3Kγ was both a feedback inhibitor of the TLR4–NFκB activation pathway and a promoter of IL4 and C/EBPβ signalling. C/EBPβ has previously been linked with tumour immune suppression through its control of Arg1 expression 19 , 20 . Expression of constitutively activated PI3Kγ ( Pik3cg CAAX) 12 was sufficient to induce Arg1 expression in a manner that was inhibited by Cebpb , Mtor , or Rps6kb1 (also known as S6ka ) knockdown ( Extended Data Fig. 6c, d ). Both Cebpb knockdown and inhibition of S6Kα or mTOR by inhibitors suppressed expression of immune suppressive factors and stimulated expression of pro-inflammatory cytokines ( Extended Data Fig. 6e–g ). These results show that PI3Kγ promotes immune suppression by activating mTor–S6Kα–C/EBPβ and inhibiting NFκB, thereby controlling a switch that regulates the balance between immune suppression and stimulation. Since PI3Kγ blockade stimulated pro-inflammatory responses in macrophages, we investigated whether PI3Kγ blockade in macrophages promoted adaptive immunity. TAMs were isolated from wild-type and Pik3cg −/− tumours, mixed with tumour cells and adoptively transferred into new wild-type or Pik3cg −/− recipient mice ( Fig. 
3a ). Tumour growth was significantly inhibited ( P = 0.01) in tumours containing Pik3cg −/− TAMs but not wild-type TAMs ( Fig. 3b ). The number of CD8 + T cells was significantly increased ( P = 0.0001) in tumours with Pik3cg −/− but not wild-type macrophages ( Fig. 3c and Extended Data Fig. 6h ), indicating that PI3Kγ signalling in TAMs inhibits CD8 + T cell recruitment to tumours. To determine whether macrophage-derived cytokines controlled tumour growth, we implanted tumour cells mixed with in vitro cultured macrophages or in conditioned medium from cultured macrophages into wild-type mice. Tumour growth was enhanced by IL4-stimulated wild-type macrophages and conditioned medium from wild-type macrophages, but inhibited by IL4-stimulated Pik3cg −/− macrophages and conditioned medium from Pik3cg −/− or PI3Kγ-inhibitor-treated macrophages and by all LPS-stimulated macrophages and conditioned medium from LPS-stimulated macrophages ( Fig. 3d, e ). To determine which macrophage-derived immune factors affect tumour growth in vivo , we treated wild-type and Pik3cg −/− TAMs ex vivo with inhibitors of mTOR, Arginase1, IKKβ, IL12 or nitric oxide synthase 2 (NOS2) before mixing with tumour cells and implanting in mice ( Extended Data Fig. 6i ). Blockade of mTOR or Arginase1 in wild-type macrophages suppressed tumour growth, while inhibition of NOS2, IL12 or IKKβ in Pik3cg −/− macrophages stimulated tumour growth. These results indicate that PI3Kγ–mTOR mediated immune suppression promotes tumour growth and that PI3Kγ inhibition reverses these effects by shifting macrophages towards NFκΒ-dependent pro-inflammatory polarization. Figure 3: Macrophage PI3Kγ suppresses T cell activation. a , Adoptive transfer method. b , Weights of tumours resulting from tumour cells implanted together with Pik3cg −/− , wild-type or no TAMs. MΦ, macrophages. n = 8 biological replicates; P = 0.01; one-sided ANOVA with Tukey’s post-hoc test). c , CD8 + T cells per mm 2 from b . 
n = 8 biological replicates with 2 technical replicates each; P = 0.001; one-sided ANOVA with Tukey’s post-hoc test). d , e , Weights of tumours resulting from tumour cells implanted together with in vitro cultured macrophages ( d , n = 8 biological replicates; P = 0.01 and P = 0.0001, as indicated) or with macrophage-conditioned medium ( e , n = 8 biological replicates; P = 0.01 and P = 0.0001, as indicated). f , Percentage of CD4 + and CD8 + T cells (out of the total number of CD3 + cells; * P = 0.001) and percentage of CD3 + T cells (out of the total number of live cells; ** P = 0.01; t -test) in wild-type and Pik3cg −/− tumours ( n = 3 biological replicates). g , Tumour volumes in Pik3cg −/− and/or Cd8 −/− mice. n = 6 biological replicates; P < 0.03, P < 0.001, as indicated; one-sided ANOVA with Tukey’s post-hoc test. h , i , Weights of tumours implanted with naive or tumour-derived T cells ( h , n = 8 biological replicates; P < 0.001; t -test) or inhibitor-treated T cells ( i , n = 16 biological replicates; * P < 0.01; ** P < 0.0001; one-sided ANOVA with Tukey’s post-hoc test). j , Protein concentrations of IFNγ (* P = 0.001; t -test) and granzyme B ( P < 0.03, t -test) in tumours and T cells from wild-type and Pik3cg −/− mice ( n = 3 biological replicates; NS, not significant). All data are shown as mean ± s.e.m. and all experiments were performed two or more times. To investigate further whether macrophage PI3Kγ controlled tumour growth, mice bearing pre-established tumours were treated with PI3Kγ inhibitors in combination with clodronate liposomes, which deplete macrophages from tissues 21 . Separate PI3Kγ inhibitor and clodronate liposome treatment each partially inhibited tumour growth and stimulated T cell recruitment, but the combination had no additive effects, confirming that PI3Kγ expression in macrophages, rather than in other cell types, promotes tumour growth ( Extended Data Fig. 7a–d ).
Similar results were observed when CSF1R inhibition 22 and PI3Kγ inhibition were combined ( Extended Data Fig. 7e, f ). PI3Kγ blockade stimulated T cell recruitment into tumours, as total and CD8 + T cell content increased in tumours from Pik3cg −/− mice without significantly altering systemic T cell content ( Fig. 3f and Extended Data Fig. 7g–i ). PI3Kγ inhibition did not suppress tumour growth in CD8-null or antibody-depleted mice, suggesting that PI3Kγ inhibition blocked tumour growth by recruiting and/or activating CD8 + T cells ( Fig. 3g and Extended Data Fig. 7j–k ). When T cells were isolated from tumour-bearing or naive mice, mixed with tumour cells and implanted in mice, only T cells from Pik3cg −/− tumour-bearing mice suppressed tumour growth ( Fig. 3h ). However, PI3Kγ inhibition did not directly activate T cells, as neither PI3Kγ deletion nor treatment of T cells with PI3Kγ inhibitors ex vivo affected T cell proliferation or activation; in contrast, PI3Kδ inhibition suppressed T cell activation in vitro and promoted tumour growth in vivo ( Fig. 3i and Extended Data Figs 7l, m, 8a, b ). PI3Kγ inhibition promoted T cell-mediated cytotoxicity, as T cells isolated from Pik3cg −/− or PI3Kγ-inhibitor-treated tumours stimulated tumour-cell cytotoxicity ( Extended Data Fig. 8c–g ). T cells from Pik3cg −/− or PI3Kγ-inhibitor-treated mice expressed significantly more IFNγ and granzyme B ( P = 0.001 and P < 0.03, respectively) and significantly less TGFβ1 and IL10 protein ( P = 0.008 and P = 0.03, respectively) and mRNA ( P < 0.02) than T cells from wild-type mice ( Fig. 3j and Extended Data Fig. 8h–l ). Together, these results indicate that PI3Kγ inhibition in macrophages indirectly promotes both Th1 and cytotoxic adaptive immune responses. To investigate whether PI3Kγ inhibition interacts with other immune therapies, we combined PI3Kγ inhibition and the checkpoint inhibitor anti-PD-1 in mouse tumour models.
PD-L1, but not PD-L2, was expressed in macrophages in vitro and in vivo ( Extended Data Fig. 9a, b ). PI3Kγ inhibition synergized with anti-PD-1 to suppress the growth of HPV + HNSCC tumours in Pik3cg −/− or inhibitor-treated male or female mice, inducing tumour regression in 86% of male and 90–100% of female mice, as well as continuous survival to date in 60% of male mice and 90–100% of female mice ( Fig. 4a–c and Extended Data Fig. 9c, d ). Notably, PI3Kγ inhibition also synergized with anti-PD-1 to reduce tumour growth, extending survival and inducing tumour regression in 30% of mice bearing HPV − HNSCC (SCCVII) tumours ( Fig. 4d–f and Extended Data Fig. 9e ). The combination of PI3Kγ and anti-PD-1 inhibitors activated T cell memory, as 100% of mice that had previously cleared HPV + tumours efficiently suppressed tumour growth when re-challenged with HPV + tumour cells and remained cancer-free ( Extended Data Fig. 9f ). PI3Kγ and PD-1 inhibitors each stimulated immune response gene expression and inhibited immune suppressive gene expression, MHCII expression in TAMs and CD8 + T cell recruitment to tumours; and the combination of therapies further elevated these parameters ( Fig. 4g–i and Extended Data Fig. 9g ). These studies showed that PI3Kγ inhibition can synergize with T-cell-targeted therapy to promote anti-tumour immune responses that induce sustained tumour regression in mouse models of cancer. Figure 4: PI3Kγ inhibition synergizes with anti-PD-1. a , d , Tumour volumes in anti-PD-1 (black arrows) treated wild-type or Pik3cg −/− mice with HNSCC HPV + tumours ( a , n = 13 biological replicates) and PI3Kγ-inhibitor (TG100-115)-treated mice with HPV − HNSCC tumours ( d , n = 13 biological replicates). **** P = 0.0001; ** P = 0.01; one-sided ANOVA with Tukey’s post-hoc test. b , e , Per cent survival of mice in a and d . c , f , Mean change in tumour volumes from individual mice from a and d . 
g , h , Median-centred heat map of whole tumour mRNA expression ( g ) and mRNA expression of select genes ( h ) from whole tumours of mice from d . n = 3 biological replicates; * P = 0.01; ** P = 0.001; *** P = 0.0001; one-sided ANOVA with Tukey’s post-hoc test. i , Representative immune cell flow-cytometry profiles from mice from d . j , Heat map of PI3Kγ-regulated mRNA expression in HPV + HNSCC patients ( n = 45 patients, P < 0.05; log-rank test). k , l , Multivariate PI3Kγ-regulated immune signature in HPV + HNSCC patients ( k , n = 97; P < 0.001; log-rank test) and lung adenocarcinoma patients ( l , n = 507; P < 0.001; log-rank test). All experiments were performed two or more times. a , d , h , Data are shown as mean ± s.e.m. PI3Kγ-regulated immune responses might also affect outcome in cancer patients. We identified 43 PI3Kγ-regulated genes that are significantly ( P < 0.05) associated with survival in the Cancer Genome Atlas HPV + HNSCC patients ( Fig. 4j ). HPV + HNSCC patients ( n = 34) with a low PI3Kγ-activity profile showed 100% survival at 3 years, compared to 56% survival for the remaining 63 patients ( Fig. 4k ). In HPV − HNSCC patients, 39 of these genes were significantly ( P < 0.05) shifted in the direction of high PI3Kγ activity, consistent with a pattern of pervasive immune suppression and reduced survival in HPV − disease ( Extended Data Fig. 10a ). In lung adenocarcinoma patients, 18 genes predicted survival; 202 patients with a low PI3Kγ-activity profile had 73% survival at 3 years, compared to 55% survival for 305 patients with a high PI3Kγ-activity profile ( Fig. 4l ). These results suggest that a PI3Kγ-regulated immune suppression signature is associated with survival in cancer patients and that PI3Kγ inhibitors might provide clinical benefits for cancer patients. Here we have shown that PI3Kγ regulates innate immunity during inflammation and cancer ( Extended Data Fig. 10b, c ).
Prior studies have implicated PI3Ks in the regulation of pro-inflammatory immune responses in macrophages, as pan-PI3K inhibitors and null mutations in the PI3Kγ effectors PDK1, Akt1 and TSC enhanced pro-inflammatory NFκB-dependent transcription in macrophages 23 , 24 , 25 , while inhibition of PTEN and SHIP, which oppose PI3K function, promotes immune suppression 26 , 27 . As macrophage reprogramming can enhance the activity of checkpoint inhibitors in cancer 5 , 14 , 22 , 28 , our studies indicate that inhibitory targeting of macrophage signalling pathways may provide novel approaches to improve the long-term survival of cancer patients. Methods Immune-related gene expression signature analysis in the Cancer Genome Atlas (TCGA) data We analysed TCGA data for association between mRNA expression level of 16 candidate immune-related genes ( ARG1 , IL10 , FOXP3 , CD68 , IL12A , IL12B , IFNG , CD8A , CD4 , ITGAM (also known as CD11B ), CD14 , TNF , IL1A , IL1B , IL6 and CCL5 ) and 5-year overall survival. Illumina HiSeq RNaseqV2 mRNA expression and clinical data for 520 head and neck squamous cell carcinoma samples were downloaded from the TCGA data portal. Median follow-up from diagnosis was 1.8 years with a range of 0.01 years to 17.6 years. Follow-up time was truncated at 5 years for analysis and 200 deaths occurred in this period. For each of the 16 candidate immune response genes, we scored subjects as above (high) or below (low) the median expression and compared survival using a log-rank test at 5% significance. HPV + patients were stratified into a favourable immune profile if they had expression above the median for the significant genes IL12A , IL12B , IFNG , CD8A and below the median for IL6 . Kaplan–Meier curves were plotted for these two groups. Similar methods were used to examine association of these 16 genes with 720 lung adenocarcinoma and 876 gastric carcinoma samples using the publicly available data from KM Plotter 29 .
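The survival comparisons in the Methods above rest on Kaplan–Meier curves. As a minimal stdlib sketch (not the authors' actual analysis pipeline), the product-limit estimator over (time, event) pairs can be written as follows; the follow-up times below are hypothetical, not TCGA data:

```python
from typing import List, Tuple

def kaplan_meier(times: List[float], events: List[int]) -> List[Tuple[float, float]]:
    """Kaplan-Meier product-limit estimate of survival S(t).
    events: 1 = death observed at that time, 0 = censored.
    Returns (time, survival) steps at each distinct death time.
    Censored subjects at time t are kept in the at-risk set for deaths at t,
    per the usual convention."""
    at_risk = len(times)
    surv = 1.0
    steps = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            surv *= 1.0 - deaths / at_risk  # conditional survival past t
            steps.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # drop deaths and censored
    return steps

# Hypothetical follow-up times in years: deaths at 1, 2, 3; censoring at 2, 4
steps = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

Splitting a cohort at the median expression of a gene and comparing the two resulting curves with a log-rank test reproduces the per-gene screening step described above.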
In lung adenocarcinomas, 12 genes were significantly associated with survival; patients were scored as having a favourable immune profile if 7 or more of the 12 significant genes had expression in the favourable direction. In 876 gastric cancer samples, 8 genes were significantly associated with survival. Patients were scored as having a favourable immune profile if 5 out of the 8 genes had expression in the favourable direction. PI3Kγ-regulated gene expression signature analysis in TCGA data We investigated 66 immune-related genes in four functional classes, 17 genes related to antigen presentation (HLA class I and II molecules), 24 genes surveying T cell activation, 20 innate immune response genes ( IL6 , CCL7 and others) and 5 genes related to cancer cell signalling. We tested these genes, which changed expression in response to PI3Kγ inhibition, for association with survival in HPV + and HPV − TCGA HNSCC and lung adenocarcinoma cohorts. Within each cancer type, we scored subjects as above or below the median expression for each gene and compared survival using a log-rank test, using 10% false discovery rate (FDR) within each class as the significance threshold. HPV + and HPV − HNSCC survival were investigated separately, as HPV − HNSCC generally has a worse prognosis. Within each cohort, patients were classified as having a favourable PI3Kγ immune response profile if they had expression levels above or below the median in the direction of low PI3Kγ activity for the genes identified as significant. We compared the survival experience of favourable versus less-favourable profiles of patients using Kaplan–Meier curves. Out of the 66 experimentally identified PI3Kγ-regulated genes, 43 showed significant association with overall survival in the HPV + cohort (FDR < 10% within each functional class).
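The median-split patient scoring described above (above-median expression for favourable genes, below-median for unfavourable ones) reduces to a simple rule. A minimal sketch using the HPV+ HNSCC gene list; the function name and expression values are illustrative, not the authors' code or data:

```python
from statistics import median

def favourable_profile(cohort, up_genes, down_genes):
    """Median-split scoring: a subject is 'favourable' if expression is above
    the cohort median for every gene in up_genes and below it for every gene
    in down_genes. cohort maps subject id -> {gene: expression level}."""
    # Per-gene cohort medians
    med = {g: median(s[g] for s in cohort.values())
           for g in up_genes + down_genes}
    return {
        subj: (all(expr[g] > med[g] for g in up_genes) and
               all(expr[g] < med[g] for g in down_genes))
        for subj, expr in cohort.items()
    }

# Toy cohort with hypothetical expression values (not real TCGA data),
# applying the HPV+ rule: above-median IL12A/IL12B/IFNG/CD8A, below-median IL6
cohort = {
    "P1": {"IL12A": 9.0, "IL12B": 8.5, "IFNG": 7.2, "CD8A": 10.1, "IL6": 2.0},
    "P2": {"IL12A": 3.0, "IL12B": 2.5, "IFNG": 1.2, "CD8A": 4.1, "IL6": 8.0},
    "P3": {"IL12A": 6.0, "IL12B": 5.5, "IFNG": 4.2, "CD8A": 7.1, "IL6": 5.0},
}
flags = favourable_profile(cohort, ["IL12A", "IL12B", "IFNG", "CD8A"], ["IL6"])
```

The lung and gastric variants above relax the all-genes requirement to a k-of-n vote (7 of 12 and 5 of 8 genes in the favourable direction, respectively), which would replace the `all(...)` tests with a count threshold.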
Comparison of these genes between HPV + and HPV − cohorts showed that HPV − samples generally had significantly ( P < 0.05) lower expression of 42 genes in the antigen presentation and T cell activation classes, consistent with a pattern of adaptive immune suppression, and higher expression of genes in the innate immune response and cancer cell signalling classes, which were negatively associated with survival. Only MALT1 was not differentially expressed between the two groups ( P = 0.7). Mice Pik3cg −/− and Pik3cg −/− ,PyMT mice were generated as previously described 13 . Cd8 −/− and Cd4 −/− mice with a C57Bl/6J background were purchased from the Jackson Laboratory and crossed with syngeneic Pik3cg −/− mice. All animal experiments were performed with approval from the Institutional Animal Care and Use Committee of the University of California. Animals were euthanized before the IACUC maximum allowable tumour burden of 2 cm 3 per mouse was exceeded. Tumour studies Wild-type or Pik3cg −/− 6–8 week-old female or male syngeneic C57Bl/6J (LLC lung, PyMT breast and MEER HPV + HNSCC) or C3He/J (SCCVII HPV − HNSCC) mice were implanted with 10 6 tumour cells by subcutaneous injection (LLC, MEER, SCCVII) or by orthotopic injection (PyMT) ( n = 10–15) and tumour growth was monitored for up to 30 days. Tumour dimensions were measured once tumours were palpable. Tumour volumes were calculated using the equation ( l 2 × w )/2. In some studies, wild-type and Pik3cg −/− mice with LLC tumours were treated with gemcitabine (150 mg kg −1 ) or saline by intraperitoneal (i.p.) injection on day 7 and day 14 ( n = 10). LLC were acquired from ATCC, PyMT were from L. Ellies (University of California), HPV + MEER were from J. Lee (Cancer Biology Research Center, Sanford Research/USD) and SCCVII squamous carcinoma cells were from S. Schoenberger (La Jolla Institute for Allergy and Immunology).
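The caliper formula quoted above, volume = (l² × w)/2, is a standard ellipsoid approximation for subcutaneous tumours. A one-line sketch; the helper name and example measurements are illustrative, and the assignment of l (length) and w (width) follows the usual long-axis/short-axis convention rather than anything stated in the text:

```python
def tumour_volume(l_mm: float, w_mm: float) -> float:
    """Caliper-based tumour volume estimate in mm^3,
    using the (l^2 x w)/2 formula quoted in the Tumour studies above."""
    return (l_mm ** 2) * w_mm / 2.0

# e.g. a tumour measuring 10 mm by 5 mm
v = tumour_volume(10.0, 5.0)  # 250.0 mm^3
```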
All cell lines were tested for mycoplasma and mouse pathogens and checked for authenticity against the International Cell Line Authentication Committee (ICLAC; ) list. In some studies, mice bearing LLC, PyMT, HPV + MEER or HPV − HNSCC tumour cells were treated once daily by oral gavage with vehicle (5% NMP and 95% PEG 400), 15 mg kg −1 per day of the PI3Kγ inhibitor IPI-549 or by i.p. injection with 2.5 mg kg −1 twice per day of TG100-115 (ref. 13 ) beginning on day 8 post-tumour injection and continuing daily until euthanasia. IPI-549 is an orally bioavailable PI3Kγ inhibitor with a long plasma half-life and a K D value of 0.29 nM for PI3Kγ with >58-fold weaker binding affinity for the other class I PI3K isoforms 17 . Enzymatic and cellular assays confirmed the selectivity of IPI-549 for PI3Kγ (>200-fold in enzymatic assays and >140-fold in cellular assays over other class I PI3K isoforms 17 ). To study the effect of IPI-549 on lung tumour growth, LLC tumour cells were passaged three times in C57BL/6 albino male mice. When tumour volume reached 1,500 mm 3 , tumours were collected and single-cell suspensions were prepared. This tumour cell suspension was implanted subcutaneously in the hind flank of C57BL/6 albino male mice at 10 6 cells per mouse. Prior to initiating treatment with once daily IPI-549 (15 mg kg −1 orally), groups were normalized on the basis of tumour volume. In some studies, wild-type- and Pik3cg −/− -tumour-bearing mice were treated with 100 μg of anti-CD8 (clone YTS 169.4) or an isotype-control clone (LTF-2) from Bio X Cell administered by i.p. injections on day 7, 10 and 13 of tumour growth. For all tumour experiments, tumour volumes and weights were recorded at death. Anti-PD-1 tumour studies C57Bl/6J (wild-type) or Pik3cg −/− 6–8 week-old male or female mice (MEER HPV + HNSCC) or C3He/J (SCCVII HPV − HNSCC) were implanted with tumour cells by subcutaneous injection (10 6 MEER or 10 5 SCCVII). 
In HPV + MEER studies, wild-type and Pik3cg −/− mice were treated with four doses of 250 μg of anti-PD-1 antibody (clone RMP-14, Bio X Cell) or rat IgG2a isotype control (clone 2A3, Bio X Cell) every 3 days, starting when tumours became palpable on day 11 ( n = 12–14 mice per group). Wild-type mice bearing HPV + tumours were also treated with the PI3Kγ inhibitor TG100-115 (ref. 13 ) twice per day by i.p. injection, beginning on day 11. Tumour regressions were calculated as a percentage of the difference in tumour volume between the date treatment was initiated and the first date of death of the control group. For HPV − SCCVII studies, C3He/J mice were treated with PI3Kγ inhibitor (2.5 mg kg −1 TG100-115 i.p.) beginning on day 6 post-tumour inoculation and with six doses of anti-PD-1 antibody (250 μg clone RMP-14, Bio X Cell) or rat IgG2a isotype control (clone 2A3, Bio X Cell) every 3 days beginning on day 3 ( n = 12 mice per group) or with a combination of the two. Alternatively, mice were treated with 5 mg kg −1 TG100-115 twice per day ± anti-PD-1 (250 μg every 3 days) beginning on day 1 ( Fig. 4 ). Mice that completely cleared HPV + MEER tumours were re-injected with HPV + tumour cells contralateral to the initial tumour injection and tumour growth was monitored. PyMT models of mammary carcinoma The growth and metastasis of spontaneous mammary tumours in female PyMT + ( n = 13) and Pik3cg −/− ,PyMT + ( n = 8) mice was evaluated over the course of 0–15 weeks. Total tumour burden was determined by subtracting the total mammary gland mass in PyMT − mice from the total mammary gland mass in PyMT + mice. Lung metastases were quantified macroscopically and microscopically in H&E tissue sections at week 15. LPS-induced septic shock Septic shock was induced in wild-type and Pik3cg −/− mice via i.p. injection of 25 mg kg −1 LPS (Sigma, B5:005). Survival was monitored every 12 h and liver, bone marrow and serum were collected 24 h after LPS injection. 
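The tumour regression metric described above (a percentage of the difference in tumour volume between treatment initiation and the first death in the control group) is, at its core, a per cent change between two volumes. A minimal sketch of that arithmetic; the function name and the sign convention (negative = shrinkage) are assumptions for illustration:

```python
def regression_percent(v_at_treatment_start: float, v_at_reference: float) -> float:
    """Per cent change in tumour volume between treatment initiation and a
    reference date; negative values indicate regression (shrinkage)."""
    return 100.0 * (v_at_reference - v_at_treatment_start) / v_at_treatment_start

# e.g. a tumour shrinking from 120 mm^3 to 30 mm^3
change = regression_percent(120.0, 30.0)  # -75.0, i.e. a 75% regression
```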
Macrophage depletion studies C57Bl/6J female mice were implanted with 10 6 LLC tumour cells by subcutaneous injection. When the average tumour size was 250 mm 3 , mice were treated by i.p. injection with 1 mg per mouse clodronate or control liposomes ( ) every 4 days for 2 weeks in combination with daily administration of vehicle or IPI-549 (15 mg kg −1 per day orally). In other studies, 6-week-old female BALB/c mice were injected subcutaneously with 2.5 × 10 5 CT26 mouse colon carcinoma cells in 100 μl phosphate buffered saline (PBS) in the right flank. Eight days later, tumour-bearing mice were arranged into four groups ( n = 15) with an average tumour volume of 70 mm 3 . Oral administration of IPI-549 (15 mg kg −1 ) or vehicle (5% NMP and 95% PEG 400) and anti-CSF-1R antibody (50 mg kg −1 i.p. 3× per week, clone AFS98, Bio X Cell) began on day 8 after tumour injection via oral gavage at a 5 ml kg −1 dose volume and continued daily for a total of 18 doses. Tumour-infiltrating myeloid cell analysis Six-week-old female BALB/c mice were injected subcutaneously with 2.5 × 10 5 CT26 mouse colon carcinoma cells in 100 μl PBS in the right flank. On day 8 after tumour injection, tumour-bearing mice were grouped and treated daily with IPI-549 (15 mg kg −1 , orally) or vehicle (5% NMP and 95% PEG 400). In addition, mice were injected i.p. with 50 mg kg −1 anti-CD115 (Bio X Cell clone AFS98) or 50 mg kg −1 rat IgG2a isotype control (Bio X Cell clone 2A3) antibodies as described above for a total of three injections. Two days after the final injection mice were euthanized, tumours were digested in a mixture of 0.5 mg ml −1 collagenase IV and 150 U ml −1 DNase I in RPMI-1640 for 30 min at 37 °C and tumour-infiltrating myeloid cells were analysed by flow cytometry.
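Combining the numbers above (a 15 mg kg −1 dose delivered at a 5 ml kg −1 gavage volume), the required formulation strength and per-animal amounts follow from simple proportions. The helper below is an illustrative sketch, with the body mass supplied by the caller; it is not from the paper.

```python
def gavage_amounts(body_mass_kg, dose_mg_per_kg=15.0, volume_ml_per_kg=5.0):
    """Return (drug per animal in mg, gavage volume in ml, formulation
    concentration in mg/ml) for a weight-based oral dose."""
    dose_mg = dose_mg_per_kg * body_mass_kg
    volume_ml = volume_ml_per_kg * body_mass_kg
    # The formulation concentration is fixed by the two per-kg values
    # and is independent of body mass.
    concentration_mg_per_ml = dose_mg_per_kg / volume_ml_per_kg
    return dose_mg, volume_ml, concentration_mg_per_ml
```

So a 20 g mouse receives 0.3 mg of drug in a 0.1 ml gavage of a 3 mg ml −1 formulation.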
In vivo macrophage adoptive transfer experiments CD11b + Gr1 − cells were isolated from single-cell suspensions of LLC tumours from donor mice by fluorescence-activated cell sorting (FACS) or serial magnetic bead isolation. Additionally, for some experiments, primary bone-marrow-derived macrophages were polarized and collected into a single-cell suspension. Purified cells were admixed 1:1 with LLC tumour cells and 5 × 10 5 total cells were injected subcutaneously into new host mice. Tumour dimensions were measured three times per week beginning on day 7. In antibody blocking studies, CD11b + Gr1 − cells were incubated with 5 μg anti-IL12 (clone RD1-5D9) or isotype (clone LTF-2, Bio X Cell) for 30 min before the addition of tumour cells. Mice were additionally treated intradermally with 5 μg of antibody 3 and 6 days after tumour cell inoculation. In some studies, CD11b + Gr1 − cells were pre-incubated with inhibitors of arginase (nor-NOHA, 50 μM, Cayman Chemical), iNOS (1400W dihydrochloride, 100 μM, Tocris), mTOR (rapamycin, 10 μM, Calbiochem), or IκKβ (ML120B, 30 μM, Tocris) for 30 min before the addition of tumour cells. Inoculated mice were further treated by intradermal injection with inhibitors at 3 and 6 days after inoculation. T cell adoptive transfer Donor C57Bl/6J (WT) or Pik3cg −/− mice were implanted with 10 6 LLC tumour cells by subcutaneous injection. On day 14 after tumour implantation, CD90.2 + , CD4 + or CD8 + cells were harvested by magnetic bead isolation (Miltenyi Biotec). T cells were mixed 1:1 with viable LLC tumour cells. Cell mixtures containing 5 × 10 5 total cells were injected into the flanks of naive wild-type or Pik3cg −/− mice ( n = 8–10 per group). Tumour growth, intratumoral apoptosis and necrosis were investigated over 0–16 days. In other studies, wild-type T cells were incubated at 37 °C and 5% CO 2 for 6 h with 10 or 100 nM IPI-549 (Infinity Pharmaceuticals) or Cal-101 (Selleck Chem).
After 6 h, T cells were washed, admixed 1:1 with LLC tumour cells, and 10 6 total cells were injected subcutaneously into recipient mice. Tumour growth was monitored for 14 days. Isolation of single cells from mouse tumours Tumours were isolated, minced in a Petri dish on ice and then enzymatically dissociated in Hanks balanced salt solution containing 0.5 mg ml −1 collagenase IV (Sigma), 0.1 mg ml −1 hyaluronidase V (Sigma), 0.6 U ml −1 dispase II (Roche) and 0.005 MU ml −1 DNase I (Sigma) at 37 °C for 5–30 min. The duration of enzymatic treatment was optimized for greatest yield of live CD11b + cells per tumour type. Cell suspensions were filtered through a 70-μm cell strainer. Red blood cells were solubilized with red cell lysis buffer (Pharm Lyse, BD Biosciences) and the resulting suspension was filtered through a cell strainer to produce a single-cell suspension. Cells were washed once with PBS before use in flow cytometry analysis or magnetic bead purification. Peritoneal macrophage isolation Thioglycollate-elicited peritoneal macrophages were collected 96 h after i.p. injection of a 3% thioglycollate solution. Cells were collected from the peritoneal cavity in 10 ml of PBS and macrophage enrichment was performed by plating cells in RPMI with 10% FBS and 1% penicillin/streptomycin for 2 h at 37 °C and 5% CO 2 . After 2 h, non-adherent cells were removed with three PBS washes, and cells were analysed via flow cytometry and qPCR analysis. Flow cytometry staining and analysis Single-cell suspensions (10 6 cells in 100 μl total volume) were incubated with aqua live dead fixable stain (Life Technologies), FcR-blocking reagent (BD Biosciences) and fluorescently labelled antibodies and incubated at 4 °C for 1 h. 
Primary antibodies to cell surface markers directed against F4/80 (BM8), CD45 (30-F11), CD11b (M1/70), Gr1 (RB6-8C5), CD3 (145-2C11), CD4 (GK1.5), CD8 (53-6.7), CD273 (B7-DC), CD274 (B7-H1) were from eBioscience; Ly6C (AL-21), Ly6G (1A8), CD11c (HL3), and MHC-II (AF6-120.1) from BD Pharmingen, CCR2 (475301) from R&D Systems and CD206 (MR5D3) from AbD Serotec. For intracellular staining, cells were fixed, permeabilized using transcription factor staining buffer set (eBioscience) and then incubated with fluorescently labelled antibodies to FoxP3 (FJK-16s) from eBioscience. Multicolour FACS analysis was performed on a BD Canto RUO 11 colour analyser. All data analysis was performed using the flow cytometry analysis program FlowJo (TreeStar). Magnetic bead purification of myeloid cells Single-cell preparations from bone marrow or tumours were incubated with FcR-blocking reagent (BD Biosciences) and then with 20 μl magnetic microbeads conjugated to antibodies against CD11b, Gr1, CD90.2, CD4 and CD8 (Miltenyi Biotec MACS Microbeads) per 10 7 cells for 20 min at 4 °C. Cells bound to magnetic beads were then removed from the cell suspension according to the manufacturer’s instructions. Flow cytometric sorting of cells from tumours and bone marrow For cell sorting, single-cell suspensions were stained with aqua live dead fixable stain (Life Technologies) to exclude dead cells and anti-CD11b-APC (M1/70, eBioscience) and anti-Gr1-FITC (RB6-8C5, eBioscience) antibodies. FACS sorting was performed on a FACS Aria 11 colour high speed sorter at the Flow Cytometry Core at the UC San Diego Center for AIDS Research. Live cells were sorted into the following populations: CD11b + Gr1 − , CD11b + Gr1 lo , CD11b + Gr1 hi and CD11b − Gr1 − cells. CD11b-positive cells were defined by increased staining over the isotype control, and Gr1 levels were defined both by comparison to the isotype control and relative staining to other populations.
Mouse macrophage differentiation and culture Bone-marrow-derived cells were aseptically collected from 6–8 week-old female mice by flushing leg bones of euthanized mice with PBS, 0.5% BSA, 2 mM EDTA, incubating in red cell lysis buffer (155 mM NH 4 Cl, 10 mM NaHCO 3 and 0.1 mM EDTA) and centrifuging over Histopaque 1083 to purify the mononuclear cells. Approximately 5 × 10 7 bone-marrow-derived cells were purified by gradient centrifugation from the femurs and tibias of a single mouse. Purified mononuclear cells were cultured in RPMI + 20% serum + 50 ng ml −1 mCSF (PeproTech). Human macrophage differentiation and culture Human leukocytes from apheresis blood products were obtained from the San Diego Blood Bank. Cells were diluted in PBS, 0.5% BSA, 2 mM EDTA, incubated in red cell lysis buffer (155 mM NH 4 Cl, 10 mM NaHCO 3 and 0.1 mM EDTA) and centrifuged over Histopaque 1077 to purify mononuclear cells. Approximately 10 9 bone-marrow-derived cells were purified by gradient centrifugation from one apheresis sample. Purified mononuclear cells were cultured in RPMI + 20% serum + 50 ng ml −1 human mCSF (PeproTech). Non-adherent cells were removed after 2 h by washing and adherent cells were cultured for 6 days to differentiate macrophages fully. Macrophage polarization Bone-marrow-derived macrophages were polarized with IFNγ (20 ng ml −1 , PeproTech) + LPS (100 ng ml −1 , Sigma) or LPS alone for 24 h, or IL4 (20 ng ml −1 , PeproTech) for 24–48 h. For inhibitor studies, PI3Kγ inhibitors (1 μM) (IPI-549, Infinity Pharmaceuticals and TG100-115, Targegen/Sanofi-Aventis), rapamycin (10 μM, Selleck), or ML120B (30 μM) were incubated with macrophages 1 h before the addition of polarizing stimuli. Total RNA was harvested from macrophages using the RNeasy Mini Kit (Qiagen) according to the manufacturer’s instructions.
RNA sequencing Freshly isolated mouse bone marrow cells from nine wild-type and nine Pik3cg −/− mice were pooled into three replicate sets of wild-type or Pik3cg −/− cells and differentiated into macrophages for 6 days in RPMI + 20% FBS + 1% penicillin/streptomycin + 50 ng ml −1 mCSF. Each replicate set of macrophages was then treated with mCSF, IL4 or IFNγ/LPS. Macrophages were removed from dishes, and RNA was collected using the Qiagen AllPrep kit. In addition, RNA was harvested from day 14 (500 mm 3 ) LLC tumours or purified CD11b + Gr1-F480 + TAMs from wild-type (C57BL/6) and Pik3cg −/− mice. RNA was collected using the Qiagen AllPrep kit. RNA libraries were prepared from 1 μg RNA per sample for sequencing using standard Illumina protocols. RNA sequencing was performed by the University of California, San Diego Institute for Genomic Medicine. mRNA profiles were generated by single read deep sequencing, in triplicate, using Illumina HiSeq2000. Sequence analysis Sequence analysis was performed as previously described 16 . Sequence files from Illumina HiSeq that passed quality filters were aligned to the mouse transcriptome (mm9 genome build) using the Bowtie2 aligner 4 . Gene-level count summaries were analysed for statistically significant changes using DESeq. Individual P values were adjusted for multiple testing by calculating Storey’s q values using fdrtooltrimmer. For each gene, the q value is the smallest false discovery rate at which the gene is found significant. We analysed biological processes as defined by the Gene Ontology Consortium. Each gene ontology term defines a set of genes. The entire list of genes, sorted by the q value in ascending order, was subjected to a non-parametric variant of the gene set enrichment analysis (GSEA), in which the parametric Kolmogorov–Smirnov P value was replaced with the exact rank-order P value. We performed a Bonferroni adjustment of gene set P values for the number of gene sets tested.
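The q-value definition quoted above (the smallest false discovery rate at which a gene is called significant) can be illustrated with a Benjamini–Hochberg-style step-up computation. Storey's method additionally estimates the proportion of true null hypotheses; this sketch omits that refinement and is not the authors' pipeline.

```python
def q_values(p_values):
    """Step-up q-values: with p-values sorted ascending and ranked 1..m,
    q_(i) = min over j >= i of m * p_(j) / j, reported in input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    q = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):       # walk from the largest p-value down
        idx = order[rank - 1]
        running_min = min(running_min, m * p_values[idx] / rank)
        q[idx] = running_min
    return q
```

For p-values [0.01, 0.04, 0.03, 0.5] this yields q-values of roughly [0.040, 0.053, 0.053, 0.5]; each q-value is the smallest FDR threshold at which that gene would pass the step-up procedure.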
Heat maps of expression levels were created using in-house hierarchical clustering software that implements Ward clustering. The colours qualitatively correspond to fold changes. Individual quantitative RT–PCR cDNA was prepared using 1 μg RNA with the qScript cDNA Synthesis Kit (Quanta Biosciences). SYBR Green-based qPCR was performed using human and mouse primers to Arg1 , Ifng , Il10 , Il12p40 , Il1b , Il6 , Ccl2 , Vegfa , Gapdh , Nos2 , Tgfb1 , Tnfa and mouse H2-Aa , H2-Ab1 , H2-Eb1 , and H60a (Qiagen QuantiTect Primer Assay). mRNA levels were normalized to Gapdh and reported as relative mRNA expression or fold change. siRNA-mediated knockdown and gene transfection Freshly isolated bone-marrow-derived CD11b + myeloid cells or differentiated macrophages were transfected by electroporation using an Amaxa mouse macrophage nucleofection kit with 100 nM of siRNA or 2 μg Pik3cg CAAX or pcDNA control plasmid. Non-silencing (Ctrl_AllStars_1) siRNA and Cebpb (MmCebpb_4 and MmCebpb_6), and Mtor (Mm_Frap1_1 and Mm_Frap1_2) siRNAs were purchased from Qiagen. After transfection, cells were cultured for 36–48 h in RPMI containing 10% serum and 10 ng ml −1 mCSF (PeproTech) or polarized as described above. ELISA assays Whole tumours, CD11b + Gr1 − cells, CD90.2 + cells, CD4 + cells and CD8 + cells isolated from LLC tumours were lysed in RIPA buffer and total protein concentrations were determined using a BCA protein assay (Pierce). Macrophage supernatants (100 μl) or 500 μg of total protein lysate from tumours were used in ELISAs to detect CCL2, TGFβ, IL1β, TNFα, IL6, IFNγ, IL10, IL12 and granzyme B (Ready-SET-Go! ELISA, eBioscience). Protein expression was normalized to total volume (supernatants) or mg total protein (tumour lysates).
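The text states that mRNA levels were normalized to Gapdh and reported as fold change but does not spell out the arithmetic. The conventional calculation for this kind of qPCR data is the 2^-ddCt (Livak) method, sketched below as an assumption rather than a quote of the authors' pipeline; all names are illustrative.

```python
def fold_change_ddct(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """2^-ddCt: normalize the target gene's Ct to Gapdh within each
    sample, then express the sample of interest relative to the
    reference (for example, untreated control) sample."""
    d_ct_sample = ct_target - ct_gapdh              # delta-Ct, sample
    d_ct_reference = ct_target_ref - ct_gapdh_ref   # delta-Ct, reference
    return 2.0 ** -(d_ct_sample - d_ct_reference)

# A target crossing threshold three cycles earlier (relative to Gapdh)
# than in the reference sample corresponds to an eight-fold induction.
```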
Quantitative colourimetric arginase determination The QuantiChrom arginase assay kit (DARG-200, BioAssay Systems) was used to measure arginase activity in primary mouse bone-marrow-derived macrophages from wild-type and Pik3cg −/− mice according to the manufacturer’s instructions. For all conditions, cells were harvested and lysed in 10 mM Tris (pH 7.4) containing 1 μM pepstatin A, 1 μM leupeptin, and 0.4% (w/v) Triton X-100. Samples were centrifuged at 20,000 g at 4 °C for 10 min. Transcription factor assays To measure NFκB and C/EBPβ activation, TransAM NFκB family and C/EBP transcription factor assay kits (43296 and 44196, Active Motif) were used according to the manufacturer’s protocol. Briefly, wild-type and Pik3cg −/− bone-marrow-derived macrophages were stimulated with LPS (100 ng ml −1 ) or IL4 (20 ng ml −1 ) and nuclear extracts were prepared in lysis buffer AM2 (Active Motif). Nuclear extracts were incubated with the immobilized consensus sequences and RelA, cRel or C/EBPβ were detected using specific primary antibodies. Quantification was performed via colourimetric readout of absorbance at 450 nm. Immunoblotting IL4 and LPS macrophage cultures were solubilized in RIPA buffer containing protease and phosphatase inhibitors. Thirty micrograms of protein was electrophoresed on Bio-Rad precast gradient gels and electroblotted onto PVDF membranes. Proteins were detected by incubation with 1:1,000 dilutions of primary antibodies, washed and incubated with goat anti-rabbit-HRP antibodies and detected after incubation with a chemiluminescent substrate. Primary antibodies directed against Akt (11E7), p-Akt (244F9), IκBα (L35A5), IκKβ (D30C6), p-IκKα/β (16A6), RelA (D14E12), pRelA (93H1), C/EBPβ (#3087), p-C/EBPβ (#3082), IRAK1 (D51G7), TBK1 (D1B4) and PI3Kγ (#4252) were from Cell Signaling Technology and pTBK1 (EPR2867(2)) was from Abcam.
In vitro cytotoxicity assay CD90.2 + tumour-derived T cells were purified from LLC tumour-bearing wild-type and Pik3cg −/− or TG100-115 and control treated mice and then co-incubated with LLC tumour cells (target cells) at 2.5:1, 5:1 and 10:1 ratios of T cells to tumour cells (2 × 10 3 LLC tumour cells per well) for 6 h. Target cell killing was assayed by collecting the supernatants from each well for measurement of the lactate dehydrogenase release (Cytotox96 non-radioactive cytotoxicity assay kit, Promega). Immunohistochemistry Tumour samples were collected and cryopreserved in OCT, sections (5 μm) were fixed in 100% cold acetone, blocked with 8% normal goat serum for 2 h, and incubated with anti-CD8 (53-6.7, 1:50 BD Biosciences) for 2 h at room temperature. Sections were washed three times with PBS and incubated with Alexa594-conjugated secondary antibodies. Slides were counterstained with 4′,6-diamidino-2-phenylindole (DAPI) to identify nuclei. Immunofluorescence images were collected on a Nikon microscope (Eclipse TE2000-U) and analysed using MetaMorph image capture and analysis software (Version 6.3r5, Molecular Devices). The detection of apoptotic cells was performed using a TUNEL assay (ApopTag fluorescein in situ apoptosis detection kit, Promega) according to the manufacturer’s instructions. Slides were washed and mounted in DAKO fluorescent mounting medium. Immunofluorescence images were collected on a Nikon microscope (Eclipse TE2000-U) and analysed with MetaMorph software (version 6.3r5) or SPOT software (version 4.6). Pixels per field or cell number per field were quantified in five 100× fields from ten biological replicates. Statistics Primary tumour samples with mRNA expression data were scored as above or below the median expression level, and tested for association with patient survival using a log-rank test at 5% significance.
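For the LDH-release readout above, the standard correction used with this class of kit subtracts spontaneous release from both effector and target cells and scales by maximal target lysis. The paper does not restate the formula, so the sketch below follows the common convention for LDH cytotoxicity assays; treat it as an assumption, not the authors' exact calculation.

```python
def percent_cytotoxicity(experimental, effector_spontaneous,
                         target_spontaneous, target_maximum):
    """Per cent target-cell killing from LDH absorbance readings:
    specific release over total releasable LDH, corrected for
    spontaneous release by effectors and targets."""
    specific = experimental - effector_spontaneous - target_spontaneous
    total = target_maximum - target_spontaneous
    return 100.0 * specific / total
```

With illustrative absorbances of 0.9 (experimental), 0.1 (effector spontaneous), 0.2 (target spontaneous) and 1.2 (target maximum), this gives 60% cytotoxicity.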
For studies evaluating the effect of drugs on tumour size, tumour dimensions were measured directly before the start of treatment, tumour volumes were computed and mice were randomly assigned to groups so that the mean volume ± s.e.m. of each group was identical. A sample size of ten mice per group provided 80% power to detect a mean difference of 2.25 standard deviation (s.d.) between two groups (based on a two-sample t -test with two-sided 5% significance level). Sample sizes of 15 mice per group provided 80% power to detect one s.d. difference between two groups. Data were normalized to the standard (control). Analysis for significance was performed by one-way ANOVA with a Tukey’s post-hoc test for multiple pairwise testing with more than two groups and by parametric or nonparametric Student’s t -test when only two groups were compared. We used a two-sample t -test (two groups) and ANOVA (multiple groups) when data were normally distributed and a Wilcoxon rank sum test (two groups) when data were not normally distributed. All mouse studies were randomized and blinded; assignment of mice to treatment groups, tumour measurement and tumour analysis was performed by coding mice with randomly assigned mouse number, with the key unknown to operators until experiments were completed. In tumour studies for which tumour size was the outcome, mice removed from the study owing to health concerns were not included in endpoint analyses. All experiments were performed at least twice; n refers to biological replicates. Data availability RNA sequencing data can be accessed using numbers GSE58318 ( in vitro macrophage samples) and GSE84535 ( in vivo tumour and tumour-associated macrophages samples) at . | Researchers at University of California San Diego School of Medicine and Moores Cancer Center have identified a strategy to maximize the effectiveness of anti-cancer immune therapy. 
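The power statements in the Statistics section above (for example, n = 10 per group for 80% power to detect a 2.25 s.d. difference at two-sided 5% significance) can be sanity-checked with a normal-approximation sketch of two-sample test power. An exact noncentral-t calculation would give slightly lower values, so this is a back-of-the-envelope check, not the authors' actual computation.

```python
import math

def approx_power(effect_size_sd, n_per_group, z_crit=1.959964):
    """Normal approximation to the power of a two-sided two-sample test.
    effect_size_sd is the mean difference in units of the common s.d.;
    z_crit is the two-sided 5% standard-normal critical value."""
    se = math.sqrt(2.0 / n_per_group)   # s.e. of the mean difference, in s.d. units
    z = effect_size_sd / se - z_crit
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# n = 10 per group comfortably exceeds 80% power for a 2.25 s.d. difference.
```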
The researchers identified a molecular switch that controls immune suppression, opening the possibility of further improving and refining emerging immunotherapies that boost the body's own abilities to fight diseases ranging from cancer to Alzheimer's and Crohn's disease. The findings are published in the September 19 online issue of Nature. "Immunotherapies, such as T cell checkpoint inhibitors, are showing great promise in early treatments and trials, but they are not universally effective," said Judith A. Varner, PhD, professor in the Departments of Pathology and Medicine at UC San Diego School of Medicine. "We have identified a new method to boost the effectiveness of current immune therapy. Our findings also improve our understanding of key mechanisms that control cancer immune suppression and could lead to the development of more effective immunotherapies." When confronted by pathogens, injury or disease, the initial response of the body's immune system comes in the form of macrophages, a type of white blood cell that express pro-inflammatory proteins called cytokines that, in turn, activate T cells, another immune cell, to attack the health threat. The macrophages then switch gears to express other cytokines that dampen T cell activation, stimulating tissue repair. In chronic inflammatory diseases such as Alzheimer's and Crohn's, however, macrophages associated with the malignancy continue to produce pro-inflammatory cytokines and other substances that kill or transform normal cells. In cancer, highly abundant macrophages express anti-inflammatory cytokines that induce immune suppression, effectively stopping the healing process. In the Nature paper, Varner and colleagues pinpoint a key, suspected player: an enzyme in macrophages called PI-3 kinase gamma (PI3Kγ). In mouse studies, they found that macrophage PI3Kγ signaling promotes immune suppression by inhibiting activation of anti-tumor T cells.
Blocking PI3Kγ activated the immune response and significantly suppressed growth of implanted tumors in animal models. It also boosted sensitivity of some tumors to existing anti-cancer drugs and synergized with existing immune therapy to eradicate tumors. Varner and her colleagues at the Moores Cancer Center also identified a molecular signature of immune suppression and response in mice and cancer patients that may be used to track the effectiveness of immunotherapy. "Recently developed cancer immunotherapeutics, including T cell checkpoint inhibitors and vaccines, have shown encouraging results in stimulating the body's own adaptive immune response," said co-author Ezra Cohen, MD, who heads the cancer immunotherapy program at Moores Cancer Center. "But they are effective only on a subset of patients, probably because they do not alter the profoundly immunosuppressive microenvironment created by tumor-associated macrophages. Our work offers a strategy to maximize patient responses to immune therapy and to eradicate tumors." The Nature paper builds upon other work by Varner and colleagues. In a paper first published online in May in Cancer Discovery, Varner's team reported that blocking PI3Kγ in tumor-associated macrophages stimulated the immune response and inhibited tumor cell invasion, metastasis and fibrotic scarring caused by pancreatic ductal adenocarcinoma (PDAC) in animal models. In humans, PDAC is the most common malignancy of the pancreas. It is aggressive and difficult to treat. Though only the 12th most common type of cancer in the United States, pancreatic cancer is the fourth most common cause of cancer-related death. "PDAC has one of the worst 5-year survival rates of all solid tumors, so new treatment strategies are urgently needed," said Megan M. Kaneda, PhD, an assistant project scientist in Varner's lab and collaborator on all of the papers.
In a December 2015 paper published online in Cancer Discovery, Varner and colleagues described animal studies that revealed how disrupting cross-talk between B cells (another type of immune cell) and tumor-associated macrophages inhibited PDAC growth and improved responsiveness to standard-of-care chemotherapy. Specifically, that research team, which included scientists in San Francisco, Oregon and Switzerland, reported that inhibiting Bruton tyrosine kinase, an enzyme that plays a crucial role in B cell and macrophage functions, restored T cell-dependent anti-tumor immune response. In other words, it reactivated the natural, adaptive immune response in tested mice. | 10.1038/nature19834 |
Medicine | New application of existing drug offers personalized therapy for lung cancer | MEK inhibitors block growth of lung tumours with mutations in ataxia–telangiectasia mutated, Nature Communications, DOI: 10.1038/NCOMMS13701 Journal information: Nature Communications | http://dx.doi.org/10.1038/NCOMMS13701 | https://medicalxpress.com/news/2016-12-application-drug-personalized-therapy-lung.html | Abstract Lung cancer is the leading cause of cancer deaths, and effective treatments are urgently needed. Loss-of-function mutations in the DNA damage response kinase ATM are common in lung adenocarcinoma but directly targeting these with drugs remains challenging. Here we report that ATM loss-of-function is synthetic lethal with drugs inhibiting the central growth factor kinases MEK1/2, including the FDA-approved drug trametinib. Lung cancer cells resistant to MEK inhibition become highly sensitive upon loss of ATM both in vitro and in vivo . Mechanistically, ATM mediates crosstalk between the prosurvival MEK/ERK and AKT/mTOR pathways. ATM loss also enhances the sensitivity of KRAS- or BRAF-mutant lung cancer cells to MEK inhibition. Thus, ATM mutational status in lung cancer is a mechanistic biomarker for MEK inhibitor response, which may improve patient stratification and extend the applicability of these drugs beyond RAS and BRAF mutant tumours. Introduction Lung cancer remains the leading cause of cancer-related deaths 1 . Sequencing studies have identified the major genetic drivers in the various subtypes of lung cancer, including adenocarcinoma, the most frequent lung cancer 2 , 3 , 4 , 5 . However, only a minority of lung tumours harbour mutations in activated kinases such as EGFR, ALK, ROS1 and RET that can be targeted with specific small molecule inhibitors. Some of these (for example, gefitinib, erlotinib, crizotinib) have shown moderate success in the clinic, extending survival by several months on average 6 , 7 , 8 . 
However, none of these drugs provides a curative therapy in advanced disease setting due to emerging secondary resistance. The majority of lung cancer mutations, such as ATM (ataxia–telangiectasia mutated) loss-of-function mutations, which are found in 5–10% of patients, are not actionable with small-molecule inhibitors. Some of these so-called ‘undruggable’ mutations may result in vulnerabilities through synthetic lethal interactions that can be exploited with drugs 9 . The serine/threonine kinase ATM is well known for its involvement in the DNA damage response (DDR) and maintaining genome stability but it has also been implicated in other cancer-relevant processes including cell growth, metabolism and mitochondrial homeostasis 10 . The gene name is derived from the disease ataxia telangiectasia (A–T), a severe genetic disorder caused by homozygous germline mutations in the ATM gene. A–T patients are predisposed to cancer, particularly those of lymphoid origin, and ATM variants have also been associated with cancer predisposition 10 , 11 . More recently, cancer genome sequencing has revealed frequent ATM somatic mutations in a variety of solid tumours, including lung ( ∼ 10%), pancreatic ( ∼ 12%) and bladder ( ∼ 4%) cancers 2 , 4 , 5 , 12 , 13 . Interestingly, mutual exclusivity with p53 mutations in lung cancer suggests that ATM loss-of-function can partially substitute for p53 loss 5 . MEK1/2 are kinases that regulate cell proliferation and survival. MEK inhibitors are currently in clinical development for a variety of cancers, including lung cancer, with mutations specifically in the oncogene RAS or its downstream signalling components (for example, BRAF ), which occur in a large number of cancers. The first MEK inhibitor (trametinib) has recently been approved for treating BRAF mutant melanomas but in lung cancer results have not been as encouraging 14 , 15 . 
However, this does not rule out that MEK inhibitors (alone or in combination with other agents) are effective in a select lung cancer patient cohort. Thus, the identification of mutations that predict sensitivity to MEK inhibition, particularly a common one like ATM , remains of great interest and could have immediate clinical applications. MEK1/2 kinases have not been directly linked with DNA repair. Thus, our experiments reveal an unexpected link between growth factor signalling pathways and ATM loss, providing a strong rationale for testing MEK inhibitors in the context of ATM mutant tumours. Results A pharmacogenetic screen links ATM with MEK To search for synthetic lethal interactions between lung adenocarcinoma tumour suppressor genes and anti-cancer drugs, we employed an in vitro screening strategy using isogenic cell lines 16 , 17 . These isogenic cell lines only differ in the expression of one specific gene, thereby simplifying the interpretation of the screening hits. We employed the AALE lung bronchial epithelial cell line as a relatively benign lung cell type that has been immortalized with SV40 large T-antigen and hTERT, and can become tumourigenic upon KRAS or HRAS expression 18 . We compiled a list of 10 tumour suppressor genes (that is, APC , ATM , CDKN2A , ERBB4 , NF1 , PRKDC , PTEN , SMAD4 , SMARCA4 and STK11 ) that recur in human lung cancer based on publicly available tumour sequencing data and the Catalogue Of Somatic Mutations In Cancer (COSMIC) database 2 , 4 , 5 , 17 . As p53 and RB1 are inactive in AALE cells due to the expression of large T-antigen, these tumour suppressor genes were omitted. HRAS expressing cells were infected with validated shRNA vectors targeting each tumour suppressor gene and screened in a pooled format 16 against 106 diverse FDA-approved and experimental anti-cancer drugs ( Fig. 1a , Supplementary Fig. 1 , Supplementary Tables 1 and 2 and Supplementary Dataset 1 ). 
Figure 1: Chemical genetic screen reveals MEK inhibitor sensitivity of ATM-depleted cells. ( a ) Gene–drug interaction screen in AALE cells (see Methods). Synthetic lethal drug interactions are sorted based on Z -score for each indicated tumour suppressor. The interactions between ATM and AZD7762 (green circle) and PD0325901 (red circle) are indicated with black arrows. ( b ) Dose–response curve with the Chk1/2 inhibitor AZD7762. AALE cells were infected as indicated and treated with the drug for 3 days. Displayed is the relative viability that is calculated by normalizing the raw CellTiter-Glo data to the vehicle (DMSO) treated controls. Error bars indicate s.d. ( n =3). ( c ) Colony formation of AALE cells infected with ATM shRNA or control virus and treated with PD0325901 for 10 days. Shown is a representative example ( n =3). Numbers in the bottom right corners indicate quantification relative to DMSO-treated samples. ( d ) Growth curves of indicated AALE cell lines treated with PD0325901 (1 μM). Cells were counted and passaged every 3 days and seeded at equal densities. ** P <0.01, two-sided t -test ( n =2 biological replicates). ( e ) Western blot analysis of AALE cells infected with ATM knockdown vectors. ( f ) Cell viability of AALE cells infected with indicated vectors and treated with PD0325901 for 3 days. Data are normalized to vehicle (DMSO). Indicated are s.d.’s. **** P <0.0001, two-sided t -test ( n =3). ( g ) Relative cell viability of AALE cells stably infected with ATM shRNA or control viruses and treated with trametinib for 3 days. Data are normalized to vehicle (DMSO). Error bars indicate s.d.’s ( n =3). Full size image The top synthetic lethal interaction hit identified in the screen involved knockdown of ATM, a DNA damage signalling kinase, and the Chk1 kinase inhibitor AZD7762 (ref. 19 ; Fig. 1a,b ). 
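The figure legend's processing (CellTiter-Glo signal normalized to vehicle-treated controls, gene-drug interactions ranked by Z-score) can be sketched generically. The paper's exact normalization details are not given in this excerpt, so the code below shows only the standard form; function names are illustrative.

```python
import math

def relative_viability(raw_signal, dmso_signal):
    """Viability of a drug-treated well relative to the vehicle (DMSO)
    control, as in the CellTiter-Glo normalization described above."""
    return raw_signal / dmso_signal

def z_scores(effects):
    """Standard score of each gene-drug effect against the distribution
    of effects across all drugs screened for that knockdown."""
    mean = sum(effects) / len(effects)
    sd = math.sqrt(sum((e - mean) ** 2 for e in effects) / (len(effects) - 1))
    return [(e - mean) / sd for e in effects]
```

Strongly negative Z-scores then flag candidate synthetic lethal gene-drug pairs, such as the ATM knockdown interactions with AZD7762 and PD0325901 highlighted in the figure.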
This interaction is consistent with the known role of Chk1 as a critical downstream effector of the related DNA damage signalling kinase ATR, and suggests that ATM-deficient cells are more dependent on the ATR signalling axis. The second strongest drug sensitivity hit in ATM knockdown cells was unexpectedly with a MEK1/2 inhibitor (PD0325901) 20 ( Fig. 1a ). Given the potential clinical value of such an interaction, we went on to investigate this further. To validate the hit, we first repeated the infection of AALE cells with the individual ATM shRNA in the absence of HRAS. The ATM shRNA alone did not inhibit proliferation but upon treatment with MEK inhibitor we observed almost complete inhibition of colony outgrowth, consistent with the screening result ( Fig. 1c ). Similar results were obtained in growth curve assays ( Fig. 1d ). Two functionally validated ( Fig. 1e , Supplementary Fig. 2 ) independent hairpin sequences had the same effect, each showing a highly significant reduction in cell viability upon treatment with 1 μM PD0325901 compared with DMSO treatment (two-sided t -test, P <0.0001, Fig. 1f ). To ensure the synthetic lethal interaction with ATM knockdown was indeed due to inhibition of MEK, we used an alternative, FDA-approved and highly specific MEK inhibitor trametinib. Indeed, ATM knockdown cells were also markedly more sensitive to trametinib across a range of concentrations, indicating an on-target effect of the drugs ( Figs 1g and 2a ). Furthermore, overexpression of an inhibitor-resistant MEK1 mutant (MEK1-L115P) 21 in the ATM knockdown cells completely rescued the effect of trametinib ( Fig. 2b ), independently verifying an on target effect of the drug. Figure 2: ATM loss-of-function in lung cancer cell lines triggers MEK inhibitor sensitivity. ( a ) Chemical structures of MEK inhibitors used in this study. 
( b ) Dose–response experiment of ATM knockdown (shRNA1) AALE cells infected with the indicated MEK1 expression vectors and treated with trametinib for 3 days. Data are normalized to vehicle-treated cells. Error bars indicate s.d.’s ( n =3). ( c ) Dose–response experiment of ATM knockdown NCI-H322 cells treated with trametinib for 5 days. Data are normalized to vehicle-treated cells. Error bars indicate s.d.’s ( n =3). ( d ) Effective concentration resulting in 50% growth inhibitory effect (EC50) is depicted for indicated cell line and compound combinations. The EC50 for the unresponsive control NCI-H460 cells was set to 20 μM. ( e ) Western blot analysis of CRISPR/Cas9 edited NCI-H322 clones. ( f ) Dose–response experiment of NCI-H322 cells in which both ATM alleles have been inactivated (four KO clones) or unedited control (four WT clones) treated with PD0325901, TAK-733, trametinib or pimasertib for 5 days. Data are normalized to vehicle-treated cells. Error bars indicate s.d.’s ( n =3). ATM inactivation sensitizes lung cancer cell lines We next extended our analysis to four additional lung cancer cell lines ( Fig. 2c,d ). H460 cells carry a KRAS mutation but were unexpectedly highly resistant (IC50>10 μM) to trametinib and to TAK-733, another MEK inhibitor ( Fig. 2a ). However, upon ATM knockdown, H460 cells became highly sensitive (IC50<100 nM). A similar sensitization was observed for the intermediately sensitive cell lines H322 ( KRAS and BRAF wild type) and H1755 ( BRAF mutant). This suggests that ATM knockdown can affect the response to MEK inhibition in both the presence and absence of KRAS or BRAF mutations, both of which are common in lung cancer. Importantly, sensitization by ATM knockdown was not observed in the naturally ATM mutant H1666 cell line, supporting an on-target effect of the shRNA sequences used ( Fig. 2d ). Furthermore, ATM shRNAs did not sensitize cells to a control, unrelated inhibitor of c-MET/ALK, crizotinib ( Fig. 
2d ). Thus, ATM knockdown sensitized several lung cancer cell lines with different genetic backgrounds to MEK inhibition, and even augmented the response of already sensitive BRAF mutant H1755 cells by an order of magnitude. As complementary DNA add-back experiments are complicated by the large size of ATM (350 kDa) and poor transfectability of the lung cancer cell lines, we next employed RNA-guided nucleases (RGNs) 22 as an alternative approach to inactivate ATM and confirm the synthetic lethal interaction. To avoid a bias due to KRAS or BRAF mutations, we selected H322 cells for this experiment. A CRISPR single guide RNA (sgRNA) targeting exon 6 of ATM was employed, and the genomic region flanking the Cas9 cleavage site in single-cell clones was analysed by Sanger sequencing ( Supplementary Table 3 ). ATM frameshift mutants were obtained at the expected ratios, indicating that ATM was not required for viability. Furthermore, sequencing of potential exonic sgRNA off-target sites did not reveal any undesired gene editing ( Supplementary Table 4 ). Complete absence of ATM protein expression was confirmed by western blot, and non-edited wild-type clones obtained during the procedure were used as controls ( Fig. 2e ). As expected, these ATM-deficient NCI-H322 cells displayed a strongly reduced response to DNA damage as measured by phosphorylation of KAP1 and H2AX ( Supplementary Fig. 3 ). In agreement with the knockdown experiments, ATM knockout NCI-H322 cells were markedly more sensitive to MEK inhibition using four different inhibitors as compared with isogenic controls ( Fig. 2f , Supplementary Fig. 4 ). Together, these experiments demonstrate that ATM inactivation by shRNA knockdown or CRISPR/Cas9 knockout strongly sensitizes lung cancer cells to MEK inhibition even in the absence of KRAS or BRAF mutations. 
ATM mutations associate with sensitivity to MEK inhibition To investigate whether the ATM-MEK gene–drug association also occurs in lung cancer cells harbouring ATM mutations, we assembled a set of nine ATM mutant and seven wild-type control lung cancer cell lines. To control for confounding mutations, we included equal numbers of KRAS/BRAF mutant cell lines in the ATM mutant and wild-type groups, and the status of the ATM mutant cell lines was confirmed by Sanger sequencing. As the ATM mutations have not been functionally validated, we employed a polymorphism phenotyping algorithm (PolyPhen v2) to predict which mutations were most likely to affect protein function, for example, by causing a premature stop codon or a missense mutation affecting an evolutionarily conserved amino acid, and further stratify the cell lines 23 . In addition, we investigated the DDR by measuring the phosphorylation of the canonical ATM substrates SMC1 and KAP1 upon exposure to ionizing radiation ( Supplementary Fig. 5 ). As expected, all ATM wild-type cells responded to ionizing radiation. In contrast, three out of the seven ATM mutant cell lines did not display an appreciable induction of SMC1 or KAP1 phosphorylation. This included the heterozygous ATM mutant cell line NCI-H1666. This experiment confirms that some of the cell lines with ATM mutations display aberrant DDR. However, it does not rule out that other aspects of ATM function are impaired in the remaining cell lines that harbour ATM mutations 10 . Sensitivity to trametinib and TAK-733 was determined in dose–response experiments and the area under curve (AUC) was calculated to compare the cell lines ( Fig. 3a,b , Supplementary Fig. 6 ). There was a strong correlation (Spearman’s correlation, rho=0.94, P <0.0001) between the potency of the two MEK inhibitors across the cell lines, indicating a shared mechanism of action ( Fig. 3c ). Out of the 16 cell lines, the top four most sensitive to both MEK inhibitors all carried ATM mutations. 
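The AUC comparison between the two inhibitors (Spearman's rho) can be reproduced with a small rank-correlation helper; the eight AUC pairs below are invented, not the panel's actual values:

```python
# Minimal sketch of the cross-inhibitor comparison: Spearman's rank
# correlation between AUC values for the two MEK inhibitors across a
# cell-line panel. The AUC values are invented for illustration; the text
# reports rho = 0.94 for the actual 16-line panel.
def rank(xs):
    # 1-based ranks, with average ranks assigned to ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    # Pearson correlation computed on the ranks.
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

trametinib_auc = [0.21, 0.35, 0.42, 0.55, 0.61, 0.70, 0.78, 0.90]
tak733_auc     = [0.25, 0.45, 0.30, 0.52, 0.65, 0.68, 0.80, 0.88]
rho = spearman(trametinib_auc, tak733_auc)
```

On this invented eight-line panel the calculation gives rho ≈ 0.98; a value near 1 indicates that the two inhibitors rank the cell lines almost identically.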
Furthermore, this enrichment of ATM mutations among cell lines with marked sensitivity was statistically significant even when only the biochemically validated cell lines were considered (two-sided t -test, P <0.01, Supplementary Fig. 7A ). In this panel, KRAS or BRAF mutational status did not significantly predict trametinib sensitivity. Yet, three out of the four most sensitive cell lines harboured KRAS or BRAF and ATM mutations, suggesting that cells with combined ATM and BRAF/KRAS mutations are most likely to respond to MEK inhibitors. This is in agreement with the observation that MEK-sensitive, BRAF mutant H1755 cells were further sensitized by ATM knockdown ( Fig. 2d ). Analysis of the COSMIC (Catalogue of Somatic Mutations in Cancer) database indicates that ATM mutations and BRAF or KRAS (but not TP53 and EGFR ) mutations co-occur at a rate that is consistent with a lack of genetic interaction ( Supplementary Fig. 7B ). Figure 3: Cancer-associated ATM mutations predict MEK inhibitor sensitivity. ( a ) Representative dose–response curves for sensitive and resistant lung cancer cell lines treated with trametinib for 5 days and normalized to vehicle control. Error bars indicate s.d.’s ( n =3). KRAS/BRAF genotypes for indicated cell lines: NCI-H460: KRAS-Q61H; NCI-H322: None; NCI-H23: KRAS-G12C; NCI-H1666: BRAF-G466V; NCI-H157: KRAS-G12R. ( b ) Sensitivity of indicated 16 cell lines to trametinib. Shown is the area under curve (AUC) derived from dose–response experiments as in a . When applicable, the heterozygous (het) or homozygous (hom) mutational status of ATM is indicated above the bars and mutational status for selected genes is indicated below. Error bars indicate s.d.’s ( n =3). ( c ) Area under curve values derived from dose–response experiments with TAK-733 and trametinib for 16 lung cancer cell lines. ( d ) Sensitivity of lung cancer cell lines in the Cancer Cell Line Encyclopedia (CCLE) to the MEK inhibitor AZD6244 (selumetinib). 
A high activity area score (area above the curve 24 , AAS) indicates drug sensitivity. Each circle indicates a single cell line, and cell lines are grouped according to genotype (WT=wild type for K-Ras, H-Ras , N-Ras , BRAF , c-RAF and ATM ; ATM = ATM mutant; RAS = K-Ras, H-Ras or N-Ras mutant). ATM mutations are labelled according to PolyPhen predictions (damaging >0.9, neutral <0.9). Black bar indicates mean AAS. ** P <0.01, **** P <0.0001, NS=not significant, two-sided t -test compared with WT group. ( e ) Analysis as in d for ARID1A mutant or wild-type cell lines for sensitivity to indicated MEK inhibitors. NS=not significant, two-sided t -test. ( f ) Analysis of ATM or RAS/BRAF mutant cell lines for response to drugs ( n =20) in CCLE data set. Indicated is the P value for each drug. To corroborate these results, we analysed data from the Cancer Cell Line Encyclopedia (CCLE) 24 . This public resource comprises a large panel of molecularly characterized cell lines, including 93 that are derived from lung cancer patients. The CCLE also contains information on the sensitivity of these cell lines to MEK inhibitors. The high frequency of ATM mutation in this collection (12 out of 93 cell lines, Supplementary Dataset 2 ) was similar to that previously reported in patients 2 , 4 , 5 . In agreement with our previous results, the mean response to the MEK inhibitors AZD6244 (selumetinib) and PD0325901 was significantly stronger in the ATM mutant cell lines ( Fig. 3d , Supplementary Fig. 8 ). Remarkably, filtering out cell lines harbouring ‘neutral’ mutations (those predicted by PolyPhen not to affect protein function) preferentially removed MEK inhibitor-resistant cell lines and improved statistical significance 100-fold ( Fig. 3d , Supplementary Fig. 8A ). Similar results were obtained using a second independent data set from the Genomics of Drug Sensitivity in Cancer – COSMIC project 25 ( Supplementary Fig. 8B ). 
This further strengthens the notion that it is specifically mutant ATM that contributes to MEK inhibitor sensitivity, as filtering out random samples would have reduced statistical significance. Drug sensitivity was specific for ATM, as no relationship between MEK inhibitors and an unrelated lung cancer tumour suppressor gene (that is, ARID1A ) was observed ( Fig. 3e ). Furthermore, MEK inhibitors were the only two of the 20 drugs tested that were significantly (two-sided t -test, P <0.01) associated with ATM mutations ( Fig. 3f ), ruling out a general drug hypersensitivity phenotype due to ATM loss-of-function. To further assess the performance of ATM as a predictive biomarker in the CCLE data set, we calculated sensitivity (the fraction of responsive lines that is identified), false positive rates (the fraction that is predicted to be responsive but is not) and true positive rates (the fraction of lines predicted to be responsive that indeed respond) and compared these with KRAS / BRAF ( Supplementary Table 5 ). Encouragingly, ATM outperformed KRAS / BRAF in terms of true positive and false positive rate. Four of the seven (57%) cell lines predicted to be sensitive based on ATM mutation indeed responded to MEK inhibition. Importantly, a genetically stratified clinical trial with a 57% response rate would be considered successful (compared with the 23% that would have been observed using KRAS / BRAF ). Adding ATM status to KRAS / BRAF further increased sensitivity and true positive rate, while reducing false positive rate. Together, our results reveal that, across a heterogeneous panel of lung cancer cell lines, ATM mutations are associated with high sensitivity to MEK inhibition. To experimentally demonstrate that patient-derived mutations specifically in ATM are directly involved in MEK inhibitor sensitivity, we sought to restore resistance in ATM mutant cells. 
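The biomarker bookkeeping described above can be sketched as follows, using the definitions from the text; the per-cell-line calls are invented and merely arranged to reproduce the 4-of-7 (57%) figure:

```python
# Hedged sketch of biomarker performance metrics for a cell-line panel.
# Definitions follow the text: sensitivity = fraction of responsive lines
# identified; false positive rate = fraction of non-responders predicted to
# respond; predictive value = fraction of predicted-sensitive lines that
# indeed respond. The calls below are invented toy data.
def biomarker_metrics(calls):
    # calls: list of (predicted_sensitive, responded) booleans per cell line
    tp = sum(p and r for p, r in calls)
    fp = sum(p and not r for p, r in calls)
    fn = sum(not p and r for p, r in calls)
    tn = sum(not p and not r for p, r in calls)
    return {
        "sensitivity": tp / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
        "predictive_value": tp / (tp + fp),
    }

# 7 lines predicted sensitive (4 respond), 9 predicted resistant (2 respond).
calls = [(True, True)] * 4 + [(True, False)] * 3 \
      + [(False, True)] * 2 + [(False, False)] * 7
m = biomarker_metrics(calls)
```

On this toy panel the marker identifies 4 of 6 responders (sensitivity 67%), flags 3 of 10 non-responders as sensitive (false positive rate 30%), and 4 of 7 predicted-sensitive lines respond (57%), matching the arithmetic in the text.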
NCI-H23 cells have a homozygous ATM point mutation (c.5756A>C) resulting in an amino-acid substitution of glutamine with proline: ATM(Q1919P). Accordingly, these cells are highly sensitive to MEK inhibitors ( Fig. 3a,b ). We reasoned that if loss of ATM is causally implicated in MEK inhibitor sensitivity, restoration of the endogenous ATM locus would confer resistance to these compounds. To test this, we designed an sgRNA targeting ATM in close proximity to the c.5756A>C mutation. To enable restoration of the wild-type allele, we delivered this sgRNA into NCI-H23 cells together with Cas9 and a DNA oligo encompassing the corrected wild-type sequence as a template for homology-directed DNA repair ( Fig. 4a , Supplementary Table 1 ). Transfected cells were treated with MEK inhibitor so that any colonies that escaped drug treatment (for example, by restoring wild-type ATM) could be detected and analysed by sequencing. We observed restoration of mutant ATM to the wild-type allele in over 50% of the sequenced alleles, and only upon selection with trametinib or TAK-733 ( Fig. 4b ). Thus, the ATM mutation found in NCI-H23 cells is required for the observed sensitivity to MEK inhibitors. Figure 4: Restoration of ATM point mutation renders cells resistant to MEK inhibition. ( a ) Schematic outline of CRISPR/Cas9 rescue experiment in ATM mutant NCI-H23 cells. ( b ) Sanger sequence chromatogram of cells treated as in a . Red dashed box indicates the mutant cytosine (C) that is replaced by adenine (A). Compromised crosstalk between MEK/ERK and AKT/mTOR Having established that ATM and MEK are synthetic lethal in lung cancer cells, we next investigated the mechanism underlying this gene–drug interaction. To determine whether ATM kinase activity is required for the MEK dependency, we used a specific small-molecule inhibitor of ATM (KU60019) 26 in combination with MEK inhibition. 
A single concentration of trametinib (40 nM) was chosen that had minimal effect on viability on its own. When combined with low (<1 μM) concentrations of KU60019, we observed strong synergy as determined by Chou–Talalay combination index and deviation from Bliss additivity ( Fig. 5a,b , Supplementary Fig. 9A ). At higher concentrations (>2 μM) KU60019 displayed some toxicity as a single agent (possibly due to off target effects) and synergy scores were accordingly lower. Thus, inhibition of ATM kinase activity renders lung cancer cells more sensitive to MEK inhibition. Figure 5: ATM loss-of-function dampens AKT/mTOR signalling. ( a , b ) Drug synergy experiment in AALE cells with ATM inhibitor (KU60019) and trametinib (40 nM). Displayed is relative cell viability for KU60019 treatment alone and for co-treatment with trametinib. Error bars indicate s.d. ( n =3). ( b ) Deviation from Bliss additivity calculated from experiment in a . Error bars indicate s.d. ( n =3). ( c ) Fold induction of apoptotic cells after 3-day compound or vehicle (DMSO) treatment, as measured by annexin V positivity. Error bars indicate s.d.’s ( n =4). ** P <0.01, *** P <0.001, two-way ANOVA with Holm Sidak multiple comparisons correction. ( d ) Western blot analysis of ATM knockout and control NCI-H322 cells treated with TAK-733 (1.0 μM, 6 h) with indicated antibodies (pERK (T202/204); pAKT (S473); p-mTOR (S2448); p4EBP1 (T37/46); pS6K (T389)). Shown is a control and two independent knockout clones. For d , a representative blot for tubulin is used as loading control. ( e ) Drug synergy experiment on NCI-H322 control (+/+) and ATM knockout (−/−) cells treated with trametinib simultaneously with DMSO or two different concentrations of AKT inhibitor (MK2206). Relative cell viability is shown as dose–response curves. Effect of single MK2206 treatment on cell viability in the absence of trametinib is shown below as the bar graph. 
Deviation from Bliss additivity calculated for both control (grey bars) and knockout (red bars) cells for indicated concentrations is shown bottom right. Error bars indicate s.d. ( n =3). ( f ) Phosphorylation of indicated targets (pERK (T202/204); pAKT (S473); p-mTOR (S2448); p4EBP1 (T37/46); pS6K (T389)) in response to TAK-733 treatment (1.0 μM, 6 h) is shown as a fold change compared with DMSO-treated baseline (dashed line). Data were determined by quantification of digital western blot images in ImageJ for six wild-type (black symbols) and seven ATM mutant (red symbols) lung cancer cell lines. Median for each group is displayed as the horizontal line. Two-sided t -test was used to calculate the P values. We next analysed whether an altered DDR was involved in the synthetic lethal interaction. MEK inhibition in ATM null cells did not alter levels of the DNA double-strand-break marker γ-H2AX ( Supplementary Fig. 10A ). Moreover, MEK inhibition did not significantly alter phosphorylation of the ATM substrates KAP1 or SMC1 upon induction of DNA double-strand breaks ( Supplementary Fig. 10B ). These results suggested that an alternative mechanism might be involved, such as a change in signalling through pro-survival or anti-apoptotic pathways. Indeed, upon exposure to MEK inhibitors, H322 ATM null cells underwent apoptosis, as evident from annexin V staining ( Fig. 5c ). This suggests that an increased propensity to undergo programmed cell death in response to MEK inhibition underpins the differential response between ATM wild-type and knockout cells. One candidate pro-survival pathway is the PI3K/AKT/mTOR pathway, the activity of which is reciprocally linked to MEK/ERK signalling 27 , 28 . We treated isogenic NCI-H322 cells with MEK inhibitor for 6 h and determined the phosphorylation status of proteins along the AKT/mTOR pathway. 
In control cells, MEK inhibition resulted in increased phosphorylation of several components, consistent with abrogation of a negative feedback loop connecting the two pathways. Although pAKT itself was not elevated, phosphorylation of mTOR and 4EBP1 was increased in wild-type cells ( Fig. 5d ). Remarkably, this feedback mechanism was aberrant in ATM knockout cells. Instead of upregulation, we observed a downregulation of p4EBP1 and pS6K upon MEK inhibition whereas phospho-mTOR was unchanged and pAKT was slightly lower. Next, we investigated if altered crosstalk was also observed in cell lines that naturally harbour ATM mutations. We subjected 13 wild-type ( n =6) and ATM mutant ( n =7) cell lines to MEK inhibitor treatment for 6 h and determined the phosphorylation status of 4EBP1, AKT, mTOR and S6K. The individual cell lines did not respond identically, as expected from a set of (epi-) genetically heterogeneous cell lines. Strikingly, we observed a significant difference between the two groups for 4EBP1, mTOR and S6K, where the ATM mutant cell lines consistently displayed diminished or inverted feedback response ( Fig. 5f , Supplementary Fig. 11 ). A consistent reduction of pAKT in the absence or presence of MEK inhibition was not observed. To determine whether reduced signalling through AKT/mTOR pathway would be sufficient to sensitize cells to MEK inhibitors, we measured potential synergy with the AKT inhibitor MK2206. As before, concentrations of compounds were chosen that had minimal cytotoxic effects on their own. MEK and AKT inhibition strongly synergized in reducing cell viability ( Fig. 5e , Supplementary Fig. 9B ), in agreement with the previously reported observations in lung cancer models 29 , 30 . Importantly, this synergy was not observed in ATM-deficient cells, indicating that pro-survival compensatory signalling through the AKT–mTOR axis upon MAPK blockage requires functional ATM. 
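Both synergy scores used in these experiments (percentage deviation from Bliss additivity, and an effect-based Chou–Talalay combination index with CI < 1 scored as synergistic) can be sketched as below; the exact CI algebra and the example effects are illustrative assumptions, not the Methods' exact implementation:

```python
def bliss_deviation(e_x, e_y, e_xy):
    """Observed combined effect minus the Bliss-independence expectation.

    All effects are percentages of the maximum effect; a positive
    deviation indicates synergy.
    """
    expected = e_x + e_y - e_x * e_y / 100.0
    return e_xy - expected

def combination_index(e_drug1, e_drug2, e_ci):
    """Effect-based combination index (assumed form); CI < 1 = synergy."""
    return (e_drug1 + e_drug2) / e_ci

# Two drugs that each kill 20% alone but 70% in combination score as
# synergistic by both measures (invented numbers).
dev = bliss_deviation(20, 20, 70)   # 70 - 36 = 34 > 0
ci = combination_index(20, 20, 70)  # 40 / 70, i.e. below 1
```

In practice such scores are computed per concentration pair, as in the KU60019/trametinib and MK2206/trametinib matrices above.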
In this context, signalling in ATM-deficient cells resembles the effect of an AKT inhibitor. Together, these results are consistent with a model in which ATM loss disturbs AKT/mTOR signalling, thereby resulting in increased sensitivity to MEK inhibition. ATM loss-of-function sensitizes to MEK inhibition in vivo Next, we addressed whether the sensitization to MEK inhibition through ATM depletion in lung cancer cell lines was paralleled by a tumour response to MEK inhibitors in vivo by using mouse xenograft models. We first tested a patient-derived xenograft (PDX) model carrying a heterozygous ATM mutation (F858L) on the background of an activating KRAS mutation. This ATM mutation has been associated with increased radiosensitivity and cancer susceptibility, indicating impaired signalling function 31 , 32 . Consistent with our previous results, the MEK inhibitor selumetinib completely blocked tumour growth and even induced tumour regression ( Fig. 6a ). Selumetinib was more effective than vinorelbine and carboplatin/paclitaxel combination therapy, indicating a superior effect over several standard-of-care chemotherapeutics. Although this experiment supports a role of ATM in the sensitivity to selumetinib, lack of an isogenic control prevented us from ruling out that other factors (for example, KRAS ) are chiefly responsible for the response to MEK inhibition in this model. Figure 6: ATM depletion sensitizes tumours in vivo to MEK inhibition. ( a ) Patient-derived xenograft model with KRAS and ATM mutations. Displayed is the change in tumour volume compared with baseline at 21 and 28 days post treatment for the indicated treatments. Day 21 control versus selumetinib P <0.05; day 28 control versus selumetinib P <0.05. Two-sided t -test ( n =3 biological replicates). ( b ) Growth of control, ATM shRNA1 and ATM shRNA2 NCI-H460 xenografts. ( c ) As in b treated daily with the MEK inhibitor pimasertib (80 mg kg −1 p.o.). Shown is the mean and standard error. 
*** P <0.001 ( n =8, two-sided t -test). ( d ) Kaplan–Meier survival curve of mice as in b , c . Control-pimasertib versus ATMshRNA1 or ATMshRNA2 P <0.05, log-rank Mantel–Cox test. ( e ) Macroscopic images of excised H460 tumours treated as indicated daily for 14 days. ( f ) Immunohistochemistry staining of NCI-H460 tumours as in e at 14 days. To address this, we next performed a xenograft experiment using an isogenic model. As the H322 cells did not graft in nude mice, we employed the MEK inhibitor-resistant H460 cells. Furthermore, this allowed us to address the effect of ATM loss in the context of a KRAS mutation in vivo ( Fig. 2d and Fig. 3a,b ). Cells were transduced with control or one of two different ATM shRNAs and injected subcutaneously in nude mice. Functional validation of the hairpins confirmed an impaired DDR in the ATM knockdown cells ( Supplementary Fig. 12 ). As expected, tumours grew at comparable speeds, indicating that the loss of ATM does not impact proliferation or survival on its own ( Fig. 6b ). However, we observed strong sensitization of NCI-H460 cells to the MEK inhibitor pimasertib upon loss of ATM ( Fig. 6c ). Specifically, ATM wild-type tumours increased in size almost sevenfold over the course of the 33-day experiment. In contrast, ATM null tumours markedly slowed their growth during the first week of drug treatment, and tumour volume subsequently remained stable until the end of the experiment. Importantly, pimasertib treatment also resulted in a significant survival benefit for mice carrying ATM knockdown tumours ( Fig. 6d ). Furthermore, the tumour inhibitory effects were already apparent at 5 mg kg −1 pimasertib ( Fig. 6e ), which is more than 20-fold below the maximum tolerated dose of this compound. The marked difference in response between control and ATM-depleted xenografts was also evident from immunohistochemistry analysis ( Fig. 6f ). 
Consistent with in vitro findings, we observed an increase in the numbers of apoptotic cells, as measured by cleaved Caspase 3 staining. Furthermore, upon MEK inhibition, tumours displayed a strong reduction in phosphorylated 4EBP1 levels, consistent with a role for the AKT/mTOR pathway in the response to MEK inhibition. Expression of the proliferation marker Ki-67 was also strongly reduced in the ATM knockdown tumours upon MEK treatment, whereas only a minor reduction was observed in control tumours. This indicates that MEK inhibition in ATM-depleted tumours in vivo elicits a combination of proliferation arrest and apoptosis resulting in a strong inhibition of tumour growth. Discussion The synthetic lethal interaction between ATM and MEK in lung cancer cells identified here indicates that these two kinases are functionally tightly linked. Our experiments suggest that this link does not relate to DNA damage signalling, the canonical function of ATM, but rather the known coordination between the MAPK and AKT/mTOR signalling pathways, which leads to an increased dependency on MEK kinase activity for cell survival. Thus, whereas most cells activate the AKT/mTOR pathway to compensate for loss of pro-survival signalling when the MAPK pathway is inhibited, ATM-deficient cells are unable to take advantage of this feedback loop. As a result, ATM-mutant lung cancer cells undergo apoptosis when MEK is inhibited. The notion that simultaneous inhibition of the MAPK pathway and the PI3K/AKT/mTOR axis is detrimental to cancer cells is supported by previous studies, although the mechanism has not been fully resolved 29 , 30 . However, this combination treatment results in a dose-limiting toxicity in patients 33 . Several other lines of evidence support the involvement of ATM in growth factor signalling, metabolism and the AKT/mTOR pathway in particular. For instance, ATM or AKT mutations in mice and humans both result in insulin resistance 34 , 35 , 36 . 
Furthermore, ATM mutant fibroblasts display reduced AKT/mTOR activity upon growth factor stimulation 37 , and a recent report indicates that ATM supports oncogenic HER2 signalling in breast cancer cells 38 . A direct link between ATM and AKT/mTOR has also been suggested by the ability of ATM to phosphorylate 4EBP1 and PTEN 39 , 40 . In response to reactive oxygen species (ROS), reactive nitrogen species or temozolomide (an alkylating agent), ATM inhibits mTORC1 via LKB1, AMPK and ULK1, resulting in increased autophagy 41 , 42 , 43 , 44 , 45 . Interestingly, at least some of these effects take place in the cytoplasm, indicating a non-nuclear function of ATM in regulating metabolism. Furthermore, a similar role in metabolism was shown in the fungus Aspergillus nidulans 46 , and ATM also localizes to mitochondria, affecting mitophagy 47 . Together, these reports indicate an evolutionarily conserved function of ATM in regulating metabolic homeostasis. Our experiments showing that ATM is required for crosstalk between the AKT/mTOR and MAPK pathways are in agreement with this role and provide new avenues to investigate ATM’s function in growth factor signalling. Importantly, however, none of these studies have directly or indirectly hinted at hypersensitivity of ATM mutant cells to MEK inhibition. Studies of Atm heterozygous knockout mice have indicated that a 50% loss of Atm function does not result in a pronounced DNA repair phenotype 48 . This seems at odds with the observation that many of the ATM mutations in lung cancer are heterozygous and may thus only partially impair ATM. However, care must be taken when extrapolating these observations to somatic mutations in cancer, especially as most of these mutations have not been studied in any detail. Importantly, several observations show that expression of a defective ATM protein is not equivalent to absence of ATM protein. Indeed, ATM forms a homodimer and would thus be susceptible to dominant-negative effects of heterozygous mutations 49 . 
Similarly, mice with kinase-dead Atm die before birth, whereas Atm null mice are viable 10 , 50 , 51 . Along the same lines, pharmacological inhibition of ATM does not phenocopy absence of ATM 52 . Human and mouse carriers of ATM missense mutations (rather than truncation mutations that affect protein stability) have increased cancer incidence and can display dominant-negative effects in cell line models 53 , 54 , 55 , 56 . The study of ATM function in cancer is further complicated by its involvement in other processes, including metabolic homeostasis, as mentioned above. ATM is a large (350 kDa) protein, and it is likely that different domains are critical for its various functions; it is conceivable that some ATM mutations impact these non-canonical functions while maintaining largely normal DNA damage signalling. Indeed, some of the MEK-sensitive, ATM mutant cell lines responded to IR by phosphorylating KAP1 and SMC1. Although this may suggest normal ATM function in the DDR, other (not tested) functions of ATM could be affected in these cell lines. Thus, a seemingly normal response to IR in ATM mutant cells should not be interpreted as compelling evidence for normal ATM function. Conversely, however, absence of ATM kinase activity likely indicates a broad functional defect in ATM. MEK inhibitors are currently being tested in clinical trials for efficacy in RAS or BRAF mutant lung cancer. However, these mutations alone do not adequately predict response to MEK inhibition 24 , as also shown in this study. Furthermore, some RAS and BRAF wild-type lung cancer cell lines display strong dependency on MEK. Indeed, the most sensitive cell line in our panel (EBC-1) is KRAS and BRAF wild type but carries an ATM mutation. These observations indicate that the determinants of sensitivity to MEK inhibitors in lung cancer are still largely unresolved. 
Yet, unravelling the precise molecular requirements for MEK inhibitor efficacy will likely be a key determinant for the clinical success of these drugs in this highly challenging and genetically heterogeneous tumour type. Until now, only experimental compounds (for example, drugs inhibiting PARP 57 , 58 , 59 , ATR 60 ) have displayed a preferential toxicity in ATM loss-of-function cancer cells, and none have been validated in lung tumours, where ATM is frequently mutated 2 , 4 , 5 . We show that ATM mutation in lung cancer cells results in a strong sensitization to drugs targeting MEK, including the FDA-approved drug trametinib. Thus, our findings suggest that including ATM mutational status in lung cancer as a mechanistic biomarker for MEK inhibitors can improve patient stratification, potentially extending the applicability of these drugs beyond RAS and BRAF mutant tumours. Methods Cell culture and general reagents AALE cells 18 were cultured in DMEM/F12 medium with 15% fetal bovine serum (FBS). All lung cancer cell lines were obtained from ATCC, except the LCLC-103H cell line, which was purchased from the DSMZ-German Collection of Microorganisms and Cell Cultures. All cell lines were maintained in RPMI medium with 10% FBS. All cells were grown in the presence of penicillin-streptomycin at 37 °C and 5% CO 2 . The NCI-H157 cell line used in this study has been included in the database of commonly misidentified cell lines, as it is suspected to be contaminated with NCI-H1264. We consider this fact irrelevant for the study conclusions, since both cell lines are derived from lung carcinoma and both are KRAS mutant. Furthermore, this cell line was only employed to show the range of sensitivity to MEK inhibition in a large, diverse set of lung cancer cell lines. Phospho-SMC1 (S957) and gamma-H2AX (S139) antibodies were obtained from Millipore. 
SMAD4, ATM (2C1) and 53BP1 (H300) antibodies were purchased from Santa Cruz Biotechnology, β-actin from Sigma-Aldrich, phospho-KAP1 (S824) and KAP1 antibody from Bethyl Laboratories. All the other antibodies were from Cell Signaling Technology. All antibodies were used at a 1:1,000 dilution, except for α-tubulin, which was used at a 1:5,000 dilution. Pemetrexed was obtained from Santa Cruz Biotechnology, TAK-733, trametinib, crizotinib, KU60019 and MK2206 from Selleck. Etoposide, paclitaxel, vinblastine, irinotecan, topotecan, gemcitabine, ifosfamide and neocarzinostatin (NCS) were purchased from Sigma. All other compounds were purchased from SynThesis Medchem (China). MEK constructs were obtained from Addgene. To generate mutants, a QuikChange Site-Directed Mutagenesis Kit (Agilent Technologies) was employed. Introduced point mutations were verified by Sanger sequencing, and mutants were shuttled into a Gateway-compatible pBABE-puro vector. All shRNAs were cloned into the lentiviral pLKO.1-puro vector ( Supplementary Table 1 ). Generation of isogenic cell lines and small-molecule screen shRNAs for tumour suppressors were introduced into cells by lentiviral transduction followed by puromycin selection. All of them were validated using western blotting or qRT–PCR. For the STK11 and NF1 isogenic cell lines, untransformed AALE cells were used. For all other tumour suppressors, HRAS-V12G transformed cells were employed. Stable cell lines were then individually tagged with DNA-barcoded lentivirus and pooled 16 . For determining screening conditions, the drug concentration resulting in 50% AALE cell killing (IC50) was determined by performing 9-point dose–response experiments for all compounds. Based on these results, three concentrations were selected for the screen (IC50 and IC50 ± 4-fold). One day after seeding pooled cells, drugs (or DMSO) were added in quadruplicate. 
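The screening-condition setup above (estimate each drug's IC50 from a 9-point dose–response, then screen at the IC50 and 4-fold above and below it) can be sketched as follows. The log-linear interpolation is a stand-in for proper curve fitting, and the viability values are invented:

```python
import math

# Estimate an IC50 from a 9-point dose-response by interpolating on
# log-dose between the two concentrations bracketing 50% viability, then
# derive the three screen concentrations (IC50 and IC50 divided/multiplied
# by 4). Doses use 4-fold steps; viabilities are invented percentages.
doses = [0.001 * 4 ** i for i in range(9)]          # in µM
viability = [99, 97, 93, 85, 71, 52, 30, 14, 6]     # % of DMSO control

def ic50_interpolated(doses, viability, threshold=50.0):
    for (d0, v0), (d1, v1) in zip(zip(doses, viability),
                                  zip(doses[1:], viability[1:])):
        if v0 >= threshold >= v1:
            # Fractional position of the threshold between the two points.
            frac = (v0 - threshold) / (v0 - v1)
            return 10 ** (math.log10(d0)
                          + frac * (math.log10(d1) - math.log10(d0)))
    raise ValueError("50% viability not crossed within the tested range")

ic50 = ic50_interpolated(doses, viability)
screen_concentrations = [ic50 / 4, ic50, ic50 * 4]
```

For this invented curve the estimate lands just above 1 µM; the three screen concentrations then span a 16-fold range centred on it.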
After 6 days, genomic DNA was extracted and the barcodes were amplified by PCR using a biotinylated primer, labelled with streptavidin–phycoerythrin, and hybridized to Luminex xMAP beads coupled to the antisense barcode sequence. Samples were measured on a Flexmap 3D plate reader (Luminex). Data were analysed using a linear regression-based method 16 , 61 . Dose–response experiments and clonogenic and apoptosis assays Unless indicated otherwise, all cell viability assays were performed using the Cell Titer-Glo assay (Promega). Cells were counted and seeded in 96-well plates in triplicates. The next day, compounds were added and 3–5 days later, cell viability was assessed. AUC and EC50 determination was performed using GraphPad Prism. The percentage deviation from the Bliss independency model was calculated using the formula: deviation = E xy − (E x + E y − E x E y /100). Here, E is the effect on viability of drugs x and y expressed as a percentage of the maximum effect, and E xy is the observed effect of the combination. Chou–Talalay drug combination index (CI) values were calculated from the single-agent and combination effects, where E drug1 is the effect (in percent of maximum effect) of drug 1 alone, E drug2 the effect of drug 2 alone and E ci the effect of both drugs combined. A drug combination index <1 is considered synergistic. For the colony formation assay, 10,000 cells were seeded on six-well plates and treated with drug or vehicle control for ∼3 weeks until clear colonies were formed. Colonies were fixed with 3.7% formaldehyde and stained with 0.1% crystal violet. For determination of apoptosis, cells were treated with MEK inhibitor or vehicle and analysed for annexin V positivity (Biolegend) and DNA content (propidium iodide, Sigma) by FACS. Quantitative real-time PCR RNA was extracted using an RNeasy MinElute Cleanup kit (Qiagen). Isolated RNA was then subjected to DNAse treatment (Turbo-DNA free, Ambion). Reverse transcription was carried out using random hexamer primers and RevertAid reverse transcriptase (Fermentas). 
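A minimal Python sketch of the Bliss calculation described above, with all effects expressed as percentages of the maximum effect. The combination-index form shown is an assumed effect-based reading (Bliss-expected effect divided by the observed combination effect, so that CI < 1 indicates synergy); it is not taken verbatim from the paper.

```python
def bliss_deviation(e_x, e_y, e_xy_observed):
    """Percentage-point deviation of the observed combination effect from
    the Bliss independence expectation (all effects in percent, 0-100)."""
    e_bliss = e_x + e_y - (e_x * e_y) / 100.0
    return e_xy_observed - e_bliss

def combination_index(e_drug1, e_drug2, e_ci):
    """Effect-based combination index; < 1 is scored as synergistic.
    NOTE: assumed form (Bliss-expected / observed), not the paper's exact formula."""
    expected = e_drug1 + e_drug2 - (e_drug1 * e_drug2) / 100.0
    return expected / e_ci
```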
Quantitative real-time PCR was performed employing the KAPA SYBR FAST ABI Prism (Peqlab). Analysis was carried out in triplicates, using GAPDH as a control gene. Western blotting Cells were lysed in RIPA lysis buffer (50 mM Tris, 150 mM NaCl, 0.1% SDS, 0.5% sodium deoxycholate, 1% NP-40) supplemented with protease and phosphatase inhibitors. Lysates were sonicated, centrifuged and cooked with reducing sample buffer. Protein samples were separated by SDS–polyacrylamide gel electrophoresis on 4–12% gradient gels (Invitrogen) and transferred onto PVDF or nitrocellulose membranes. Quantification of band intensity on digital images was done in ImageJ and intensity of phosphorylation normalized to total protein staining. Uncropped images of all the blots in main figures are included as Supplementary Fig. 13 . CRISPR/Cas9 mediated genome engineering Cells were plated at high density and co-transfected with a gBlock (IDT) encompassing the guide RNA (5′-GGATGCTGTTCTCAGACTGACGG-3′) expression cassette and a plasmid encoding the Cas9 nuclease 62 . Individual cell clones were expanded and the ATM target region located in exon 6 was amplified by PCR using flanking primers (fwd: 5′-GCGACCTGGCTCTTAAACTG-3′; rev: 5′-CAGAAAGTGTTGGACTTGGTTG-3′) and subsequently analysed by Sanger sequencing. Confirmation of monoallelic indels was determined by TA cloning of individual PCR products into a pCR 2.1 Vector (TA cloning KIT, Life Technologies) followed by Sanger sequencing of bacterial colonies. To restore the mutated ATM allele in NCI-H23 cell line, cells were plated at high density and co-transfected with a plasmid encoding the Cas9 nuclease, a PCR product encompassing the guide RNA (5′-ACTACATGAGAAGACCAAAGAGG-3′) and a double-stranded 120 bp oligonucleotide containing the wild-type sequence (5′-TGCTGTTTGGATAAAAAATCACAAAGAACAATGCTTGCTGTTGTGG ACTACATGAGAAGACAAAAGAGGTAATGTAATGAGTGTTGCTTCTTACGTTTAGGATCTAGAG TGTAACTTGTT-3′). 
An oligonucleotide containing a silent codon change resulting in the same amino-acid substitution present in NCI-H23 was used as a control. After selection with puromycin, cells were plated and treated with DMSO, trametinib or TAK-733 for 3 weeks. DNA was isolated from surviving colonies and the mutated locus was amplified by PCR using flanking primers (fwd: 5′-CCCAGGCTAGTCAGTGAGTTC-3′; rev: 5′-GGAGCCAAGAAGGCTGCATAA-3′) followed by Sanger sequencing. Analysis of CRISPR off-target sites Potential off-target sites for the CRISPR sgRNA were predicted using the online tool crispr.mit.edu. Top five predicted exonic off-target sites were amplified from two independent NCI-H322 knockout clones and one control NCI-H322 wild-type clone by PCR using specific primers (TCP1- fwd: 5′-TGCGGGCA CAACATTATCCT-3′, rev: 5′-CTCAGTATTCAGCCCTCAGCA-3′; GAPDH- fwd: 5′-TTCTAGGGTCTGGGGCAGAG-3′, rev: 5′-AAAACTATGCGAGGTGGGCA-3′; GALNT2- fwd: 5′-GAGAGGTGCCTGGCTTCTAC-3′, rev: 5′-GTGAAAGACAGAAGCGTGCG-3′; VAV1- fwd: 5′-CCAGCTCCTAGCAGTGTCTG-3′, rev: 5′-AGGAAGACGGGGACTCACAT-3′; CLEC9A- fwd: 5′-TGTTTTTGGGGGAGGTGATGT-3′, rev: 5′-TGTTGGCGTGTTAACCCTGA-3′). PCR products were cleaned up by ExoSAP (Affymetrix) and labeled with BrightDye Terminator Kit (Nimagen). Samples were purified by gel filtration through Sephadex resin (Sigma) and sequenced on ABI 3500 Genetic Analyzer (Applied Biosystems). CCLE and COSMIC data set analysis Data sets were downloaded from the respective data portal (that is, Broad Institute or Sanger/COSMIC) and mutation and drug sensitivity data was compiled in a single file. Only drug sensitivities were considered that had been tested on >80% of the lung cancer cell lines. All data was analysed in PRISM. Polyphen analysis was performed using the online PolyPhen V2 tool. Mutations were considered damaging when the score was >0.9. For mutation co-occurrence analysis, we looked at mutation profiles defined in the COSMIC database for cancer census genes (v70). 
We simulated randomized cohorts while keeping constant the empirical gene-wise mutation rates and the patient-wise mutation burden. Then, we compared the co-occurrence of mutations in ATM and other lung-cancer genes in the database and the simulated cohorts. Immunofluorescence microscopy Cells were plated onto coverslips (VWR) in a 24-well plate. The next day, cells were treated with 50 ng ml −1 NCS or 2 μM TAK-733 for 30 min and 24 h, respectively, and DMSO treatment used as a control. Cells were allowed to recover for 2 h, washed twice with ice-cold PBS and fixed with 4% PFA+0.1% Triton X-100 in PBS for 20 min on ice. Cells were permeabilized with 0.5% Triton X-100 in PBS for 20 min at room temperature and blocked with 10% FCS+0.1% Triton X-100 in PBS for 1 h, with three washes between each step. Primary and secondary (Alexa Fluor 546 goat anti-rabbit and Alexa Fluor 488 goat anti-mouse; Invitrogen) antibodies were diluted in blocking solution and incubated for 1 h at room temperature. Finally, cells were stained with DAPI (1:1,000 in PBS, Sigma-Aldrich) for 20 min at room temperature in the dark. Cell images were acquired on a deconvolution microscope (Leica). Xenografts and immunohistochemistry Experimental procedures were approved by the Medical University of Vienna ethics committees and conform to Austrian regulations. NCI-H460 cells (500,000) were injected subcutaneously in nude mice and allowed to form palpable tumours before randomization and the start of treatment with pimasertib. The number of animals (four per treatment arm) was chosen based on a large expected effect size. Animals that did not form tumours were excluded from the experiment. Drug was administered daily by oral gavage, tumours were measured using calipers and tumour volume was estimated using V = 1/2(L × W 2 ), where L is the longest dimension (length) and W is the width (shortest dimension). The experiment was not blinded. 
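The cohort randomization above (holding gene-wise mutation rates and patient-wise mutation burden constant) is commonly implemented as degree-preserving 'checkerboard' swaps on the binary patient × gene mutation matrix; a sketch under that assumption, with illustrative names:

```python
import random

def randomize_cohort(matrix, n_swaps=1000, seed=0):
    """Shuffle a binary patient x gene mutation matrix with 2x2 checkerboard
    swaps, preserving every row sum (patient mutation burden) and every
    column sum (gene mutation rate). Returns a new randomized matrix."""
    rng = random.Random(seed)
    m = [row[:] for row in matrix]
    edges = [(p, g) for p, row in enumerate(m) for g, v in enumerate(row) if v]
    for _ in range(n_swaps):
        (p1, g1), (p2, g2) = rng.sample(edges, 2)
        # a swap is only valid if it does not create duplicate mutations
        if p1 != p2 and g1 != g2 and not m[p1][g2] and not m[p2][g1]:
            m[p1][g1] = m[p2][g2] = 0
            m[p1][g2] = m[p2][g1] = 1
            edges.remove((p1, g1)); edges.remove((p2, g2))
            edges.append((p1, g2)); edges.append((p2, g1))
    return m
```

Co-occurrence of two genes in the real cohort can then be compared against its distribution over many such randomized matrices.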
For the Kaplan–Meier survival curves, animals bearing tumours larger than 1,000 mm 3 were considered dead. Ki67 immunohistochemistry stainings were prepared using a Ventana Benchmark Ultra automated staining device, applying the CONFIRM anti-Ki67 rabbit monoclonal primary antibody (clone 30-9, Ventana Medical Systems, Inc., Tucson, AZ) according to the manufacturer’s protocol. Immunohistochemical stainings for cleaved Caspase 3 and phospho-4EBP1 (T37/46) (both from Cell Signaling Technology, dilution 1:200 and 1:1,000, respectively) were performed according to the manufacturer’s protocol. For PDX models, a tumour sample was removed from the tibia of an NSCLC patient and a tumourgraft model was generated. Selumetinib was dosed orally at 100 mg kg −1 , daily, for 28 days. Vinorelbine was dosed i.v. at 5 mg kg −1 once a week, Carboplatin was dosed i.p. at 60 mg kg −1 once a week and Paclitaxel was dosed i.v. at 10 mg kg −1 once a week. All test agents were formulated according to the manufacturer’s specifications. Beginning on day 0, tumour dimensions were measured twice weekly by digital caliper and data, including individual and mean estimated tumour volumes (mean TV±s.e.m.), were recorded for each group. Tumour volume was calculated using the formula: TV = width 2 × length × π/6. Data availability The authors declare that the data supporting the findings of this study are available within the paper and its Supplementary Information files. Additional information How to cite this article: Smida, M. et al . MEK inhibitors block growth of lung tumours with mutations in ataxia–telangiectasia mutated. Nat. Commun. 7, 13701 doi: 10.1038/ncomms13701 (2016). Publisher’s note : Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | Lung cancer remains the leading cause of cancer-related deaths worldwide. 
In contrast to other tumour types, lung tumours present a high number of genomic alterations, a consequence of exposure to carcinogenic substances found in tobacco smoke, which is the main cause of lung cancer. About 10 percent of lung tumours carry mutations in a gene called ATM. However, there are no drugs available in the clinic to treat ATM mutant lung cancer. Using cutting-edge, high-throughput drug screens that analyse how the genetic makeup of the patient affects their response to drugs, the team of Sebastian Nijman at the Ludwig Institute for Cancer Research in Oxford made a surprising discovery: cancer cells with ATM mutations are sensitive to drugs that inhibit an enzyme called MEK. The study was published in Nature Communications. MEK is part of a biochemical pathway that supports proliferation and survival of the cell, while ATM plays a central role during the DNA damage response. In ATM-deficient lung cancer cells, Nijman's team found that MEK inhibition results in cells being unable to proliferate, leading to apoptosis. This was an unexpected finding, as MEK inhibitors have so far been approved for the treatment of a type of skin cancer but not for lung cancer. "Normally, lung cancer cells are resistant to MEK inhibition as they activate compensatory signals," Ferran Fece, one of the two first authors on the study and a former Ph.D. student at CeMM, explains. "In contrast, ATM mutant cells fail to do this and subsequently cannot cope with the blocking of MEK and die. We call this type of unexpected drug sensitivity synthetic lethality." Michal Smida, the other shared first author on the article, adds: "We knew that cancer mutations can lead to extreme sensitivity to some drugs. But finding these cancer Achilles' heels is very difficult as they are difficult to predict and extremely rare. We screened a large number of gene and drug combinations and got lucky." 
The study constitutes a substantial contribution to the development of future precision medicine: ATM mutations could be used as a potential biomarker to stratify lung cancer patients to receive a MEK inhibitor. ATM is found to be mutated in 8-10 percent of lung adenocarcinomas; given that this type of tumour is among the most prevalent for both men and women worldwide, a significant number of patients could benefit from MEK inhibitor-based treatment. | 10.1038/NCOMMS13701 |
Medicine | Not 'brains in a dish': Cerebral organoids flunk comparison to developing nervous system | Cell stress in cortical organoids impairs molecular subtype specification, Nature (2020). DOI: 10.1038/s41586-020-1962-0 , nature.com/articles/s41586-020-1962-0 Journal information: Nature | http://dx.doi.org/10.1038/s41586-020-1962-0 | https://medicalxpress.com/news/2020-01-brains-dish-cerebral-organoids-flunk.html | Abstract Cortical organoids are self-organizing three-dimensional cultures that model features of the developing human cerebral cortex 1 , 2 . However, the fidelity of organoid models remains unclear 3 , 4 , 5 . Here we analyse the transcriptomes of individual primary human cortical cells from different developmental periods and cortical areas. We find that cortical development is characterized by progenitor maturation trajectories, the emergence of diverse cell subtypes and areal specification of newborn neurons. By contrast, organoids contain broad cell classes, but do not recapitulate distinct cellular subtype identities and appropriate progenitor maturation. Although the molecular signatures of cortical areas emerge in organoid neurons, they are not spatially segregated. Organoids also ectopically activate cellular stress pathways, which impairs cell-type specification. However, organoid stress and subtype defects are alleviated by transplantation into the mouse cortex. Together, these datasets and analytical tools provide a framework for evaluating and improving the accuracy of cortical organoids as models of human brain development. Main Organoid models harness the natural self-assembly properties of development to produce 3D cultures from stem cells that recapitulate aspects of an endogenous organ’s structure and function. Organoids have applications in disease modelling, drug screening, and regenerative medicine. 
Single-cell RNA sequencing (scRNA-seq) provides a powerful method for comparing the fidelity of organoid cell types to their primary cell counterparts across tissues. In the liver and kidney, benchmarking comparisons with normally developing organs indicate that 3D culture better recapitulates primary cell types than adherent culture 6 , 7 . However, the lack of a comprehensive catalogue of cell types in the normal developing human brain and their molecular features has prevented careful evaluation of the strengths and weaknesses of cerebral organoids. In vitro models of human cortical development are particularly valuable because early events during neurogenesis and synaptogenesis may underlie neuropsychiatric disorders, and experimental access to the developing human cortex is otherwise limited. Initial studies have indicated that broad classes of cells are preserved in cortical organoid models 3 , 8 but also hint at distinctions between organoids and primary cells 4 , 9 , 10 . In particular, the extent to which spatial and temporal gradients of gene expression and cell-type maturation are recapitulated in organoids is unclear (Extended Data Fig. 1 ). Although analysis of some of the first organoid models suggested the emergence of spatial gradients 1 , 2 , 11 , we know little about the fidelity and organization of areal cell types in organoids, in part because we lack molecular cell signatures of cortical areas in the developing brain. Comparison of human cortex and organoids To evaluate the fidelity of cortical cell types in organoids, we performed high-throughput scRNA-seq of samples from the developing human cortex and cortical organoids, and compared the results with published organoid single-cell sequencing datasets. 
To characterize molecular features and gene-expression signatures during human cortical development, we performed scRNA-seq of dissociated cells from five cortical samples collected at 6–22 gestational weeks (GW), encompassing the period of neurogenesis. To assess cell-type differences across cortical areas, we studied primary samples from seven regions, including prefrontal (PFC), motor, parietal, somatosensory and primary visual (V1) cortices as well as hippocampus, resulting in transcriptomic data from 189,409 cells ( Methods , Fig. 1a , Supplementary Table 1 ). These primary data were compared to data from 235,121 single cells from 37 organoids (Fig. 1b ). We generated forebrain organoids by following three previously published protocols using different levels of directed differentiation to evaluate whether increased stringency in patterning signals results in more endogenous-like cellular subtypes 1 , 4 , 8 , 12 (Extended Data Fig. 2 ). To assess biological replicability, we used three induced pluripotent stem cell (PSC) lines and one embryonic stem cell line. Organoids were maintained under the same conditions, except for protocol-specific medium formulations (Extended Data Fig. 2 ), and samples were collected for immunohistochemistry and scRNA-seq after 3, 5, 8, 10, 15 and 24 weeks of differentiation to evaluate relevant cell types (Extended Data Figs. 3 , 4 ). Last, we compared our reference dataset to published organoid single-cell data generated from 276,054 cells across eight protocols, including time points from six months to a year 3 , 4 , 5 , 8 , 9 , 13 , 14 , 15 . This enabled us to extend our comparisons to later stages of differentiation (Extended Data Figs. 5 , 6 ). Fig. 1: Cell types in cortical primary and organoid samples. a , Single-cell sequencing of primary cortical cells identifies a number of cell types. 
These cell types are labelled in the t -distributed stochastic neighbour-embedding ( t -SNE) plot on the left, and markers of cell-type identity depict progenitors (SOX2), oRGs (HOPX), IPCs (EOMES), newborn neurons (NEUROD6), maturing neurons (SATB2) and inhibitory interneurons (DLX6-AS1). Single-cell data can be explored at b , Single-cell sequencing of cortical organoid cells generated from four different pluripotent stem cell lines and three protocols with varied levels of directed differentiation generates similar cell types to primary cortex, but the population proportions differ. The proportions of cells for each marker in each sample type are: SOX2 + (primary 15.4%, organoid 41.2%), HOPX + (primary 7.6%, organoid 4.2%), EOMES + (primary 4.1%, organoid 1.5%), NEUROD6 + (primary 51.9%, organoid 20.3%), SATB2 + (primary 32.5%, organoid 2.0%), and DLX6–AS1 + (primary 17.1%, organoid 3.5%). Impaired cell-type fidelity in organoids We identified broad cell types that corresponded to radial glia, intermediate progenitor cells (IPCs), maturing neurons and interneurons in both datasets (Fig. 1 , Supplementary Tables 2 , 3 ). In primary cortical samples, we also found clusters of microglia, oligodendrocyte precursors, mural cells and endothelial cells. Additionally, we identified previously described subtypes of radial glia, as well as a few instances of area- and layer-specific subtypes of excitatory neurons. In the primary samples, there was extensive intermixing within clusters of ages and cortical areas (Extended Data Fig. 4a ). Within our organoids, cell lines, protocols and ages also intermixed, with variation primarily resulting from differences between cell types. Across lines and protocols, the forebrain marker FOXG1 was broadly expressed (Extended Data Figs. 5 b, 7a ), and the cell-type composition was similar across organoids of the same ages, even between lines and protocols, validating differentiation towards forebrain identity. 
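The marker proportions listed in the figure legend above imply the organoid-versus-primary reductions quoted in the text; a quick arithmetic check (values taken from the legend, rounded to whole percentages):

```python
def percent_reduction(primary_pct, organoid_pct):
    """Percentage reduction of a marker-positive fraction in organoids
    relative to primary tissue, rounded to the nearest whole percent."""
    return round(100.0 * (1.0 - organoid_pct / primary_pct))

# HOPX+ oRG cells, EOMES+ IPCs and SATB2+ upper layer neurons
reductions = {
    "HOPX": percent_reduction(7.6, 4.2),    # ~45% fewer oRG cells
    "EOMES": percent_reduction(4.1, 1.5),   # ~63% fewer IPCs
    "SATB2": percent_reduction(32.5, 2.0),  # ~94% fewer upper layer neurons
}
```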
Organoids had 45% fewer cells expressing HOPX, a marker of outer radial glia (oRG), than primary samples, and 63% fewer EOMES + IPCs, as previously noted 3 , 16 . We also found a 94% reduction in the number of SATB2 + upper layer neurons in organoids compared to primary samples (Fig. 1b , Extended Data Fig. 3 ). To quantitatively compare cell types in primary and organoid samples, we performed correlation analysis of marker genes (see Methods ) based upon our clusters in each dataset. We categorized each cluster in terms of its class (neuronal or non-neuronal), cell-cycle state (dividing or postmitotic), type (radial glia, excitatory neuron and so on) and subtype (for example, oRG, layer IV excitatory neuron), and quantified the correlation between organoid and primary cell categories. Neural class and proliferative state were largely preserved, as has been previously reported 3 , 4 , 8 , 13 . However, cell types ( P = 1.02 × 10 −20 ) and subtypes ( P = 5.34 × 10 −38 ) were significantly less well correlated to all organoid-derived cells, regardless of protocol (Fig. 2a ). Our correlative analysis across all published organoid datasets suggested that a number of radial glia or neuronal clusters corresponded equally well to multiple primary cell subtypes and thus were designated as ‘pan-radial glia’ or ‘pan-neuronal’. Lack of subtype resolution resulted in a smaller number of high-quality subtypes in organoids compared to primary samples (see Methods ; Fig. 2a ). We validated our observation of limited subtype specificity between datasets using five additional batch correction methods and observed little overlap between organoid and primary clusters (Extended Data Fig. 8 , Supplementary Table 5 ). Fig. 2: Molecular comparisons of cell subtypes between primary and organoid samples. 
a , Each cluster was classified by marker genes for class, state, type and subtype (primary: n = 5 individuals across independent experiments; organoids: n = 37 organoids from 4 PSC lines across 4 independent experiments). Heat map shows correlation between pairwise combinations of marker genes (red intensity: Pearson’s correlation from −1 to 1). First histogram indicates cell subtypes in primary (orange) and organoid (blue) samples. Second histogram shows quantitative correlation from the best match for each category averaged across clusters (mean + s.d.; subtype versus type, ** P = 0.0073; subtype versus state, **** P = 0.00008; subtype versus class, **** P = 0.003; Welch’s two-sided t -test). b , Marker specificity of primary and organoid cluster markers. Using VariancePartition, the genes defining metadata properties were evaluated for contribution to overall variance. Genes contributing >25% variance by cell type were used in the Venn diagram. Box and whisker (mean + s.d.) depicts level of specificity at class (**** P = 4.4 × 10 −14 ), state (**** P = 4.7 × 10 −18 ), type (**** P = 1.02 × 10 −20 ) and subtype level (**** P = 5.34 × 10 −38 ). Welch’s two-sided t -test; primary, n = 5; organoids, n = 37). Dot plot depicts genes that discriminate between radial glia and neuron identity in primary samples. Each dot is a gene, shown as the average expression in radial glia ( x -axis) and neuron ( y -axis) for primary (orange) or organoid (blue) cells. c , Differential expression (two-tailed Wilcoxon rank sum test) between clusters annotated as oRG cells in primary and organoid datasets generated log 2 (fold change) ( x -axis) and −log 10 (adjusted P ) ( y -axis). Primary, n = 5; organoids, n = 37). A pseudocount of 500 was assigned to comparisons with an adjusted P = 0. Many measurements were significant, including expression of the oRG identity gene PTPRZ1 . 
Week-8 organoids had minimal co-expression of PTPRZ1 and HOPX (top), while at GW15 the outer subventricular zone (oSVZ) contains extensive co-localization (repeated independently 3×). White arrows, double-positive cells; yellow arrows, single-positive cells. Scale bar, 50 μm. d , Differential expression (two-tailed Wilcoxon test) between cell clusters annotated as upper layer neurons. Organoid radial glia lack specificity The differentiation program that generates neurons from radial glia is highly conserved 17 , 18 , 19 , 20 , and we sought to identify genes that strongly discriminate progenitors from neurons. We were surprised to find that primary cell types are defined by more than twice as many genes as organoid cells, and that type-defining genes largely did not overlap between datasets (Fig. 2b , Extended Data Fig. 9 , Supplementary Table 6 ). We used a gene score metric that quantifies the degree of enrichment and specificity for each marker gene in a dataset (Methods), which is initially low in primary cells but increases substantially over development (Extended Data Fig. 7e ). In all cases, organoids exhibited a significantly lower gene score that did not resolve over time (Extended Data Fig. 7e ), suggesting that markers of progenitors and differentiated cells might be co-expressed (Fig. 2b ). We plotted the normalized counts for each gene that discriminated neurons from radial glia in primary samples, finding that neurons had low expression of radial glia markers, and radial glia did not express neuronal markers. However, we found substantial co-expression of these markers in organoid cells, resulting in a lower correspondence between organoid and primary cell types and subtypes. We explored how well organoid radial glia recapitulated their primary cell subtype counterparts at the transcriptomic level by focusing the comparison on oRG cells. 
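The cluster-to-subtype matching underlying these comparisons correlates marker-gene profiles between organoid and primary clusters, labelling clusters that match several subtypes about equally well as 'pan' (for example, 'pan-radial glia' or 'pan-neuronal'); a minimal sketch, in which the `pan_margin` threshold is an illustrative assumption rather than a value from the paper:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def match_cluster(profile, primary_profiles, pan_margin=0.05):
    """Assign an organoid cluster to the best-correlated primary subtype;
    if the top two subtypes correlate almost equally well, call it 'pan'."""
    scores = {k: pearson(profile, v) for k, v in primary_profiles.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    if len(ranked) > 1 and scores[ranked[0]] - scores[ranked[1]] < pan_margin:
        return "pan"
    return ranked[0]
```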
A number of genes were more highly expressed in organoid oRGs than in primary oRGs, and these genes were largely related to glycolysis or endoplasmic reticulum (ER) stress (Supplementary Table 7 ). One of the genes that was most highly upregulated in primary oRG cells, but had very low expression in organoid oRG cells, was PTPRZ1 , a known oRG marker 21 (Fig. 2c ). To validate this finding, we stained primary and organoid samples for PTPRZ1 and HOPX, a canonical oRG marker 22 , 23 , and found that primary tissue contained more HOPX and PTPRZ1 co-expressing cells than did organoids (Fig. 2c , Extended Data Fig. 9e ). We performed similar differential expression analysis between upper layer neuron clusters, and found that two genes required for neuronal maturation and projection pattern specification, MEF2C and SATB2 24 , 25 , were substantially upregulated only in primary cells (Fig. 2d ). Even when cellular subtypes can be assigned to organoids, they lack molecular subtype identifiers. Cell maturation is impaired in organoids The progression of developmental events, such as the birth of neuronal and glial cell types, occurs more rapidly in organoids than in primary tissue, and progenitor and neuronal zones do not expand as broadly as in vivo (Fig. 3a ). A primitive radial glial scaffold is observed at 5 weeks of differentiation in organoids, whereas the oRG scaffold expands predominantly after 15 weeks of development in the primary human cortex. Over the course of 15 weeks of differentiation, the organoid progenitors differentiate, the ‘scaffold’ dissolves and intermixed populations of neurons and glia develop. Using our transcriptomic data, we sought to explore how cellular maturation was affected as a result of the faster temporal development we observed cytoarchitecturally in organoids. 
To explore the maturation of progenitor cells, we used weighted gene co-expression network analysis (WGCNA 26 ) to generate gene modules of strongly correlated genes in primary radial glia. We consolidated networks that correlated with sample age into a pseudoage metric and then correlated pseudoage with actual age, observing a strong positive correlation in primary radial glia (Fig. 3b , Extended Data Fig. 10 , Supplementary Table 8 ). With confidence in our networks, we applied them to the organoid radial glia. We saw limited correlation between organoid pseudoage and actual age, suggesting that the molecular maturation programs that exist in vivo are not activated in organoids. Notably, this heterogeneous maturation level existed within each organoid (Extended Data Fig. 10c ), indicating that there is variability between individual cells and not just across organoids, lines or batches. The lack of a radial glia molecular maturation signature in organoids correlates with the absence of molecular diversity in this model. The effect of radial glia subtype and maturation on the role of these cells as neural precursors is unknown, but the dysregulation of these programs in organoids may affect their ability to completely recapitulate differentiation trajectories of cortical neurons in vivo. Fig. 3: Maturation of cortical lamina and radial glia. a , Immunohistochemistry of SOX2 + progenitors, HOPX + oRG cells, CTIP2 + deep layer neurons and TBR2 + IPCs in primary and organoid samples showing laminar structure during neurogenesis. Primary samples express SOX2 and TBR2 in the ventricular zone and CTIP2 in the cortical plate at GW13. By GW15, HOPX + oRGs are born and reside in the oSVZ. The cortex expands markedly over the following weeks with more HOPX + oRG cells residing in the oSVZ, providing a scaffold on which neurons migrate. Organoids express similar markers to GW13 samples by week five of differentiation with multiple ventricular zone-like structures. 
oRG cells arise and increase between weeks eight and ten. The radial architecture expands and dissolves over this period. By week 15, a mix of cell types is present in the organoid. Organoids shown were differentiated using the ‘least directed’ differentiation protocol and staining was validated independently three times (primary: n = 4 biologically independent samples; organoid: n = 3 biologically independent samples). Scale bar, 50 μm. b , Pseudoage was calculated by identifying networks from a 10,000-cell subset of primary radial glia that were highly correlated with age (either positively or negatively). These networks were then collapsed into a single ‘age network’. The module eigengene for this age network was then calculated on the remaining data and used for pseudoage. Pseudoage is indicated by the graph line and shading represents the geometric density standard error of the regression. The primary dataset (orange) has a high Pearson’s correlation and R 2 value, while the organoid dataset has no correlation to the pseudoage metric. Definition of cortical areal signatures Recent studies have uncovered molecular differences between excitatory neurons across cortical regions 27 , 28 , 29 , and these differences may emerge during neurogenesis 19 . Given that regional specification may represent a central feature of neuronal identity, we investigated how the molecular properties of areal identity emerge. We leveraged primary cell data collected from seven cortical regions. For genes that were uniquely enriched in each region, we calculated a weighted average expression (eigengene) across primary and organoid cells (Supplementary Table 9 ). In primary cells, some signatures, such as those from the PFC, temporal lobe, hippocampus and V1, were highly enriched in their respective areas (Extended Data Fig. 11 ). 
Notably, the parietal lobe tracked closely with the temporal lobe and the somatosensory and motor cortex co-expressed signatures, suggesting a lack of areal segregation between these regions at the time points sampled (Extended Data Fig. 11c ). The earliest samples in our dataset preceded the development of anatomical distinctions between cortical regions, and thus could not be subdissected. The early ‘telencephalon’ samples were highly enriched for V1 signatures, but additional work is required to clarify whether excitatory neurons born early in development all begin by expressing V1 areal genes, or if this was a sampling artefact of our dissections. These data offer a new categorization of cortical area signatures and enable us to evaluate the areal identities of cortical organoid neurons. Areal signatures reflected in organoids Our analysis indicates that many aspects of neuronal subtype are not preserved or are averaged into a pan-neuronal identity in organoids. However, our primary data suggest that areal identity is an early marker of neuronal differentiation. Using areal signatures from primary cells, we were able to evaluate the closest areal identity for each excitatory organoid neuron profiled by scRNA-seq. We were surprised to discover that most neurons corresponded to a defined areal signature (Fig. 4b ) despite the lack of thalamic input, which is thought to refine areal identity 30 , 31 . Although each organoid contained neurons with multiple areal identities, the strength of areal correspondence of organoid excitatory neurons was robust, including to regions such as PFC and V1 (Fig. 4c , Extended Data Fig. 11d ). Regardless of the PSC line or differentiation protocol used, cortical organoids comprised heterogeneous areal identities. To investigate whether cells that corresponded to different areas were spatially segregated within an organoid, we performed immunohistochemistry for two sets of area-specific genes. 
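The areal assignment described above (each cell takes the identity of the area with its highest eigengene score, normalized to the top score observed within each area) can be sketched as follows; the function and variable names are illustrative:

```python
def assign_areal_identity(cell_scores, area_max_scores):
    """Pick the cortical area whose normalized eigengene score is highest
    for this cell. Scores are normalized to each area's top score so that
    areas with different score ranges are comparable."""
    normalized = {area: cell_scores[area] / area_max_scores[area]
                  for area in cell_scores}
    return max(normalized, key=normalized.get)
```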
PFC excitatory neurons co-express the projection-specification transcription factors SATB2 and BCL11B (also known as CTIP2), and through a narrow topographical transition these markers segregate entirely in V1 19 . We explored the expression of these factors in our organoids, and observed both co-expression and segregation of SATB2 and CTIP2 in adjacent cells (Fig. 4d ). AUTS2 and NR2F1 are well-described genes with rostral–caudal gradient expression patterns, and we similarly observed cells expressing either of these factors in proximal space. Together, these data suggest a model in which differentiation of cortical excitatory neurons is strongly defined by areal identity, and organoids recapitulate this process, but without spatial organization. Fig. 4: Analysis of areal identity in organoid excitatory neurons. a , Each of seven cortical areas was used to generate a unique area gene signature by comparing expression with the other six areas. The unique signatures were considered networks, and module eigengenes across area networks were calculated for each primary and organoid cell. The area with the highest normalized eigengene (normalized to the highest score within each area for equal comparison) was designated as the areal identity of that cell. b , Cortical composition for organoids across protocols. Areal identity was assigned for each cell within an organoid and the areal composition is shown for the 37 organoids in our dataset. Organoid samples are listed from earliest to latest stage collected (weeks 3–24). Within a time point, the organoid protocol used is ordered from least to most directed differentiation; each time point is comprised of multiple PSC lines. Every organoid has heterogeneous areal expression. 
c , The average module eigengene score for each primary (orange) and organoid (blue) cell designated (primary) or assigned (organoid) PFC or V1 identity (primary, n = 5 independent samples across 5 experiments; organoid, n = 37 organoids across 4 independent experiments). The average value for PFC was not significantly different between organoid and primary, and the V1 organoid cells had higher correlation to the V1 signature than primary cells, indicating that areal identity in the organoid strongly resembles normal development (box plots: centre line shows mean, box limits show range and whiskers show standard deviation; two-sided Welch’s t-test, P = 0). d , Validation of intermixing of areal identities in organoid samples differentiated using the least directed differentiation protocol. In the PFC, BCL11B and SATB2 co-localize in the same cell, whereas in V1 cells they are mutually exclusive. Both patterns are in close proximity in the organoid. AUTS2 is a rostrally expressed transcription factor whereas NR2F1 is a caudally expressed factor, but they are adjacent in the organoid. Scale bar, 50 μm; representative image shown ( n = 3 replicates each). Source data Full size image Cellular stress increases in organoids Modules related to the activation of glycolysis and ER stress are enriched in organoid cells 4 , and additional analysis using four orthogonal co-clustering methods showed that stress pathways were upregulated in organoids across all protocols (Extended Data Figs. 12 , 13 ). We confirmed that several genes that were upregulated in organoid datasets 4 , including the glycolysis gene PGK1 32 and the ER stress genes ARCN1 33 and GORASP2 34 , 35 , were enriched at the protein level in organoids (Extended Data Fig. 12 ). Immunostaining verified that, regardless of the stage of organoid differentiation or PSC line used, there was increased expression of PGK1, ARCN1 and GORASP2 in distinct organoid domains, not restricted to the organoid core.
To probe the origin of this cellular dysregulation, we first evaluated the expression of stress genes during normal human cortical development. Fixed cryosectioned tissue samples showed little expression of these genes throughout peak neurogenesis (Extended Data Fig. 12c ), though some ER stress was observed at earlier cortical stages (Extended Data Fig. 13b, c ). As ER stress and glycolysis genes are not canonically activated during cortical development, we hypothesized that the in vitro conditions of the organoid model resulted in increased cellular stress. We first evaluated the activation of stress pathways in PSCs and were surprised to find expression of both ARCN1 and GORASP2, suggesting that ER stress occurs in stem cells before organoid formation (Extended Data Fig. 12b ). To determine the rate at which cellular stress arises in vitro, we cultured organotypic cortical slices for one week and observed negligible change in stress activation compared with acutely fixed samples. However, we did observe upregulation of ARCN1 and GORASP2 in primary dissociated cells after one week in culture (Extended Data Figs. 12 , 13 ), suggesting that cellular stress may be a broader feature of in vitro culture. Organoid environment activates stress To test whether aggregate cell-culture conditions induced cellular stress, we transplanted GFP-labelled primary progenitors from GW14–20 into organoids. After 2.5 weeks we observed GFP-labelled SOX2+HOPX+ primary radial glia within the organoids (Fig. 5b , Extended Data Fig. 14 ). We isolated GFP+ primary cells and performed scRNA-seq to compare pre- and post-transplantation cells to our primary reference dataset (Supplementary Table 10 ). We found a marked increase in the expression of the glycolysis gene PGK1 and the ER stress gene GORASP2 in primary cells transplanted into, or generated within, organoids (Fig. 5c , Supplementary Table 11 ). Fig. 5: Influence of culture on metabolic stress and cell type.
a , Primary samples were progenitor-enriched, GFP-labelled and transplanted into organoids. After 2.5 weeks, GFP+ cells were isolated by fluorescence-activated cell sorting (FACS) and processed for scRNA-seq. IHC, immunohistochemistry. b , Primary GFP+ cells integrate into organoids differentiated using the least directed protocol. Scale bar, 50 μm. Pre-transplantation cells have similar profiles and cellular subtypes as primary data. After transplantation, there is a decrease in subtype correlation ( n = 7 biologically independent samples across 2 independent experiments). Mean ± s.d. subtype correlation indicated on graph ( P = 2.8 × 10−9, two-sided Welch’s t-test). c , After transplantation, primary cells show increased expression of the stress genes PGK1 (**** P = 1.76 × 10−87, two-sided Student’s t-test) and GORASP2 (arrow; **** P = 9.60 × 10−63, two-sided Student’s t-test) as indicated by width of coloured domain in each respective violin plot ( n = 7 samples across 2 experiments). Scale bar, 50 μm. d , Organoids were dissociated, GFP-labelled and injected into the cortex of P4 mice. After 2–5 weeks, mouse brains were collected for scRNA-seq and immunostaining. e , Human cells are visualized by GFP and human nuclear antigen expression ( n = 13 mice transplanted with 14 organoids derived from 2 induced PSC lines across 2 independent experiments). Organoid cells express markers of progenitors (HOPX), neurons (CTIP2 and SATB2) and astrocytes (GFAP). Mouse vascular cells (laminin and CD31) innervate the transplant. f , Post-transplantation, organoid cells show reduced expression of stress genes. Scatter plots show decreased staining intensity in transplanted organoids (error bars, s.d.) of PGK1 (**** P = 9.64 × 10−7), ARCN1 (**** P = 1.28 × 10−5) and GORASP2 (**** P = 1.88 × 10−6; n = 19 sections from 6 transplanted mice across 2 experiments, each marker stained independently; two-sided Student’s t-test).
Violin plots show a decrease in PGK1 (**** P = 2.16 × 10−17) and GORASP2 (** P = 0.0019) expression in organoid cells post-transplant from single-cell analysis ( n = 1,980 cells from 7 transplanted mice across 2 experiments, two-sided Welch’s t-test). Source data Full size image Increased stress impairs cell subtype Mouse knockout studies have suggested that the activation of ER stress pathways can inhibit cell-type specification 36 , 37 , so we investigated whether metabolic stress affected specification in transplanted primary cells. We noted similar subtypes when we compared pre-transplantation cell clusters and our primary reference data. By contrast, post-transplanted primary cells had significantly lower subtype correlation, similar to organoid cells, and they lacked markers of specific progenitor or neuronal subtypes (Fig. 5b , Extended Data Fig. 14e ). To test whether the differences were driven by the induction of stress pathways, we also generated 3D aggregates of dissociated primary cortical cells from GW14/15 and found a similar upregulation of metabolic stress genes (Extended Data Fig. 13e ). However, we observed an intermediate phenotype in which subtype correlation was significantly ( P = 0.0037) higher in primary aggregates than in post-transplanted primary cells, but still significantly lower than in pre-transplant cells (Extended Data Fig. 14e ). Some of these discrepancies might be attributable to the presence of other cell types, including microglia, endothelial cells and pericytes in the primary aggregate, that may promote normal maturation and differentiation. Transplantation rescues cell stress To determine whether an in vivo environment could rescue the cellular stress derived from organoid culture conditions, week-8 organoids were dissociated, virally labelled with GFP and transplanted into the cortices of postnatal day four (P4) mice (Fig. 5d ).
Two or five weeks after transplantation, organoid-derived cells could be visualized incorporated into the mouse cortex (Fig. 5e , Extended Data Fig. 14f ). After five weeks, organoid-derived cells had intricate morphologies and showed reduced expression of cellular stress markers. The glycolysis gene PGK1 and the ER stress gene ARCN1 were not expressed, and the ER stress gene GORASP2 showed reduced expression compared to normal organoid conditions (Fig. 5f , Extended Data Fig. 14g ). As organoid cells showed reduced stress after transplantation, we evaluated whether organoid-derived cells were capable of higher subtype specificity when removed from the in vitro environment. We isolated GFP + organoid cells two or five weeks after transplantation for scRNA-seq and compared pre- and post-transplantation organoid-derived cells to our primary reference. We noted increased cell subtype specification of both oRG cells and newborn neurons (Extended Data Fig. 14h ), suggesting that metabolic stress contributes to specification deficiencies in organoid cells ( Supplementary Discussion ). Conclusions We have provided a comprehensive molecular characterization of developing human cortical cell types and their preservation in brain organoid models. Using single-cell transcriptomics, we have identified broad cell classes and types, as well as fine-grained subtypes such as outer radial glia progenitors in primary human samples. Compared to primary tissue, organoids contain a smaller number of cell subtypes and their cells often co-express marker genes, resulting in broad type assignment, such as pan-radial glia or pan-neuron. We have used this dataset to generate pseudoage metrics and provide in-depth analysis of area-specific gene signatures and their developmental trajectories in primary and organoid neurons. 
Finally, we have identified a role for stress pathway activation in the impaired subtype specification of cortical organoid cell types; the lack of specificity in organoids must be carefully considered when studying developmental processes, cell-type-specific disease phenotypes or cellular connectivity. In addition, metabolic stress in utero could lead to molecular identity changes, with potential consequences for human brain development. Overall, our compilation of raw and analysed data, paired with visualization of single-cell clustering in a cell browser, provides a valuable resource for better understanding of normal human development and to benchmark the fidelity of in vitro cellular data. Methods No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. PSC expansion culture The human induced PSC lines H28126 (Gilad Laboratory, University of Chicago), 13234 and WTC10 (Conklin Laboratory, Gladstone Institutes), which were previously authenticated 4 , and the embryonic stem cell line H1 (WiCell, authenticated at source), were expanded on matrigel-coated six-well plates. Cells tested negative for mycoplasma. Stem cells were thawed in StemFlex Pro Medium (Gibco) containing 10 μM Rock inhibitor Y-27632. Medium was changed every other day and lines were passaged when colonies reached about 70% confluency. Stem cells were passaged using PBS-EDTA and residual cells were manually lifted with cell lifters (Fisher). All lines used for this study were between passage 25 and 40. Cortical organoid differentiation protocols Cortical organoids were differentiated using three directed differentiation protocols. In brief, PSC lines were expanded and dissociated to single cells using Accutase. After dissociation, cells were reconstituted in neural induction medium at 10,000 cells per well in a 96-well v-bottom low-adhesion plate. 
After 18 days, organoids from all protocols were transferred from 96-well to 6-well low-adhesion plates and moved onto an orbital shaker rotating at 90 rpm. Throughout the culture duration organoids were fed every other day. Organoids were collected for immunohistochemistry and scRNA-seq after 3, 5, 8 or 10 weeks of culture. For the least directed differentiation protocol 1 , GMEM-based induction medium included 20% knockout serum replacer (KSR), 1× non-essential amino acids, 0.11 mg/ml sodium pyruvate, 1× penicillin–streptomycin, 0.1 mM β-mercaptoethanol, 5 μM SB431542 and 3 μM IWR1-endo. Medium was supplemented with 20 μM Rock inhibitor Y-27632 for the first 6 days. After 18 days, the medium was changed to DMEM/F12 medium containing 1× glutamax, 1× N2, 1× CD lipid concentrate and 1× penicillin–streptomycin. After 35 days, organoids were moved into DMEM/F12-based medium containing 10% FBS, 5 μg/ml heparin, 1× N2, 1× CD lipid concentrate and 0.5% matrigel (BD). After 70 days, the medium was additionally supplemented with 1× B27 and the matrigel concentration was increased to 1%. In the directed differentiation protocol 4 , 8 , the induction medium consisted of GMEM including 20% KSR, 1× non-essential amino acids, 0.11 mg/ml sodium pyruvate, 1× penicillin–streptomycin and 0.1 mM β-mercaptoethanol supplemented with 5 μM SB431542, 3 μM IWR1-endo and 2 μM dorsomorphin. From days 9 to 25 small molecules were removed and the induction medium was instead supplemented with 10 ng/ml EGF and 10 ng/ml FGF. After 25 days the medium was changed to DMEM/F12 medium containing 1× glutamax, 1× N2, 1× CD lipid concentrate and 1× penicillin–streptomycin. After 35 days, organoids were moved into DMEM/F12-based medium containing 10% FBS, 5 μg/ml heparin, 1× N2, 1× CD lipid concentrate and 0.5% matrigel (BD). After 70 days, the medium was additionally supplemented with 1× B27 and the matrigel concentration was increased to 1%. 
The most directed 12 protocol used a DMEM/F12-based induction medium containing 15% KSR, 1× MEM-NEAA, 1× glutamax, 100 μM B-ME, 100 nM LDN-193189, 10 μM SB431542, and 2 μM XAV939. For the first 2 days, the medium was supplemented with 50 μM Rock inhibitor Y-27632 and 5% heat-inactivated FBS. After 10 days, organoids were moved into neuronal differentiation medium consisting of equal parts DMEM/F12 and neurobasal medium containing 0.5% N2, 1% B27 without vitamin A, 1% glutamax, 0.5% MEM-NEAA, 0.025% human insulin solution, 50 μM B-ME and 1% penicillin–streptomycin. After 18 days, organoids were maintained in maturation medium containing equal parts DMEM/F12 and neurobasal medium with 0.5% N2, 1% B27, 1% glutamax, 0.5% NEAA, 0.025% human insulin solution, 50 μM B-ME, 20 ng/ml BDNF, 200 μM cAMP and 200 μM ascorbic acid. Immunohistochemistry Cortical organoids and primary human cortical tissue samples were collected, fixed in 4% PFA, washed with 1× PBS and submerged in 30% sucrose in 1× PBS until saturated. Samples were embedded in cryomolds containing 50% OCT (Tissue-tek) and 50% of 30% sucrose in 1× PBS and frozen at −80 °C. Primary samples were sectioned at 20 μm and organoids at 16 μm onto glass slides. Antigen retrieval was performed on tissue sections using a citrate-based antigen retrieval solution at 100× (Vector Labs), which was heated to 95 °C and added to slides for 20 min. After antigen retrieval, slides were briefly washed with PBS and blocked with PBS containing 5% donkey serum, 2% gelatin and 0.1% Triton for 30 min. Primary antibodies were incubated in blocking buffer on slides at 4 °C overnight, washed with PBS containing 0.1% Triton three times and then incubated with AlexaFluor secondary antibodies (Thermo Fisher) at room temperature for 2 h.
Primary antibodies included mouse: SOX2 (Santa Cruz, 1:500, sc-365823), HOPX (Santa Cruz, 1:250, sc-398703), SATB2 (Abcam, 1:250, ab51502), AUTS2 (Abcam, 1:100, ab243036), human nuclei (Millipore, 1:500, MAB1281); rabbit: HOPX (Proteintech, 1:500, 11419-1-AP), GORASP2 (Proteintech, 1:50, 10598-1-AP), ARCN1 (Proteintech, 1:50, 23843-1-AP), PGK1 (Thermo Fisher 1:50, PA5-13863), PTPRZ1 (Atlas, 1:250, HPA015103), NR2F1 (Novus, 1:100, NBP1-31259); rat: CTIP2 (Abcam, 1:500, ab18465); sheep: EOMES (R&D, 1:200, AF6166); guinea pig: NEUN (Millipore, 1:500, ABN90); and chicken: GFP (Aves, 1:500, GFP-1020). Primary sample collection All primary tissue was obtained and processed as approved by the UCSF Human Gamete, Embryo and Stem Cell Research Committee (GESCR, approval 10-05113). All experiments were performed in accordance with protocol guidelines. Informed consent was obtained before sample collection for the use of all tissue samples within this study. First and second trimester human cortex tissue was collected from elective pregnancy termination specimens from San Francisco General Hospital and the Human Developmental Biology Resource (HDBR). Tissue was collected only with previous patient consent for research and in strict observation of legal and institutional ethical regulations. Dissociation Primary human cortical samples were dissociated using papain (Worthington) containing DNase. Samples were grossly chopped and then placed in 1 ml papain and incubated at 37 °C for 15 min. Samples were inverted three times and incubation continued for another 15 min. Next, samples were triturated by manually pipetting with a glass pasteur pipette approximately ten times. Dissociated cells were spun down at 300 g for 5 min and papain removed. 10× capture and sequencing Single-cell capture from live cells was performed following the 10× v2 Chromium manufacturer’s instructions for both primary and organoid samples. For primary samples, each sample was its own batch. 
For organoid samples, batch is indicated in the metadata annotation in Supplementary Table 1 . In each case, 10,000 cells were targeted for capture, and 12 cycles of amplification were performed for each of the cDNA and library amplification steps. Libraries were sequenced on a NovaSeq S2 flow cell as per the manufacturer’s recommendations. Clustering We first explored the cell-type identities of primary and organoid samples using Louvain-Jaccard clustering 19 , 38 . Prior to clustering, batch correction was performed in a similar way to previous approaches 39 . In brief, each set of cells within a batch was normalized to the highest expressing gene, making the range of expression from 0 to 1. These values were multiplied by the average number of counts within the batch. These normalized datasets were piped into Seurat v.2 40 , in which cells with fewer than 500 genes per cell or more than 10% of reads aligning to mitochondrial genes were discarded. Normalized counts matrices were log2-transformed, and variable genes were calculated using default Seurat parameters. Data were scaled in the space of these variables, and the batch was regressed out. Principal component analysis was performed using FastPCA, and significant principal components were identified using a published formula 38 . In the space of these significant principal components, the k = 10 nearest neighbours were identified as per the RANN R package. The distances between these neighbours were weighted by their Jaccard distance, and Louvain clustering was performed using the igraph R package. If any clusters contained only one cell, the process was repeated with k = 11 and upwards until no clusters contained only one cell. Cluster markers and t-SNE plots were generated with Seurat package default parameters.
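The per-batch scaling step described above can be illustrated with a minimal sketch; this assumes that ‘normalized to the highest expressing gene’ means dividing by the batch-wide maximum value, and uses plain Python lists in place of the sparse genes-by-cells matrices handled by the actual R pipeline.

```python
# Minimal sketch (assumed interpretation, not the published pipeline) of the
# per-batch scaling described above: expression in each batch is divided by
# the batch's highest value, giving a 0-1 range, then multiplied by the
# batch's average total counts per cell. Short lists stand in for the real
# sparse count matrices.

def scale_batch(counts):
    """counts: list of per-cell count vectors from one batch."""
    peak = max(value for cell in counts for value in cell)           # batch-wide maximum
    avg_total = sum(sum(cell) for cell in counts) / len(counts)      # mean counts per cell
    return [[value / peak * avg_total for value in cell] for cell in counts]

batch = [[10, 0, 5], [2, 8, 0]]      # two hypothetical cells, three genes
scaled = scale_batch(batch)          # values now on a comparable scale across batches
```

Applying the same transformation independently to every batch puts all batches on a comparable scale before the Seurat regression step removes residual batch effects.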
Cell-type annotations Primary cell-type annotations of clusters were performed by comparison to previously annotated cell types, and when a repository of substantially matching cell types was not available, a combination of literature-based annotation of layer or maturation stage identity was used. The genes used to annotate each cluster are highlighted in Supplementary Table 2 . When a cluster was substantially enriched based upon an age or an areal metadata property, this empirical observation was used to inform the annotation. Organoid cell types were first annotated by their similarity to primary cell clusters; if the correspondence was at or above 0.4 and only one primary cell type had such a high correspondence, the primary cell type was applied to the organoid cluster. If the correspondence was between 0.2 and 0.4 and included only one similarity, that cell type was used to identify the organoid cell type unless there was an obvious discrepancy in top marker gene expression between the two clusters. If no correlation was above 0.2, literature annotations or unknown identities were assigned. If an organoid cluster correlated equally well (within 10%) with multiple primary subtypes of the same or similar cell type, ‘pan’ identity was assigned. Low-quality cell types for all analyses were assigned when markers were dominated (>60%) by mitochondrial genes, ribosomal genes, or pseudogenes. Occasionally, an intersection of these approaches was used for organoid clusters, as indicated in Supplementary Table 3 . Correlation analysis Correlation analysis was performed in the space of marker genes. For each marker gene, a specificity score was calculated. This score combined the ‘enrichment’ (log2(fold change) of the marker compared to other clusters) and the ‘specificity’ (the percentage of the relevant cluster expressing the marker divided by the percentage of other clusters expressing the marker).
These two values were multiplied by one another to obtain the final score, and this was represented across all marker genes for each sample in box-and-whisker plots. A matrix of all markers across all clusters was created for each individual dataset; if a marker was not expressed at all in a certain cluster, it was marked as 0. If a value was divided by 0 to calculate the score, the score was placed as a dummy score at 1,500. Matrices between comparisons were correlated in the overlapping marker gene space using Pearson’s correlations. Co-clustering analysis Each of the five batch correction methods was performed with default parameters. For each analysis, the same 20,000-cell subset of organoid and primary cells was used because most of the algorithms were too computationally intensive to perform on the full dataset. Linear mixed models VariancePartition 41 was used for linear mixed model analysis. Analysis was performed in a randomized subset of 50,000 cells in the space of expressed genes across the metadata properties noted in Extended Data Fig. 9 . Age was used as a continuous variable and all other variables were assigned as discrete. WGCNA and maturation analysis WGCNA networks were calculated as previously described 19 in 10,000 randomly chosen primary radial glia cells and in parallel from 10,000 randomly chosen organoid radial glia cells. These networks were applied to the remaining primary and organoid cells using the ModuleEigengene function from the WGCNA R package. Pseudoage was calculated by taking networks that correlated highly to age in the 10,000-cell subset and combining their genes into a single gene set. Principal component analysis was performed in this gene space in the full space of radial glia and the loading of the first principal component dictated the pseudoage. This analysis was performed reciprocally.
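The marker specificity score defined under ‘Correlation analysis’ above can be sketched as follows; the inputs are hypothetical summary values rather than real expression data, and the function names are illustrative only.

```python
# Sketch of the marker specificity score described above: the log2 fold
# change of a marker versus other clusters (enrichment) multiplied by the
# ratio of the percentages of cells expressing it inside versus outside the
# cluster (specificity). Inputs are hypothetical summary numbers.
import math

DUMMY_SCORE = 1500  # placeholder used when a division by zero would occur

def marker_score(mean_in, mean_out, pct_in, pct_out):
    """Specificity score for one marker gene in one cluster."""
    if mean_out == 0 or pct_out == 0:
        return DUMMY_SCORE
    enrichment = math.log2(mean_in / mean_out)
    specificity = pct_in / pct_out
    return enrichment * specificity

# A marker 4-fold enriched and expressed in 80% of the cluster vs 20% elsewhere
score = marker_score(mean_in=4.0, mean_out=1.0, pct_in=80.0, pct_out=20.0)  # 2.0 * 4.0 = 8.0
```

Vectors of such scores, one per marker gene, are what the Pearson correlations between datasets are computed over.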
Area signatures Area signatures were obtained by performing pairwise differential expression between each of the seven cortical areas and the six remaining areas. Differential expression across all of the areas was combined, with a count of how many times a gene was differentially expressed in an area from each of the pairwise comparisons. While combining the lists, the enrichment and specificity were averaged across all six analyses and multiplied by the number of times the gene appeared as a marker for an area of interest. This value, the ‘area specificity score’, was compared across all areas. For any genes that were considered markers of multiple areas, the area with the highest area specificity score was allocated to the gene as a marker, thus making all area markers unique to one area alone. This explains why some areas have a higher percentage of cells assigned to an area other than their area of origin, and it enables cleaner comparison of areal pattern emergence. Each set of area marker genes was designated as a network, and the correlation of each cell to an area was calculated using applyModules to compute a module eigengene. After assignment, to normalize unequal module eigengene distributions, within a dataset the module eigengenes were normalized by area and the assigned area for a cell was the area for which that cell had the highest module eigengene. PSA-NCAM protocol and viral infection of primary cells Primary cortical samples were grossly dissected to isolate the ventricular and subventricular zones excluding cortical plate neurons. Dissociated cells were enriched for neural progenitor cells using a PSA-NCAM antibody and the MACs magnetic sorting kit (Miltenyi Biotech). In brief, dissociated cells were incubated with the PSA-NCAM antibody for 30 min at room temperature, washed with PBS containing 0.1% BSA and added to an equilibrated magnetic L column. Cells positive for antibodies bind to the column and negative cells are collected in the eluate.
The negatively sorted cells were pelleted at 300 g for 10 min and the supernatant removed. Negatively sorted samples were infected with a CMV::GFP adenovirus (Vector Biolabs), which preferentially labels progenitors, at 37 °C for 15 min. Cells were spun down for 5 min at 300 g and reconstituted in 500 μl medium. Cells were counted and 50,000 primary cells isolated for transplantation in 100 μl medium. Primary cell transplantation into organoids Week-10 or -15 organoids made from the 13234 induced PSC line using the least directed differentiation protocol were placed at the air–liquid interface on Millicell (Millipore) inserts to limit movement. Fifty thousand primary cortical cells were reconstituted in medium containing DMEM/F12, 5% FBS, 1× N2 (Thermo Fisher), 1× B27 (Thermo Fisher), 1× penicillin–streptomycin (Thermo Fisher) and CD lipid concentrate (Thermo Fisher). Cells were slowly pipetted on top of the semi-dry organoids and left to integrate into the organoids for 30 min at 37 °C. Afterwards, organoids were gently lifted off the inserts by increasing medium volume by 3 ml. After 2 days, organoids were transferred to a new 6-well plate without inserts and the medium supplemented with 1% GF-reduced matrigel and 1× amphotericin B. FACS purification of GFP+ cells Cells were dissociated into a single-cell suspension as described above. Cell suspensions were triturated and placed on top of 4 ml 22% Percoll. Tubes containing Percoll and cell suspension were spun at 500 g for 10 min without braking. The supernatant was discarded, and the cell pellet resuspended in HBSS with BSA and glucose. Cells were sorted using a Becton Dickinson FACSAria using 13 psi pressure and a 100-μm nozzle aperture. All FACS gates were set using unlabelled cells. Data were analysed post hoc for enrichment percentages with FlowJo software.
Maintenance of transplanted organoids Organoids were maintained in 6-well low-adhesion plates in DMEM/F12 with glutamax (Thermo) medium containing 10% FBS (Hyclone), 1% GF-reduced matrigel (Corning), 1× N2 (Thermo Fisher), 1× B27 (Thermo Fisher), 1× CD lipid concentrate (Thermo Fisher), 5 μg/ml heparin, 1× penicillin–streptomycin, and 1× amphotericin B (Gibco). The medium was changed every other day for the duration of culture. Transplants were collected for immunohistochemistry at weeks 1, 2.5, 4 and 6 post-transplant. At week 2.5, paired organoid samples were FACS-sorted for GFP+ cells and captured cells were collected for single-cell RNA sequencing. Organoid cell transplantation into mice Mouse experiments were approved by UCSF Institutional Animal Care and Use Committee (IACUC) protocol AN178775-01 and performed in accordance with relevant institutional guidelines. Organoids from the H28126 and 13234 induced PSC lines were differentiated using the least directed protocol described above. Week-7/8 organoids were dissociated and labelled with a CMV::GFP adenovirus for 30 min at 37 °C. Cells were pelleted and immediately transplanted into postnatal NSG mice (NOD.Cg-Prkdc scid Il2rg tm1Wjl /SzJ, stock no. 005557) at four days of age (P4). Using a stereotaxic rig, either the PFC or V1 of the left hemisphere was localized and 10,000 cells were transplanted into the cortex at each injection site. Two injections of 10,000 cells each, 0.5 mm apart, were made per mouse within the same cortical area. Mice were allowed to develop for either 2 or 5 weeks post-transplantation before being euthanized. A total of 13 mice were used (7 male, 6 female), and no statistical method was used to determine this sample size. After euthanasia, the brain was extracted and grossly dissected using GFP expression to visualize relevant areas for collection.
Tissue with GFP expression was dissociated for 30 min at 37 °C using papain, manually triturated, centrifuged and resuspended in HBSS. Cells were sorted for GFP using the FACS strategy described previously. After FACS isolation, GFP + cells were used for scRNA-seq using the 10× V2 platform. Mice from the same experiment were collected in parallel and perfused with 10 ml PBS followed by 10 ml 4% PFA before the brains were extracted. Mouse brains were further fixed overnight at 4 °C, washed in 1× PBS three times for 30 min each, and rocked overnight at 4 °C in 30% sucrose in PBS. Brains were embedded in 50/50 mix of 30% sucrose and OCT before being cryosectioned. Mouse studies were not randomized or blinded prior to analysis. Organotypic slice culture Primary cortical tissue was maintained in CO 2 -bubbled artificial cerebral spinal fluid until being embedded in a 3% low melt agarose gel. Embedded tissue was live-sectioned at 300 μm using a vibratome (Leica) and plated on Millicell (Millipore) inserts in a 6-well tissue culture plate. Slices were cultured at the air–liquid interface in medium containing 32% Hanks BSS, 60% BME, 5% FBS, 1% glucose, 1% N2 and 1% penicillin–streptomycin–glutamine. Slices were maintained for 7 days in culture at 37 °C and the medium was changed every third day. Primary cortical aggregates Primary human cortical samples from gestational weeks 14 and 15 were gross-dissected at the outer subventricular zone, removing the cortical plate. Samples were dissociated using papain as described previously. Samples were aggregated in 96 v-bottom low-adhesion plates (S bio) containing 20,000 cells per well. They were aggregated in DMEM/F12 with Glutamax (Thermo) based medium containing 10% FBS (Hyclone), 1× N2 (Thermo Fisher), 1× CD lipid concentrate (Thermo Fisher), 5 μg/ml Heparin, 1× penicillin–streptomycin and 1× amphotericin B (Gibco). Medium was supplemented with 20 μM Rock inhibitor for the first week. 
After 2 weeks, aggregates were transferred to 6-well low-adhesion plates and medium was supplemented with 1% matrigel and 1× B27. Dissociated primary cell culture Dissociated primary cortical cells were reconstituted in DMEM/F12-based medium containing 1× N2 (Thermo Fisher), 1× B27 (Thermo Fisher), 1× penicillin–streptomycin (Thermo Fisher) and 1× sodium pyruvate (Thermo Fisher). Cells were plated at one million cells per ml in 12-well matrigel-coated tissue culture plates. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Single-cell RNA sequencing data have been deposited in dbGaP under the accession 'A cellular resolution census of the developing human brain' and in GSE132672. An interactive browser of single-cell data and raw and processed count matrices can be found at the UCSC cell browser website. Source Data for Figs. 1–5 and Extended Data Figs. 1–14 are available online. Remaining source data can be retrieved directly from the single-cell data available in public repositories or from the UCSC cell browser website. | Brain organoids—three-dimensional balls of brain-like tissue grown in the lab, often from human stem cells—have been touted for their potential to let scientists study the formation of the brain's complex circuitry in controlled laboratory conditions. The discussion surrounding brain organoids has been effusive, with some scientists suggesting they will make it possible to rapidly develop treatments for devastating brain diseases and others warning that organoids may soon attain some form of consciousness. But a new UC San Francisco study offers a more restrained perspective by showing that widely used organoid models fail to replicate even basic features of brain development and organization, much less the complex circuitry needed to model complex brain diseases or normal cognition.
"Some people have branded organoids as 'brains in a dish' but our data suggest this is a huge exaggeration at this point," said Arnold Kriegstein, MD, Ph.D., a professor of neurology in the UCSF Weill Institute for Neurosciences, John G. Bowes Distinguished Professor in Stem Cell and Tissue Biology, and director of the UCSF Eli and Edythe Broad Center for Regeneration Medicine and Stem Cell Research, whose lab has been a leader in the development of cerebral organoid models. "We find that organoids do not develop the distinctive cell subtypes or regional circuit organization that characterize normal human brain circuits. Since most human brain diseases are highly specific to particular cell types and circuits in the brain, this presents a grave challenge to efforts to use organoids to accurately model these complex conditions." The new study, published January 29, 2020 in Nature, arose out of the lab's ongoing efforts to comprehensively map the gene expression programs that orchestrate brain development based on samples of normal human brain tissue, a project led by Kriegstein lab postdoctoral researcher Aparna Bhaduri, Ph.D. The lab aims to make a genetic atlas of human brain development available as a valuable resource for comparing normal brain development to what goes awry in developmental brain diseases such as autism. However, when another postdoctoral researcher, Madeline Andrews, Ph.D., began comparing Bhaduri's data from the developing brain to the lab's organoid models, she quickly discovered that the exquisitely organized developmental programs seen in normal brain tissue were significantly disrupted in the lab's organoids.
Cerebral Organoids Fail to Develop Crucial Cell Types, Organization The researchers measured gene expression in more than 235,000 individual cells extracted from 37 different organoids (themselves generated using three different laboratory protocols and four different starting stem cell lines) and compared these gene expression patterns to what they saw in about 189,000 brain cells from a range of brain areas and developmental timepoints in normally developing human brains. This analysis revealed that instead of differentiating normally into the brain's distinctive cell types, organoid cells appeared to experience an identity crisis: expressing a mixed bag of genes normally found in very different kinds of cells. At first, the organoids developed structured "rosettes" of cells that resemble some features of the developing brain, but these quickly dissolved into a hodge-podge of intermingled cells. "We were able to identify the major broad categories of cell types, but the normal diversity of subtypes—which play a key role in the proper function of neural circuits—was lacking," Kriegstein said. To make sure these results extended to other common ways of making organoids used outside the Kriegstein lab, the researchers compared single-cell gene expression data from eight different organoid protocols published in the scientific literature (for a total of more than 276,000 individual cells) to their atlas of normal gene expression in the developing brain. In every case the published organoids showed the same lack of appropriate development into distinctive cell types as the lab had seen in their own models. "The brain's ability to wire together different cell types into highly structured and regionally distinctive circuits is central not only to normal brain function and cognition, but it is also these highly specific circuits that go awry in different ways in brain diseases such as autism, schizophrenia, and other psychiatric and neurological disorders," Andrews said.
"Before we can use organoids to study these diseases and search for potential cures, we need to ensure they are actually modeling the brain circuits that are affected," Bhaduri added. Reducing Cellular Stress Could Improve Organoid Models In addition to their confused developmental programming, the brain organoid models all expressed abnormally high levels of cellular "stress" genes, which control cells' response to harmful environmental conditions such as lack of oxygen. The researchers hypothesized that this heightened cellular stress could be caused by methods used when the organoids were being grown in the lab, and might be preventing the organoids from developing proper neural cell types and regional organization. To test this hypothesis, the researchers took cells from developing organoids and implanted them into the brains of mice to eliminate stressors caused by being grown in a lab dish. In this more natural context, the organoids' cellular stress levels quickly fell to normal levels, and normal developmental programs began to reassert themselves. Conversely, when the researchers took early developing neural tissue and tried to grow it with their laboratory organoids, stress genes became more activated, and the young neurons developed the same kind of identity crisis seen in the lab's organoids. These results suggest that neuroscientists' ambitions to model complex brain organization in organoids will require a significant rethinking of how organoids are grown in the lab to try to reduce levels of cellular stress. "Different groups have optimized how they culture organoids in lots of different ways, so the fact that we see these issues across organoids from different laboratories suggests it's probably going to take a pretty big overhaul to improve how organoids turn out," Andrews said. 
"That's not going to be an easy task, but I'm hopeful that these results and Aparna's unique dataset of the genetic programs in the normally developing brain will point the field in the right direction." The authors emphasize that organoids can still be a useful tool in the many kinds of research that do not require accurately modeling specific brain circuits or their dysfunction, such as a recent paper by Bhaduri and colleagues that used organoids as a way to study the aggressive spread of glioblastoma brain cancer in a lab dish. "But these results are pretty clear that organoids are far from reproducing a real developing brain in the lab," Bhaduri said. | 10.1038/s41586-020-1962-0 |
Chemistry | Fluorescence-activating beta-barrel protein made from scratch for first time | Jiayi Dou et al, De novo design of a fluorescence-activating β-barrel, Nature (2018). DOI: 10.1038/s41586-018-0509-0 Journal information: Nature | http://dx.doi.org/10.1038/s41586-018-0509-0 | https://phys.org/news/2018-09-fluorescence-activating-beta-barrel-protein.html | Abstract The regular arrangements of β-strands around a central axis in β-barrels and of α-helices in coiled coils contrast with the irregular tertiary structures of most globular proteins, and have fascinated structural biologists since they were first discovered. Simple parametric models have been used to design a wide range of α-helical coiled-coil structures, but to date there has been no success with β-barrels. Here we show that accurate de novo design of β-barrels requires considerable symmetry-breaking to achieve continuous hydrogen-bond connectivity and eliminate backbone strain. We then build ensembles of β-barrel backbone models with cavity shapes that match the fluorogenic compound DFHBI, and use a hierarchical grid-based search method to simultaneously optimize the rigid-body placement of DFHBI in these cavities and the identities of the surrounding amino acids to achieve high shape and chemical complementarity. The designs have high structural accuracy and bind and fluorescently activate DFHBI in vitro and in Escherichia coli , yeast and mammalian cells. This de novo design of small-molecule binding activity, using backbones custom-built to bind the ligand, should enable the design of increasingly sophisticated ligand-binding proteins, sensors and catalysts that are not limited by the backbone geometries available in known protein structures. Main There have been considerable recent advances in designing protein folds from scratch 1 , 2 , as well as redesigning already existing native scaffolds to bind small molecules 3 , 4 , 5 , but two outstanding unsolved challenges remain. 
The first is the de novo design of all-β proteins, which is complicated by the tendency of β-strands and sheets to associate intermolecularly to form amyloid-like structures if their register is not perfectly controlled 6 . The second is the design of proteins customized to bind small molecules of interest, which requires precise control over backbone and side chain geometry 5 , as well as the balancing of the often opposing requirements of protein folding and function 7 . Success in developing such methods would reduce the longstanding dependency on natural proteins by enabling protein engineers to craft new proteins optimized to bind chosen small-molecule targets, and lay a foundation for de novo design of proteins customized to catalyse specific chemical reactions. Principles for designing β-barrels β-barrels are single β-sheets that twist to form a closed structure in which the first strand is hydrogen-bonded to the last 8 . Anti-parallel β-barrels are excellent scaffolds for ligand binding, as the base of the barrel can accommodate a hydrophobic core to provide overall stability, and the top of the barrel can provide a recessed cavity for ligand binding 9 , often flanked by loops that can contribute further binding affinity and selectivity 10 . However, all β-sheet topologies are notoriously difficult to design from scratch—to our knowledge, there has been no reported success to date—although several descriptive parametric models of β-barrels have been proposed 11 , 12 , 13 . We first set out to address this challenge by parametrically generating regular arrangements of eight anti-parallel β-strands using the equations for an elliptic hyperboloid of revolution (adapted from previously published work 14 , Fig. 1a ). β-barrels are characterized by their shear number ( S )—the total shift in strand registry between the first and last strands—which determines the hydrophobic packing arrangement and the diameter of the barrel 15 , 16 ( Supplementary Methods ). 
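The dependence of barrel diameter and strand tilt on the strand number n and shear number S can be sketched with the standard cylindrical approximation for β-barrel geometry. This is a textbook relation, not the authors' hyperboloid code, and the Cα spacing a ≈ 3.3 Å and inter-strand distance b ≈ 4.4 Å are generic assumed values:

```python
import math

def barrel_geometry(n, S, a=3.3, b=4.4):
    """Radius (Å) and strand tilt (degrees) of an idealized β-barrel with
    n strands and shear number S, in the cylindrical approximation.
    a = Calpha spacing along a strand, b = distance between strands."""
    tilt = math.degrees(math.atan2(S * a, n * b))
    radius = math.sqrt((S * a) ** 2 + (n * b) ** 2) / (2 * n * math.sin(math.pi / n))
    return radius, tilt

# radii for the three shear numbers considered for an 8-stranded barrel
radii = {S: round(barrel_geometry(8, S)[0], 1) for S in (8, 10, 12)}
```

For n = 8 this gives radii of roughly 7.2, 7.9 and 8.7 Å for S = 8, 10 and 12, and a strand tilt of about 43° for S = 10, consistent with the text's observation that S = 8 gives a barrel too narrow for good core packing while S = 12 leaves a cavity too large to fill with side chains.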
We selected a shear number of S = 10 because it is difficult to achieve good core packing for S = 8 (the barrel has a smaller diameter and the C α –C β vectors point directly at each other), and S = 12 results in a cavity that is too large to fill with side chains (Extended Data Fig. 1b–d ). We generated ensembles of hyperboloids by sampling the elliptical parameters and the tilt of the generating lines with respect to the central axis around ideal values computed for S = 10, and then placed C α atoms on the hyperboloid surface (Fig. 1a , Supplementary Methods ). As found in earlier simulation work 17 , backbones generated with constant angles between strands could not achieve perfectly regular hydrogen bonding. To resolve this problem, we introduced force-field guided variation in local twist by gradient-based minimization. We selected the backbones with the most extensive inter-strand hydrogen bonding, connected the strands with short loops and carried out combinatorial sequence optimization to obtain low-energy sequences (Extended Data Fig. 1a ). Synthetic genes encoding 41 such designs were produced and the proteins were expressed in E. coli . Almost all were found to be insoluble or oligomeric; none of this first set of 41 designs were monomeric with an all-β circular dichroism spectrum (Supplementary Table 2 ). Fig. 1: Principles for designing β-barrels. a , b , Two methods for β-barrel backbone generation. a , Parametric generation of 3D backbones on the basis of the hyperboloid model. The cross-section of the barrel is controlled on the global level with parameters ( r , r A and r B ). b , Specification of residue connectivity in a 2D map and assembly of 3D backbones with Rosetta. The cross-section geometry is controlled on the local level with torsion angle bins specified for each residue. c , Incorporation of glycine kinks and β-bulges reduces Lennard–Jones repulsive interactions in β-barrels. 
Full backbones are shown on the left and one C β -strip is shown on the right. Top, no β-bulge, no glycine kink; middle, one glycine kink in the middle of each C β -strip, no β-bulge; bottom, one glycine kink in the middle of each C β -strip, and β-bulges placed near the β-turns. d , Blueprint used to generate a β-barrel of type (strand number n = 8; shear number S = 10) with a square cross-section suitable for ligand binding. The values of the barrel radius ( r ) and tilt of the strands ( θ ) used to place glycines are determined by the choice of n and S . The residues in the 2D blueprint (left) and the 3D structure (middle) are coloured by backbone torsion bins (right, ABEGO types nomenclature from Rosetta). Shaded and open circles represent residues facing the barrel interior and exterior, respectively. Glycine positions are shown as yellow circles and β-bulges as stars. The ‘corners’ in the β-sheet resulting from the presence of glycine kinks are shown as vertical dashed lines. C , barrel circumference; D , distance between strands. In considering the possible reasons for the failure of the initial designs, we noted that many of the backbone hydrogen-bond interactions on the top and bottom of the barrels were distorted or broken (Extended Data Fig. 1e, f ). To investigate the origins of this distortion, we experimented with three alternative approaches to generating uniform β-barrel backbones lacking loops and with valine at every position as a place-holder ( Supplementary Methods ). In all cases, we observed breaking of hydrogen-bond interactions after structure minimization with the Rosetta relaxation protocol (Extended Data Fig. 2a ), suggesting that there is strain inherent to the closing of the curved β-sheet on itself. To identify the origin of this strain, we repeated the relaxation after imposing strong constraints on the hydrogen-bond interactions to prevent them from breaking. The strain manifested in two places.
First, steric clashes build up along strips of side chains in the directions of the hydrogen bonds, perpendicular to the direction of the β-strands (‘C β -strips’, Fig. 1c ). Second, a number of residues acquired an unfavourable left-handed twist (Extended Data Fig. 2b, c ; the chirality of the peptide backbone favours right-handed twist). To reduce the strain arising from steric clashes between C β atoms, and from the local left-handed twist, we replaced the central valine residue of each C β -strip with glycine (which is normally disfavoured in β-sheets 18 ). The achiral glycine can have a left-handed twist without disrupting the β-sheet hydrogen-bond pattern 15 , 19 and lacks a C β atom, reducing the steric clashes within C β -strips (Fig. 1c , middle). The backbones of most of these glycine residues shifted to the positive Φ torsion bin after minimization, forming torsional irregularities in the β-sheet (‘glycine kinks’ 15 , Extended Data Fig. 2d–e ). On the basis of these observations, we hypothesized that large local deviations from the ideal β-strand twist are necessary to maintain continuous hydrogen-bond interactions between strands in a closed β-barrel, and hence that a parametric approach assuming uniform geometry was not well-suited to building such structures. Therefore, we chose to build β-barrel backbones starting from a 2D map specifying the peptide bonds, the backbone torsion angle bins 20 and the backbone hydrogen bonds (Fig. 1b ). In contrast to parametric backbone design, which may be viewed as a ‘3D-to-2D’ approach as a 3D surface is generated and then populated with residues, this alternative strategy proceeds from 2D to 3D and can readily incorporate local torsional deviation (Fig. 1a ). We generated 3D protein backbones using Rosetta Monte Carlo structure generation calculations starting from an extended peptide chain 21 , guided by torsional and distance constraints from the 2D map.
We found that we could control the volume and the 3D shape of the β-barrel cavity by altering the placement of glycine kinks in the 2D map. Such kinks increase local β-sheet curvature, forming corners in an otherwise roughly circular cross-section (Extended Data Fig. 2f, g ). We chose to design a square barrel shape and created four corners in the β-sheet by placing five glycine kinks to eliminate strain in the five C β -strips, and one glycine kink to adjust the curvature of the longest hairpin (Fig. 1d , Extended Data Fig. 3a , Supplementary Methods ). With this choice, the resulting 3D backbones have a large interior volume suitable for a ligand-binding cavity. When such backbones were built with canonical type I′ β-turns connecting each β-hairpin, we observed steric strain at the extremities of the C β -strips (Fig. 1c , centre) and disruption of hydrogen-bond interactions after structure relaxation (Extended Data Fig. 3e ). This probably arises because the considerable curvature at the glycine kinks requires that the β-hairpins paired with it (dashed vertical line in Extended Data Fig. 3b ) have greater right-handed twist than can be achieved with canonical β-turns. We reasoned that accentuated right-handed twist could be achieved by incorporating β-bulges—disruptions of the regular hydrogen-bonding pattern of a β-sheet 2 , 22 , 23 . Indeed, we found that strategic placement of β-bulges on the bottom of the barrel (defined as the side of the N and C termini) and bulge-containing β-turns 22 on the top of the barrel eliminated steric strain and stabilized the hydrogen bonds between the β-strand residues flanking the turns (Fig. 1c , bottom, Extended Data Fig. 3e, f ). To tie together the bottom of the barrel, we introduced a ‘tryptophan corner’ 24 , 25 by placing a short 3–10 helix, a glycine kink and a Trp at the beginning of the barrel, and an interacting Arg at the C terminus (Extended Data Fig. 3g–j ). 
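A small piece of bookkeeping implicit in such 2D maps is that the inter-strand register shifts must accumulate to the shear number S as the sheet closes on itself. The sketch below checks this closure for a hypothetical set of shifts; the numbers are illustrative only, and real blueprints also encode torsion bins, bulges and kink positions:

```python
def strand_offsets(register_shifts):
    """Cumulative registry offset of each strand relative to strand 0.
    For a closed barrel, the offset accumulated after going once around
    the sheet equals the shear number S."""
    offsets = [0]
    for shift in register_shifts:
        offsets.append(offsets[-1] + shift)
    return offsets

# hypothetical shifts between the 8 consecutive strand pairings of an
# (n = 8, S = 10) blueprint; the last pairing closes the sheet on itself
shifts = [1, 1, 2, 1, 1, 2, 1, 1]
assert strand_offsets(shifts)[-1] == 10  # sheet closes with S = 10
```

Any combination of shifts summing to a different value would leave the first and last strands out of register, so the sheet could not close into a barrel with the intended hydrogen-bond pattern.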
Five hundred backbones were generated from the 2D map incorporating the above features, and Rosetta flexible-backbone sequence design calculations were carried out to identify low-energy sequences for each backbone. Four designs with low energy and backbone hydrogen bonding throughout the barrel were selected for experimental characterization (Extended Data Fig. 4a ). The sequences of these designs are not related to those of known native proteins (BLAST E values greater than 0.1), and fold into the designed structure in silico (Fig. 2a ). Synthetic genes encoding the designs were expressed in E. coli . Three of the designs were expressed in the soluble fraction and purified; two had characteristic β-sheet far-ultraviolet circular dichroism (CD) signal (Fig. 2 , Extended Data Fig. 4b ). Size-exclusion chromatography (SEC) coupled with multi-angle light scattering (MALS) showed that one was a stable monomer (BB1) and the other (BB2) a soluble tetramer (Extended Data Fig. 4c ). Fig. 2: Folding, stability and structure of design BB1. a , In silico folding energy landscape. Each grey dot indicates the result of an independent ab initio folding calculation; black dots show results of refinement trajectories starting from design model and light grey dots from the lowest-energy ab initio models. b , Size-exclusion chromatogram of the purified monomer (14 kDa). c , Far-ultraviolet CD spectra at 25 °C (grey line), 95 °C (black dashed line) and cooled back to 25 °C (black dotted line). d , Near-ultraviolet CD spectra in Tris buffer (grey line) and 7 M GuHCl (black line). e , Cooperative unfolding in GuHCl monitored by near-ultraviolet CD signal at 285 nm (grey line) and tryptophan fluorescence (black line). 
f–j , Superpositions of the crystal structure (grey) and the design model (pink): overall backbone superposition ( f ); section along the β-barrel axis showing the rotameric states of core residues ( g ); one of the top loops with a G1 β-bulge ( h ); and equatorial cross-section of the β-barrel, showing the geometry of the interior volume ( i ). The glycine kinks are shown as sticks. The bottom of the panel shows the cross-sections of the three closest native β-barrel structures on the basis of TM-score 33 (RCSB Protein Data Bank (PDB) IDs: 1JMX (0.77); 4IL6 (chain O) (0.73); 1PBY (0.71)). j , One of the bottom loops with a classic β-bulge. k , Crystal structure and 2mFo − DFc electron density of the tryptophan corner, contoured at 1.5 σ . BB1 exhibited a strong near-ultraviolet CD signature, which suggests an organized tertiary structure (Fig. 2d ). The design was stable at 95 °C, and cooperatively unfolded in guanidine-denaturation experiments (Fig. 2e ). The crystal structure of BB1 solved at 1.6 Å resolution was very close to the design model (1.4 Å backbone root mean squared deviation (r.m.s.d.) over 99 of 109 residues; Extended Data Fig. 4d–f ). Essentially all of the key features of the design model are found in the crystal structure (Fig. 2f–k ). The barrel cross-section in the crystal structure is very similar to that of the design model, with an overall square shape with corners at the glycine kinks. Natural β-barrel crystal structures do not have this shape; the cross-sections of the closest structure matches in the PDB are shown in Fig. 2i . All seven designed β-turns and β-bulges are correctly recapitulated in the crystal structure (Fig. 2h, j ), along with the 3–10 helix and tryptophan corner (Fig. 2k ). Design of small-molecule-binding β-barrels Having determined principles for de novo design of β-barrels, we next sought to design functional β-barrels with binding sites tailored for a small molecule of interest.
We chose DFHBI (Fig. 3a , left, green), a derivative of the intrinsic chromophore of GFP, to test the computational design methods. Owing to its internal torsional flexibility in solution, DFHBI does not fluoresce unless it is constrained in the planar Z conformation 26 , 27 . We sought to design protein sequences that fold into a stable β-barrel structure with a recessed cavity lined with side chains to constrain DFHBI in its fluorescent planar conformation. We chose to take a three-step approach: (1) de novo construction of β-barrel backbones, (2) placement of DFHBI in a dedicated pocket, and (3) energy-based sequence design. For the first step, we stochastically generated 200 β-barrel backbones on the basis of the 2D map described above (Extended Data Fig. 5b–d ). Fig. 3: Computational design and structural validation of β-barrels with recessed cavities for ligand binding. a , Left, ensembles of side chains generated by the RIF docking method making hydrogen-bonding (upper left) and hydrophobic interactions (lower left) with DFHBI (green); pre-generated interacting rotamers are shown in grey with backbone C α highlighted by magenta spheres. Right, ensemble of 200 β-barrel backbones with C α atoms surrounding the binding cleft indicated by magenta spheres. b , Each ligand–scaffold pair (left) with multiple ligand-coordinating interactions from RIF docking is subjected to Rosetta energy-based sequence design calculations (right): positions around the ligand (light purple, above the dashed line) are optimized for ligand binding; the bottom of the barrel (dark grey, below the dashed line), for protein stability. c , Left, crystal structure (cyan cartoon with grey surface) of b10 with a recessed binding pocket filled with water molecules (red spheres). Middle, b10 design model backbone (silver) superimposed on the crystal structure (cyan).
Right, comparison of crystal structure and design model for two different barrel cross-sections (indicated by dashed lines); glycine C α atoms are indicated by spheres in the upper layer. The placement of the ligand in the binding pocket requires sampling of both the rigid-body degrees of freedom of the ligand, and the sequence identities of the surrounding amino acids that form the binding site. Because of the dual challenges associated with optimization of structure and sequence simultaneously, most approaches to designing ligand-binding sites to date have separated sampling into two steps: rigid-body placement of the target ligand in the protein-binding pocket and then design of the surrounding sequence 4 , 5 , 28 . This two-step approach has the limitation that the optimal rigid-body placement cannot be determined independently of knowledge of the possible interactions with the surrounding amino acids. The RosettaMatch method 29 can identify rigid-body and interacting-residue placements simultaneously, but is limited to a small number of pre-defined ligand-interacting residues 3 . We addressed these challenges with a new ‘rotamer interaction field (RIF)’ docking method that simultaneously samples over rigid-body and sequence degrees of freedom. RIF docking first generates an ensemble of billions of discrete amino acid side chains that make hydrogen-bonding and non-polar hydrophobic interactions with the target ligand (Fig. 3a , right). Then, scaffolds are docked into this pre-generated interacting ensemble using a grid-based hierarchical search algorithm (Extended Data Fig. 5a ). We used RIF docking to place DFHBI into the upper half of the β-barrel scaffolds, resulting in 2,102 different ligand–scaffold pairs with at least four hydrogen-bonding and two hydrophobic interactions (Fig. 3a ).
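The coarse-to-fine idea behind the grid-based hierarchical search can be illustrated in miniature: score every cell of a coarse grid, keep the best few, subdivide the survivors, and repeat at finer resolution. The toy below searches a 2D translation space against a made-up score function; the real method searches all six rigid-body degrees of freedom against precomputed rotamer interaction fields, so treat this purely as a sketch of the search strategy:

```python
def toy_score(x, y):
    # stand-in for the RIF score; the real score tallies precomputed
    # side-chain interactions available to the ligand pose in each cell
    return (x - 1.3) ** 2 + (y + 0.7) ** 2

def hierarchical_search(lo=-4.0, hi=4.0, levels=5, divisions=8, keep=4):
    """Coarse-to-fine grid search: score the centre of every cell, keep
    the best `keep` cells, subdivide each survivor, and repeat."""
    cells = [(lo, hi, lo, hi)]  # (x0, x1, y0, y1)
    for _ in range(levels):
        scored = []
        for x0, x1, y0, y1 in cells:
            dx, dy = (x1 - x0) / divisions, (y1 - y0) / divisions
            for i in range(divisions):
                for j in range(divisions):
                    cx, cy = x0 + (i + 0.5) * dx, y0 + (j + 0.5) * dy
                    cell = (cx - dx / 2, cx + dx / 2, cy - dy / 2, cy + dy / 2)
                    scored.append((toy_score(cx, cy), cell))
        scored.sort(key=lambda item: item[0])
        cells = [cell for _, cell in scored[:keep]]
    x0, x1, y0, y1 = cells[0]
    return (x0 + x1) / 2, (y0 + y1) / 2

best_x, best_y = hierarchical_search()
```

With five refinement levels the search homes in on the score minimum at (1.3, −0.7) after only a few thousand evaluations, instead of the millions a uniform fine grid over the same range would require; this is the payoff of hierarchical pruning when most of the search space scores poorly.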
To identify protein sequences that not only buttress the ligand-coordinating residues from the RIF docking but also have low intra-protein energies to drive protein folding, we developed and applied a Monte Carlo-based sequence design protocol that iterates between (1) fixed-backbone design around the ligand-binding site to optimize the ligand-interaction energy and (2) flexible-backbone design for the rest of protein, optimizing the total complex energy (Fig. 3b ). Forty-two designs with large computed folding-energy gaps and low-energy intra-protein and protein–ligand interactions were selected for experimental characterization, plus an additional 14 disulfide-bonded variants (Extended Data Fig. 5e ). Ligand-docking simulations after extensive structure refinement revealed that owing to the approximate symmetry of the hydrogen-bonding pattern of DFHBI, many of the designed binding pockets could accommodate the ligand in two equally favourable orientations (Extended Data Fig. 5f ). Synthetic genes encoding the 56 designs were obtained and the proteins were expressed in E. coli . Thirty-eight of the proteins were well-expressed and soluble; SEC and far-ultraviolet CD spectroscopy showed that 20 were monomeric β-sheet proteins (Supplementary Table 3 ). Four of the oligomer-forming designs became monomeric upon incorporation of a disulfide bond between the N-terminal 3–10 helix and the barrel β-strands. The crystal structure of one of the monomeric designs (b10) was solved to 2.1 Å, and was found to be very close to the design model (0.57 Å backbone r.m.s.d., Fig. 3c ). The upper barrel of the crystal structure maintains the designed pocket, which is filled with multiple water molecules (Fig. 3c , Extended Data Fig. 6b ). Thus, the design principles described above are sufficiently robust to enable the accurate design of potential small-molecule-binding pockets. 
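The alternation at the heart of the sequence design protocol described above (optimizing ligand-contacting positions for a binding score while optimizing the rest of the protein for folding energy) can be caricatured with a toy Metropolis Monte Carlo loop. Everything below, including the alphabet, the two position sets and the energy terms, is invented for illustration; the actual protocol uses Rosetta's all-atom energies and flexible-backbone moves:

```python
import math
import random

random.seed(0)
AA = "AVILFGWY"              # toy amino-acid alphabet
POCKET = list(range(10))     # positions designed for ligand binding
CORE = list(range(10, 30))   # positions designed for stability

def ligand_energy(seq):
    # toy term: pretend aromatics are needed to sandwich the ligand
    return sum(seq[i] not in "WYF" for i in POCKET)

def fold_energy(seq):
    # toy term: pretend branched hydrophobics pack the lower barrel
    return sum(seq[i] not in "VIL" for i in CORE)

def design(seq, n_steps=2000, kT=0.1):
    seq = list(seq)
    energy = ligand_energy(seq) + fold_energy(seq)
    for step in range(n_steps):
        # alternate between the two design regions, as in the protocol
        pos = random.choice(POCKET if step % 2 == 0 else CORE)
        old = seq[pos]
        seq[pos] = random.choice(AA)
        trial = ligand_energy(seq) + fold_energy(seq)
        if trial <= energy or random.random() < math.exp((energy - trial) / kT):
            energy = trial
        else:
            seq[pos] = old  # reject the substitution
    return "".join(seq), energy

designed_seq, final_energy = design("A" * 30)
```

Even this crude loop drives the sequence to near-zero toy energy, which is the point of the real protocol: interleaving moves aimed at the two objectives lets neither the binding site nor the folding core be optimized at the other's expense.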
Two of the 20 monomeric designs—b11 and b32—were found to activate DFHBI fluorescence by 12- and 8-fold with binding dissociation constants ( K D ) values of 12.8 and 49.8 μM, respectively (Extended Data Fig. 6f ). Knockout of interacting residues in the designed binding pocket eliminated fluorescence (Extended Data Fig. 6g ). The ligand-binding activity comes at a substantial stability cost as almost half of the barrel is carved out to form the binding site: although the nonfunctional BB1 design does not temperature-denature, both b11 and b32 undergo reversible thermal melting transitions (Extended Data Fig. 6e ), even though b11 contains a stabilizing disulfide bond (the parent design that lacks the disulfide (b38) is not a monomer; Extended Data Fig. 6c, d ). We sought to improve the binding interactions by redesigning β-turns around the ligand-binding site (Supplementary Table 6 ). b11L5F, a 110-residue protein with a five-residue fifth turn, activated DFHBI fluorescence by 18-fold with a K D value of 7.5 μM (Extended Data Fig. 6f, h ). The sequence determinants of b11L5F fold and function were investigated by assaying the effect of each single amino acid substitution (19 × 110 = 2,090 in total) on both protein stability 30 and DFHBI activation on the yeast cell surface. The function (fluorescence activation) and stability (proteolysis resistance) landscapes have similar overall features consistent with the design model, with residues buried in the designed β-barrel geometry being much more conserved than surface-exposed residues (Fig. 4a , Extended Data Fig. 7a, b ). The function landscape suggests that the geometry of the designed cavity is critical to activating DFHBI fluorescence: the key sequence features that specify the geometry of the cavity—the glycine kinks and the tryptophan corner—are strictly conserved (Fig. 4a ). 
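Selections like the stability and fluorescence assays above are conventionally scored by comparing variant frequencies before and after selection. The sketch below computes wild-type-normalized log2 enrichment scores with a pseudocount, a standard treatment but not necessarily the paper's exact pipeline; the variant names and read counts are hypothetical:

```python
import math

def enrichment_scores(counts_in, counts_sel, wt, pseudo=0.5):
    """log2 enrichment of each variant between input and selected pools,
    normalized so the wild-type sequence scores 0."""
    tot_in = sum(counts_in.values())
    tot_sel = sum(counts_sel.values())
    def raw(v):
        f_in = (counts_in.get(v, 0) + pseudo) / tot_in
        f_sel = (counts_sel.get(v, 0) + pseudo) / tot_sel
        return math.log2(f_sel / f_in)
    return {v: raw(v) - raw(wt) for v in counts_in}

# hypothetical read counts: a glycine-kink knockout should deplete under
# fluorescence selection, while a beneficial substitution should enrich
counts_in = {"WT": 1000, "G25A": 900, "V103L": 800}
counts_sel = {"WT": 1200, "G25A": 30, "V103L": 1500}
scores = enrichment_scores(counts_in, counts_sel, "WT")
```

In the paper's maps, such scores are computed independently for the proteolysis (stability) and fluorescence (function) selections, which is what makes the position-by-position trade-off between the two visible.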
Among all substitutions of the seven coordinating residues from RIF docking, only a single substitution (V103L) increased fluorescence (Fig. 4c , upper panel). Whereas the structure and function landscapes were very similar at the bottom of the barrel (Fig. 4b ), there was a notable trade-off between stability and function at the top of the barrel around the designed binding site (Fig. 4c ): many substitutions that stabilize the protein markedly reduce fluorescence activation (Fig. 4c , right). This bottom–top contrast indicates that success in de novo design of fold and function requires a substantial portion of the protein (in our case, the bottom of the barrel) to provide the driving force for folding as the functional site will probably be destabilizing. Fig. 4: Sequence dependence of fold and function. Each position was mutated one at a time to each of the other 19 amino acids, and the resulting library was subjected to selection for fluorescence or stability to proteases. a , Top, b11L5F backbone model coloured according to relative fluorescence activation score at each position. Blue positions are strongly conserved during yeast selection; red positions are frequently substituted by other amino acids. Residues buried in the design model are much more conserved than solvent-exposed residues. Bottom, all the mutations to the glycine kinks (spheres in the top panel) and tryptophan (W) corner considerably reduced fluorescence; the shape of the designed structure is critical for the designed function. b , c , Bottom ( b ) and top ( c ) comparisons of b11L5F side chains coloured by relative fluorescence activation scores (top row) and stability scores (bottom row). 
In the bottom of the barrel, core residues were strongly conserved in both the function and stability selections ( b ); in the top barrel there is a clear function–stability trade-off with the key DFHBI-interacting residues being critical for function but far from optimal for stability ( c , substitution patterns at these positions are shown on the right). Fluorescence activation and stability scores were derived from two biologically independent experiments with a greater than tenfold sequencing coverage. Standard deviation and confidence intervals are provided in Extended Data Fig. 7 . Full size image Guided by the comprehensive protein stability and fluorescence activation maps, we combined substitutions at three positions that improved function without compromising stability (V103L, V95AG and V83ILM; Extended Data Fig. 8a, b ), and obtained variants with tenfold higher DFHBI fluorescence that form stable monomers without a disulfide bond (b11L5F.1; Extended Data Fig. 8c ). The crystal structure of one of the improved variants (b11L5F_LGL; mutant 83L/95G/103L in Extended Data Fig. 8b ) was solved to 2.2 Å and was very close to the design model with the majority of the buried side chains adopting the designed conformation (Extended Data Fig. 9a–d ). However, the electron density in the binding site could not be resolved, consistent with the multiple DFHBI binding modes suggested by the docking calculations (Extended Data Fig. 9e–g ; Extended Data Fig. 5f ). A second round of computational design calculations was carried out to favour a specific binding mode by optimizing the protein–ligand interactions in the lowest-energy docked conformation, and to rearrange the hydrophobic packing interactions in the bottom of the barrel now freed from the disulfide bond. 
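The backbone and all-atom r.m.s.d. values used in these design-versus-crystal-structure comparisons are computed after optimal rigid-body superposition. A minimal, generic sketch of that calculation (the Kabsch algorithm) on mock coordinates, not the authors' code:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    # r.m.s.d. between two (N, 3) coordinate sets after optimal rigid superposition
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))        # guard against improper rotation
    R = V @ np.diag([1.0, 1.0, d]) @ Wt       # optimal rotation (Kabsch)
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

rng = np.random.default_rng(3)
model = rng.standard_normal((110, 3)) * 8.0   # mock 110-residue backbone
theta = 0.7                                   # rotate, translate and perturb a copy
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
xtal = model @ rot.T + np.array([5.0, -3.0, 2.0]) + 0.5 * rng.standard_normal((110, 3))
rmsd = kabsch_rmsd(model, xtal)
print(f"backbone r.m.s.d. = {rmsd:.2f} A")
```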
Five designs predicted by ligand-docking calculations to have a single ligand-binding conformation were experimentally tested, and three showed increased fluorescence activity, the best of which increased the fluorescence by approximately 1.4-fold (b11L5F.2; Extended Data Fig. 8d–e ). Screening of two combinatorial libraries (based on b11L5F.1 and b11L5F.2), incorporating additional beneficial substitutions identified in the b11L5F stability and function maps, yielded variants with another 1.5–2-fold increased fluorescence and improved protein stability (Extended Data Figs. 8f–h, 10a, b ). We refer to these mini-fluorescence-activating proteins as mFAPs in the remainder of the text; mFAP0 and mFAP1 are variants of b11L5F.2, and mFAP2 of b11L5F.1. mFAP1 and mFAP2 activate 0.5 μM DFHBI fluorescence by 80- and 60-fold with K D values of 0.56 μM and 0.18 μM, respectively (Fig. 5d ). Fig. 5: Structure and function of mFAPs. a , b , 2F o − F c omit electron density in the mFAP1–DFHBI complex crystal structure contoured at 1.0 σ . a , DFHBI is clearly in the planar Z conformation rather than the non-fluorescent twisted conformations. b , The planar conformation is stabilized by closely interacting residues. c , Superposition of mFAP1 design model (silver) and the crystal structure (green). Hydrogen bonds coordinating DFHBI are indicated with dashed lines. d , Fluorescence emission spectra of 0.5 μM DFHBI with or without 5 μM mFAPs, excited at 467 nm. e , f , Confocal micrographs of E. coli cells expressing mFAP2 in the presence of DFHBI ( e ) and yeast cells displaying Aga2p–mFAP2 fusion proteins on the cell surface ( f ). Scale bars, 20 μm ( e ), 10 μm ( f ). g , h , Overlay of widefield epifluorescence (green) and brightfield (grey) images of fixed COS-7 cells expressing sec61β–mFAP1 ( g ) and mito–mFAP2 ( h ; mito, mitochondrial targeting sequence) with expanded views of the fluorescence in the boxed regions. Scale bars, 20 μm ( g , h ), 3 μm (expanded).
Two biological replicates were performed with similar observations. Full size image The 1.8 Å and 2.3 Å crystal structures of mFAP0 and mFAP1 in complex with DFHBI were virtually identical to the design models, with an overall backbone r.m.s.d. of 0.91 Å and 0.64 Å (Fig. 5a–c , Extended Data Fig. 9h, i ). DFHBI is in the planar Z conformation with unambiguous electron density in both structures (Fig. 5a , Extended Data Fig. 9j ). In addition to three designed hydrogen bonds, a water molecule was found to interact with the solvent-exposed phenol group in DFHBI (Fig. 5b ). The DFHBI-binding modes in the crystal structures are nearly identical to the lowest-energy docked conformations used in the second round of design calculations, with all-atom r.m.s.d. values of 0.12 Å and 0.35 Å, respectively (Fig. 5c , Extended Data Fig. 9k ). Three mutations shared by mFAP0 and mFAP1 in the bottom barrel (P62D, M65L and L85M or L85Y, Extended Data Fig. 10b ) probably stabilize the protein by helical capping and subtle hydrophobic rearrangements (Extended Data Fig. 9l ). The M27W mutation in mFAP1 introduced an additional hydrogen bond to DFHBI that probably produces the 5-nm red-shift in its fluorescence spectra (Fig. 5d , Extended Data Fig. 10c, e ). mFAP2, which is based on b11L5F.1, has a six-residue insertion in the seventh β-turn that is predicted to form multiple intra-loop hydrogen bonds (Extended Data Fig. 10b , right). In vivo fluorescence activation To determine whether the designed DFHBI-binding fluorescence-activating proteins function in living cells, we imaged mFAP1 and mFAP2 in E. coli , yeast and mammalian cells by conventional widefield epifluorescence microscopy and confocal microscopy. Both mFAP1 and mFAP2 activated fluorescence less than 5 min after addition of 20 μM DFHBI. Cytosolic expression of mFAPs in E. coli and mammalian cells resulted in clear fluorescence throughout the cells (Fig. 5e , Extended Data Fig. 10f ).
Yeast cells with mFAPs targeted to the cell surface displayed fluorescence in a thin region outside of the plasma membrane (Fig. 5f , Extended Data Fig. 10g ). Fusion of the mFAPs to a mitochondria-targeting signal peptide and to the endoplasmic reticulum-localized protein sec61β resulted in fluorescence tightly localized to these organelles in both fixed (Fig. 5g, h ) and living cells (Supplementary Videos). The quantum yields of mFAP1 and mFAP2 in complex with DFHBI are 2.0% and 2.1%, respectively (Extended Data Fig. 4g , comparable with Y-FAST:HBR 31 ). The brightness of de novo mFAPs in complex with DFHBI is about 35-fold lower than that of eGFP; there is still considerable room for improving their fluorescence activity. Conclusion It is instructive to compare the structures of our designed fluorescence-activating proteins with those of natural fluorescent proteins (Fig. 6 ). Both are β-barrels, and have similar chromophores, but our designs have less than half the residues and narrower barrels connected with short β-turns (Fig. 6a ). In both cases, specific protein–chromophore interactions reduce energy dissipation from intramolecular motions 32 , but the hydrogen bonding and hydrophobic packing around DFHBI is different from GFP and tailored to the smaller and simpler β-barrel (Fig. 6b ). The precise structural control enabled by computational design, together with the greater exposure of the chromophore, may prove useful for fluorescence-based imaging and sensing applications. Fig. 6: Comparison of structures of GFP and mFAP1. a , Surface mesh and ribbon representations of structures of GFP (left, PDB ID: 1EMA) and the computationally designed mFAP1 (right) with the chromophores (spheres) embedded in the protein. GFP, a product of natural evolution, has more than twice the number of residues, and a taller (top) and wider (bottom) barrel. Resolved water molecules in the crystal structures are shown as light purple spheres. 
b , Close-up of chromophore binding interactions in GFP (left) and mFAP1 (right). Full size image The comparison in Fig. 6 highlights the two primary advances in this paper: the first successful de novo design of a β-barrel, and the first full de novo design of a small-molecule-binding protein. The first advance required the elucidation of general principles for designing β-barrels, notably the requirement for systematic symmetry-breaking to enable hydrogen bonding throughout the barrel structure. These principles, identified by pure geometric considerations, coupled with computer simulations after the failure of the initial parametric design approach, are borne out by both the crystal structures and the sequence fitness landscapes. The second advance goes considerably beyond the design of ligand-binding proteins and catalysts to date, which has relied on repurposing naturally occurring scaffolds. The three-step approach described in this paper—first, identifying the basic principles required for specifying a general fold class; second, using these principles to generate a family of backbones with pocket geometries matched to the ligand or substrate of interest; and third, designing complementary binding pockets buttressed by an underlying hydrophobic core—provides a general solution to the problem of de novo design of ligand-binding proteins. This generative approach enables the exploration of an effectively unlimited set of backbone structures with shapes customized to the ligand or substrate of interest and provides a test of our understanding of the determinants of folding and binding that goes well beyond descriptive analyses of existing protein structures. Methods Computational design of non-functional β-barrels De novo design of non-functional β-barrels can be divided into two main steps: backbone construction and sequence design.
For backbone construction, two different approaches were presented: parametric backbone generation and fragment-based backbone assembly. Example scripts and command lines for each method are available in Supplementary Data . Parametric backbone generation and sequence design on the basis of hyperboloid models β-strand arrangements were generated using the equation of a hyperboloid of revolution with an elliptic cross-section, sampling the elliptic radii around the ideal value of β-barrel radius with the number of strands ( n ) and the shear number ( S ) (see Supplementary Methods ). Eight β-strands were arranged as equally spaced straight lines running along the surface of the hyperboloid. A reference C α atom was defined as the intersection between the first strand and the cross-section ellipse. The other C α atoms were systematically populated along the eight strands from this reference residue. The peptide backbone was generated from the C α coordinates using the BBQ software 34 . The arrangements of discrete β-strands were minimized with geometric constraints to favour backbone hydrogen bonds. One round of fixed-backbone sequence design calculation was carried out to pack the barrel cavity with hydrophobic residues. The resulting β-strand arrangements with the best hydrogen-bond connectivity and the tightest hydrophobic packing were selected to be connected by short (two to four residues) β-turns. Two iterations of the loop hashing protocol implemented in RosettaRemodel 35 were performed to close the strands and refine the turns. The sequence design of those β-turns was constrained to sequence profiles derived from natural proteins. Low-energy amino acid sequences were obtained for the connected backbones using a flexible-backbone design protocol (see Supplementary Data ). Designs with high sequence propensity for forming β-strands, reasonable peptide-bond geometry and tightly packed hydrophobic cores were selected for experimental testing (see Supplementary Table 2 ).
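The geometric relationships behind such parametric constructions can be illustrated with a toy calculation. The sketch below places C α atoms of an n = 8, S = 10 barrel on a simple cylinder (the protocol above uses a hyperboloid with an elliptic cross-section, so this is a simplification); the radius and strand tilt follow the classic relations R = sqrt((Sa)² + (nb)²)/(2n·sin(π/n)) and tan α = Sa/(nb), taking a ≈ 3.3 Å as the C α spacing along a strand and b ≈ 4.4 Å as the interstrand distance:

```python
import numpy as np

def barrel_calpha(n=8, S=10, a=3.3, b=4.4, residues_per_strand=9):
    # Strand tilt angle and barrel radius from strand number n and shear number S
    alpha = np.arctan2(S * a, n * b)
    R = np.hypot(S * a, n * b) / (2 * n * np.sin(np.pi / n))
    coords = []
    for s in range(n):                               # n equally spaced strands
        phi0 = 2 * np.pi * s / n
        for i in range(residues_per_strand):
            t = (i - residues_per_strand // 2) * a   # arc length along the strand
            phi = phi0 + t * np.sin(alpha) / R       # strands wind around the axis
            z = t * np.cos(alpha)                    # and rise along it
            coords.append((R * np.cos(phi), R * np.sin(phi), z))
    return R, np.array(coords)

R, ca = barrel_calpha()
print(f"barrel radius ~ {R:.1f} A for n=8, S=10 ({len(ca)} C-alpha positions)")
```

For n = 8 and S = 10 this gives a radius near 8 Å, the expected scale for an eight-stranded barrel.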
Backbone assembly from fragments guided by a 2D map The presented 2D map (Fig. 1d ) was designed with the longest strand length observed in soluble β-barrel structures to obtain a β-barrel tall enough to accommodate a hydrophobic core and a binding cavity. The length of each strand depends on its specific position and the shear number of the barrel (see Supplementary Methods ). Glycine kinks and β-bulges were placed on the map as described in the main text. Specific β-turn types were used to connect the β-strands on the basis of their relative positions to β-bulges (see Supplementary Methods ). On the basis of this 2D map, we generated a constraint file and a blueprint file to guide the assembly of the barrel using peptide fragments from Rosetta fragments library. In the constraint file, each backbone hydrogen bond was described as a set of distance and angle constraints (Extended Data Fig. 5b ). A set of distance and torsion constraints specific to the tryptophan corner were added to the constraint file (Extended Data Fig. 3g–j , Supplementary Methods ). In the blueprint file, a torsion angle bin was attributed to every residue in the peptide chain, according to the ABEGO nomenclature of Rosetta. After minimizing the assembled backbones using Rosetta centroid-scoring function with imposed constraints, our protocol output an ensemble of poly-valine β-barrel backbones with defined glycine kinks, β-bulges, β-turns and the backbone of the tryptophan corner. The main challenge of building scaffolds with this protocol is to properly balance structure diversity and reasonable backbone torsion angles with the strong geometric constraints imposed during minimization. For this work, we circumvented this problem by performing two additional rounds of sequence design calculation to regularize and prepare scaffolds for designing ligand-binding β-barrels (Extended Data Fig. 5b–d , Supplementary Methods ). 
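Constraint files of the kind described above can be generated programmatically from the 2D map. The snippet below emits Rosetta-style AtomPair restraints for a hypothetical ladder of backbone hydrogen bonds; the real protocol also includes angle (and, for the tryptophan corner, torsion) constraints, so treat this as a minimal sketch with made-up residue numbers:

```python
def hbond_constraints(pairs, dist=2.9, sd=0.3):
    # Emit one Rosetta-style AtomPair line per backbone H-bond, reducing the
    # N-H...O=C geometry to a single donor-N / acceptor-O distance restraint.
    return "\n".join(
        f"AtomPair N {donor} O {acceptor} HARMONIC {dist} {sd}"
        for donor, acceptor in pairs
    )

# Hypothetical H-bond ladder between strand 1 (res 2-6) and strand 2 (res 15-11)
pairs = [(2, 15), (4, 13), (6, 11)]
print(hbond_constraints(pairs))
```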
Sequence design of nonfunctional β-barrels Five hundred poly-valine backbones with good hydrogen bonds and torsion angles were selected as input for Rosetta sequence design. Low-energy sequences for the desired β-barrel fold were optimized over several rounds of flexible-backbone sequence design. We used a genetic algorithm to effectively search the sequence space: each parent backbone was used as input to produce ten designs through individual Monte Carlo search trajectories. The best ~10% of the output designs were selected on the basis of evaluations of total energy, backbone hydrogen bonds, backbone omega and Φ/Ψ torsion angles and hydrophobic-packing interactions. The selected models were used as inputs for the next round of design calculation. After 12 rounds of design and selection, no further improvements in the backbone quality metrics were observed (an indication of search convergence). We then performed a backbone refinement by minimization in Cartesian space and a final round of design calculation (backbone flexibility was limited in torsion space for all the design calculations). The final top designs converged to the offspring of three initial backbones, sharing 36% to 99% sequence identity. For every parent backbone, one or two designs with the best hydrophobic packing interactions were selected for experimental characterization. The four designs (BB1–4) share 46% to 72% sequence identity. Computational design of DFHBI-binding fluorescence-activating β-barrels DFHBI is short for the chemical name ((Z)-4-(3,5-difluoro-4-hydroxybenzylidene)-1,2-dimethyl-1H-imidazol-5(4H)-one). De novo design of DFHBI-binding β-barrels consists of three steps: (1) generation of ensembles of β-barrel scaffolds (see above), (2) ligand placement by RIF docking and (3) sequence design. Two hundred input scaffolds were generated in step 1 and used in the following steps. Example scripts and command lines are available in Supplementary Data .
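The selection scheme described above (ten Monte Carlo design trajectories per parent, keep the best ~10%, iterate for 12 rounds) is a simple genetic algorithm. A toy sketch, with a single scalar "design score" standing in for the multi-metric Rosetta evaluation and a random step standing in for a design trajectory (lower is assumed better):

```python
import random

def design_generation(parents, score, n_offspring=10, keep_frac=0.1):
    # Each parent seeds independent trajectories (mocked by a random step);
    # the best ~10% of all offspring are promoted to the next round.
    offspring = [p + random.gauss(0, 1) for p in parents for _ in range(n_offspring)]
    offspring.sort(key=score)                 # lower mock score = better design
    return offspring[:max(1, int(len(offspring) * keep_frac))]

random.seed(1)
pool = [10.0] * 50                            # 50 parent designs, all scoring 10
for _ in range(12):                           # 12 rounds, as in the protocol above
    pool = design_generation(pool, score=lambda x: x)
print(f"best mock design score after 12 rounds: {min(pool):.2f}")
```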
RIF docking The RIF docking method performs a simultaneous, high-resolution search of continuous rigid-body docking space as well as a discrete sequence-design space. The search is highly optimized for speed and in many cases, including the application presented here, is exhaustive for a given scaffold–ligand pair and design criteria. RIF docking comprises two steps. In the first step, ensembles of interacting discrete side chains (referred to as ‘rotamers’) tailored to the target are generated. Polar rotamers are placed on the basis of hydrogen-bond geometry, whereas apolar rotamers are generated via a docking process and filtered by an energy threshold. All the RIF rotamers are stored in ~0.5 Å sparse binning of the six-dimensional rigid body space of their backbones, allowing extremely rapid lookup of rotamers that align with a given scaffold position. To facilitate the next docking step, RIF rotamers are further binned at 1.0 Å, 2.0 Å, 4.0 Å, 8.0 Å and 16.0 Å resolution. In the second step, a set of β-barrel scaffolds is docked into the produced rotamer ensembles, using a hierarchical branch-and-bound search strategy (see Extended Data Fig. 5a ). Starting with the coarsest 16.0 Å resolution, an enumerative search of scaffold positions is performed: the designable scaffold backbone positions are checked against the RIF to determine whether rotamers can be placed with favourable interacting scores. All acceptable scaffold positions (up to a configurable limit, typically ten million) are ranked and promoted to the next search stage. Each promoted scaffold is split into 2⁶ child positions in the six-dimensional rigid-body space, providing a finer sampling. The search is iterated at 8.0 Å, 4.0 Å, 2.0 Å, 1.0 Å and 0.5 Å resolutions. A final Monte Carlo-based rotamer packing step is performed on the best 10% of rotamer placements to find compatible combinations.
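The coarse-to-fine branch-and-bound strategy can be illustrated in one dimension (the real search runs over the six-dimensional rigid-body space, splitting each promoted cell into 2⁶ children rather than two). A toy sketch, with a quadratic function standing in for the RIF interaction score:

```python
def hierarchical_search(score, lo=-16.0, hi=16.0,
                        resolutions=(16.0, 8.0, 4.0, 2.0, 1.0, 0.5), beam=8):
    # Enumerate cells at the coarsest resolution, rank them, promote the
    # best `beam` candidates, split each survivor into finer children, iterate.
    cells = [lo + i * resolutions[0] for i in range(int((hi - lo) / resolutions[0]))]
    for res in resolutions:
        promoted = sorted(cells, key=score)[:beam]   # rank and promote
        child = res / 2.0                            # finer sampling step
        cells = [c + d for c in promoted for d in (0.0, child)]
    return min(cells, key=score)

# Quadratic stand-in for an interaction score with its optimum at x = 3.3
best = hierarchical_search(lambda x: (x - 3.3) ** 2)
print(f"best placement found: {best:.2f}")
```

The search never evaluates most of the fine grid, yet lands within one final-resolution step of the optimum, which is the point of the branch-and-bound design.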
Sequence design of DFHBI-binding β-barrels A total of 2,102 DFHBI–scaffold pairs from RIF docking were carried forward into Rosetta sequence design. Our design protocol iterated between a fixed-backbone binding-site design calculation and a flexible-backbone design for the remaining scaffold positions. Three variations of this design protocol were used during the sequence optimization. In the initial two rounds of design calculation, RIF rotamers (interacting residues placed during RIF docking) were fixed to maintain the desired ligand coordination. Repacking of RIF rotamers was allowed in the final round of design calculation, assuming that the binding sites had been optimized sufficiently to retain these interactions. A Rosetta mover that biases aromatic residues for efficient hydrophobic packing was added after the first round of design. A similar selection approach and Cartesian minimization as described for non-functional sequence design were used to propagate sequence search and refine the design models. Evaluations of ligand-binding interface energy and shape complementarity were added to the selection criteria. The final set of designs was naturally separated into clusters on the basis of their original RIF docking solutions. For each cluster, a sequence profile was generated to guide an additional two rounds of profile-guided sequence design. Forty-two designs from 22 RIF docking solutions (20 input scaffolds) were selected for experimental characterization (see Supplementary Table 3 ). Post-design model validation and ligand-docking simulation To validate the protein and ligand conformations of the selected designs, we performed ligand-docking simulations on refined apo-protein models. Protein model refinement was carried out on the unbound model of the designs by running five independent 10-ns molecular dynamics simulations with structural averaging and geometric regularization 36 .
Ligand-docking simulations were then performed on this refined unbound model using RosettaLigand 37 with the Rosetta energy function 38 , allowing the rigid-body orientation and intramolecular conformation of the ligand, as well as the surrounding protein residues (both side chains and backbones), to be sampled. The ligand-binding energy landscapes were generated by repeating 2,000 independent docking simulations. Design of disulfide bonds The disulfide bonds were designed between the N-terminal 3–10 helix and a residue on one of the β-strands on the opposite side to the tryptophan corner. The first six residues of the design models were rebuilt with RosettaRemodel 35 and checked for disulfide bond formation using geometric criteria. Once a disulfide bond was successfully placed, the N-terminal helix was redesigned. Redesign of β-turns for b11 Three β-turns (loops 3, 5 and 7) surrounding the DFHBI-binding site of b11 were redesigned to make additional protein–ligand contacts. A set of ‘pre-organized’ loops with high content of intra-loop hydrogen bonds and low B-factors were collected from natural β-barrel structures, and used as search templates to build individual loop fragment libraries. Those custom libraries were used as input for RosettaRemodel to build an ensemble of loop insertions in the b11 design model bound to DFHBI. Two rounds of flexible-backbone design calculation were carried out to optimize ligand interface energy and shape complementarity using sequence profiles to maintain the template backbone hydrogen bonds. Designed loop sequences were validated in silico by kinematic loop closure 39 (KIC). Five hundred loop conformations were generated by independent KIC sampling and scored by the Rosetta energy function. Thirty-six designs with improved ligand-interface energy, shape complementarity and converging conformation sampling were selected for experimental characterization (see Supplementary Data , Supplementary Table 6 ).
Redesign of β-barrel core and DFHBI-binding site for b11L5F.1 After releasing the disulfide bond in b11L5F, with ligand modelled in the lowest-energy docked conformation for b11L5F (see Extended Data Fig. 5f , right), we performed another round of design calculation to further optimize the β-barrel core packing and ligand-binding interactions. The design protocol was very similar to the one used before with fixed ligand–hydrogen-bonding residues from RIF docking. Five designs with 9–15 mutations were selected after manual inspection for experimental characterization. Protein expression and purification Genes encoding the non-functional β-barrel designs (41 from parametric design and 4 from fragment-based design) were synthesized and cloned into the pET-29 vector (GenScript). Plasmids were then transformed into the BL21*(DE3) E. coli strain (NEB). Protein expression was induced either by 1 mM isopropyl β- d -thiogalactopyranoside (IPTG) at 18 °C, or by overnight 37 °C growth in Studier autoinduction medium. Cells were lysed either by sonication (for 0.5–1-l cultures) or FastPrep (MPBio) (for 5–50-ml cultures). Soluble designs were purified by Ni-NTA affinity resin (Qiagen) and monomeric species were further separated by Akta Pure fast protein liquid chromatography (FPLC) (GE Healthcare) using a Superdex 75 increase 10/300 GL column (GE Healthcare). Fifty-six genes encoding DFHBI-binding designs were synthesized and cloned into the pET-28b vector (Gen9). Protein expression and purification were carried out in the same way. Circular dichroism (CD) Purified protein samples were prepared at 0.5 mg/ml in 20 mM Tris buffer (150 mM NaCl, pH 8.0) or PBS buffer (25 mM phosphate, 150 mM NaCl, pH 7.4). Wavelength scans from 195 nm to 260 nm were recorded at 25 °C, 75 °C and 95 °C, and after cooling back to 25 °C. Thermal denaturation was monitored at 220 nm or 226 nm from 25 °C to 95 °C.
Near-ultraviolet wavelength scans from 240 nm to 320 nm and tryptophan fluorescence emission were recorded in the absence and presence of 7 M guanidinium chloride (GuHCl). Chemical denaturation in GuHCl was monitored by both tryptophan fluorescence and near-ultraviolet CD signal at 285 nm. The concentration of the GuHCl stock solution was measured with a refractometer (Spectronic Instruments). Far-ultraviolet CD experiments were performed on an AVIV model 420 CD spectrometer (Aviv Biomedical). Near-ultraviolet CD and tryptophan fluorescence experiments were performed on a Jasco J-1500 CD spectrometer (Jasco). Protein concentrations were determined by 280 nm absorbance with a NanoDrop spectrophotometer (Thermo Scientific). Melting temperatures were estimated by smoothing the sparse data with a Savitzky–Golay filter of order 3 and approximating the smoothed data with a cubic spline to compute derivatives. Reported T m values are the inflection points of the melting curves. Size-exclusion chromatography with multi-angle light scattering Protein samples were prepared at 1–3 mg/ml and applied to a Superdex 75 10/300 GL column (GE Healthcare) on an LC 1200 Series HPLC machine (Agilent Technologies) for size-based separation, and a miniDAWN TREOS detector (Wyatt Technologies) for light-scattering signals. Fluorescence binding assay Protein-activated DFHBI fluorescence signals were measured in 96-well plate format (Corning 3650) on a Synergy neo2 plate reader (BioTek) with λ ex = 450 nm or 460 nm and λ em = 500 nm or 510 nm. Binding reactions were performed at 200 µl total volume in PBS pH 7.4 buffer. Protein concentrations were determined by 280 nm absorbance as described above. DFHBI (Lucerna) was resuspended in DMSO as instructed to make a 100 mM stock and diluted in PBS to 0.5–10 µM. Library construction The deep mutational scanning library for b11L5F was constructed by site-directed mutagenesis as described 40 .
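The melting-temperature estimate described above (Savitzky–Golay smoothing of order 3, then a cubic spline whose steepest point gives the inflection) is straightforward to reproduce. A sketch on a synthetic two-state melting curve with T m = 70 °C, not the authors' data:

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

def melting_temperature(temps, cd_signal, window=7, order=3):
    # Smooth the sparse melt curve, spline it, and report the steepest
    # point of the spline (the inflection) as T_m.
    smooth = savgol_filter(cd_signal, window_length=window, polyorder=order)
    spline = CubicSpline(temps, smooth)
    fine = np.linspace(temps[0], temps[-1], 2000)
    return float(fine[np.argmax(np.abs(spline(fine, 1)))])  # max |d(signal)/dT|

# Synthetic two-state melt: T_m = 70 C, transition width ~3 C, a little noise
t = np.arange(25.0, 96.0, 2.0)
rng = np.random.default_rng(2)
signal = -1.0 / (1.0 + np.exp((70.0 - t) / 3.0)) + 0.01 * rng.standard_normal(t.size)
tm = melting_temperature(t, signal)
print(f"estimated T_m = {tm:.1f} C")
```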
One hundred and eleven PCR reactions were carried out using DNA oligonucleotides directed to each position in two 96-well polypropylene plates (USA Scientific, 1402-9700), and products were pooled and purified by gel extraction kit (Qiagen) for yeast transformation. Combinatorial libraries for b11L5F.1 and b11L5F.2 were assembled using synthesized DNA oligonucleotides (Integrated DNA technologies) as described 41 . Selected positions were synthesized with 1–2% mixed bases to control mutation rate and library size. Full-length assembled genes were amplified and purified for yeast transformation as described 42 . Yeast surface display and fluorescence-activated cell sorting Transformed yeast cells (strain EBY100) 42 were washed and re-suspended in PBSF (PBS plus 1 g/l of BSA). DFHBI in DMSO stock was diluted in PBSF for labelling yeast cells at various concentrations. PBSF-treated cells were incubated with DFHBI for 30 min to 1 h at room temperature on a benchtop rotator (Fisher Scientific). Library selections were conducted using the GFP fluorescence channel at 520 nm with 488-nm laser on a SH800 cell sorter (Sony). Proteolysis treatment and fluorescence labelling were performed in the same way as described 30 . Cell-sorting parameters and statistics for all selections are given in Supplementary Table 16 . Deep sequencing and data analysis Pooled DNA samples for b11L5F deep mutational scanning library were transformed twice to obtain biological replicates. Two libraries were treated and sorted in a parallel fashion. Yeast cells of naive and selected libraries were lysed and plasmid DNA was extracted as described 43 . Illumina adaptor sequences and unique library barcodes were appended to each library by PCR amplification using population-specific primers (see Supplementary Table 8 ). DNA was sequenced in paired-end mode on a MiSeq Sequencer (Illumina) using a 300-cycle reagent kit (Catalogue number: MS-102-3003). 
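Scores computed from such deep-sequencing counts are, at their core, log enrichment ratios of variant counts relative to wild type between the naive and selected populations. The published analysis uses the more sophisticated Enrich/Enrich2 statistical models, so the following is only a minimal sketch with hypothetical counts:

```python
import math

def enrichment_score(sel_var, naive_var, sel_wt, naive_wt, pseudo=0.5):
    # log2 enrichment of a variant relative to wild type between the naive
    # and selected populations; pseudocounts guard against zero counts.
    var_ratio = (sel_var + pseudo) / (naive_var + pseudo)
    wt_ratio = (sel_wt + pseudo) / (naive_wt + pseudo)
    return math.log2(var_ratio / wt_ratio)

# Hypothetical counts: a variant depleted ~4-fold relative to WT by the sort
score = enrichment_score(sel_var=50, naive_var=400, sel_wt=2000, naive_wt=4000)
print(f"enrichment score = {score:.2f}")
```

Negative scores indicate depletion (loss of stability or fluorescence); a variant that tracks wild type scores zero.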
Raw reads were first processed using the PEAR program 44 and initial counts analysed with scripts adapted from Enrich 45 . Stability scores were modelled using sequencing counts from proteolysis sorts as described 30 . Unfolded states were modelled without disulfide bonds (cysteine replaced by serine). Function scores were modelled using sequencing counts from DFHBI fluorescence sorts. A simple meta-analysis statistical model with a single random effect was applied to combine two replicates using the framework developed in Enrich2 46 . BB1 crystal structure BB1 protein was concentrated to 20 mg/ml in an AMICON Ultra-15 centrifugation device (Millipore), and sequentially exchanged into 20 mM Tris pH 8.0 buffer. The initial screening for crystallization conditions was carried out in 96-well hanging drops using commercial kits (Hampton Research and Qiagen) and a mosquito (TTP LabTech). With additional optimization, BB1 protein crystallized in 0.1 M BIS-Tris pH 5.0 and 2 M ammonium sulfate at 25 °C by hanging drop vapour diffusion with 2:1 (protein:solution) ratio. Diffraction data for BB1 were collected over 200° with 1° oscillations, 5-s exposures, at the Advanced Light Source (Berkeley) beamline 5.0.1 on an ADSC Q315R area detector, at a crystal-to-detector distance of 180 mm. The data were processed in space group P2₁ to 1.63 Å using Xia2 47 . The BB1 design model was used as a search model for molecular replacement using the program Phaser 48 , which produced a weak solution (TFZ 6.5). From this, a nearly complete model was built using the Autobuild module in Phenix 49 . This required the rebuild-in-place function of autobuild to be set to ‘False’. Iterative rounds of model building in the graphics program Coot 50 and refinement using Phenix.refine 51 produced a model covering the complete BB1 sequence. Diffraction data and refinement statistics are given in Supplementary Table 18 .
b10, b11L5F_LGL crystal structure and mFAPs–DFHBI co-crystal structures b10 was initially tested for crystallization via sparse matrix screens in 96-well sitting drops using a mosquito (TTP LabTech). Crystallization conditions were then optimized in larger 24-well hanging drops. b10 crystallized in 100 mM HEPES pH 7.5 and 2.1 M ammonium sulfate at a concentration of 38 mg/ml. The crystal was transferred to a solution containing 0.1 M HEPES pH 7.5 with 3.4 M ammonium sulfate and flash-frozen in liquid nitrogen. Data were collected with a home-source rotating anode on a Saturn 944+ CCD and processed in HKL2000 52 . b11L5F_LGL was concentrated to 19.6 mg/ml (1.58 mM), incubated at room temperature for 30 min with 1 mM TCEP, then mixed with an excess of DFHBI (re-suspended in 100% DMSO). b11L5F_LGL complexed with DFHBI was screened via sparse matrix screens in 96-well sitting drops using a mosquito (TTP LabTech) and crystallized in 100 mM Bis-Tris pH 6.5 and 45% (v/v) polypropylene glycol P 400. The crystal was flash-frozen in liquid nitrogen directly from the crystallization drop. Data were collected with a home-source rotating anode on a Saturn 944+ CCD and processed in HKL2000 52 . mFAP0 and mFAP1 were mixed with excess DFHBI (re-suspended in 100% DMSO), while keeping the final DMSO concentration at less than 1%. The mFAP0 and mFAP1 complexes were then concentrated to approximately 41 mg/ml and 64 mg/ml, respectively, and initially tested for crystallization via sparse matrix screens in 96-well sitting drops using a mosquito (TTP LabTech). Crystallization conditions were then optimized in larger 24-well hanging drops macroseeded with poor-quality crystals obtained in sitting drops. mFAP0 complexed with DFHBI crystallized in 200 mM sodium chloride, 100 mM HEPES pH 7.5 and 25% (w/v) polyethylene glycol 3350. The crystal was transferred to the mother liquor plus 2 mM DFHBI and 10% (w/v) polyethylene glycol 400, then flash-frozen in liquid nitrogen.
Data were collected at the Berkeley Center for Structural Biology at the Advanced Light Source (Berkeley), on beamline 5.0.2 at a wavelength of 1.0 Å, and processed in HKL2000 52 . mFAP1 complexed with DFHBI crystallized in 100 mM MES pH 6.5 and 12% (w/v) polyethylene glycol 20,000. The crystal was transferred to the mother liquor plus 2 mM DFHBI and 15% glycerol, then flash-frozen in liquid nitrogen. Data were collected with a home-source rotating anode on a Saturn 944+ CCD and processed in HKL2000 52 . Structures were solved by molecular replacement with Phaser 48 via Phenix 49 using the Rosetta design model with appropriate residues cut back to C α and DFHBI removed. The structure was then built and refined using Coot 50 and Phenix 51 , respectively, until finished. Diffraction data and refinement statistics are given in Supplementary Table 18 . Statistics and reproducibility In Fig. 1c , the models were coloured on the basis of the mean values of repulsion energy by position (Rosetta fa_rep) derived from a set of poly-valine backbones relaxed with constraints ( n = 189 independently generated models); relaxed with constraints with a glycine in the middle of each C β -strip ( n = 186 independently generated models) and relaxed without constraints with glycines and β-bulges ( n = 194 independently generated models). This experiment was performed twice on different sets of backbones and produced similar results. In Fig. 2b–e , BB1 was purified and sized with SEC at least five times independently, yielding different ratios of monomeric-to-oligomeric species (20–75%). The fraction of monomer could be increased by heat-shocking the cells at 42 °C shortly before induction. Two biological replicates of the far- and near-ultraviolet CD and tryptophan fluorescence spectra acquisition of BB1 were done with similar results, and the chemical denaturation experiment performed once. In Extended Data Fig.
4a–c , the analysis of BB1 with SEC–MALS was repeated twice on independently prepared protein samples and similar molecular masses were obtained. Additionally, the experiments were repeated on one sample stored at 4 °C at different time points ( t = 0, t = 7 days and t = 30 days); all experiments had similar results and confirmed the stability of the monomeric species. BB2, 3 and 4 were purified once. The molecular mass (with SEC–MALS) and the far-ultraviolet CD spectra of the purified proteins were tested once. The sizing of purified BB1 mutants was performed once, with wild-type BB1 as an internal control. Cell culture and transfection. COS-7 cells (ATCC CRL-1651, confirmed negative for mycoplasma) were grown in DMEM supplemented with 1× NEAA, 100 units/ml penicillin, 100 μg/ml streptomycin and 10% FBS, and collected using 0.25% trypsin EDTA. Per transfection, approximately one million cells were transfected with 2 μg of plasmid using 18 μl of Lonza s.e. cell supplement, 82 μl of Lonza s.e. nucleofection solution and pulse code DS-120 on a Lonza 4D X Nucleofector system. After nucleofection, cells were immediately seeded into ibidi μ-Slide eight-well glass bottom chambers at a density of ~30,000 cells/well and incubated overnight at 37 °C. Cell fixation. Cells were fixed at 37 °C for 10 min in PFA/GA fixation solution containing 100 mM aqueous PIPES buffer pH 7.0, 1 mM EGTA, 1 mM MgCl 2 , 3.2% paraformaldehyde and 0.1% glutaraldehyde; reduced for 10 min with freshly prepared 10 mM aqueous sodium borohydride; then rinsed with PBS for 5 min. Microscopy. Conventional widefield epifluorescence imaging was performed on an inverted Nikon Ti-S microscope configured with a 60 × 1.2 NA water-immersion objective lens (Nikon), a light emitting diode source (LED4D120, Thorlabs) and a multiband filter set (LF405/488/532/635-A-000, Semrock), and images were captured with a Zyla 5.5 sCMOS camera (Andor).
The samples were illuminated with 470 nm light at an intensity of ~2 W/cm 2 and with 200 ms exposures. For live-cell experiments, samples were incubated at 37 °C with Gibco CO 2 Independent Medium containing 50 µM DFHBI for 10 min before imaging. Time-lapse movies were acquired over a period of 5 min with a 200-ms exposure every 5 s. For fixed-cell imaging, samples were incubated at room temperature (~22 °C) in PBS containing 50 µM DFHBI for 10 min before imaging. Reporting summary. Further information on experimental design is available in the Nature Research Reporting Summary linked to this paper. Code availability. The Rosetta macromolecular modelling suite ( ) is freely available to academic and non-commercial users. Commercial licenses for the suite are available via the University of Washington Technology Transfer Office. Design protocols and analysis scripts used in this paper are available in the Supplementary Information and at . The source code for the RIF docking implementation is freely available at . Data availability. The atomic coordinates and experimental data of the BB1, b10, b11L5F_LGL, mFAP0–DFHBI and mFAP1–DFHBI crystal structures have been deposited in the RCSB Protein Data Bank with the accession numbers 6D0T , 6CZJ , 6CZG , 6CZH and 6CZI , respectively. All the design models, Illumina sequencing data, sequencing analysis and source data (Figs. 2 , 4 , Extended Data Figs. 6e , 7 , 8a, h ) are available at . | For the first time, scientists have created, entirely from scratch, a protein capable of binding to a small target molecule. Researchers from the University of Washington School of Medicine report the advance in the Sept. 12 issue of the journal Nature. Previously, such small-molecule binding proteins have been made by altering proteins already existing in nature. That approach significantly limited the possibilities. The ability to make such proteins from scratch, or "de novo," opens the way for scientists to create proteins unlike any found in nature.
These proteins can be custom-designed with high precision and affinity to bind to and act on specific small molecule targets. The lead authors of the paper are Jiayi Dou and Anastassia A. Vorobieva, both senior fellows in the lab of senior author David Baker, professor of biochemistry at the UW School of Medicine and director of the Institute of Protein Design at UW Medicine. Baker is also an investigator at the Howard Hughes Medical Institute. The technique should have wide application in research, medicine and industry, according to Baker. "The successful de novo design of custom-built proteins with small-molecule binding activity sets the stage for the creation of increasingly sophisticated binding proteins that will not have the limitations seen with proteins that have been designed by altering existing protein structures," he explained. To make the protein, the researchers had to achieve another first: creating from scratch a cylinder-shaped protein called a beta-barrel. The beta-barrel structure was ideal because one end of the cylinder could be designed to stabilize the protein, while the other end could be used to create a cavity that can serve as the binding site for the target molecule. Proteins are made of long chains of amino acids. Once synthesized, these chains fold into precise shapes that allow the proteins to perform their functions. The shapes these chains assume are typically incredibly convoluted, but two regular features often occur: alpha-helices, which form when a section of the chain winds around a central axis, and sheet-like structures, called beta-sheets. This clip shows the action of a laboratory-designed, fluorescence-activating beta barrel protein. Credit: Institute for Protein Design/UW Medicine. Beta-sheets form when two or more sections from different parts of the amino acid chain, because of folding, run side-by-side in 3-D space. These sections are "stitched together" by hydrogen bonds, creating a sheet-like structure.
These beta-sheets, in turn, can assemble into barrel-like structures, called beta-barrels. In nature, beta-barrel proteins bind a wide range of small molecules. To design the new protein, Dou and Vorobieva used a software platform, developed in the Baker lab, called Rosetta. It can predict what shape a particular chain of amino acids will assume after synthesis and can tell how changing individual amino acids along the chain may alter that shape. This predictive power makes it possible to test out different combinations of amino acids to design a protein with the desired shape and function. To create the cavity, the researchers used a powerful new docking algorithm, called "Rotamer Interaction Field" (RIF), developed by William Sheffler, a senior research scientist in the Baker lab. RIF rapidly identifies all potential structures of cavities that fulfill the prerequisites for binding specific molecules. Equipped with the new RIF docking methods, Dou, Vorobieva and Sheffler designed the beta barrels to bind a compound called DFHBI, a component similar to what is housed inside green fluorescent protein, which fluoresces when exposed to certain frequencies of light. Green fluorescent protein is routinely used in biological research to locate molecules and structures within living organisms and to track their movement. Anastassia A. Vorobieva, holding her new son, with her research colleague Jiayi Dou. The two scientists led the design and testing of a beta-barrel protein that activates fluorescence. The new protein, built from scratch, is an advance in custom-designing proteins to precisely target small molecules. Credit: Institute for Protein Design/UW Medicine. In their paper, the researchers demonstrate that their custom-designed protein avidly bound and activated the DFHBI compound. "It worked in bacterial, yeast and mammalian cells," said Dou, "and being half the size of green fluorescent protein should be very useful to researchers."
Baker said that the approach will allow researchers to explore an effectively unlimited set of backbone structures with shapes customized to bind the molecule of interest. "Equally important," he added, "it greatly advances our understanding of the determinants of protein folding and binding beyond what we have learned from describing existing protein structures." | 10.1038/s41586-018-0509-0 |
Medicine | Researchers find protein induces non-shivering muscle heat generation | Sarcolipin is a newly identified regulator of muscle-based thermogenesis in mammals, Nature Medicine (2012) doi:10.1038/nm.2897 Journal information: Nature Medicine | http://dx.doi.org/10.1038/nm.2897 | https://medicalxpress.com/news/2012-09-protein-non-shivering-muscle.html | Abstract The role of skeletal muscle in nonshivering thermogenesis (NST) is not well understood. Here we show that sarcolipin (Sln), a newly identified regulator of the sarco/endoplasmic reticulum Ca 2+ -ATPase (Serca) pump 1 , 2 , 3 , 4 , 5 , is necessary for muscle-based thermogenesis.
When challenged to acute cold (4 °C), Sln −/− mice were not able to maintain their core body temperature (37 °C) and developed hypothermia. Surgical ablation of brown adipose tissue and functional knockdown of Ucp1 allowed us to highlight the role of muscle in NST. Overexpression of Sln in the Sln-null background fully restored muscle-based thermogenesis, suggesting that Sln is the basis for Serca-mediated heat production. We show that ryanodine receptor 1 (Ryr1)-mediated Ca 2+ leak is an important mechanism for Serca-activated heat generation. Here we present data to suggest that Sln can continue to interact with Serca in the presence of Ca 2+ , which can promote uncoupling of the Serca pump and cause futile cycling. We further show that loss of Sln predisposes mice to diet-induced obesity, which suggests that Sln-mediated NST is recruited during metabolic overload. These data collectively suggest that SLN is an important mediator of muscle thermogenesis and whole-body energy metabolism. Main Endothermic animals require heat production from within to maintain their core temperature. Mammals, including humans, depend on both shivering and NST mechanisms for temperature homeostasis 6 , 7 . Although muscle shivering is an immediate response to cold stress, continuous muscle shivering leads to exhaustion and muscle damage; therefore, NST mechanisms are activated in these conditions. Brown adipose tissue (BAT) is an important site of NST in most mammals 8 , 9 , 10 . Several studies have suggested that in addition to BAT, skeletal muscle has a role in NST 7 , 11 , 12 , but the molecular details of its involvement have not been completely explored. Studies conducted on heater organs (modified ocular muscle) 13 , 14 , 15 , 16 of fish have shown that continuous sarcoplasmic reticulum Ca 2+ transport (caused by an inherently leaky ryanodine receptor) has evolved for heat production without muscle contraction. 
Similarly, in malignant hyperthermia, mutations in Ryr1 predispose muscle to abnormal Ca 2+ leak on exposure to anesthetic compounds. This condition elevates the amount of cytosolic Ca 2+ and results in excessive activation of Serca-mediated Ca 2+ transport and heat production 17 . However, it is not currently known to what extent the sarcoplasmic reticulum Ca 2+ transport machinery is recruited in muscle-based NST in the absence of contraction. Recent in vitro studies have suggested that Sln can increase heat production by uncoupling Serca-mediated ATP hydrolysis from Ca 2+ transport 18 , 19 . In this study, we sought to determine whether the Sln-Serca interaction is an important mechanism for muscle-based thermogenesis in vivo . The generation of the Sln −/− mouse model has been previously described 4 . Loss of Sln in this model enhances Serca activity and improves muscle function in the muscle tissues where it is expressed 4 , 20 . When housed at 22 ± 1.5 °C (mean ± range), Sln −/− mice had an optimal average core temperature of 36.8 ± 0.3 °C (mean ± s.e.m.) ( Supplementary Fig. 1a ) and surface body heat ( Fig. 1a ). Because BAT is an important contributor to NST in rodents, loss of Sln could be compensated for by BAT. Therefore, we surgically ablated interscapular BAT (iBAT), which constitutes ≥60% of the total BAT content, in a set of wild-type (WT) and Sln −/− mice to minimize its contribution. At 22 ± 1.5 °C, removal of iBAT in WT and Sln −/− mice did not have a major effect on the core temperature ( Fig. 1a and Supplementary Fig. 1a ), physical activity ( Supplementary Fig. 1b ), oxygen consumption (VO 2 ) or respiratory exchange ratio of the mice ( Supplementary Fig. 1c,d ). These data show that Sln −/− mice can maintain an optimal core temperature at 22 ± 1.5 °C, suggesting that Sln-dependent NST is not activated under these conditions.
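Core temperatures throughout are reported as mean ± s.e.m. (standard error of the mean). For readers unfamiliar with the convention, a minimal sketch with made-up transponder readings — the values below are illustrative and are not data from the study:

```python
import math

def mean_sem(xs):
    """Return (mean, standard error of the mean) for a sample."""
    n = len(xs)
    mean = sum(xs) / n
    # sample standard deviation (n - 1 in the denominator)
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return mean, sd / math.sqrt(n)

# hypothetical core-temperature readings (degrees C) from one group
temps = [36.5, 36.9, 37.1, 36.7, 36.8]
m, sem = mean_sem(temps)   # roughly 36.8 +/- 0.1 for this toy sample
```

The s.e.m. shrinks with the square root of the group size, which is why the figure legends report n for every group.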
Figure 1: Sln −/− mice are not able to maintain optimal core temperature ( ∼ 37 °C) and develop hypothermia when challenged with acute cold. ( a ) Infrared imaging of surface body heat in WT and Sln −/− mice with or without iBAT at 22 °C and 4 °C. ( b ) Core body temperature after acute cold exposure in WT and Sln −/− mice, with and without iBAT. n = 22 (WT + iBAT), n = 23 (WT – iBAT), n = 22 ( Sln −/− + iBAT), n = 27 ( Sln −/− – iBAT). ( c ) Percentage of mice reaching ERC. ( d , e ) Average drop in core temperature (Tc) during the first 2 h of 4 °C challenge in WT and Sln −/− mice first acclimatized to 22 °C ( d ) or 30 °C ( e ). ( f , g ) Oxygen consumption averaged over the first two hours of acute cold exposure in WT and Sln −/− mice first acclimatized to 22 °C ( f ) or 30 °C ( g ). * P < 0.05, ** P < 0.01, *** P < 0.001; NS, not significant, as analyzed by Student's t test or one-way analysis of variance (ANOVA). All data are means ± s.e.m. To determine whether Sln is important for cold-induced thermogenesis, we exposed Sln −/− mice to acute cold (4 °C) in a temperature-controlled Comprehensive Lab Animal Monitoring System (CLAMS) module and monitored their core temperature using an infrared camera and implantable transponders. Infrared imaging of Sln −/− mice challenged to a temperature of 4 °C showed a substantial reduction in surface body heat, as determined by a switch in heat intensity from red to yellow ( Fig. 1a ). When exposed for a prolonged period (10 h) to 4 °C, the iBAT-ablated Sln −/− mice, despite their skeletal muscle shivering ( Supplementary Fig. 2a and Supplementary Video 1 ), could not maintain optimal core temperature, and their average body temperature dropped drastically from 37 °C to 32.2 ± 1.4 °C after 4 h and then dropped to 26.9 ± 1.9 °C after 6 h ( Fig. 1b ).
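The reported group means for the iBAT-ablated Sln −/− mice (37 °C at the start, 32.2 °C at 4 h, 26.9 °C at 6 h) imply an accelerating heat loss; a small sketch that turns those time points into interval cooling rates (the computation is illustrative, not from the paper):

```python
# reported mean core temperatures (degrees C) at time points (h) for
# iBAT-ablated Sln-/- mice during the 4 degree C challenge
timepoints = [(0, 37.0), (4, 32.2), (6, 26.9)]

def cooling_rates(points):
    """(interval length in h, degrees C lost per hour) for each
    successive pair of (time, temperature) points."""
    return [
        (t1 - t0, (temp0 - temp1) / (t1 - t0))
        for (t0, temp0), (t1, temp1) in zip(points, points[1:])
    ]

rates = cooling_rates(timepoints)
# first 4 h: about 1.2 degrees C/h; hours 4-6: about 2.65 degrees C/h,
# i.e. the decline accelerates as thermogenesis fails
```

The accelerating rate is consistent with the text: once shivering and residual NST are exhausted, heat loss runs away, which is why an early removal criterion at 25 °C was needed.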
The iBAT-ablated Sln −/− mice died of hypothermia during 10 h of cold exposure; therefore, we removed the iBAT-ablated Sln −/− mice from cold exposure when their core temperature reached 25 °C (early removal criteria, ERC). More than 85% of the iBAT-ablated Sln −/− mice reached ERC ( Fig. 1c ). However, Sln −/− mice with intact iBAT maintained a lowered average core temperature of 33.8 ± 0.6 °C and survived cold challenge, suggesting that BAT can compensate for the loss of Sln. WT mice with intact iBAT were able to maintain an average core temperature of 36.3 ± 0.2 °C during cold challenge (close to the optimal core temperature of 37 °C), which is in agreement with published results 21 . Notably, iBAT-ablated WT mice were able to maintain their core temperature (35.8 ± 0.4 °C) and survive the cold challenge ( Fig. 1b,c ), suggesting that Sln-mediated NST can compensate for loss of iBAT. We further evaluated the cold sensitivity (on challenge to 4 °C) of the Sln −/− mice in a group that we acclimatized for 3 weeks at 30 °C (thermoneutrality for mice) to functionally downregulate the activity of BAT 22 , 23 . We used this method instead of a double knockout for Sln and Ucp1 because very few double knockout mice are born (below the Mendelian ratio), and those that survive to adulthood may develop additional compensatory mechanisms, complicating interpretations of the results (L.A.R. and M.P., unpublished data). The Sln −/− mice acclimatized to 30 °C showed a drastic drop in their average core temperature from 37 °C to 32.5 ± 0.6 °C, whereas Sln −/− mice maintained at 22 °C had average core temperatures that decreased from 37 °C to 35.0 ± 0.6 °C during the first hour of cold exposure ( Fig. 1d,e ). The average core temperature of WT littermates acclimatized at 30 °C decreased from 37 °C to 35.2 ± 0.4 °C in the first hour of cold challenge ( Fig. 1e ), which suggests that Sln can compensate for reduced BAT function.
These results, together with those from the iBAT ablation ( Fig. 1b,c ) studies, suggest that Sln-mediated muscle thermogenesis has a crucial role in the maintenance of core body temperature that is independent of BAT activity. We further evaluated how the loss of Sln affected metabolic rates by measuring the VO 2 in WT and Sln −/− mice at 22, 30 and 4 °C. The WT and Sln −/− mice housed at 22 °C showed higher basal VO 2 ( Fig. 1f,g ) compared to the groups of mice maintained at 30 °C. When challenged to 4 °C, both WT and Sln −/− (both reared at 22 °C) showed comparable increases in VO 2 ; however, the Sln −/− mice acclimatized to 30 °C showed a blunted increase in VO 2 (with 17.2% lower final VO 2 ) compared to WT littermates ( Fig. 1g ). These data suggest that acclimatization at 30 °C reduces the contribution of BAT to metabolic rate, and mice lacking Sln that are acclimatized at 30 °C are unable to increase their metabolic rate when challenged to 4 °C as compared to WT mice. To ascertain that the inability to maintain core temperature by Sln −/− mice during acute cold exposure is primarily caused by loss of Sln expression, we re-expressed Sln in the knockout mice. We achieved this rescue by mating a mouse model overexpressing Sln under the control of the skeletal α-actin promoter with the Sln −/− mice. This new mouse model ( Sln −/−/OE ) had high expression of Sln in the skeletal muscles ( Fig. 2a ). Importantly, overexpression of Sln in the null background fully restored thermogenesis; the iBAT-ablated Sln −/−/OE mice did not develop hypothermia during challenge to 4 °C ( Fig. 2b ). These data collectively suggest that Sln is essential for skeletal-muscle–based facultative thermogenesis. Figure 2: Reintroduction of Sln in Sln −/− mice completely restores thermogenesis, and Sln is necessary for muscle-based NST. ( a ) Transgenic overexpression of Sln in Sln −/−/OE mice. 
R, red gastrocnemius; S, soleus; D, diaphragm; E, extensor digitorum longus; T, tibialis anterior; G, gastrocnemius; Q, quadriceps; Casq1, skeletal calsequestrin. ( b ) Core body temperature during acute cold exposure after Sln overexpression in the Sln −/− ( n = 8) background. ( c – e ) Core body temperature during acute cold exposure in WT mice ( c ) and in Sln −/− mice ( d , e ) after curare treatment. ( f ) Physical activity of the indicated mouse strains with and without curare treatment. ** P < 0.01, *** P < 0.001 by Student's t test. All data are means ± s.e.m. In addition to shivering, skeletal-muscle–based NST has been suggested to have a crucial role in thermogenesis 6 , but direct experimental evidence supporting this is lacking. To show that skeletal muscle is an important site of NST, we inhibited shivering in WT mice (C57BL/6J) by administering a low dose of curare (0.4 mg per kg body weight intraperitoneally 24 ), which is known to competitively block the binding of the neurotransmitter acetylcholine to its receptors 25 . When exposed to cold, untreated WT mice with intact iBAT showed normal shivering and maintained an average core temperature of 36.3 ± 0.2 °C ( Fig. 2c ). However, WT mice treated with curare showed a marked ( ∼ 50%) reduction in shivering but were able to maintain an average core temperature of 35.4 ± 0.4 °C, which is close to the optimal core temperature, suggesting that NST mechanisms are sufficient for the maintenance of core temperature in the absence of shivering ( Supplementary Fig. 2b and Supplementary Video 2 ). Next, we minimized shivering with curare in iBAT-ablated WT mice to highlight the existence of muscle-based NST. Notably, iBAT-ablated WT mice were able to maintain their core temperature when exposed to 4 °C, and treatment with curare did not cause any significant additional decrease (35.1 ± 0.5 °C) ( Fig. 2c ), suggesting that skeletal-muscle–based NST is an important mechanism for thermoregulation.
We then investigated whether blunting of shivering by treatment with curare affects thermogenesis in Sln −/− mice with intact iBAT. Notably, reduction of shivering by curare treatment caused a rapid decline in core temperature in these mice during the first hour of exposure (37.1 ± 0.4 °C to 34.5 ± 0.8 °C in 30 min and then to 33.9 ± 0.9 °C in 60 min) ( Fig. 2d,e ). These data indicate that shivering is an important component of thermoregulation during the initial phase of cold challenge. Curare treatment only modestly altered physical activity, but the treated mice were able to move around freely and showed normal grooming behavior ( Fig. 2f ). Further, measurements of VO 2 in WT and Sln −/− mice during cold challenge showed that treatment with curare did not affect metabolic rate ( Supplementary Fig. 2c,d ). To show the role of Ryr1-mediated Ca 2+ leak in Sln-mediated NST, we used dantrolene, an inhibitor of Ryr commonly used to treat malignant hyperthermia 26 . Administration of dantrolene (4 mg per kg body weight intraperitoneally) did not affect physical activity, including shivering, in WT mice with and without intact iBAT and Sln −/− mice with intact iBAT exposed to 4 °C ( Supplementary Fig. 3a ), but in WT mice with intact iBAT pretreated with dantrolene and exposed to 4 °C, core temperature decreased to 34.6 ± 0.4 °C ( Fig. 3a ). Similarly, treatment of iBAT-ablated WT mice with dantrolene caused a significant ( P < 0.001) decrease in their average core temperature to 33.0 ± 1.0 °C ( Fig. 3a ). However, treatment with dantrolene had little effect on Sln −/− mice, as they were already cold sensitive ( Fig. 3b ). These data may suggest that Ryr1-mediated Ca 2+ leak could be involved in increasing the cytoplasmic calcium pool that is essential for stimulating thermogenesis through Serca. VO 2 was increased after cold challenge in WT mice with and without intact iBAT and Sln −/− mice with intact iBAT pretreated with dantrolene ( Supplementary Fig. 3b ). 
Figure 3: Molecular basis of Sln-mediated thermogenesis. ( a , b ) Core body temperature after acute cold exposure and after dantrolene treatment in WT mice ( a ) and Sln −/− mice ( b ) with or without iBAT. n = 8 (WT + iBAT), n = 8 (WT – iBAT), n = 11 ( Sln −/− + iBAT). ( c ) Alignment of the mouse Sln and Plb protein sequences. The amino acid residues Asn30 in Plb and Glu7 in Sln were mutated to cysteine for crosslinking experiments with Serca1a. ( d ) Western blot analysis showing Sln crosslinking to Serca1a in the presence of Ca 2+ (0–100 μM). ( e ) Western blot showing crosslinking of Plb with Serca1a in the presence of Ca 2+ (0–100 μM). All data are means ± s.e.m. Although previous studies have suggested that the Serca pump can generate heat 27 , 28 during Ca 2+ transport, the exact mechanism of this process has not been explored. To determine how the Sln interaction with Serca leads to heat generation, we compared the interaction of Sln and Serca to that of phospholamban (Plb), a known regulator of the Serca pump, by chemical crosslinking using microsomes from HEK 293 cells in the presence of increasing concentrations of Ca 2+ . We introduced cysteine at a homologous position, as determined by a comparison of the amino acid sequences of Sln and Plb, to create a site for crosslinking ( Fig. 3c ). Previous studies have shown that N30C Plb specifically crosslinks with Serca, and by screening the N-terminal residues of Sln for crosslinking with Serca 29 , we found E7C Sln, homologous to N30C Plb, specifically crosslinks with Serca. Crosslinking studies with bismaleimidohexane (BMH, a homobifunctional sulfhydryl crosslinker) showed that Sln harboring the E7C mutation continues to interact with Serca even in the presence of high concentrations of Ca 2+ (0.1–100 μM) ( Fig. 3d ). However, the interaction of Plb harboring the N30C mutation with Serca is abolished at concentrations of Ca 2+ above 0.1 μM, as has been previously reported 29 ( Fig. 3e ).
This finding, showing that Sln and Ca 2+ can bind to Serca simultaneously, suggests that Sln has the ability to promote uncoupling of the pump, leading to increased ATP hydrolysis and heat production. We next tested whether the presence of Sln results in increased energy cost by challenging Sln −/− mice with a high-fat diet (HFD) at 22 ± 1.5 °C for 12 weeks. HFD feeding resulted in significantly more weight gain in Sln −/− mice compared to WT mice ( Fig. 4a,b ) despite a lower food (weight and calorie) intake ( Supplementary Fig. 4 ). Sln −/− mice had higher fat content, as determined by magnetic resonance imaging (MRI) (2.55-fold ± 0.33-fold) (mean ± s.e.m.) and fat pad weights, than WT mice ( Fig. 4c,d ). Further, a histological analysis showed that white adipose tissue, BAT and liver cells of HFD-fed Sln −/− mice had more fat droplet accumulation than those of WT mice ( Fig. 4e ). HFD-fed WT mice were less obese than Sln −/− mice ( Fig. 4a and Supplementary Fig. 5 ) but had upregulated Sln expression (3-fold to 5-fold in the soleus), suggesting that Sln-mediated NST is recruited to increase metabolism ( Fig. 4f ); the expressions of Serca1a and Serca2a were not altered in these mice. HFD-fed Sln −/− mice showed elevated serum glucose, cholesterol and triglyceride concentrations and greater glucose intolerance ( Fig. 4g and Supplementary Fig. 4c,d ). We also found an upregulation of Ucp1 expression in the BAT of both HFD-fed groups, as reported previously 30 ( Supplementary Fig. 5b ), and the expressions of Serca and Sln were not altered in cardiac muscle by HFD ( Supplementary Fig. 5c ). These data provide evidence that Sln-Serca–based NST is recruited during metabolic overload to increase energy expenditure, thereby reducing adiposity. Figure 4: Sln −/− mice are prone to develop obesity when fed HFD. ( a ) Increase in body weight during 12 weeks of HFD feeding ( n = 8 mice per group). ( b ) Sln −/− mice fed HFD become obese compared to WT mice.
( c ) MRI of the fat distribution in representative mice fed on HFD. ( d ) Determination of the fat content of the mice in a . WAT, white adipose tissue. ( e ) Representative histological sections of intraperitoneal WAT, BAT and liver stained with H&E and osmium tetroxide (osmium tet) (seen as black dots). Scale bar, 25 μm. ( f ) Representative western blots of soleus and diaphragm muscle from chow-fed and HFD-fed WT and Sln −/− mice. n = 4. ( g ) Glucose tolerance test. * P < 0.05, ** P < 0.01 for the comparison of WT and Sln −/− HFD-fed mice by one-way ANOVA. All data are means ± s.e.m. In conclusion, our findings show that skeletal muscle is an important site of NST and the SLN-SERCA interaction is the basis for skeletal-muscle thermogenesis. We suggest that SLN-mediated NST in the maintenance of core temperature is central in animals that have reduced BAT content 6 , 31 or in which functional BAT is absent (birds 32 and pigs 33 ). These findings are relevant to large mammals, including humans, where SLN is expressed several fold higher than in rodents 34 and BAT content becomes restricted in adult life 35 . Based on these findings, we propose that the SLN-SERCA interaction in skeletal muscle can serve as a potential target to modulate energy metabolism and treat metabolic-overload–induced obesity. Methods. Mice. The generation of Sln −/− mice has been described previously 36 . Sln overexpression in mouse skeletal muscle was achieved using the skeletal α-actin promoter 37 . The transgenic line 4, with protein overexpression in skeletal muscle but not other tissues, was selected for further breeding with the Sln −/− mice to generate Sln −/−/OE mice. The genotypes of the mice were determined by PCR from tail snips, and all mice were maintained in the C57BL/6J genetic background. The study protocol was approved by the Ohio State University Institutional Animal Care and Use Committee (OSU-IACUC).
All of the animal procedures were carried out at our Association for Assessment and Accreditation of Laboratory Animal Care International–accredited animal facility and conducted in accordance with the Guide for the Care and Use of Laboratory Animals. iBAT removal surgery and measurements of core temperature. Surgical removal of iBAT was performed under anesthesia according to OSU-IACUC–approved protocols. A 1- to 2-cm incision was made on the dorsal skin, the iBAT was removed by dissection, and the skin was closed with wound clips. Control mice underwent similar surgery except without removal of iBAT. Mice were allowed to fully recover before they were challenged to 4 °C. The core body temperature of the mice was measured using telemetric transponders (IPTT-300, BioMedic Data System Inc, Seaford, DE, USA) implanted just below the skin in the interscapular region. The whole-body core temperature was calculated as the mean temperature from each group between 2 and 4 h after cold challenge. The surface temperature of the mice was imaged using a high-resolution infrared camera (Ti32-60HZ Thermal Imager, Fluke Corporation, Everett, WA, USA). Acute cold challenge. Acute cold exposure of mice to 4 °C was performed in the CLAMS (CLAMS Columbus Instruments Inc, Columbus, OH, USA) set up, which is temperature controlled. For these experiments, 12-week-old male mice were individually housed, and their core temperature was measured using telemetric transponders. Metabolic parameters (oxygen consumption and carbon dioxide production) were monitored throughout the experiment by indirect calorimetry. In addition, the physical activity of each mouse was monitored using a multidimensional infrared light detection system placed on bottom and top levels of each individual cage of the CLAMS. Age-matched WT and Sln −/− mice were acclimatized to 30.0 ± 1.0 °C for 3 weeks, and the mice were challenged to acute cold at 4 °C as described above. 
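The core-temperature readout described above (the group mean over the 2–4 h window of the cold challenge) amounts to a simple windowed average over timestamped transponder readings. A minimal sketch — the times and temperatures below are made up for illustration, not taken from the study:

```python
def window_mean(readings, start_h=2.0, end_h=4.0):
    """Mean of (time_h, temp_c) readings falling inside [start_h, end_h]."""
    vals = [temp for t, temp in readings if start_h <= t <= end_h]
    if not vals:
        raise ValueError("no readings in window")
    return sum(vals) / len(vals)

# hypothetical readings for one mouse: (hours into challenge, degrees C)
readings = [(0.5, 36.9), (1.5, 36.2), (2.5, 35.4), (3.5, 35.0), (4.5, 34.8)]
core_temp = window_mean(readings)   # mean of the 2.5 h and 3.5 h readings
```

Averaging over a fixed late window rather than the whole trace avoids the transient of the first two hours, during which shivering still dominates.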
Shivering in WT and Sln −/− mice was recorded for 1 h during the 4 °C cold challenge, and representative stretches of these recordings are presented as Supplementary Videos 1 and 2 . Treatment with curare and dantrolene. Curare and dantrolene were solubilized in an aqueous solution containing 0.9% NaCl and administered intraperitoneally. Curare (D-tubocurarine hydrochloride, Sigma-Aldrich, St. Louis, MO, USA) was administered at 0.4 mg per kg body weight, as previously reported 38 . Dantrolene (dantrolene sodium salt, Sigma-Aldrich, St. Louis, MO, USA) was administered at 4 mg per kg body weight, as previously reported 39 . After administration of the drug, the mice were visually monitored for 15 min and then transferred to the CLAMS for acute cold challenge. The number of shivering episodes (>5 s) was quantified for 1 h in the curare-treated mice ( n = 4) during cold challenge. Chemical crosslinking of Sln with Serca. Complementary DNA (cDNA) of rat Serca1 and mouse Sln and Plb were cloned into the pcDNA 3.1(+) vector between the restriction sites NheI and XhoI, and EcoRI and XhoI, respectively. The residues in the Sln (Glu7) and Plb (Asn30) sequences were mutated to cysteine for crosslinking through the QuikChange™ site-directed mutagenesis method (Stratagene, La Jolla, CA). Serca1 cDNA with E7C Sln or N30C Plb cDNA was co-transfected into cultured HEK 293 cells using Lipofectamine, and microsomes were prepared as described previously 40 after 48 h of transfection. Chemical crosslinking was performed using the homobifunctional sulfhydryl crosslinker BMH (Thermo Scientific Inc., Rockford, IL, USA) as described previously 41 . The reaction mixture contained 40 mM 3-(N-morpholino)propanesulfonic acid (MOPS) (pH 7.0), 3.2 mM MgCl 2 , 75 mM KCl, 3 mM ATP and 1 mM ethylene glycol tetraacetic acid (EGTA). The calculation of free calcium was done with the Maxchelator freeware, and CaCl 2 was added to make the desired amount of free calcium in the reaction.
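Maxchelator solves the Ca 2+ /EGTA buffering equilibrium used to set the free-calcium series above. A stripped-down single-chelator version solves the same binding quadratic; the apparent Kd below is illustrative only — real calculations must correct for pH, Mg 2+ , ATP and ionic strength, which is exactly what Maxchelator does:

```python
import math

def free_ca(total_ca, total_egta, kd):
    """Free [Ca2+] for a single 1:1 chelator (all concentrations in the
    same units). Mass action Kd = [Ca][EGTA]/[CaEGTA] plus the two mass
    balances give x^2 + (Et - Ct + Kd)*x - Kd*Ct = 0; return the
    physical (positive) root."""
    b = total_egta - total_ca + kd
    return (-b + math.sqrt(b * b + 4.0 * kd * total_ca)) / 2.0

# 1 mM (1000 uM) EGTA as in the crosslinking buffer; an illustrative
# apparent Kd of 0.15 uM; total Ca added as CaCl2, all in uM
kd_app = 0.15
total_egta = 1000.0
free = {ca: free_ca(ca, total_egta, kd_app) for ca in (100.0, 500.0, 900.0)}
```

The sketch shows why nearly a full equivalent of CaCl 2 must be added before free Ca 2+ rises into the micromolar range: almost all added calcium is chelated until EGTA approaches saturation.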
Individual reactions contained 15 μg of microsome, and the reaction was started by adding 0.1 mM BMH and then incubating for 1 h. The reaction was stopped by adding SDS-PAGE sample-loading buffer containing 15% SDS with 100 mM dithiothreitol. Crosslinked samples were subjected to SDS-PAGE and standard western blotting using antibodies to Sln (custom made, 1:2,000 dilution), Plb (Zymed 1:3,000 dilution) and Serca1a (custom made, 1:2,000 dilution). HFD-induced obesity. Eight-week-old Sln −/− mice and WT littermates (age and weight matched) were segregated into four experimental groups and fed either a standard chow diet (3.0 kcal/g, 4.25% kcal from fat) or an HFD (4.73 kcal/g, 45% kcal from fat; D12451, Research Diet Inc., New Brunswick, NJ, USA). Body weight was measured once every week and was expressed as weight gained in grams per mouse. Body fat distribution was measured by nuclear magnetic resonance using a 9.4 T system (Bruker BioSpin, Billerica, MA, USA) in our Small Animal Imaging Facility. The mice were anaesthetized with 1–1.5% isoflurane and secured on an animal bed inside the MRI scanner. The mice were monitored using a small animal monitoring system (Model 1025, Small Animals Instruments, Inc. Stony Brook, NY, USA), and the heart rates of the mice were maintained in the range 350–450 beats per minute by adjusting the amount of anesthesia. To further confirm the MRI results, the fat pads were weighed and expressed as a percentage of the total body weight. Serum chemistry. Serum chemistry in HFD-fed mice was performed after overnight fasting. Serum glucose was measured using a Breeze 2 glucometer (Bayer HealthCare LLC, Tarrytown, NY, USA). Serum triglyceride, cholesterol and ketone were measured using a CardioChek PA kit (HealthCheck Systems, Brooklyn, NY, USA). Glucose tolerance tests were performed on fasted mice (12 h) by intraperitoneal injection of D-glucose (2 g per kg of body weight). Western blotting.
Expression of Sln, Serca1a, Serca2a (custom-made antibodies, 1:3,000 dilution), Casq1 (Affinity bioreagents antibody to Casq1, MA3913, 1:1,000 dilution) and Ucp1 (Abcam, ab10983, 1:2,000 dilution) was determined using standard western blotting techniques. The desired amount of whole homogenate supernatant was adequately resolved by SDS-PAGE (16% Tris-tricine gel for Sln and 8% or 10% Tris-glycine gel for other proteins). Proteins were transferred to a 0.2-μm nitrocellulose membrane for Sln or a 0.45-μm nitrocellulose membrane for the other proteins and blocked with 5% milk or BSA containing 0.05% Tween 20 for 60 min. Secondary horseradish-peroxidase–conjugated antibody (KPL Inc., 074-1506) was applied for 1 h at room temperature at a dilution of 1:50,000 in Tris-buffered saline containing 0.05% Tween-20. Signal was detected using an enhanced chemiluminescence reagent (Pierce Thermo Scientific). The protein bands were then scanned (HP Imaging). Statistical analyses. Data are expressed as means ± s.e.m. Statistically significant differences were assessed by Student's t test (two groups) or one-way ANOVA (more than two groups). Change history 06 December 2012 The authors would like to add two co-authors, A. Russell Tupling and Eric Bombardier, to the study. The author list, Acknowledgments and Author Contributions have been corrected in the HTML and PDF versions of this article. | (Medical Xpress)—A team of researchers working in Ohio has found evidence that suggests that the protein sarcolipin, normally a regulator of a calcium pump, also serves as a means of causing muscles to generate body heat independent of shivering. In their paper published in Nature Medicine, the team says their results show that heat derived from brown fat burning white fat stores, or from muscle shivering, is not the only means the body has of keeping warm.
The traditional view of thermogenesis, in which energy stores are burned to keep the body warm, holds that mammals have only two means of maintaining warmth when it's colder outside than inside their bodies: brown fat draws resources from white fat and burns them, or muscles contract and relax rhythmically, producing shivering. This new research shows that there is a third way as well, and it involves the protein sarcolipin, which the researchers say causes otherwise idle muscles to burn energy, creating heat that also helps to keep the body warm. The researchers came to this conclusion by studying the impact of brown fat, muscles and sarcolipin on heat generation in mice. First they genetically caused a small group of mice to cease producing sarcolipin, then they removed all of their brown fat. They also removed the brown fat from another small test group still able to produce sarcolipin. Afterwards, all of the mice in both groups were subjected to a 4 °C cold test environment. The mice that weren't producing sarcolipin died within ten hours; those that were, even without the aid of their brown fat, survived. This proves, the team says, that sarcolipin does indeed induce idle muscle to produce heat, enough obviously to keep mice alive in a cold cage. But there was another, perhaps even more intriguing, finding as well. The team reports that the mice that had their ability to produce sarcolipin removed also tended to gain weight when put on a high-fat diet; so much so that they grew to become 33 percent heavier than mice that continued to produce sarcolipin. This, the team says, might just have implications in the human world, as it seems plausible that increased amounts of sarcolipin production might result in more energy burning and thus weight loss. That hasn't been tested yet, but it most certainly will be, both by this team and others searching for a way to help people lose weight despite inappropriate diets or limited exercise regimens.
| doi:10.1038/nm.2897 |
Earth | Fire danger in the high mountains is intensifying, shows study of four decades of data | Mohammad Reza Alizadeh et al, Elevation-dependent intensification of fire danger in the western United States, Nature Communications (2023). DOI: 10.1038/s41467-023-37311-4 | https://dx.doi.org/10.1038/s41467-023-37311-4 | https://phys.org/news/2023-04-danger-high-mountains-decades.html | Abstract Studies have identified elevation-dependent warming trends, but investigations of such trends in fire danger are absent in the literature. Here, we demonstrate that while there have been widespread increases in fire danger across the mountainous western US from 1979 to 2020, trends were most acute at high-elevation regions above 3000 m. The greatest increase in the number of days conducive to large fires occurred at 2500–3000 m, adding 63 critical fire danger days between 1979 and 2020. This includes 22 critical fire danger days occurring outside the warm season (May–September). Furthermore, our findings indicate increased elevational synchronization of fire danger in western US mountains, which can facilitate increased geographic opportunities for ignitions and fire spread that further complicate fire management operations. We hypothesize that several physical mechanisms underpinned the observed trends, including elevationally disparate impacts of earlier snowmelt, intensified land-atmosphere feedbacks, irrigation, and aerosols, in addition to widespread warming/drying. Introduction Mountains provide a variety of ecosystem services, including supplying about 50% of freshwater globally and an even higher fraction in mountainous arid regions (e.g., 70% of runoff in the western US) 1 , 2 . 
Orographic temperature and precipitation gradients in montane areas facilitate stratified vegetation belts 3 , which promote local hotspots of biodiversity with a high degree of complexity that are particularly vulnerable to ecosystem changes in response to chronic (e.g., warming) and/or acute (e.g., wildfire) stressors 4 . Even small perturbations in these mountainous areas have large repercussions for hydrological and ecological processes, with cascading effects on downstream human-environmental systems 3 , 5 , 6 , 7 . A growing body of literature points to elevation-dependent trends in meteorological and land surface characteristics in montane regions of the world in response to the underlying warming signal 6 , 7 , 8 . In particular, more rapid warming of surface air temperature has been documented at higher elevations compared to that of lower elevations in many regions globally 7 , 9 . Similarly, trends in land surface temperature 10 , reference evapotranspiration 11 , snow cover and snow water equivalent 12 , 13 , and vegetation greening 14 , 15 have also been documented to vary across elevational gradients. Furthermore, elevation-dependent precipitation trends are observed in some regions, although the sign of such changes and associated mechanisms vary across studies 2 , 16 . Changes in the energy and water balance alter the fire danger level 17 , which is defined as the potential for a fire to ignite, spread and require suppression action. The literature has shown widespread increases in fire danger in many regions globally 18 , 19 , 20 , 21 , but how fire danger trends change across the elevational gradient has not been studied. Recent literature shows that atmospheric warming weakened the high-elevation flammability barrier and enabled the upslope advance of fires 22 , and facilitated high-elevation fires that are unprecedented in modern history 23 . 
Here we use the fire danger representation in the US National Fire Danger Rating System (NFDRS) 24 to investigate trends in fire danger across elevations. We note that NFDRS fire danger indices were empirically derived based on mathematical models of fire behavior, and they do not fully capture the energy and water balance related to fuels and fire 24 . Previous studies have linked these indices with regional burned area 25 and the growth of individual fires 26 , and they are operationally used by fire management agencies across the US 27 . We evaluated elevation-dependent trends in fire danger indices between 1979 and 2020 across 15 level III Omernik ecoregions of the western US 28 that are mountainous. We augmented this list with a variety of meteorological variables that are commonly used in fire studies 29 , 30 and posed two questions: (1) Are there elevation-dependent trends in fire danger indices across montane regions of the western US? (2) If so, do they culminate in the elevational synchronization of fire danger? We answered these questions using the gridMET dataset of daily meteorological and NFDRS variables (~4 km grid) 31 , Omernik level III ecoregions map from the US Environmental Protection Agency 28 , and the National Elevation Dataset (10 m resolution) from the US Geological Survey. We present the results of the energy release component (ERC, Fuel Model G in NFDRS 77 24 ) in the main paper and all other NFDRS indices (burning index [BI], 100-h and 1000-h dead fuel moisture—FM100 and FM1000, respectively) and meteorological variables (vapor pressure deficit [VPD], temperature, relative and specific humidity, and reference evapotranspiration) in the Supplementary Information. ERC indicates the available energy per unit area at the flame front, measuring the dryness of dead and live fuels. ERC is less directly influenced by temperature than other fire danger indices. 
It is, however, strongly influenced by relative humidity—which is in turn related to temperature—and precipitation. Here, we select a fuel-agnostic measure across western landscapes through a single fuel model (model G) not to conflate heterogeneity in vegetation distribution with heterogeneity in climate trends. We, however, show that general conclusions hold for other fire danger indices that are not dependent on the fuel model. Results Elevation-dependent trends in warm-season ERC All fire danger indices, as well as meteorological variables, showed marked drying/warming trends over the period of 1979–2020 in all 15 mountainous ecoregions of the western US and across all elevation bands (Fig. 1 , Supplementary Figs. S 1 –S 8 ). Temporal trends in warm-season-average (hereafter warm-season; May–September) fire danger indices were computed over 500 m elevation bands in each ecoregion using least squares linear regression (e.g., Fig. 1a ), and linear slope of trends was calculated across elevation bands (e.g., Fig. 1b ). The former indicates temporal trends in fire danger indices in each band, whereas the latter points out whether or not trends are magnified at higher elevations compared to the lower land. Fig. 1: Elevation-dependent trends in fire danger across montane ecoregions in the western US. a Temporal trends in warm-season (May–September) average energy release component (ERC) from 1979 to 2020 in each elevation band in each mountainous ecoregion of the western US. Elevation bands with less than 250 km 2 of land are removed from the analysis and are shown with gray shading. b Slope of ERC temporal trends across elevation bands, where positive values indicate a larger intensification of ERC at higher elevations. (© OpenStreetMap contributors 2017. Distributed under the Open Data Commons Open Database License (ODbL) v1.0.) 57 . Gray shading shows non-mountainous ecoregions that are not studied here. 
Hatched areas indicate statistically significant trends at the 95% confidence level. “m a.s.l.” stands for meter above sea level, and “yrs/km” stands for years per kilometer of elevation. Warm-season ERC trends were most pronounced at higher elevations and least pronounced at lower elevations (Fig. 1a ). Median ERC among all ecoregions increased by nearly 15 units during 1979–2020 in the highest elevation band (>3000 m). By contrast, median ERC across all ecoregions increased by only ~6 units—smallest across all elevation bands—during 1979–2020 in the lowest elevation band (0–500 m) (Fig. 1a ). Among individual ecoregions and elevation bands, the largest increase in ERC (17 units) from 1979 to 2020 was observed at >3000 m in Central Basin and Range, whereas the smallest increase in ERC (1 unit) was observed in the 0–500 m elevation band of the maritime-affected North Cascades (Fig. 1a ). Positive elevation-dependent ERC trends (i.e., larger increases in ERC with elevation gain) were found in 13 of the 15 studied ecoregions (Fig. 1b ). This trend is even more pronounced for dead fuel moisture (FM100 and FM1000), with all ecoregions showing more marked drying trends in higher elevations (Supplementary Figs. S 3 , S 4 ). Positive elevational ERC slopes range between 1 and 3 units/km in the 42 years of study (Fig. 1b ). For Central Basin and Range, for example, the highest elevations (>3000 m) had an additional 6.4 units of increase in ERC compared to lower elevations (1000–1500 m) from 1979 to 2020. Accelerated increases in fire danger at higher elevations imply synchronization of fire danger across elevations, posing marked fire management challenges 22 , 32 , 33 , 34 . Warm-season average ERC in the most recent decade (2011–2020) was larger than that of the earliest decade (1981–1990) in all ecoregions and across all elevation bands (Fig. 2 ).
Similar drying/warming behavior was observed when viewed through the lens of other fire danger indices and meteorological variables (Supplementary Figs. S 9 –S 17 ). The largest median relative increase in warm-season ERC (19%) during 2011–2020 vs. 1981–1990 across all ecoregions was observed in the highest elevations (>3000 m), and the smallest relative increase (10%) was observed in the lowest elevations (0–500 m). Furthermore, a range of different patterns is observed in warm-season ERC values across different elevation bands within and between ecoregions (Fig. 2 ). Some ecoregions (e.g., Sierra Nevada) were associated with a comparable range of warm-season ERC values across elevation bands, whereas others (e.g., Central Basin and Range) showed widely different values across the elevational gradient (Fig. 2 ). In general, the elevational gradient of warm-season ERC climatology is more pronounced in drier/warmer ecoregions (Fig. 2 ). Southern ecoregions (e.g., New Mexico Plateau/Mountains) were expectedly associated with a higher warm-season ERC range compared to northern ecoregions (e.g., Canadian Rockies), which follow latitudinal temperature gradients (Fig. 2 ). Fig. 2: Decadal average warm-season energy release component (ERC) in each elevation band in each ecoregion. Results for 1981–1990 and 2011–2020 are shown in blue and orange colors, respectively. “m” stands for meter. Warm-season average temperature generally decreases monotonically with elevation gain, but elevational relationships of fire danger indices are non-monotonic (Fig. 3 ). Fire danger indices depend on the energy balance driven by moisture availability, evaporative demand, and temperature (as well as wind for BI) and are hence not merely a simple function of temperature. Responses of ERC, BI, and VPD, as well as FM100, FM1000 range from (1) monotonically decreasing (increasing for FM100 and FM1000) with elevation (Fig.
3a ; Arizona/New Mexico Mountains) to (2) increasing (decreasing for FM100 and FM1000) in response to elevation gain (Fig. 3b ; Cascades), and (3) increasing (decreasing for FM100 and FM1000) to a certain elevation band and a reversed trend afterward (Fig. 3c ; Sierra Nevada). We hypothesize that lower temperature and higher moisture availability in higher elevations in Arizona/New Mexico mountains lead to the monotonic decline in fire danger with elevation (Fig. 3a ). By contrast, the lower elevation western slope of the Cascades is impacted by maritime air mass leading to reduced fire danger indices compared to higher elevations (Fig. 3b ). In Sierra Nevada, lower elevations are adjacent to California’s Central Valley that is heavily irrigated and promotes elevated humidity in the boundary layer that moderates ERC 35 , whereas lower humidity in mid-elevations (1000–2000 m) on the western slope of the region fosters the most intense fire danger indices while decreased temperature and increased moisture (e.g., due to orographic increase in precipitation and snow cover) promote reduced fire danger indices at higher elevations (>2000 m). Other ecoregions are shown in Supplementary Figs. S 18– S 29 . Fig. 3: Elevational changes in the climatology of meteorological variables and fire danger indices. Average warm-season values of mean daily temperature (Tmean), precipitation (Pr), energy release component (ERC), burning index (BI), vapor pressure deficit (VPD), daily reference evapotranspiration (based on alfalfa; Etr), 100-h and 1000-h dead fuel moisture (FM100 and FM1000, respectively), minimum and maximum daily relative humidity (Rmin and Rmax, respectively), and specific humidity (SPH) from 1979 to 2020 in each elevation band are presented for a Arizona/New Mexico Mountains, b Cascades, and c Sierra Nevada. “m a.s.l.” stands for meter above sea level, “mm” stands for millimeter, “kg” stands for kilograms, and “kPa” stands for kilopascal. 
Elevation-dependent increase in critical fire danger days We now turn our attention to critical fire danger days, which are associated with high fire activity and potential for fire growth. We considered a threshold of ERC = 60 as the tipping point for increased fire activity across all studied ecoregions, following Brown et al. 36 , who showed a majority of large forest fires (>400 ha) in the western US started on days with ERC ≥ 60. Our analysis of fire records in the Fire Occurrence Database 37 confirmed this reporting and showed that 77 and 83% of large (>400 ha) and very large (>4000 ha) fires from 1992 to 2020 in the studied ecoregions were associated with ERC ≥ 60 on their discovery date. These statistics also hold for the entire western US. The constant threshold of ERC = 60 is selected here to ensure consistency and inter-comparability across ecoregions. We found an increase in critical fire danger days during 1979–2020 in all elevation bands and all ecoregions (Fig. 4a ). The highest median increase in critical fire danger days in all ecoregions during the 42 years of this study occurred between 2500 and 3000 m with an overall increase of 63 days, of which 22 days occurred outside of the warm season (Supplementary Tables S1– S3 ). By contrast, the lowest median increase in critical fire danger days across all ecoregions occurred in the 0–500 m elevation band, adding >22 extra fire danger days in 42 years (Supplementary Tables S1 – S3 ; Fig. 4a ). Higher elevations in 10 of the 15 studied montane ecoregions were associated with a larger rate of increase in critical fire danger days compared to lower elevations (Fig. 4b ). The highest slope of trend in critical fire danger days as a function of elevation was observed in the Central Basin and Range, indicating an additional >28 fire danger days in 42 years per 1 km of elevation gain (Fig. 4b ). Fig. 4: Elevation-dependent increase in critical fire danger days.
a Temporal trends in critical fire danger days from 1979 to 2020 in each elevation band and ecoregion. b Slope of temporal trends in critical fire danger days across elevation bands. (© OpenStreetMap contributors 2017. Distributed under the Open Data Commons Open Database License (ODbL) v1.0.) 57 . Hatched areas indicate statistically significant trends at the 95% confidence level. “m a.s.l.” stands for meter above sea level, and “yrs/km” stands for years per kilometer of elevation. Trends in critical fire danger days viewed through the lens of other fire danger indices follow a similar pattern as that of ERC (Supplementary Figs. S 30 –S 32 ). The largest median rate of the relative increase in annual critical fire danger days (119%), based on ERC, across all ecoregions during 2011–2020 as compared to 1981–1990 was observed at >3000 m, and the lowest relative increasing rates were observed at <1500 m ranging between 38 and 43% (Fig. 5 ). Results based on other fire danger indices follow a similar pattern (Supplementary Figs. S 33 –S 35 ). Furthermore, critical fire danger days synchronized across all elevation bands in the most recent decade in many ecoregions, such as Central Basin and Range and Sierra Nevada, indicating lessened topographical fire danger relief in a warming climate (Fig. 5 ). However, the decrease of critical fire danger days with elevation gain, due to the lower baseline ERC at higher elevations (Fig. 2 ), is noted in multiple ecoregions, such as Wasatch and Uinta Mountains (Fig. 5 ). Finally, the number of critical fire danger days across ecoregions follows the latitudinal temperature gradient with the highest occurring in the southern region and the lowest in the northern region (Fig. 5 ). This is expected given baseline ERC values are higher in the southern ecoregions and our adopted fire danger threshold (ERC = 60) is constant across all studied ecoregions. Fig. 5: Annual critical fire danger days.
Decadal average critical fire danger days per year based on energy release component (ERC) ≥ 60, during 1981–1990 (blue) and 2011–2020 (orange). “m” stands for meter. Finally, to augment this analysis, we also used the 75th and 95th percentiles of daily ERC records from 1979 to 2020 in each grid pooled over the entire calendar year as the threshold for high and extreme fire danger conditions (Supplementary Fig. S 36 ). We then estimated the number of high and extreme fire danger days in each grid and averaged them in each elevation band, and replicated our trend analysis. Results (Supplementary Figs. S 37 –S 40 ) confirmed the findings of the constant threshold-based analysis (ERC = 60), although nuanced differences exist between the two approaches—especially between those of the constant and the 95th percentile-based thresholds. Furthermore, we conducted this percentile-based threshold analysis for other variables (Supplementary Figs. S 41 –S 52 ) with similar conclusions. Discussion Here we documented larger fire danger trends at high elevations compared to low elevations across mountainous ecoregions in the western US over the past four decades. Our results pointed to the synchronization of fire danger across elevations in many ecoregions and indicated the reduction and disappearance of topographical fire danger relief in a warming climate. Elevation-dependent fire danger intensification implies that higher elevations that were historically wet enough to buffer fire ignition and slow/hinder fire propagation have become conducive to large fire activity in recent decades 22 . This trend is expected to intensify further, given the projected warming and drying trends in the western US 38 . Elevational synchronization of fire danger along with the documented spatial synchronization of fire danger across the western US forests 19 implies further strains on the limited fire suppression and management resources.
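The percentile-based threshold analysis described above can be sketched for a single grid cell: the 75th/95th percentiles of the pooled 1979–2020 daily ERC record serve as "high"/"extreme" thresholds, and exceedance days are counted per year. The daily values below are synthetic stand-ins; gridMET supplies the real fields.

```python
# Sketch of the percentile-based exceedance count described above, for one
# grid cell. Data are synthetic; the real analysis pools 1979-2020 gridMET
# daily ERC over the full calendar year to set the threshold.

def percentile(values, q):
    """Linear-interpolation percentile (q in [0, 100]) of a list."""
    s = sorted(values)
    k = (len(s) - 1) * q / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def exceedance_days_per_year(daily, q=75):
    """daily: {year: [daily ERC values]}; threshold from all years pooled."""
    pooled = [v for year in daily.values() for v in year]
    thr = percentile(pooled, q)
    return {yr: sum(v > thr for v in vals) for yr, vals in daily.items()}

# Tiny synthetic example: ERC creeping upward between two "years"
daily_erc = {1979: [40, 50, 55, 60, 62], 2020: [55, 60, 65, 70, 72]}
print(exceedance_days_per_year(daily_erc, q=75))
```

In the study these per-grid counts are then averaged over each elevation band before the trend analysis.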
We also documented concerning trends in critical fire danger days, especially at higher elevations. Our results showed that a marked portion of this increase in critical fire danger days occurred outside of the warm season, particularly in the southern ecoregions with a higher ERC climatology baseline. We recognize that ground observations of meteorological variables are rare at high elevations, which might induce uncertainty in the reported trends based on a gridded product (gridMET 31 ). Our analysis of fire danger trends across the elevation gradient using ground observations, however, confirmed the reported findings (Supplementary Fig. S 53 ), although for a limited number of ecoregions (five) and stations (a total of 79) constrained by data availability. We also note that elevation-dependent warming has been widely demonstrated across the globe, including in the mountains of the western US (Pepin et al. 6 , 7 and references therein); this trend, superimposed on widespread drying in the fire season 39 , is expected to induce elevation-dependent fire danger intensification. Furthermore, Alizadeh et al. 22 showed that normalized burned area in the high-elevation forests of the western US increased at a higher rate than its low-elevation counterpart from 1984 to 2019, pointing to a weakened flammability barrier in the high elevations, providing secondary evidence for the herein reported trends. We hypothesize that several mechanisms contribute to the elevation-dependent trends in fire danger. Earlier snowmelt and shrinking of snow cover decrease albedo at high elevations that historically stored large snow packs. This contributes to surface warming as a result of increased absorption of incoming solar energy 7 .
While earlier snowmelt may not have directly been the main contributor to the largest intensification rates of fire danger at higher elevations, especially in the warm season, the indirect impact of earlier snowmelt can contribute to soil desiccation and a strengthening of land-atmosphere feedbacks that intensify fire danger 40 , 41 . Land-atmosphere feedbacks have always been a driving factor in water-limited low-elevation regions but have historically been less ubiquitous in energy-limited, moist highlands that have observed the largest soil moisture decline rates in recent decades in response to warming 38 . The warming and drying cycle, due to land-atmosphere feedback, is further intensified by inhibiting cloud formation and its associated energy balance effects, as well as increasing the boundary layer depth that traps heat in the atmosphere 40 . Similarly, warming and drying lead to higher cloud base heights, reducing the total precipitation that reaches the ground 42 . Aerosols also play a role in the observed trends, as valleys of the western US trap fire smoke, dust, and anthropogenic particles and change long- and short-wave radiative balance 7 , 43 . Higher concentrations of aerosols in the valleys buffer the direct impact of the incoming shortwave radiation on the surface weather 44 . This keeps the surface air temperature cooler than it would otherwise be 45 . The aerosol impact is lower at high elevations 46 . Another significant contributor to the smaller intensification of fire drivers in the lowest elevation bands in some ecoregions is the impact of agricultural irrigation on regulating valley temperatures 35 . Elevation-dependent intensification of fire danger has important implications for future ecological and hydrological characteristics of montane ecosystems 34 .
High-elevation mesic forests are associated with long return interval (several decades to millennia), high-intensity, stand-replacing fires, and their frequent occurrence may alter the population, community, composition, and structure of these forests 47 , 48 , 49 . Fire impacts compounded by a warming climate also threaten high-elevation plant species by facilitating pathways for low-elevation species, including invasive annual grasses, to move to upper ground 50 . Increasing high-elevation fire activity also has significant implications for: (1) water availability through removing vegetation cover and impacting snow accumulation and melt 34 , 50 , (2) water quality, through introducing various pollutants and facilitating a magnified increase of stream temperature 7 , 51 , and (3) landscape morphology, through enhanced erosion rates and stream incision 52 . In-depth understanding of elevation-dependent trends in fire danger is specifically important as fire suppression efforts are least effective in high-elevation, mesic forests, which when burned can significantly impact the vulnerable highlands’ flora and fauna, and can have adverse effects that cascade to lower elevations that depend on high-elevation emanated ecosystem services 53 . Methods We calculated the warm-season average of meteorological and US National Fire Danger Rating System indices, herein referred to as fire danger indices, using daily values in each grid from the gridMET 31 dataset (~4 km resolution), and then averaged them for each 500 m elevation band in 15 mountainous ecoregions of the western US. We selected Omernik level 3 ecoregions 28 that encompass mountain ranges of the western US. We then used these variables for (1) estimating least squares linear trends in warm-season averages and (2) quantifying the number of critical fire danger days. We considered fire drivers in each ecoregion separately since each ecoregion includes rather similar ecoclimatic characteristics. 
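The band-level trend estimate described above reduces to an ordinary least-squares slope of warm-season averages against year. A minimal sketch, with hypothetical band averages standing in for the 1979–2020 gridMET series:

```python
# Minimal sketch of the per-band trend estimate described in the methods:
# least-squares slope of warm-season-average ERC against year. Input values
# are illustrative assumptions, not the study's band averages.

def ols_slope(years, values):
    """Least-squares linear trend (units of `values` per year)."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    sxy = sum((y - my) * (v - mv) for y, v in zip(years, values))
    sxx = sum((y - my) ** 2 for y in years)
    return sxy / sxx

years = list(range(1979, 1985))
erc = [55.0, 55.4, 55.9, 56.1, 56.8, 57.0]   # hypothetical band averages
print(f"{ols_slope(years, erc):.3f} ERC units per year")
```

In the paper, the significance of each such slope is then tested with a two-sided t-statistic against the null hypothesis of zero slope.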
We divided each ecoregion into land encapsulated in 500 m elevation bands (e.g., band 1: 0–500 m above sea level, a.s.l., band 2: 500–1000 m a.s.l., …, band 7: >3000 m a.s.l.) to investigate elevation-dependent trends in various biophysical and atmospheric variables. We removed bands with <250 km 2 of land (less than 16 grid cells in gridMET) from the analysis to ensure robust results. Supplementary Table S4 lists the surface area encapsulated in each elevation band in each ecoregion. We used daily average temperature, precipitation, vapor pressure deficit (VPD), energy release component (ERC), burning index (BI), 100-h dead fuel moisture (FM100, representing small diameter fuel), 1000-h dead fuel moisture (FM1000, representing large diameter fuel), minimum and maximum daily relative humidity (Rmin and Rmax, respectively), and specific humidity (SPH). We used the US National Fire Danger Rating System 77 24 for the calculation of fire danger indices. Furthermore, we employed the Fuel Model G (dense conifer stands) for ERC and BI calculations over the entire western US landscapes as a fuel-agnostic measure for documenting climate-driven fire danger trends, so as not to conflate heterogeneity in vegetation distribution with heterogeneity in climate trends. We defined the warm season as May–September since this period is associated with enhanced fire activity in the western US. For warm-season trends, we used the May–September average of each variable. We present the results based on ERC in the main paper and other variables in the Supplementary Information. For critical fire danger days, we counted the number of days in which the daily variable exceeded (or, for the fuel moistures, fell below) its defined threshold. This threshold is selected from the literature and is associated with increased fire activity and growth potential 36 , 54 . Threshold values were selected as: ERC = 60; FM100 = 8%, FM1000 = 10%, and VPD = 2 kPa 54 , 55 .
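The fixed-threshold day counting described above can be sketched as follows; the daily series is synthetic, and the direction of the fuel-moisture comparison (critical when drier, i.e., below the threshold) is stated as an assumption in the code.

```python
# Sketch of the fixed-threshold critical-day count described above: days per
# period on which a daily index crosses its critical value (ERC >= 60, etc.).
# Assumption: fuel moistures (FM100/FM1000) are critical when BELOW their
# thresholds, since lower dead fuel moisture means drier fuels.

THRESHOLDS = {"ERC": 60.0, "VPD": 2.0}                # critical when >=
MOISTURE_THRESHOLDS = {"FM100": 8.0, "FM1000": 10.0}  # critical when <=

def critical_days(daily, var):
    """Count critical fire danger days in a daily series for one variable."""
    if var in THRESHOLDS:
        return sum(v >= THRESHOLDS[var] for v in daily)
    return sum(v <= MOISTURE_THRESHOLDS[var] for v in daily)

erc_week = [52, 58, 61, 63, 66, 59, 70]   # one hypothetical week of ERC
print(critical_days(erc_week, "ERC"))
```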
We augmented this analysis with a local percentile-based threshold for high and extreme fire danger days, in which the 75th and 95th (25th and 5th for fuel moisture) percentiles of the long-term daily time series of each variable in each grid cell, pooled over the entire calendar year, were selected as the threshold, and the number of high and extreme fire danger days in each grid cell in each year was estimated accordingly. Grid estimates were then averaged over the entire elevation band and subsequently used for trend analyses. In all analyses, the slope of linear least-squares regression is presented. The underlying warm-season ERC data justified the use of linear trends (Supplementary Fig. S54), but we acknowledge that not all variables necessarily satisfy the assumptions of a linear regression analysis. We provide warm-season averages of all variables in the Supplementary Data to enable more in-depth analyses. Finally, the two-sided t-statistic was used to test the null hypothesis that the slope coefficient of the linear regression is equal to zero. Upon rejection of the null hypothesis (p-value ≤ 0.05), we accept the alternative hypothesis that the trend is significant. Data availability The referenced climate and fire danger data can be obtained from the gridMET dataset available at: . The referenced Omernik ecoregion boundaries are available at: . The referenced elevation data are obtained from the National Elevation Dataset, which is available at: . The processed elevation-dependent warm-season fire danger indices are available in the Supplementary Data 1 file. Code availability Source codes are available at: Alizadeh (2022) 56 . alizadeh-mr/Wildfire-danger-indices: Initial Release (v1.0.0). Zenodo, ( ) | As wildfire risk rises in the West, wildland firefighters and officials are keeping a closer eye on the high mountains—regions once considered too wet to burn.
The growing fire risk in these areas became startlingly clear in 2020, when Colorado's East Troublesome Fire burned up and over the Continental Divide to become the state's second-largest fire on record. The following year, California's Dixie Fire became the first on record to burn across the Sierra Nevada's crest and start down the other side. We study wildfire behavior as climate scientists and engineers. In a new study published in Nature Communications, we show that fire risk has intensified in every region across the West over the past four decades, but the sharpest upward trends are in the high elevations. High mountain fires can create a cascade of risks for local ecosystems and for millions of people living farther down the mountains. Since cooler, wetter high mountain landscapes rarely burn, vegetation and dead wood can build up, so highland fires tend to be intense and uncontrollable. They can affect everything from water quality and the timing of meltwater that communities and farmers rely on, to erosion that can bring debris and mud flows. Ultimately, they can change the hydrology, ecology and geomorphology of the highlands, with complex feedback loops that can transform mountain landscapes and endanger human safety. Four decades of rising fire risk Historically, higher moisture levels and cooler temperatures created a flammability barrier in the highlands. This enabled fire managers to leave fires that move away from human settlements and up mountains to run their course without interference. Fire would hit the flammability barrier and burn out. However, our findings show that this barrier is no longer reliable as the climate warms. We analyzed fire danger trends in different elevation bands of the Western U.S. mountains from 1979 to 2020. Fire danger describes conditions that reflect the potential for a fire to ignite and spread.
Over that 42-year period, rising temperatures and drying trends increased the number of critical fire danger days in every region in the U.S. West. But in the highlands, certain environmental processes, such as earlier snowmelt that allowed the earth to heat up and become drier, intensified the fire danger faster than anywhere else. It was particularly stark in high-elevation forests from about 8,200 to 9,800 feet (2,500-3,000 meters) in elevation, just above the elevation of Aspen, Colorado. We found that the high-elevation band had gained on average 63 critical fire danger days a year by 2020 compared with 1979. That included 22 days outside the traditional warm season of May to September. In previous research, we found that high-elevation fires had been advancing upslope in the West at about 25 feet (7.6 meters) per year. Credit: Mohammad Reza Alizadeh, CC BY Cascading risks for humans downstream Mountains are water towers of the world, providing 70% of the runoff that cities across the West rely on. They support millions of people who live downstream. High-elevation fires can have a significant impact on snow accumulation and meltwater, even long after they have burned out. For example, fires remove vegetation cover and tree canopies, which can shorten the amount of time the snowpack stays frozen before melting. Soot from fires also darkens the snow surface, increasing its ability to absorb the Sun's energy, which facilitates melting. Similarly, darkened land surface increases the absorption of solar radiation and heightens soil temperature after fires. The result of these changes can be spring flooding, and less water later in the summer when communities downstream are counting on it. Fire-driven tree loss also removes anchor points for the snowpack, increasing the frequency and severity of avalanches. Frequent fires in high-elevation areas can also have a significant impact on the sediment dynamics of mountain streams. 
The loss of tree canopy means rainfall hits the ground at a higher velocity, increasing the potential for erosion. This can trigger mudslides and increase the amount of sediment sent downstream, which in turn can affect water quality and aquatic habitats. Erosion linked to runoff after fire damage can also deepen streams to the point that excess water from storms can't spread in high-elevation meadows and recharge the groundwater; instead, the deepened channels route the water quickly downstream and cause flooding. Hazards for climate-stressed species and ecosystems The highlands generally have long fire return intervals, burning once every several decades if not centuries. Since they don't burn often, their ecosystems aren't as fire-adapted as lower-elevation forests, so they may not recover as efficiently or survive repeated fires. Studies show that more frequent fires could change the type of trees that grow in the highlands or even convert them to shrubs or grasses. Wet mountain areas, with their cooler temperatures and higher precipitation, are often peppered with hot spots of biodiversity and provide refuges to various species from the warming climate. If these areas lose their tree canopies, species with small ranges that depend on cold-water mountain streams can face existential risks as more energy from the Sun heats up stream water in the absence of tree shading. While the risk is rising fastest in the high mountains, most of the West is now at increasing risk of fires. With continuing greenhouse gas emissions fueling global warming, this trend of worsening fire danger is expected to intensify further, straining firefighting resources as crews battle more blazes. | 10.1038/s41467-023-37311-4 |
Medicine | Synapses last as long as the memories they store, neuroscientist finds | Impermanence of dendritic spines in live adult CA1 hippocampus, DOI: 10.1038/nature14467 Journal information: Nature | http://dx.doi.org/10.1038/nature14467 | https://medicalxpress.com/news/2015-06-synapses-memories-neuroscientist.html | Abstract The mammalian hippocampus is crucial for episodic memory formation 1 and transiently retains information for about 3–4 weeks in adult mice and longer in humans 2 . Although neuroscientists widely believe that neural synapses are elemental sites of information storage 3 , there has been no direct evidence that hippocampal synapses persist for time intervals commensurate with the duration of hippocampal-dependent memory. Here we tested the prediction that the lifetimes of hippocampal synapses match the longevity of hippocampal memory. By using time-lapse two-photon microendoscopy 4 in the CA1 hippocampal area of live mice, we monitored the turnover dynamics of the pyramidal neurons’ basal dendritic spines, postsynaptic structures whose turnover dynamics are thought to reflect those of excitatory synaptic connections 5 , 6 . Strikingly, CA1 spine turnover dynamics differed sharply from those seen previously in the neocortex 7 , 8 , 9 . Mathematical modelling revealed that the data best matched kinetic models with a single population of spines with a mean lifetime of approximately 1–2 weeks. This implies ∼ 100% turnover in ∼ 2–3 times this interval, a near-full erasure of the synaptic connectivity pattern. Although N-methyl-d-aspartate (NMDA) receptor blockade stabilizes spines in the neocortex 10 , 11 , in CA1 it transiently increased the rate of spine loss and thus lowered spine density.
These results reveal that adult neocortical and hippocampal pyramidal neurons have divergent patterns of spine regulation and quantitatively support the idea that the transience of hippocampal-dependent memory directly reflects the turnover dynamics of hippocampal synapses. Main The hypothesis that synaptic connectivity patterns encode information has profoundly shaped research on long-term memory. In the hippocampus, synapses in basal CA1 mainly receive inputs from hippocampal area CA3, and the CA3 → CA1 projection has been widely studied regarding its plasticity and key role in memory. As in the neocortex, dendritic spines in the hippocampus are good proxies for excitatory synapses 12 , motivating time-lapse imaging of spines as a means of monitoring synaptic turnover 7 , 8 , 9 , 10 . Previous work has illustrated in vivo imaging of CA1 spines in acute and recently also in chronic preparations 13 , 14 , 15 . We tracked spines for up to ∼ 14 weeks by combining microendoscopes of diffraction-limited resolution 14 (0.85 NA), a chronic mouse preparation for time-lapse imaging in deep brain areas 4 , and Thy1-GFP mice that express green fluorescent protein (GFP) in a sparse subset of CA1 pyramidal neurons ( Fig. 1 and Extended Data Fig. 1 ). Histological analyses confirmed that this approach induced minimal activation of glia ( Extended Data Fig. 2 ), as shown previously 4 , 16 . Figure 1: Dendritic spines are dynamic in CA1 hippocampus of the adult mouse. a , A sealed, glass guide tube implanted dorsal to CA1 allows time-lapse in vivo imaging of dendritic spines. b , A doublet microendoscope projects the laser scanning pattern onto the specimen plane in the tissue. Inset: red lines indicate optical ray trajectories. c , CA1 dendritic spines in a live Thy1-GFP mouse. d , Time-lapse image sequences. 
Arrowheads indicate spines that either persist across the sequence (white arrowheads), disappear midway (red), arise midway and then persist (green), arise midway and then later disappear (yellow), or disappear and then later appear at an indistinguishable location (cyan). Scale bars: 500 μm ( b , inset); 10 μm ( c ); 2 μm ( d ). A major concern was that it is not possible to distinguish two or more spines spaced within the resolution limit of two-photon microscopy. This issue is critical for studies of hippocampal spines, which are more densely packed than neocortical spines 17 . To gauge how commonly the appearances of adjacent spines merged together in optical images, we examined tissue slices from Thy1-GFP mice, using both two-photon microendoscopy and stimulated-emission depletion (STED) microscopy. STED microscopy offered super-resolution ( ∼ 70 nm full width at half maximum (FWHM) lateral resolution), an optical resolution nearly nine times finer than that of two-photon microendoscopy 14 ( ∼ 610 nm), permitting tests comparing pairs of images of the same CA1 dendrites ( Fig. 2a and Extended Data Fig. 3 ). Figure 2: A simple kinetic model is sufficient to describe CA1 pyramidal cell spine dynamics. a , Two-photon microendoscopy and STED imaging of the same dendrites in vitro. Top, two-photon images depict spines closer than the resolution limit as merged entities. Bottom, asterisks mark example visually scored spines, showing cases in which nearby spines do (right) or do not (left) merge. b , Fraction of spines ( n = 151 total) seen by two-photon imaging that were one, two or three spines as determined by STED imaging. Black bars show mean ± s.d. for 12 dendrites. c , Separations between adjacent unmerged spines and pairs of spines that appeared merged by two-photon imaging. Open grey circles mark individual results from each of n = 150 spines. Black bars show mean ± s.d.
d , Example computer-simulated, time-lapse image sequence used to quantify how resolution limits impact measured spine densities and dynamics. e , Computational modelling predicts the underestimation of spine density due to the finite optical resolution. Blue diagonal line: perfect detection of all spines. Black horizontal dashed lines: typical ranges of spine densities on pyramidal cells in neocortex and hippocampus. Red data: results from visually scoring simulated images of dendrites of varying spine densities. Black curve: prediction from the scoring model using 600 nm as the minimum separation between two spines correctly distinguished. f , Modelling predicts the overestimation of spine stability due to merging of adjacent spines in resolution-limited images. Blue data: survival fraction values (mean ± s.e.m.) for actual spine turnover in computer simulations (spine density: 2.56 µm −1 ). Red data: apparent turnover for these same simulated dendrites, as scored from simulated two-photon images. Black curves: theoretical predictions for spine survival based on the scoring model. Scale bars: 1 μm ( a ); and 2 μm ( d ). As expected, we saw nearby spines in STED images that appeared merged in the two-photon images ( Fig. 2a ). Of 151 spines that appeared unitary in the two-photon images, 23 ± 3.6% (standard error of the mean (s.e.m.)) were actually two spines and 6.0 ± 1.6% were actually three spines ( n = 12 dendrites) ( Fig. 2b ). Distances between merged spines in the two-photon images (0.51 ± 0.14 µm; mean ± standard deviation (s.d.)) were below the (0.61 µm) resolution limit 14 ( Fig. 2c ). Clearly, merging can induce illusory spine stability, since two or more real spines must vanish for a merged spine to disappear.
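To see why merging deflates apparent density, a crude one-dimensional Monte Carlo sketch is enough. This is my own simplification, not the paper's simulation framework, which additionally models spine geometry, angle fluctuations and dendrite radius: spines are scattered along a dendrite at a given true linear density, and any run of neighbours spaced closer than the resolution limit is scored as a single apparent spine.

```python
import numpy as np

# Toy 1-D model of optical merging (illustrative only). Spines are placed
# as a Poisson process along a dendrite; neighbours separated by less
# than the resolution limit merge into one apparent spine.
def apparent_density(true_density, resolution_um, dendrite_um=1000.0, seed=0):
    rng = np.random.default_rng(seed)
    n = rng.poisson(true_density * dendrite_um)
    pos = np.sort(rng.uniform(0.0, dendrite_um, n))
    # a new apparent spine starts wherever the gap exceeds the resolution
    clusters = 1 + int((np.diff(pos) > resolution_um).sum())
    return clusters / dendrite_um
```

In this toy geometry, a dense, CA1-like dendrite imaged at a 0.61 µm resolution limit shows an apparent density several times below the true one, whereas a sparse, neocortex-like dendrite is barely affected; the 1-D chaining exaggerates merging relative to the paper's full model, but the qualitative point stands.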
To treat merging effects quantitatively, we developed a mathematical framework that permits systematic examination of turnover dynamics across different kinetic models and investigation of how the merging of spines on imaging alters the manifestations of these dynamics in two-photon imaging data ( Supplementary Information and Extended Data Figs 4 , 5 , 6 , 7 ). We used computer simulations to study how the density and apparent kinetics of merged spines vary with geometric variations of individual spines, spine density, resolution and spine kinetics ( Supplementary Information and Extended Data Fig. 8 ). We also checked experimentally whether fluctuations in spine angle and length, and the radius of the dendrite near the spine, might impact measures of spine turnover ( Extended Data Figs 6 and 9 ). By simulating time-lapse image series, we scored and analysed synthetic data across a broad range of optical conditions, spine densities, geometries and turnover kinetics ( Fig. 2d and Extended Data Figs 4 and 5 ). The simulations and mathematical modelling confirmed that naive analyses of two-photon data are inappropriate at the spine densities in CA1, owing to merging and the resulting illusion of increased stability ( Supplementary Information and Extended Data Figs 4 and 7 ). For stability analyses, we followed previous studies 7 , 8 , 9 in our use of the survival fraction curve, S ( t ), the fraction of spines appearing in the initial image acquired at t = 0 that also appeared in the image acquired at time t . The shape and asymptotic value of S ( t ) provide powerful constraints on kinetic models of turnover and the fraction of spines that are permanent (that is, a rate constant of zero for spine loss) 7 , 8 , 9 ( Supplementary Information ). Strikingly, visual scoring of simulated images ( Fig. 2d and Extended Data Fig. 5 ) yielded underestimates of spine density ( Fig. 2e ) and patent overestimates of the lifetimes of spines ( Fig. 2f ). 
But crucially, our treatment accurately predicted the relationship between the actual density and the visually determined underestimate ( Fig. 2e ), and properly explained the apparent turnover dynamics, S ( t ), in terms of the actual kinetics ( Fig. 2f and Extended Data Figs 5c , 7 ). Overall, the simulations showed that face-value interpretations of two-photon images from CA1 are untrustworthy, but that it is possible to make quantitatively correct inferences about spine kinetics provided that one properly accounts for the optical resolution. Using the same framework, we next analysed real data. Initial analyses focused on four mice in which we acquired image stacks of CA1 pyramidal cells and tracked spines every 3 days for 21 days (60 dendrites total; 50 ± 7 (mean ± s.d.) per session) ( Fig. 3a ). Whenever individual spines appeared at indistinguishable locations on two or more successive sessions, we identified these as observations of the same spine. Overall, we made 4,903 spine observations (613 ± 71 (mean ± s.d.) per day). Spine densities were invariant over time ( Fig. 3b ) (Wilcoxon signed-rank test; n = 16–50 dendrites per comparison of a pair of days; significance threshold = 0.0018 after Dunn–Šidák correction for 28 comparisons; P > 0.047 for all comparisons), as were spine volumes ( P = 0.87; n = 43 spines, Kruskal–Wallis analysis of variance (ANOVA)) ( Extended Data Fig. 10 ) and the turnover ratio, the fraction of spines arising or vanishing since the last session 7 ( Fig. 3b ) (Wilcoxon signed-rank test; 14–40 dendrites; significance threshold = 0.0025 after correction for 20 comparisons; P > 0.31 for all comparisons). Fewer than 50% of spines (46 ± 2%; mean ± s.e.m.; n = 4 mice) were seen throughout the experiment ( Extended Data Fig. 1d, e ), although our simulations had shown that this naive observation overestimated spine stability. Figure 3: NMDA receptor blockade, but not environmental enrichment, altered spine turnover dynamics. 
a , Schedule of baseline imaging sessions. b , Neither measured spine densities (top) nor turnover ratios (bottom) varied for mice in their home cages. Horizontal lines: mean spine density and turnover ratio. c , Spine survival (top) and newborn spine survival (bottom). d , Schedule for study on environmental enrichment. e , f , No significant differences existed between baseline (black points) and enriched conditions (red) regarding spine density ( e , top), turnover ( e , bottom), survival ( f , top), or newborn spine survival ( f , bottom). g , Schedule for the study on NMDA receptor blockade. h , MK801 caused a significant decline in spine density. Data are from mice imaged four times before (black) and six times during (green) MK801 administration. Black and green horizontal lines respectively indicate mean densities during baseline and on the last 4 days of MK801 treatment. The density decrease was highly significant (Wilcoxon signed-rank test; n = 29 dendrites; P = 0.0007). i , The decline in spine density early in MK801 treatment arose from a transient, highly significant difference between the rates at which spines were lost (darker bars) and gained (lighter bars) (Wilcoxon signed-rank test; n = 29 dendrites; P = 0.0008). Greyscale and green-shaded bars represent percentages of spines gained and lost for sessions before and during MK801 dosage. **** P < 0.001. We normalized spine densities to their mean values in baseline conditions, which were 1.03 μm −1 ( b , top), 0.90 μm −1 ( e , top) and 0.78 μm −1 ( h ). All error bars are s.e.m. for dendrites. The time invariance of spine densities and turnover ratios implied that through our mathematical framework we could determine the underlying kinetic parameters governing turnover. S ( t ) curves for the total and newborn spine populations ostensibly resembled those reported for the neocortex 7 , 8 , 9 ( Fig. 3c ).
However, unlike in the neocortex, the 46% of spines seen in all sessions differed notably from the odds that a spine was observed twice in the same location across two distant time points, as quantified by the asymptotic value of S ( t ) (73 ± 3%). This discrepancy suggested that, as our modelling had indicated, many CA1 spines might vanish and reappear in an ongoing way at indistinguishable locations. We next tested whether a prolonged environmental enrichment would alter spine turnover. Previous data from rats have indicated that basal CA1 spine density can rise ∼ 10% after environmental enrichment 18 . Data from mice are limited to CA1 apical dendrites and have yielded contradictory results 19 , 20 ( Supplementary Discussion ). In three mice we imaged a total of 55 basal dendrites (39 ± 14 (s.d.) per day) across 16 sessions, 3 days apart ( Fig. 3d ). After session 8 we moved the mice to an enriched environment, where they stayed throughout sessions 9–16. We made 8,727 spine observations in total (545 ± 216 (s.d.) per day). Comparisons of baseline and enriched conditions revealed no differences in spine density or turnover ( Fig. 3e ) ( n = 10–53 dendrites; P > 0.10, 8 paired comparisons of density; P > 0.039, 7 paired comparisons of turnover ratio; Wilcoxon signed-rank tests with significance thresholds of 0.006 and 0.007, respectively, after corrections for multiple comparisons). Neither were there differences in spine survival ( P > 0.057; 7 time points; n = 10–49 dendrites; Wilcoxon signed-rank test; significance threshold of 0.007 after Dunn–Šidák correction), nor in newborn spine survival ( P > 0.29; 6 time points; n = 5–35 dendrites; significance threshold of 0.008; Fig. 3f ). Thus, in mice, continuous enrichment does not substantially alter spine dynamics on CA1 basal dendrites. 
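For reference, the per-comparison significance thresholds quoted throughout these analyses (0.0018 for 28 comparisons, 0.0025 for 20, 0.006 for 8, 0.007 for 7) follow the Dunn–Šidák correction, which holds the family-wise error rate at α = 0.05 across m tests:

```python
# Dunn-Sidak correction: with m comparisons at family-wise error rate
# alpha, each individual test uses the threshold
#     alpha_per_test = 1 - (1 - alpha) ** (1 / m)
def dunn_sidak(alpha: float, m: int) -> float:
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

# Thresholds used in the text (to rounding):
# m = 28 -> 0.00183, m = 20 -> 0.00256, m = 8 -> 0.00639, m = 7 -> 0.00730
```

The correction is slightly less conservative than the plain Bonferroni division (e.g. 0.05/28 = 0.00179 versus 0.00183 here), which is why the quoted thresholds sit just above the Bonferroni values.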
Nevertheless, mean volumes of stable spines underwent a slight (7 ± 3% (s.e.m.)) but significant decline upon enrichment (Wilcoxon signed-rank test; P = 0.007; 60 spines tracked for 16 sessions) ( Extended Data Fig. 10d ). Data on the structural effects of long-term potentiation (LTP) suggest an explanation of these findings; CA1 spine densities rise transiently after LTP induction but return to baseline values 2 h later 21 , implying that continual enrichment would cause no net change in spine densities ( Supplementary Discussion ). We next examined whether blockade of NMDA glutamate receptors impacts spine turnover. These receptors are involved in multiple forms of neural plasticity, including in the CA1 area 22 . In the neocortex, NMDA receptor blockade stabilizes spines by slowing their elimination while keeping their formation rate unchanged 10 . We tracked CA1 spines across 10 sessions at 3-day intervals in mice receiving the NMDA receptor blocker MK801 beginning after session 4 and onward ( Fig. 3g ). We examined 26 dendrites (25 ± 1.4 (s.d.) per session), made 5,020 spine observations (502 ± 32 (s.d.) per day), and found that MK801 induced a significant decline (12 ± 3% (s.e.m.)) in spine density (Wilcoxon signed-rank test; 25 dendrites; P = 0.0007) ( Fig. 3h ). This stemmed from a transient disparity in the rates of spine loss versus gain (loss rate was 215 ± 42% (s.e.m.) of the rate of gain; Wilcoxon signed-rank test; 25 dendrites; P = 0.0008) ( Fig. 3i ). These results indicate that the survival odds of CA1 spines depend on NMDA receptor function, illustrate our ability to detect changes in spine dynamics, and show that CA1 and neocortical spines have divergent responses to NMDA receptor blockade. To ascertain the underlying time constants governing spine turnover, we compared the S ( t ) curves of visually scored spines to predictions from a wide range of candidate kinetic models ( Fig. 4a ). 
In each model there was a subset (0–100%) of permanent spines; the remaining spines were impermanent, with a characteristic lifetime, τ . Since environmental enrichment left the observed spine dynamics unchanged, we pooled the baseline and enriched data sets to extend the analyses to longer time-scales ( Fig. 4b ). By varying the actual spine density, fraction of stable spines, and characteristic lifetime for the unstable fraction, we identified the model that best fit the S ( t ) curves, using a maximum likelihood criterion ( Supplementary Information ). This model had 100% impermanent spines, with an actual density of 2.6 μm −1 and τ of ∼ 10 days ( Fig. 4a, b ). This is twice the ∼ 5-day lifetime reported for the transient subset of neocortical spines 7 , 8 , 9 . There were also models with both permanent and impermanent spines that gave reasonable, albeit poorer fits to the CA1 data ( Fig. 4a, b ). Crucially, our analysis identified all models whose fits were significantly worse than the best model (white regions in Fig. 4a ; P < 0.05, likelihood-ratio test); we regarded these as unsatisfactory in accounting for the CA1 data ( Supplementary Information ). Figure 4: CA1 and neocortical spines exhibit distinct turnover kinetics. a , Multiple kinetic models are consistent with data on spine survival. Each model considered had two subpopulations: permanent and impermanent spines. Abscissa: mean lifetime for impermanent spines. Ordinate: fraction of spines that are permanent. Each datum is for an individual model; colour denotes the level of statistical significance at which the model could be rejected. Red points denote models that best fit the data. No results are shown for models incompatible with the data ( P < 0.05). Models that best fit data from CA1 have ∼ 100% impermanent spines, with a ∼ 10 day lifetime (green arrowhead). There are also models with permanent subpopulations that cannot be statistically rejected (for example, red arrowhead). 
Models that best fit patterns of neocortical spine turnover (black arrowhead), from mice age- and gender-matched 7 to those used here, have ∼ 50–60% permanent spines and a shorter lifetime ( ∼ 5 days) for impermanent spines than in CA1. Models lacking permanent spines poorly fit the neocortical data; grey arrowhead marks the model for the grey curve in d . The four arrowheads indicate the models that generated the colour-corresponding curve fits in b , d . b , Empirically determined survival curve for CA1 spines (black data: mean ± s.e.m.; data set of Fig. 3e, f ) over 46 days, compared to predictions (solid curves) from two of the models in a (green and red arrowheads in a ). Green curve: best-fitting model, which has no permanent spines. Red curve: an example model with both stable and unstable spines. c , Owing to the higher density of spines in CA1 than in the neocortex (inset), optical merging is far more common in CA1. Given what appears to be one spine, vertical bars represent the probability as determined from the computational model that the observation is actually of 1–5 spines. Probabilities were calculated using spine density values of the inset, 0.75 µm as the minimum separation, L , needed to distinguish adjacent spines. Error bars: range of results for L within 0.5–1.0 µm. d , Empirically determined survival curves for neocortical spines (data set from ref. 7 ) over 28 days, compared with predictions (solid curves) from two different models for spine turnover in a (grey and black arrowheads in a ). Black curve: best fit attained with a stable subpopulation of spines. Grey curve: a model lacking permanent spines, poorly fitting the data. e , Spines in CA1 and neocortex differ substantially in proportions of permanent versus impermanent spines, spine lifetimes, effects of NMDA receptor blockade, and learning or novel experience. 
Our results pointing to a single population of unstable spines in CA1 contrast markedly with findings in adult neocortex, where >50% of spines seem permanent 7 , 8 , 9 . To make even-handed comparisons, we used our framework to re-analyse published data acquired in the mouse somatosensory neocortex that had supported this conclusion 7 . Owing to the lower density of neocortical spines, merging is far less of a concern ( Fig. 4c ), and our modelling confirmed that ∼ 60% of neocortical spines are stable over very long timescales, supporting past conclusions 7 . Nevertheless, we found very significant differences between CA1 and neocortical spine turnover dynamics ( Fig. 4a ; P = 0.01, likelihood ratio test). Models with only impermanent spines, which well described CA1, were insufficient ( P < 10 −62 , likelihood ratio test) for neocortex ( Fig. 4a, d ). The discrepant lifetimes of impermanent spines in the two areas ( ∼ 10 days (CA1) versus ∼ 5 days (neocortex)) posed further incompatibility ( Fig. 4a, e ). Conversely, models that explained neocortical spine turnover were incompatible with the CA1 data ( P = 0.01, likelihood ratio test). Modelling alone cannot eliminate the possibility that CA1 basal dendrites have some permanent spines, but if any such spines exist they compose a far smaller fraction than in the neocortex. Hence, CA1 and the neocortex have distinct spine dynamics, percentages of impermanent spines, and turnover time constants. Further distinguishing CA1 and the neocortex are the contrary roles of NMDA receptor blockade ( Figs 3h, i and 4e ). In the neocortex, MK801 promotes stability by decreasing spine loss 10 and blocking spine addition 11 ; conversely, NMDA receptor activation may speed turnover via addition of new spines and removal of pre-existing neocortical spines supporting older memories. In CA1, MK801 speeds turnover and promotes instability ( Fig.
3h, i ), suggesting that NMDA receptor activation may transiently slow turnover and stabilize spines. Indeed, LTP induction in CA1 is associated with stabilization of existing spines and growth of new spines 5 , 23 , 24 . A natural interpretation is that spine dynamics may be specialized by brain area to suit the duration of information retention. The neocortex, a more permanent repository, might need long-lasting spines for permanent information storage and shorter-lasting ones ready to be stabilized if needed 8 , 9 . The hippocampus, an apparently transient repository of information, might only require transient spines. The ∼ 1–2-week mean lifetime for the ∼ 100% impermanent CA1 spines implies a near-full erasure of synaptic connectivity patterns in ∼ 3–6 weeks, matching the durations over which spatial and episodic memories are hippocampal-dependent in rodents 1 , 2 (but see also ref. 25 ). The ensemble place codes of CA1 neurons also refresh in ∼ 1 month 16 , which could arise from turnover of the cells’ synaptic inputs. Since 75–80% of CA3 → CA1 inputs are monosynaptic 26 , CA1 spine impermanence probably implies a continuous re-patterning of CA3 → CA1 connectivity throughout adulthood, which, owing to the sheer number of synapses and sparse connections, is unlikely to assume the same configuration twice. Supporting these interpretations, artificial neural networks often show a correspondence between synapse lifetime and memory longevity 27 , although in some models spine turnover and memory erasure can be dissociated 28 , 29 . Computational studies also show that elimination of old synapses can enhance memory capacity 29 , 30 . More broadly, networks that can alter synaptic lifetimes, not just connection strengths, can more stably store long-term memories while rapidly encoding new ones 27 .
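The erasure arithmetic here follows directly from the two-population kinetic model fitted earlier. The sketch below is my own minimal form, omitting the optical-merging correction that the actual fits include; it evaluates the survival fraction for the best-fitting CA1 and neocortical parameter sets:

```python
import math

# Two-population kinetic model: a fraction f of spines is permanent and
# the remainder is lost with mean lifetime tau, giving the survival curve
#     S(t) = f + (1 - f) * exp(-t / tau)
def survival_fraction(t_days, f_permanent, tau_days):
    return f_permanent + (1.0 - f_permanent) * math.exp(-t_days / tau_days)

# Best-fitting CA1 model: ~100% impermanent spines, tau ~ 10 days.
ca1_3wk = survival_fraction(21, f_permanent=0.0, tau_days=10.0)
# Best-fitting neocortex model: ~60% permanent spines, tau ~ 5 days.
ctx_3wk = survival_fraction(21, f_permanent=0.6, tau_days=5.0)
```

With tau = 10 days and no permanent pool, fewer than 5% of spines survive 30 days, whereas the neocortical parameters leave the permanent ~60% essentially untouched at long times.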
The data described here are consistent with a single class of CA1 spines, but future studies should examine both the finer kinetic features and cellular or network mechanisms of turnover 23 . By using fluorescence tags to mark spines undergoing plastic changes, in vivo imaging might help relate connection strengths, spine lifetimes and memory performance. Finally, researchers should investigate spine turnover in animals as they learn to perform a hippocampal-dependent behaviour, to build on the results here by looking for direct relationships between CA1 spine stability and learning. Methods Animals and surgical preparation Stanford University’s Administrative Panel on Laboratory Animal Care approved all procedures. We imaged neurons in mice expressing GFP driven by the Thy1 promoter 31 (heterozygous males 10–12 weeks old, GFP-M lines on a C57BL/6 × F1 background). We did not perform any formal randomization in the assignment of mice to specific groups, but we informally selected mice in a random manner without use of any exclusion criteria. We performed surgeries as previously published 4 but with a few modifications. We anaesthetized mice using isoflurane (1.5–3% in O 2 ) and implanted a stainless steel screw into the cranium above the brain’s right hemisphere. We performed a craniotomy in the left hemisphere (2.0 ± 0.3 mm posterior to bregma, 2.0 ± 0.3 mm lateral to midline) using a 1.8-mm-diameter trephine and implanted the optical guide tube with its window just dorsal to, but not within, area CA1, preserving the alveus. Guide tubes and microlenses Guide tubes were glass capillaries (1.5 mm (ID), 1.8 mm (OD) or 2 mm (ID), 2.4 mm (OD); 2–3 mm in length). We attached a circular coverslip, matched in diameter to that of the capillary’s outer edge, to one end of the guide tube by using optical epoxy (Norland Optical Adhesive 81). 
We used 1.0-mm-diameter micro-optical probes of diffraction-limited resolution (0.8 NA, 250 μm working distance in water) that were encased in a 1.4-mm-diameter sheath 14 . In vivo two-photon imaging We used a modified commercial two-photon microscope (Prairie Technologies) equipped with a tuneable Ti:Sapphire laser (Chameleon, Coherent). We tuned the laser emission to 920 nm and adjusted the average illumination power at the sample ( ∼ 5–25 mW) for consistency in signal strength across imaging sessions in each mouse. For microendoscopy we used a 20 × 0.8 NA objective (Zeiss, Plan-Apochromat) to deliver illumination into the microlenses. In some cases we imaged directly through the glass cannula using an Olympus LUMPlan Fl/IR 0.8 NA ×40 water immersion objective lens and confirmed that the optical resolution in all three spatial axes was identical between the two approaches. Beginning at 15–18 days after surgery, we imaged mice every 3 days under isoflurane anaesthesia (1.5% in O 2 ) for a total of 8–16 sessions each lasting 60–90 min. We imaged some mice at irregular intervals up to 80 days. MK801 treatment In mice subject to the protocol of Fig. 2d , after the fourth imaging session we administered MK801 (Tocris Bioscience; 0.25 mg kg −1 body weight; dissolved in saline) in two intraperitoneal injections each day (8–10 h apart) as described previously 10 . Enriched environment Animals given an enriched environment had a larger cage (42 (length) × 21.5 (width) × 21.5 (height) cm 3 ) that contained a running wheel, objects of various colours, textures and shapes, plastic tunnels, and food with different flavours. We changed the objects, as well as their placements within the cage, every 3–4 days to encourage exploration and maintain novelty. We provided food and water ad libitum . Histology At the end of in vivo experimentation, we deeply anaesthetized mice with ketamine (100 mg kg −1 ) and xylazine (20 mg kg −1 ).
We then perfused PBS (pH 7.4) into the heart, followed by 4% paraformaldehyde in PBS. We fixed brains overnight at 4 °C and prepared floating sections (50 μm) on a vibrating microtome (VT1000S, Leica). Before in vitro imaging of GFP fluorescence, we washed the fluorescent sections with PBS buffer several times and quenched them by incubation in 150 mM glycine in PBS for 15 min. After three washes in PBS, we mounted sections with Fluoromount-G (Southern Biotech). We inspected the sections using either two-photon fluorescence imaging or a STED microscope (Leica TCS STED CW, equipped with a Leica HCX PL APO 100 × 1.40 NA oil-immersion objective). For immunostaining, sections were washed with PBS buffer several times before quenching and permeabilization (15 min incubation in 0.1% Triton-X in PBS). Sections were incubated in blocking solution (1% Triton X-100, 2% BSA, 2% goat serum in PBS) for 4 h. Primary antibodies (rat anti-CD68, FA11-ab5344, Abcam, 1:100 dilution; mouse anti-GFAP, MAB3402, Millipore, 1:500 dilution) were diluted in blocking solution and sections were incubated overnight in this solution. The following day, sections were washed with PBS and incubated in diluted secondary antibody (Cy5-conjugated goat anti-mouse IgG, A10524, and Cy5-conjugated goat anti-rat IgG, A10525; both Molecular Probes; both 1:1,000 dilution) in blocking solution for 3 h. All staining procedures were done at room temperature. After three washes in PBS, sections were mounted with Fluoromount-G (Southern Biotech). Histological specimens were inspected on a confocal fluorescence microscope (Leica SP2 AOBS). Imaging sessions We mounted the isoflurane-anaesthetized mice on a stereotactic frame. To perform microendoscopy we fully inserted the microendoscope probe into the guide tube such that the probe rested on the guide tube’s glass window.
To attain precise and reliable three-dimensional alignments across all imaging sessions of the brain tissue undergoing imaging, we used a laser-based alignment method. We positioned a laser beam such that when the mouse’s head was properly aligned, the beam reflected off the back surface of the microendoscope and hit a designated target. This ensured that at each imaging session the long axis of the microendoscope was perpendicular to the optical table to within ∼ 1 angular degree. Otherwise, we inserted a drop of water in the cannula and imaged using the water immersion objective. As previously described 14 , we experimentally confirmed that the resolution limits of the two approaches were essentially identical. Image acquisition During the first imaging session, we selected several regions of brain tissue for longitudinal monitoring across the duration of the time-lapse experiment. Each of these regions contained between 1 and 7 dendritic segments visibly expressing GFP. In each imaging session, we acquired 6–8 image stacks of each selected region using a voxel size of 0.0725 × 0.0725 × 0.628 μm 3 . Image pre-processing To improve the visual saliency of fine details within each image stack, initial pre-processing of the image stacks involved a blind deconvolution based on an expectation-maximization routine (Autodeblur from Autoquant). We then aligned all individual images acquired at the same depth in tissue using the TurboReg plug-in routine for ImageJ. Finally, we averaged pixel intensities across the aligned stacks, yielding a single stack that we used in subsequent analyses. Scoring of dendritic spines We scored spines using a custom MATLAB interface that supported manual labelling of spines using the computer mouse, measurements of dendrite length and spine position, and alignments of time-lapse sets of image stacks.
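The final pre-processing step, averaging pixel intensities across registered stacks to improve signal-to-noise, can be illustrated with a minimal NumPy sketch (the blind deconvolution and TurboReg registration themselves are external tools and are not reproduced here; the toy arrays are purely illustrative):

```python
import numpy as np

def average_aligned_stacks(stacks):
    """Average a list of co-registered (z, y, x) image stacks into the
    single stack used for subsequent spine scoring."""
    stacks = np.stack([np.asarray(s, dtype=float) for s in stacks], axis=0)
    return stacks.mean(axis=0)

# toy example: two already-aligned 2 x 2 x 2 stacks
a = np.zeros((2, 2, 2))
b = np.ones((2, 2, 2))
avg = average_aligned_stacks([a, b])  # every voxel averages to 0.5
```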
For each region of tissue monitored, we loaded all the image stacks acquired across time, such that the temporal sequence of the stacks was preserved but the experimenter was blind to their dates of image acquisition during spine scoring. We excluded images whose quality was insufficient to score spines. We scored spines as described previously 32 but with a few modifications. We labelled protrusions as dendritic spines only if they extended laterally from the dendritic shaft by >0.4 µm ( Extended Data Fig. 6a, b ). We did not include protrusions of <0.4 µm in the analysis ( Extended Data Fig. 6c, d ). When a spine first appeared in the time-lapse image data we assigned it a unique identity. We preserved the spine’s identity across consecutive time points if the distances between the spine in question and two or three of its neighbouring spines were stable. In ambiguous cases, which were rare, we required these distances to be stable to <2 µm. The surviving fraction, S ( t ), at time t was defined as the fraction of spines present on the first imaging day that were also present a time t later. For the deliberate purpose of attaining conservative estimates of (for example, lower bounds on) the proportion of impermanent spines, in the calculation of S ( t ) we handled the 18% of spines in the recurrent-location category ( Fig. 1d and Extended Data Fig. 1d, e ) in the following way. When checking pairs of images acquired an interval t apart, we deliberately did not distinguish whether the second image contained the original spine or its replacement spine at the same location. This approach thereby underestimated spine turnover as inferred from analyses of S ( t ), implying that our conclusion of CA1 spine impermanence is not only mathematically conservative but also robust to any scoring errors in which we might have erroneously missed a spine that had in fact persisted to subsequent imaging sessions.
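The definition of the surviving fraction S(t) can be made concrete with a small sketch. This assumes a boolean presence matrix (one row per scored spine, one column per imaging session); the recurrent-location bookkeeping described above is not modelled here, and the example data are invented:

```python
import numpy as np

def empirical_surviving_fraction(presence):
    """S(t): fraction of spines present at the first session that are also
    scored present at session t.  `presence` is (n_spines, n_sessions)."""
    presence = np.asarray(presence, dtype=bool)
    day0 = presence[:, 0]                 # spines present on the first imaging day
    return presence[day0].mean(axis=0)    # survival at each later session

# three spines over three sessions; spine 2 appears only later
p = [[1, 1, 0],
     [1, 0, 0],
     [0, 1, 1]]
S = empirical_surviving_fraction(p)  # survival drops 1.0 -> 0.5 -> 0.0
```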
Turnover ratio was defined as the sum of spines gained and lost between two consecutive time points normalized by the total number of spines present at these time points. Spines lost or gained were defined as the number of spines lost or gained between two consecutive time points, respectively, normalized by the total number of spines present at these time points. To make coarse estimates of spine volumes, for stable spines we determined each spine’s fluorescence within a manually drawn region of interest (ROI) in the axial section in which the spine head appeared at its biggest diameter. We normalized this value by the fluorescence value attained by moving the ROI to within the nearby dendritic shaft (as in ref. 7 ). Statistical analysis To test for differences in spine densities, turnover ratios and surviving fractions either over time or between different groups, we used non-parametric two-sided statistical tests (Wilcoxon signed-rank, Mann–Whitney U and Kruskal–Wallis ANOVA) to avoid assumptions of normality, with Dunn–Šidák correction for multiple comparisons. Sample size was chosen to match published work 7 , 8 , 9 . To compare experimentally measured spine survival to the theoretical predictions from kinetic modelling, we assessed the goodness-of-fit for each model by using both the reduced chi-squared statistic and the log-likelihood function ( Supplementary Methods ). Both the mean and covariance of the surviving fraction depended on the model parameters and influenced the goodness-of-fit ( Supplementary Methods ). We also required that the parameter describing the minimal separation needed to resolve two spines was 0.5–1 µm (other values for this minimal separation are implausible) ( Supplementary Methods ).
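The turnover definitions above amount to simple set arithmetic on spine identities between consecutive sessions. A minimal sketch (the spine IDs are hypothetical):

```python
def turnover_ratio(ids_t1, ids_t2):
    """(gained + lost) / (total spines present at the two sessions)."""
    s1, s2 = set(ids_t1), set(ids_t2)
    gained = len(s2 - s1)   # spines present only at the later session
    lost = len(s1 - s2)     # spines present only at the earlier session
    return (gained + lost) / (len(s1) + len(s2))

# one spine lost ("a"), one gained ("d"), out of 3 + 3 spine observations
r = turnover_ratio({"a", "b", "c"}, {"b", "c", "d"})  # 2/6
```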
Simulated data sets We modelled the microscope’s optics on the basis of prior measurements 14 and tuned the kinetics of spine turnover, spine geometries and dendrite geometries to produce simulated image sequences that the data analyst judged to be similar to the actual data ( Supplementary Information and Extended Data Fig. 5 ). In some data sets, we matched the simulated spine kinetics to those inferred from our in vivo measurements. | Our memories are as fleeting as the brain structures that store them, or so the theory goes. When the connections – called synapses – between neurons break, the memories they hold are thought to evaporate along with them. The idea seemed good, but has been hard to test. Now a Stanford team has taken on the challenge, studying a brain region called the hippocampus, which stores "episodic" memories. These are the memories of events or conversations that might be forgotten over time if the memories aren't used. The challenge to studying synapses in this region is that the hippocampus is so deep and the connections so densely packed that no microscope could easily monitor the synapses' longevity. Now Mark Schnitzer, an associate professor of biology and of applied physics, has leveraged microscopy tools developed in his lab and for the first time was able to monitor the connections, called synapses, between hippocampal neurons and confirm what neuroscientists thought might be happening. In the mice he and his team studied, the connections between neurons lasted about 30 days, roughly the duration over which episodic memories are believed to stay in the mouse hippocampus. The work was published on June 22 in Nature. "Just because the community has had a longstanding idea, that doesn't make it right," Schnitzer said. Now that the idea has been validated, he said, his technique could open up new areas of memory research: "It opens the door to the next set of studies, such as memory storage in stress or disease models." 
Mobile memories When mice experience a new episode or learn a new task that requires spatial navigation, the memory is stored for about a month in a structure at the center of the brain called the hippocampus (it is stored slightly longer in people). If mice have hippocampus-disrupting surgery within a month of forming a memory – a memory of meeting a new cage-mate or navigating a maze – that memory is lost. If the disruption occurs after more than a month, then the mouse still retains the memory of a new friend or location of food. That's because the memory had been relocated to a different region of the brain, the neocortex, and is no longer susceptible to disruption in the hippocampus. "The thought is that memories are gradually moved around the brain," said Schnitzer, who is also a member of Stanford Bio-X and the Stanford Neurosciences Institute. "The neocortex is a long-term repository, whereas considerable evidence indicates that memories stay in the mouse hippocampus only about a month." In the past, scientists at Cold Spring Harbor Laboratory in New York and elsewhere had monitored connections between neurons in the neocortex, nearer the brain's surface and therefore visible with little disruption to the brain. They watched not the connections themselves, but the bulbous projections called spines that form connections at their tips. Watching the spines come and go serves as a proxy for knowing when excitatory connections between neurons are created and broken. Those scientists found that about half of the spines in the neocortex were permanent and the rest turned over approximately every five to 15 days. "The interpretation was that about half the spines in the neocortex are long-term repositories for memories while others retain malleability for new memories or forgetting," Schnitzer said. 
Deep and dense If the same line of thinking held true for the hippocampus as it did for the neocortex, spines in the hippocampus should turn over roughly every 30 days along with the memories they hold. Verifying that idea had been challenging, however, because the hippocampus is deeply buried in the brain and the spines in that region are so densely packed that multiple spines can appear to merge into one. Schnitzer said there were three components to his team's ability to track spines in the hippocampus. The first was a technique he reported in 2011 that allows scientists to stably image a single neuron in a living mouse over long time periods. The next was an optical needle, called a microendoscope, that provides high-resolution images of structures deep within the brain. Even with a stable and high-resolution way of imaging neurons in the hippocampus over time, the team still faced the challenge of distinguishing when spines are gained and lost if they couldn't tell the difference between a single spine and several merged bulges. "The ability to resolve spines in the hippocampus is right on the hairy edge of our technological capability," Schnitzer said. The team overcame that problem with a mathematical model that took into account the limitations of the optical resolution and how that would affect the image datasets depicting the appearances and disappearances of spines. What Schnitzer and his team found in this analysis is that the region of the hippocampus that stores episodic memories contains spines that all turn over every three to six weeks – roughly the duration of episodic memory in mice. Schnitzer said that the work confirmed a long-held idea about how the brain stores memories. Using the same techniques, scientists can now probe additional aspects of how memories are formed, remembered and eventually lost at the level of the individual connections between neurons. | 10.1038/nature14467
Physics | How long does a tuning fork ring? 'Quantum-mechanics' solve a very classical problem | Phonon-tunnelling dissipation in mechanical resonators, Garrett D. Cole, Ignacio Wilson-Rae, Katharina Werbach, Michael R. Vanner, Markus Aspelmeyer, Nature Communications, 8 March 2011, DOI: 10.1038/ncomms1212 | http://dx.doi.org/10.1038/ncomms1212 | https://phys.org/news/2011-03-tuning-fork-quantum-mechanics-classical-problem.html
Referred to variously as clamping 2 or anchor loss 3 , this process remains significant even in devices fabricated from high-quality materials operated in vacuum and at cryogenic temperatures, and is in fact unavoidable in any non-levitating system. Although much progress has been made towards the understanding of mechanical dissipation at the microscale and nanoscale 2 , 4 , obtaining reliable predictions for the fundamental design-limited quality factor, Q , remains a major challenge while direct experimental tests are scarce 5 , 6 , 7 . At the same time, the implementation of high-quality micromechanical and nanomechanical systems is becoming increasingly important for numerous advanced technological applications in sensing and metrology, with select examples including wireless filters 3 , 8 , on-chip clocks 9 , microscopy 10 , 11 , 12 , 13 and molecular-scale mass sensing 14 , 15 , and recently for a new generation of macroscopic quantum experiments that involve mesoscopic mechanical structures 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 . Here, we introduce a finite-element-enabled numerical solver for calculating the support-induced losses of a broad range of low-loss mechanical resonators. We demonstrate the efficacy of this approach via comparison with experimental results from microfabricated devices engineered to isolate support-induced losses by allowing for a significant variation in geometry, while keeping other resonator characteristics approximately constant. The efficiency of our solver results from the use of a perturbative scheme that exploits the smallness of the contact area, specifically the recently introduced 'phonon-tunnelling' approach 24 . This results in a significant simplification over previous approaches and paves the way for CAD-based predictive design of low-loss mechanical resonators. 
The origins of mechanical damping in microscale and nanoscale systems have been the subject of numerous studies during the last decades, and several relevant mechanisms for the decay of acoustic mechanical excitations, that is, phonons, have been investigated 2 , 4 . These include: (i) fundamental anharmonic effects such as phonon–phonon interactions 4 , 25 , thermoelastic damping (TED) 4 , 25 , 26 , 27 , 28 and the Akhiezer effect 4 , 25 ; (ii) viscous or fluidic damping involving interactions with the surrounding atmosphere or the compression of thin fluidic layers 29 , 30 , 31 ; (iii) material losses driven by the relaxation of intrinsic or extrinsic defects in the bulk or surface of the resonator 32 , 33 , 34 , 35 , 36 , 37 for which the most commonly studied model is an environment of two-level fluctuators 38 , 39 and (iv) support-induced losses, that is, the dissipation induced by the unavoidable coupling of the resonator to the substrate 3 , 7 , 8 , 40 , 41 , which corresponds to the radiation of elastic waves into the supports 5 , 6 , 24 , 42 , 43 , 44 . This last mechanism poses a fundamental limit, as vibrations of the substrate will always be present. These various dissipation processes add incoherently such that the reciprocals of the corresponding Q -values satisfy 1/ Q tot =∑ i 1/ Q i , where i labels the different mechanisms. Thus, in a realistic setting, care must be taken to isolate the contribution under scrutiny. In contrast to all other damping mechanisms (i–iii), which exhibit various dependencies with external physical variables such as pressure and temperature, support-induced dissipation is a temperature- and scale-independent phenomenon with a strong geometric character that is present in any suspended structure. Moreover, its scale independence implies that the same analysis can be applied to both microscale and nanoscale devices. 
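The incoherent addition rule quoted above, 1/ Q tot = ∑ i 1/ Q i , means the total quality factor combines reciprocally and is dominated by the lossiest channel. A quick sketch with hypothetical Q values (the numbers are illustrative, not from the paper):

```python
def total_q(channel_qs):
    """Combine independent dissipation channels: 1/Q_tot = sum_i (1/Q_i)."""
    return 1.0 / sum(1.0 / q for q in channel_qs)

# hypothetical channels: a clamping-limited Q of 5e6 alongside a material-limited Q of 1e6
q_tot = total_q([5e6, 1e6])  # close to, but always below, the worst channel
```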
We exploit this geometric character to isolate the support-induced contribution and obtain a direct experimental test of phonon-tunnelling dissipation. The numerical solver we introduce provides a new technique to efficiently model support-induced losses for a broad class of mechanical structures. Previous approaches have relied on either the direct solution of an elastic wave radiation problem involving the substrate 6 , 7 , 42 , 43 , 44 or the simulation of a perfectly absorbing artificial boundary 5 , 41 , with systematic tests as a function of geometry limited to a few specific cases 5 , 6 , 7 . In contrast, our technique represents a substantial simplification in that it reduces the problem to the calculation of a perfectly decoupled resonator mode together with free elastic wave propagation through the substrate in the absence of the suspended structure. A key feature of our method is to combine a standard finite-element method (FEM) calculation of the resonator mode together with the use of an extended contact at the support. This allows us to treat complex geometries, taking proper account of interference effects between the radiated waves. In summary, we develop and test an efficient method for calculating the clamping loss of high- Q mechanical resonators. Our analysis includes a thorough experimental verification of this theoretical framework by employing resonators that are specifically designed to isolate the clamping-loss contribution to the total dissipation 1/ Q . The measured damping in these structures matches the theoretical predictions and demonstrates in a direct manner the strong geometric character of this fundamental dissipation channel. Results Phonon-tunnelling approach In analogy to radiation tunnelling in photonics and electron tunnelling in low-dimensional structures, we adopt a 'phonon tunnelling' picture to describe the support-induced losses 24 . 
In this picture, the mechanical resonance of interest, characterized by frequency ω R , is regarded as a phonon cavity that is weakly coupled to the exterior by a hopping process, whereby the elastic energy leaks out of the resonator through the narrow contact areas from which it is suspended. Within this framework, one can start from the harmonic Hamiltonian associated with the elastic scattering eigenmodes of the entire structure, including the substrate, and derive a quantum model for the Brownian motion experienced by each resonance of the suspended structure. The corresponding weak tunnel couplings can be obtained to lowest order in the small parameter k R d , where 1/ k R is the characteristic length scale over which the resonator mode varies appreciably and d is the characteristic dimension of the contact area S from which the resonator is suspended. For typical structures that exhibit high- Q mechanical resonances, k R d ≪ 1 is comfortably satisfied. This justifies the weak coupling approximation and leads to a general expression for the associated dissipation 1/ Q in terms of the 'overlaps' between the scattering modes and the resonator mode. In the limit d →0, the leading contribution is obtained by replacing the scattering modes by the free (unperturbed) modes of the supports, which yields 24 Here, and are the stress and displacement fields associated with the normalized resonator mode, and are the analogous fields for the continuum of support modes labelled by q (eigenfrequencies ω ( q )), and ρ s and ρ R are, respectively, the densities of the substrate and resonator materials. The resonator mode should satisfy either (i) free or (ii) clamped boundary conditions at the contact area, S , depending on the behaviour of the eigenmode when S is small, whereas the unperturbed support modes should satisfy the converse. These homogeneous boundary conditions correspond, respectively, to and so that only one of the two terms in the surface integral is finite. 
In general, the decomposition between 'resonator volume' and 'supports' consistent with the weak coupling condition need not be unique. Examples of case (i) are pedestal geometries, such as microspheres, microdisks or microtoroids, when the pedestal is included in the support 24 . It is worth noting that for these geometries, if the pedestal is assumed to have perfect impedance match with the substrate, equation (1) leads to a particularly simple result for the Q of an axially symmetric resonance 7 , 24 , which has been verified in ref. 7 for the radial breathing mode of microtoroid structures. On the other hand, examples of case (ii) include the planar structures investigated here, when the resonator volume consists of the portion of the structure that is free-standing. A rigorous derivation of equation (1) is given in ref. 24 . Alternatively, if one uses a decomposition of the displacement field in terms of the unperturbed support modes and the discrete modes of the resonator volume, equation (1) follows simply from applying Fermi's Golden rule to phonon decay, with the interaction Hamiltonian between the resonator volume (labelled <) and the surrounding supports (labelled >) given by for case (i) and for case (ii). Within this framework, it is straightforward to realize that the validity of equation (1) is more general than the condition k R d ≪ 1 and will also apply to any resonance, for which the support-induced frequency shift is small compared with the relevant mode spacing (that is, the free spectral range at the corresponding resonant frequency) so that the weak coupling assumption is warranted. For our case, the use of this master formula is completely equivalent to previous intuitive approaches based on forcing the substrate with the stress source generated by the resonator mode 6 , 42 , 43 , 44 , as can be shown rigorously by using—for the elastic Green's function of the substrate—a spectral decomposition in terms of its free modes. 
In the presence of mode coupling 5 , 7 not induced by disorder, our treatment remains valid provided that the mode mixing is not dominated by support-induced interactions, which includes the case where it is accounted for by FEM assuming perfect clamping and excludes cases where symmetry breaking induced by the support is relevant. Finally, one should note that in the weak-coupling regime, it is straightforward to incorporate mode coupling not accounted for by the FEM into our phonon-tunnelling formalism. Q -solver Though the aforementioned framework is completely general, to investigate the predictive power of our approach, we focus specifically on the flexural modes of a symmetric plate geometry of thickness t that is inscribed in a circle of radius R , with the contact area S corresponding to the outer rim of an idealized circular undercut (undercut distance of L und ). To calculate the theoretical Q -values of such devices via equation (1), we have developed a numerical solution technique that determines the normalized resonator eigenmode and eigenfrequency via FEM (with at S ) and is based on a decomposition into cylindrical modes for the support, which is approximated by the substrate modelled as an isotropic elastic half-space. The latter approximation is expected to be quantitatively precise for the low-lying flexural resonances when the underetched gap between the suspended structure and the substrate satisfies h < R (where h is the gap height), and the largest resonant wavelength for elastic wave propagation in the substrate is smaller than the relevant length scales characterizing the mounting of the sample (see below). The aforementioned weak-coupling condition, k R d ≪ 1, follows in this case from t ≪ R . 
From equation (1), exploiting the fact that the eigenmodes of an elastic half-space are given by straightforward analytical expressions 45 , we obtain (see Methods section for details of this derivation) Here, we introduce the dimensionless functions , and the linear stress Fourier components with n =0, ±1, ±2, .... The different types of relevant plane-wave modes of the half-space 45 (that is, longitudinal ( l ), transverse vertical ( t ) and surface acoustic waves ( s ) given that transverse horizontal waves do not contribute) are labelled by γ = l , t , s with c γ , the corresponding speed of sound—as determined by the density ρ s , Poisson ratio ν s and Young's modulus E s of the substrate. We adopt spherical coordinates for the incident wave vector with polar angle θ and cylindrical coordinates for the position . The squared displacements are given by analytical expressions, that only depend on γ, cos θ and ν s 24 , 45 , which lead to straightforward integrals for the functions detailed in the Methods section. If one considers low frequency modes that are symmetric with respect to both the x − z and y − z planes so that f z ,0 ≠0 and ω R ≪ c γ / R ∀ γ , one can approximate the series in equation (2) by the n =0 term with the evaluated at . For a Poisson ratio ν s =1/3, this yields the following approximation where f z ,0 corresponds to the total force applied on the contact area S . For the typical micromechanical resonators analysed here (see below), this approximation deviates from equation (2) by 20%. Finally, we highlight that it is straightforward to generalize the above to in-plane modes and the rim need not be continuous, as in cases where the resonator volume makes contact with the support at a disjoint set of small areas (for example, a bridge geometry with no undercut). 
Free–free design To experimentally verify our solver, we have developed 'free–free' micromechanical resonators consisting of a central plate (resonator) of length L and width w suspended by four auxiliary beams as depicted in Figure 1a . These structures are etched from a high-reflectivity monocrystalline distributed Bragg reflector (DBR), as described in the Methods section, and are suited for Fabry–Perot-based optomechanical systems 46 . The devices used in this study constitute a variant of the previously demonstrated free–free flexural design in which auxiliary beams with widths w s ≪ w and lengths L s = λ t /4 (where λ t is the resonant wavelength for the propagation of torsional waves) placed at the nodes of the central resonator mode provide noise filters to suppress support-induced losses 3 . A major drawback with the λ t /4-beam design is that the resulting auxiliary beam length can be excessive. In fact, for the eigenfrequencies investigated in this work, the corresponding beam length (>400 μm at 1.7 MHz) leads to proliferation of low-frequency flexural resonances that compromise the stability of the optical cavity and render mode identification difficult. We circumvent this issue by utilizing instead a reduced length L s ≪ λ t /4 chosen to avoid spectral overlap between the free–free resonance and flexural resonances of the auxiliary beams. Figure 1: Mapping out phonon-tunnelling dissipation in a free–free resonator. ( a ) Schematic diagram of the resonator geometry. ( b ) Normalized squared centre of mass displacement of a single auxiliary-beam central-resonator contact calculated via FEM (the inset shows the profile of the free–free mode as approximated by Euler–Bernoulli theory). ( c ) Simulated dissipation (see equation (2)) as a function of the auxiliary beam's y -coordinate ( y a ).
Values corresponding to eight discrete geometries were calculated here with t =6.67 μm, w s =7 μm, w =42 μm, L =132 μm, R =116 μm and L und =27 μm—the line is simply a guide for the eye. The FEM-calculated mode shapes correspond to the three extreme examples of the resonator design, from left to right: auxiliary beams near the resonator centre ( y a =13 μm), beams near the ideal nodal position ( y a =37.4 μm) and beams attached at the ends ( y a =62.5 μm). The theoretical clamping loss limit 1/ Q th for nodal positioning is always finite with the geometry closest to this position (indicated by the arrow) yielding 1/ Q th ≈2×10 −7 . The free–free design provides an ideal platform to isolate and measure phonon tunnelling dissipation: first, by altering the attachment position of the auxiliary beams, this design allows for a significant variation of geometry, while approximately preserving the frequencies and effective surface-to-volume ratios of the resonators. As these characteristics are kept constant, one can rule out the influence of additional damping mechanisms (specifically those driven by internal losses and surface effects) on the variation in Q and hence isolate support-induced losses in the measured devices. Second, the free–free resonators provide an intuitive illustration of the strong geometric character of the support-induced dissipation. Heuristically, the clamping loss will be proportional to the elastic energy radiated through the auxiliary beams, which should approximately scale as the squared deflection of their contacts with the central resonator (see Fig. 1b,c ). Thus, varying the contact position of the auxiliary beams results in a characteristic modulation of the damping rate, which approximately maps out the central resonator mode shape ( Fig. 1b ). As expected, the minimum-loss design corresponds to the geometry in which the auxiliary beams are attached at the nodes of the fundamental resonance of the central resonator.
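The ideal nodal attachment position quoted above ( y a =37.4 μm for the L =132 μm central resonator) can be cross-checked against the Euler–Bernoulli picture referenced in Fig. 1b. The following is a minimal sketch, assuming an ideal uniform free–free beam and neglecting the auxiliary beams and the undercut, so a small offset from the quoted value is expected:

```python
import math

def free_free_node_position(L):
    """Distance from one end of a uniform free-free Euler-Bernoulli beam
    of length L to the nearest node of its fundamental flexural mode."""
    # Frequency equation cosh(x)*cos(x) = 1; first nonzero root x = beta*L.
    f = lambda x: math.cosh(x) * math.cos(x) - 1.0
    lo, hi = 4.0, 5.0  # brackets the root near 4.730
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi) / L  # flexural wavenumber beta

    # Fundamental free-free mode shape, x measured from one end.
    sig = (math.cosh(k * L) - math.cos(k * L)) / (math.sinh(k * L) - math.sin(k * L))
    phi = lambda x: (math.cosh(k * x) + math.cos(k * x)
                     - sig * (math.sinh(k * x) + math.sin(k * x)))

    # Bisect for the zero of the mode shape between 0.1*L and 0.4*L.
    a, b = 0.1 * L, 0.4 * L
    for _ in range(80):
        m = 0.5 * (a + b)
        if phi(a) * phi(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

L = 132.0                            # central resonator length, micrometres
x_node = free_free_node_position(L)  # node distance from the beam end
y_a = L / 2.0 - x_node               # node position measured from the centre
print(round(x_node / L, 3), round(y_a, 1))  # node near 0.224*L, y_a ~ 36 um
```

For a uniform beam the node sits near 0.224 L from each end, that is, y a ≈ 36 μm from the centre, within roughly a micron of the fabricated optimum of 37.4 μm, which additionally accounts for the real suspended geometry.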
It is interesting to note that the theoretical clamping loss limit 1/ Q th for nodal positioning is always finite as described in Figure 1c . In turn, for generic placement away from the nodal points, one obtains for the improvement in Q with respect to the clamped–clamped configuration the heuristic relation Q f−f / Q c−c ∼ ( w /2 w s ) 2 , which assumes that ω R and the effective mass m R are the same for both configurations and ω R lies away from the flexural resonances of the auxilliary beams. This figure of merit can be derived from equation (3), if one uses the approximate scalings and , which follow from neglecting the undercut, using thin-plate elasticity, and exploiting the fact that w s ≪ L to analyse the elastic wave propagation in the auxilliary beams 24 . Measured dissipation To identify the mechanical modes of our microfabricated resonators (see Figure 2a for an example of a completed device), we compare the optically measured resonator frequencies, as a function of the auxiliary beam position, with the theoretical eigenfrequency variation. The simulated values are generated using the geometric parameters determined via careful analysis of the completed resonators (see Supplementary Method ). As can be seen in Figure 2b , in addition to the symmetric free–free resonance, there is also an antisymmetric eigenmode with comparable frequency. We observe no mode coupling between these resonances, which is consistent with the specific mirror symmetries of the structure. The frequencies are accurately reproduced by the FEM simulation, if we allow for frequency offsets that are solely dependent on the mode parity (262 kHz offset for the free–free mode and 89 kHz offset for the antisymmetric mode). We attribute these shifts to a material-related dissipation mechanism involving both surface and bulk contributions (see Supplementary Method for further details). Figure 2: Characterization of the completed free–free resonators. 
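For the device dimensions given earlier ( w =42 μm, w s =7 μm), the heuristic relation Q f−f / Q c−c ∼ ( w /2 w s ) 2 predicts roughly an order of magnitude improvement; a one-line check:

```python
# Heuristic free-free vs clamped-clamped improvement, Q_ff/Q_cc ~ (w/(2*w_s))**2,
# evaluated with the central-plate and auxiliary-beam widths from the text.
w, w_s = 42.0, 7.0  # micrometres

improvement = (w / (2.0 * w_s)) ** 2
print(improvement)  # -> 9.0
```

That is, even for generic (non-nodal) attachment, the narrow auxiliary beams alone are expected to suppress clamping loss roughly ninefold relative to a clamped–clamped beam of the same frequency and effective mass.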
( a ) Optical micrograph of the 5×5 mm chip containing the batch-fabricated microresonators as well as an electron micrograph highlighting a single suspended structure; the scale bar in this image is 20 μm. ( b ) Simulated (left) and measured (right) eigenfrequencies as a function of the auxiliary beam y -coordinate. The measured values (discrete points) show excellent agreement with the simulated data set, albeit with a slight offset dependent on the parity of the mode. The fitting lines in the right plot correspond to a mean frequency offset of 262 kHz for the symmetric (sym) free–free modes and 89 kHz for the neighbouring antisymmetric (antisym) modes (inset images show the FEM-derived mode shapes). Lower panels—examples of the fitting techniques utilized for Q -value extraction including: ( c ) Lorentzian fitting of the free–free resonance (captured on a spectrum analyser) for a device with R =116 μm and y a =29 μm resulting in Q =4.5×10 4 and ( d ) ringdown fitting of the same device using linear regression of the natural log of the mean square of the free-ringdown signal captured single-shot with a high-speed oscilloscope yielding Q =4.46×10 4 . The inset includes the residuals to the linear fit showing an excellent agreement with the expected exponential decay. All dissipation measurements have been performed at high vacuum (10 −7 mbar) and at cryogenic temperatures (20 K) to suppress fluidic and thermoelastic damping in the devices ( Fig. 2c,d ). Under these conditions, we record quality factors spanning 1.4×10 4 to 5.1×10 4 , with the minimum Q corresponding to the free–free mode of devices with an auxiliary position of 62.5 μm and R =116 μm, and with the maximum Q to the geometry closest to nodal positioning (37.4 μm) for the same radius and type of mode (see Fig. 3 ).
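The ringdown fitting procedure of panel (d), linear regression of the natural log of the mean square of the decaying signal, can be sketched on synthetic data; the sampling rate and window size below are illustrative choices, not the experimental settings:

```python
import numpy as np

# Synthetic free ringdown: amplitude decays as exp(-t/tau), with the
# convention Q = pi * f0 * tau for the amplitude decay time tau.
f0 = 1.7e6      # resonance frequency, Hz (illustrative)
Q_true = 4.5e4  # quality factor to recover
tau = Q_true / (np.pi * f0)

fs = 2.0e7                               # sampling rate, Hz (illustrative)
t = np.arange(0.0, 3.0 * tau, 1.0 / fs)  # record ~3 decay times
signal = np.exp(-t / tau) * np.cos(2.0 * np.pi * f0 * t)

# Mean square of the signal in windows spanning many oscillation cycles;
# since <cos^2> ~ 1/2 per window, log(mean square) decays with slope -2/tau.
win = 1000
n_win = len(signal) // win
ms = (signal[: n_win * win] ** 2).reshape(n_win, win).mean(axis=1)
t_win = t[: n_win * win].reshape(n_win, win).mean(axis=1)

slope, _ = np.polyfit(t_win, np.log(ms), 1)
tau_fit = -2.0 / slope
Q_fit = np.pi * f0 * tau_fit
print(round(Q_fit))  # recovers ~45,000
```

With measurement noise, the regression would be restricted to the part of the decay above the noise floor, but the slope-to-Q conversion is unchanged.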
For the symmetric mode, we readily observe the expected characteristic modulation in Q as a function of the placement of the auxiliary beams with a relative variation of Δ Q exp / Q exp ∼ 260% ( ∼ 80%) for R =116 μm ( R =131 μm). At the same time, the use of the free–free geometry ensures that the frequency variation is kept small, with a range of Δ f / f ∼ 20% ( ∼ 10%). In contrast, the Q -values for the antisymmetric mode are nearly constant with Q ≈2.1×10 4 ( Fig. 3c ). This is expected as the theoretical support-induced loss for this mode is negligible. Additionally, as this resonance involves mainly deformations of the auxiliary beams, its dissipation is not correlated with the mode shapes of the central resonator. The damping of this mode is instead dominated by other sources of dissipation, most likely by the material-related losses that are also responsible for the frequency shifts. Thus, we obtain an independent corroboration that the characteristic Q -variation observed for the free–free mode is indeed induced by the modification of the geometry rather than by the small frequency variation present in the devices. Figure 3: Compiled dissipation results displaying excellent agreement between the theory and experiment. ( a , b ) Comparison of experimental measurements at T =20 K, with theoretical dissipation values for the free–free mode of resonators with measured central dimensions of 132×42 μm and radius R =116 μm and R =131 μm, respectively. Panel ( a ) includes SEM images of the three extreme designs (for R =116 μm) with overlaid CAD models of the resonator geometry. Both ringdown and spectrally-derived data are included, with values averaged over two nominally identical chips (error bars denote a confidence interval of 99%). We include both raw simulated data as well as fitted data (continuous lines are a guide to the eye) incorporating a constant offset 1/ Q * =2.41×10 −5 . 
For the effective substrate, we utilize the mechanical properties of Ti, which is the main constituent of the positioning system on which the chips are mounted ( ρ =4,540 kg m −3 , E s =116 GPa and ν s =0.34). ( c ) Measured dissipation for the antisymmetric (antisym) mode of the same structures exhibiting a lack of geometric dependence. Discussion To quantitatively compare the measurements with our numerical predictions, two issues must be considered: (i) our model only captures support-induced losses, although other loss mechanisms may still contribute to the overall damping in the devices and (ii) the parameters for the half-space model of the substrate must be properly chosen. Consideration (i) together with the fact that we have designed sets of resonators for which the frequencies and effective surface-to-volume ratios are kept approximately constant implies that any additional damping mechanism that is relevant at low temperatures and high vacuum, but is insensitive to the variation in geometry, should contribute a constant offset 1/ Q * in the measured dissipation 1/ Q tot . Consideration (ii) is non-trivial given the long-wavelength nature of the elastic waves radiated into the substrate. For an average resonator frequency of 2.12 MHz, estimates of the maximum wavelength for the freely propagating elastic waves yield a value of 2.5 mm, which largely exceeds the wafer thickness (300±25 μm). Thus, the mechanical material parameters for the substrate should be determined by the properties of the underlying stage and positioning mechanism in the cryostat rather than those of the chip itself. Hence, we assume for the half-space the mechanical properties of polycrystalline commercially pure (grade 2) titanium (see the caption of Figure 3 for more details), of which the bulk of the structure beneath the resonator consists. Taking all of this into account, the theory shows remarkable agreement with the measured dissipation (as shown in Fig 3 ).
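The long-wavelength estimate in consideration (ii) can be reproduced from textbook isotropic-elasticity formulas with the grade-2 Ti parameters quoted in the caption of Figure 3. This sketch is only meant to confirm that all propagating wavelengths at ~2 MHz dwarf the wafer thickness; the exact 2.5 mm figure depends on which wave branch and model is evaluated:

```python
import math

# Bulk sound speeds of an isotropic solid from (E, nu, rho).
E = 116e9     # Young's modulus of grade-2 Ti, Pa (from the text)
nu = 0.34     # Poisson ratio (from the text)
rho = 4540.0  # density, kg/m^3 (from the text)

c_l = math.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))  # longitudinal
c_t = math.sqrt(E / (2 * rho * (1 + nu)))                        # transverse

f = 2.12e6          # average resonator frequency, Hz (from the text)
lambda_l = c_l / f  # ~3 mm
lambda_t = c_t / f  # ~1.5 mm

wafer = 300e-6  # wafer thickness, m (from the text)
print(lambda_l > wafer, lambda_t > wafer)  # -> True True
```

Both bulk wavelengths exceed the 300 μm wafer by roughly an order of magnitude, which is why the half-space parameters must be taken from the titanium stage beneath the chip rather than from the chip itself.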
It is important to note that the only free parameter used in the model of the free–free mode is a constant offset of 1/ Q * =2.41×10 −5 . Although the exact nature of the corresponding dissipation mechanism is currently unknown, we assume that it arises from material losses in the resonator epi-structure. It should be noted that most commercially viable resonators operate in a regime where TED dominates, and in some instances, intuitive understandings of the support-induced damping 3 , 8 , 40 have allowed for its suppression below other limiting damping mechanisms. Nonetheless, if current efforts to minimize TED in such structures at room temperature are successful 28 , support-induced losses may pose the next challenge for maximizing Q . On the other hand, in fundamental research thrusts employing high vacuum and cryogenic systems, support-induced losses can become a dominant factor 7 , 41 . For example, the free–free designs explored here provide a route to minimize support-induced losses for application in optomechanical experiments utilizing the micromechanical resonator as an end mirror in a high-finesse Fabry–Perot cavity 46 . To gauge the relevance of our 'free–free' micromirror design in this context, it is instructive to compare the fundamental limit at nodal positioning Q th ≈5×10 6 and the maximum measured Q -value of 5.1×10 4 with the corresponding results for the fundamental flexural mode of a clamped bridge of comparable dimensions. In fact, for the typical dimensions considered, as required for integration in a high-performance Fabry–Pérot cavity, we obtain a theoretical limit Q c−c ∼ 10 3 in line with previous measurements on monocrystalline DBR optomechanical structures 47 . Given the scale-independent nature of support-induced losses, our solver can be applied equally well to nanoscale mechanical devices. 
We find that for a recent demonstration of a nanomechanical doubly clamped beam coupled to a superconducting qubit at milliKelvin temperatures 48 , the measured values for the resonator's maximum Q (≈6×10 4 ) can be understood solely via the phonon-tunnelling loss model (beam geometry of 0.3×0.18×6 μm; M. LaHaye, private communication), which predicts a Q -value of 5.4×10 4 , in excellent agreement with the experimental value. In addition, the phonon-tunnelling framework is also applicable to prestressed nanoresonators such as Si 3 N 4 strings 34 or membranes and has recently been experimentally verified for the latter 49 . In conclusion, we have developed an efficient FEM-enabled numerical method for predicting the support-induced dissipation in microscale and nanoscale mechanical resonators. In combination with existing models for other relevant damping channels (for example, fluidic and TED 27 , 28 ), our 'phonon-tunnelling' solver makes further strides towards accurate prediction of Q . Furthermore, we provide a stringent experimental test of the corresponding theory using resonators engineered to isolate support-induced losses. Our results unambiguously demonstrate that phonon-tunnelling plays a significant role in the mechanical dissipation of these devices and illustrate the strong geometric character of this fundamental damping mechanism. Finally, we note that as the weak-coupling approximation underlying our treatment is more general than the condition of small contact area, our numerical solver can in principle be extended to other relevant scenarios such as phononic-band-gap structures 41 . Methods Numerical calculation of Q -values To derive equation (2) from equation (1), we adopt for the free elastic half-space 45 , modelling the decoupled support, a decomposition into eigenmodes (with n =0, ±1, ±2, ...) that have axial symmetry with respect to z (see Fig. 1 ). 
These are related to the plane wave eigenmodes by where we adopt spherical coordinates for the incident wavevector ( θ = π /2 for γ = s and θ ≤ π /2 otherwise). We note that for the suspended plate geometry considered, the appropriate resonator mode satisfies at the contact S so that we need to evaluate the second term in equation (1). The thin-plate condition t ≪ R directly allows us, given the flexural nature of the modes of interest, to neglect stresses at S that are parallel to the substrate, with the possible exception of bending-moment contributions 45 —this also applies if there are small transverse dimensions comparable to t . However, the bending-moment contributions also become negligible in the limit t / R →0, as can be shown by using: (i) that, given ω R ≪ c γ / R ∀ γ , we can Taylor expand at the origin in the integral over S , (ii) that we can assume relevant stresses to be concentrated around the ends of the auxiliary beams so that the bending moments at S are mostly oriented along y , (iii) the reflection symmetries with respect to the y − z (operator ) and x − z (operator ) planes and (iv) that, barring interference effects, these bending-moment contributions are at most of relative order 24 k R t —here is the resonant wavevector for the propagation of flexural waves. Thus, we find that for all mode types other than −+ (antisymmetric (symmetric) with respect to ( )), the correction associated to neglecting the bending moments scales as Δ Q / Q ∼ ( k R t ) 2 , whereas for −+ modes, it scales as Δ Q / Q ∼ ( k R t ) (note that L ∼ R ). In turn, we find that the relative error in using equation (1), arising from the weak-coupling approximation, scales in this case as Δ Q/Q ∼ |Δ ω R |/ ω R ∼ |Δ I ( ω R )|/2 ω R ∼ ( k R t ) 3 , where the phonon-tunnelling-induced frequency shift Δ ω R is approximated by where I ( ω ) is the environmental spectrum 24 . 
Hence, we can assume and neglect the variation of across the thickness t (that is, the z -dependence at S ), so that the support modes only enter into equation (1) through . To determine the latter, we adopt cylindrical coordinates , exploit that reflection at the free surface preserves the tangential component of the wavevector implying and use the Bessel integral Thus from equations (4,5,6), we obtain where we have also used that is independent of ϕ . Subsequently, substitution of equation (7) into equation (1) leads to equation (2) after using that here where d γ is the dimensionality (that is, d γ =3 for γ ≠ s and d γ =2 for γ = s ), performing the substitution ω = c γ q (for each γ ), and integrating over ω . Finally, substitution of the explicit expressions for the plane wave eigenmodes (see for example Appendix A in ref. 24 ) and ν =cos θ into the definition of allows us to obtain: where we use the ratio α ≡( c t / c l ) 2 =(1−2 ν s )/2(1− ν s ) for the supports' material ( ν s is the corresponding Poisson ratio). In turn, ξ ( α ) is the ratio of the propagation velocity of surface waves to c t , which is always less than unity 45 , and The sum in equation (2) can be reduced to a sum over n ≥0 by noting that J − n ( x )=(−1) n J n ( x ) and that as the resonator mode is real, the linear stress Fourier components satisfy . Furthermore, the length of the central resonator L is comparable to the radius R , and we focus on low-lying resonances of the suspended structure so that the aforementioned condition ω R ≪ c γ / R ∀ γ is always satisfied. This implies for m > n and , which can be understood by considering the behaviour of the Bessel functions for small arguments. Thus, we find that in equation (2), the sum over the index n is dominated by the first non-vanishing term as determined by the reflection symmetries . The latter also imply ( n =0, 1, 2, ...): where the resonator mode of type α , β satisfies and .
To efficiently extract the above from the FEM simulation, we convert them into volume integrals using an adequate Gaussian weight so that, for example, for a fully symmetric mode, we have where we again use cylindrical coordinates and V denotes the resonator volume. In addition, we exploit that the reflection symmetries naturally allow to perform the FEM simulation on a single quadrant. Thus, numerical evaluation can be conveniently performed using a fixed a * and a mesh size M such that ( V /4 M ) 1/3 < a * ≪ t . We have checked the convergence and estimate the numerical error to be of order 5%. Numerical simulations of the resonator mode are performed with the aid of COMSOL multiphysics. Accurate three-dimensional CAD models representing the resonator geometry are generated using Solidworks (matched with high-quality scanning electron microscope images as described in Supplementary Method ), and the bidirectional interface between the two programs is exploited to perform a parametric sweep of the auxiliary beam contact position for determining the pertinent information about the relevant mode, namely its eigenfrequency, linear stress Fourier components f z,n and normalization constant. In this instance, a single CAD file is used with a global variable incorporated to control the lateral position of the auxiliary beams with respect to the centre of the central resonator. We use for the mechanical properties of our single-crystal resonators an anisotropic material model incorporating the elastic stiffness matrix for the epitaxial structure as obtained from a weighted average between the relative content of GaAs and AlAs (46.37% GaAs/53.63% AlAs). The corresponding parameters are: C 11 =119.6 GPa, C 12 =55.5 GPa, C 44 =59.1 GPa and ρ R =4,483 kg m −3 . The resonator axes are aligned along 〈100〉 (zinc-blende structure). 
Note that we ignore the 6° misorientation of the germanium substrate, as we have checked that it has a negligible impact (error of 0.3%) on the simulated frequency response of the resonators. Finally, as a non-trivial check, we have applied our numerical method to bridge geometries with no undercut for which a simple analytic expression is valid in the limit of large aspect ratio (see Supplementary Method ). Epitaxial material structure and resonator fabrication procedure The layer structure for our high reflectivity resonators consists of 40.5 periods of alternating quarter-wave GaAs (high index) and AlAs (low index) grown lattice-matched to an off-cut monocrystalline germanium substrate. The ideal total thickness of the heterostructure is 6,857.6 nm, with individual layer thicknesses of 77.6 and 91.9 nm for the GaAs and AlAs, respectively, yielding a nominal peak reflectivity at 1,064 nm, as with our previous optomechanics experiments 47 . With this design, the germanium substrate enables the use of a high-selectivity gas-phase etching procedure, based on the noble-gas halide XeF 2 , to rapidly and selectively undercut the underlying germanium substrate. Thus, we realize a free-standing epitaxial Bragg mirror via a simple and fast-turnaround fabrication procedure. The details of both the epitaxial material design and microfabrication procedure are covered in ref. 50 . Measurement technique To characterize the frequency response of our microresonators, we utilize a custom-built optical fibre interferometer featuring a continuous flow 4 He cryostat as the sample chamber 51 . High-sensitivity displacement resolution is achieved in this system via optical homodyne interferometry. Cryogenic testing of these devices is necessitated because of the limitations imposed by TED at room temperature. 
Estimation of the magnitude of TED is possible using the analytical and finite element models developed previously 26 , 27 , 28 , which predict a Q -value of ∼ 4,000 for the current DBR composition and thickness at 1.8 MHz and 300 K—consistent with performed measurements. To avoid TED, our cryostat enables interrogation down to 20 K (resulting in an estimated TED limited Q of 9.9×10 8 ); the minimum temperature is currently limited by the large view-port above the sample stage. Additionally, this system is capable of vacuum levels down to 2.5×10 −7 mbar at cryogenic temperatures, removing any additional damping induced by fluidic or squeeze film effects 29 , 30 , 31 . The eigenmodes of the resonator are excited by driving a high-frequency (10 MHz) piezo disc soldered to a copper stage in thermal contact with the cold finger. For spectral characterization, the piezo disc is driven with white noise and the resonator frequency response is recorded on a spectrum analyser. For the free-ringdown measurements, the decay of a resonantly excited device is recorded in a single shot on a high-speed oscilloscope (see Supplementary Method for further details). Additional information How to cite this article: Cole, G. D. et al . Phonon-tunnelling dissipation in mechanical resonators. Nat. Commun. 2:231 doi: 10.1038/ncomms1212 (2011). | Austrian and German researchers at the University of Vienna and Technische Universitaet Muenchen have solved a long-standing problem in the design of mechanical resonators: the numerical prediction of the design-limited damping. They report their achievement, which has a broad impact on diverse fields, in the forthcoming issue of Nature Communications. The article describes both a numerical method to calculate the mechanical damping as well as a stringent test of its performance on a set of mechanical microstructures. 
From the wooden bars in a xylophone or the head of a drum, to the strings and sound box of a guitar or violin, musical instruments are the most familiar examples of mechanical resonators. The actual mechanical vibrations of these instruments create acoustic waves that we hear as sound. The purity of the emitted tone is intimately related to the decay of the vibration amplitude, that is, the mechanical losses of the system. A figure of merit for mechanical losses is the quality factor, simply called "Q", which describes the number of oscillations before the amplitude has decayed to a minute fraction of its starting value. The larger Q, the purer the tone and the longer the system will vibrate before the sound damps out. In addition to the aesthetic examples found in a concert hall, mechanical resonators have become increasingly important for a wide variety of advanced technological applications, with such diverse uses as filtering elements in wireless communications systems, timing oscillators for commercial electronics, and cutting-edge research tools which include advanced biological sensors and emerging quantum electro- and optomechanical devices. Rather than producing pleasing acoustics, these applications rely on very "pure" vibrations for isolating a desired signal or for monitoring minute frequency shifts in order to probe external stimuli. For many of these applications it is necessary to minimize the mechanical loss. However, it had previously remained a challenge to make numerical predictions of the attainable Q for even relatively straightforward geometries. Researchers from Vienna and Munich have now overcome this hurdle by developing a finite-element-based numerical solver that is capable of predicting the design-limited damping of almost arbitrary mechanical resonators. 
"We calculate how elementary mechanical excitations, or phonons, radiate from the mechanical resonator into the supports of the device", says Garrett Cole, Senior Researcher in the Aspelmeyer group at the University of Vienna. "This represents a significant breakthrough in the design of such devices." The idea goes back to a previous work by Ignacio Wilson-Rae, physicist at the Technische Universitaet Muenchen. In collaboration with the Vienna group the team managed to come up with a numerical solution to compute this radiation in a simple manner that works on any standard PC. The predictive power of the numerical Q-solver removes the guesswork that is currently involved (e.g., trial and error prototype fabrication) in the design of resonant mechanical structures. The researchers point out that their "Q-solver" is scale independent and thus can be applied to a wide range of scenarios, from nanoscale devices all the way up to macroscopic systems. | DoI: 10.1038/ncomms1212 |
Medicine | New microscopy technique reveals activity of one million neurons across the mouse brain | Jeffrey Demas et al, High-speed, cortex-wide volumetric recording of neuroactivity at cellular resolution using light beads microscopy, Nature Methods (2021). DOI: 10.1038/s41592-021-01239-8 Journal information: Nature Methods | http://dx.doi.org/10.1038/s41592-021-01239-8 | https://medicalxpress.com/news/2021-08-microscopy-technique-reveals-million-neurons.html | Abstract Two-photon microscopy has enabled high-resolution imaging of neuroactivity at depth within scattering brain tissue. However, its various realizations have not overcome the tradeoffs between speed and spatiotemporal sampling that would be necessary to enable mesoscale volumetric recording of neuroactivity at cellular resolution and speed compatible with resolving calcium transients. Here, we introduce light beads microscopy (LBM), a scalable and spatiotemporally optimal acquisition approach limited only by fluorescence lifetime, where a set of axially separated and temporally distinct foci record the entire axial imaging range near-simultaneously, enabling volumetric recording at 1.41 × 10 8 voxels per second. Using LBM, we demonstrate mesoscopic and volumetric imaging at multiple scales in the mouse cortex, including cellular-resolution recordings within ~3 × 5 × 0.5 mm volumes containing >200,000 neurons at ~5 Hz and recordings of populations of ~1 million neurons within ~5.4 × 6 × 0.5 mm volumes at ~2 Hz, as well as higher speed (9.6 Hz) subcellular-resolution volumetric recordings. LBM provides an opportunity for discovering the neurocomputations underlying cortex-wide encoding and processing of information in the mammalian brain. Main Two-photon microscopy (2pM) 1 , 2 , 3 with genetically encodable calcium indicators (GECIs) 4 , 5 , 6 has emerged as the standard technique for imaging neuronal activity at depth within scattering brain tissue. 
However, anatomical and functional observations suggest that complex brain functions emerge from highly parallel computation 7 , 8 in which sensory information 9 , 10 and behavioral parameters 11 , 12 are mapped onto brain-wide neuronal populations 13 , 14 , 15 , 16 at scales beyond the fields of view (FOVs) of conventional microscopes (<0.5 mm). Maximizing volumetric FOVs (v-FOVs) toward brain-wide imaging requires both mesoscopic optical access and optimal spatiotemporal sampling, such that information is obtained as fast as possible, each voxel provides information at sufficient signal-to-noise ratio (SNR), and the microscope records only the minimum amount of information necessary to resolve features of interest (for example, cell bodies) in order to devote remaining resources to imaging the largest possible volumes at time scales compatible with calcium imaging. Many 2pM platforms have demonstrated mesoscopic optical performance 17 , 18 , 19 , 20 , 21 , 22 , 23 , however in these systems, spatiotemporal sampling remains suboptimal: the high-repetition-rate lasers typically employed lead to oversampling in the lateral plane and the need for multiple pulses per pixel to improve SNR at low pulse energies. Accordingly, performance is limited to, at most, multiplane rather than volumetric performance, slow frame rates, and low SNR, particularly when imaging at depth. As we have argued previously 24 , 25 , single-pulse-per-voxel acquisition maximizes SNR per unit power delivered to the brain, and additionally, sampling at the minimum lateral density dictated by the application frees up temporal resources toward scaling the v-FOV. Another approach for scaling the v-FOV is using parallel excitation to increase the information rate. 
Some systems employ laterally 26 or axially 27 , 28 , 29 extended point-spread functions (PSFs) to form projections of the volume such that only two-dimensional scanning is required, whereas others excite multiple sample locations simultaneously, using spatially resolved detection to reconstitute images 30 , 31 . However, these approaches suffer from scattering-mediated crosstalk at depth, and work best for sparsely labeled samples, reducing applicability to imaging large networks. Temporal multiplexing 19 , 23 , 25 , 32 , 33 , 34 can be used to increase information throughput by scanning n copies of a single laser pulse, which are delayed in time and directed toward separate regions of the sample. For beam-to-beam delays exceeding the fluorescence lifetime (~6–7 ns for GCaMP 33 ), fluorescence at the detector can be reassigned to reconstitute the FOV scanned by each beam, resulting in an n -fold increase in data rate. However, multiplexed systems employing typical Ti:Sapphire lasers (~80 MHz repetition rate) are limited to, at most, a roughly twofold increase in data rate 19 , 23 , 32 , 33 , 34 and suffer from the oversampling inefficiencies mentioned above. Lowering laser repetition rates to the few-MHz regime can simultaneously increase the maximum possible degree of multiplexing n and improve sampling efficiency 25 . However, many multiplexing platforms require chains of beam splitters and dedicated optical paths for delaying and steering each beam, resulting in the unfavorable scaling of system complexity with increasing n 10 , 19 , 23 , 25 , 33 , 34 and leaving the majority of the interpulse timing interval unexploited 25 . Recently, multipass temporal multiplexing schemes have been demonstrated, in which beams undergo multiple round trips through a single set of components that tag the light with a specific delay and focal position in the sample corresponding to each pass through the system 35 , 36 . 
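The data-rate budget behind temporal multiplexing reduces to simple arithmetic on the interpulse interval and the fluorescence lifetime. A sketch using the figures quoted in this section; the helper function, and the 4.7 MHz repetition rate implied by the 141 MHz, 30-fold numbers reported below for LBM, are our own framing:

```python
# Maximum temporal multiplicity: the number of beams whose mutual delays all
# exceed the fluorescence lifetime within one interpulse interval.
def max_multiplicity(rep_rate_hz, lifetime_s):
    interpulse = 1.0 / rep_rate_hz
    return int(interpulse // lifetime_s)

lifetime = 6e-9  # lower end of the ~6-7 ns GCaMP lifetime quoted in the text, s

# A typical ~80 MHz Ti:Sapphire (12.5 ns interpulse) supports only ~2 beams,
# consistent with the "roughly twofold increase" noted above.
n_tisapph = max_multiplicity(80e6, lifetime)

# A few-MHz source (here 4.7 MHz, implied by 141 MHz / 30-fold) leaves a
# ~213 ns interpulse interval with room for tens of beams.
n_lowrep = max_multiplicity(4.7e6, lifetime)

# With n beams scanned in parallel, the aggregate voxel rate is n * rep_rate:
voxel_rate = 30 * 4.7e6  # 30-fold multiplexing -> 1.41e8 voxels per second
print(n_tisapph, n_lowrep, voxel_rate)
```

The comparison makes the design pressure explicit: lowering the repetition rate, rather than shortening delays below the lifetime, is what opens up high multiplicities without crosstalk.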
While using a multipass design, it is in principle possible to achieve higher degrees of multiplicity at a reduced optical complexity, but current realizations either exhibit a fundamentally limited potential for an increase in multiplicity 36 or are inconsistent with one-pulse-per-voxel excitation 10 , 35 , 37 . Here we demonstrate LBM: a high-speed optical acquisition technique for mesoscopic and volumetric 2pM. In LBM, the microscope scans a set of axially separated and temporally distinct foci (‘beads’) as opposed to a single focus (Fig. 1a ). The beads record information throughout the entire depth range of the sample (~500 μm) within the deadtime between subsequent pulses of the laser (~200 ns), thus LBM can probe entire volumes within the time it takes to scan a single plane. Furthermore, by using efficient spatial sampling, LBM allows for expansion of v-FOVs to mesoscopic scales while retaining GCaMP-compatible volume rates. Our light beads are formed by a cavity-based multiplexing approach called the many-fold axial multiplexing module (MAxiMuM) that allows for scaling of the multiplicity limited by only the fluorescence lifetime of GCaMP and the interpulse interval of the laser. Crucially, MAxiMuM also allows for flexible control of the relative power and position of each beam. Using MAxiMuM, we demonstrate 30-fold axial multiplexing and a voxel acquisition rate of 141 MHz with ~16-μm plane-to-plane axial separation, conditions that are optimized for, and compatible with, sampling densely labeled tissue volumes at fluorescence-lifetime-limited rates, with one-pulse-per-voxel SNR-maximized excitation while utilizing the entire interpulse time interval. Fig. 1: LBM schematics. a , An ultrafast pump pulse is split into 30 copies, which are delayed in time and focused into different depths in the sample, forming a column of ‘light beads.’ The column of foci is thus sampling the entire volume scanned at the nominal frame-rate of the microscope. 
Each bead is temporally distinct, allowing time-binned decoding of its fluorescence and of the plane within the volume from which it was emitted. b , MAxiMuM schematic. The red beam represents light entering the cavity, formed by four concave mirrors (M 1 –M 4 ). A partially transmissive mirror (PRM) reinjects most of the light back into the cavity. Beams accumulate an axial offset (Δ z ) and a temporal offset (Δ τ ) for each round trip, forming a column of light beads. Full size image We have realized the design of our LBM on a mesoscopy platform that allows access to a ~6 × 6 mm 2 FOV at subcellular resolution (0.6 numerical aperture) 18 , demonstrating volumetric and single-cell-resolution recording from volumes of ~3 × 5 × 0.5 mm, encompassing portions of the visual (VISp), somatosensory (SSp), posterior parietal (PTLp), and retrosplenial (RSP) areas of GCaMP6s-labeled 6 , 38 mouse neocortex at a ~5-Hz volume rate. Additionally, we highlight the versatility of LBM on this platform by recording in a variety of configurations, ranging from moderately sized FOVs (600 × 600 × 500 μm) with voxel resolution capable of resolving subcellular features to FOVs (5.4 × 6 × 0.5 mm) encompassing both hemispheres of the mouse cortex and capturing the dynamics of populations exceeding 1,000,000 neurons. We find that the correlated activity of neurons in these experiments has characteristic lengths ≫ 1 mm. Stimulus- and behavior-tuned populations captured by LBM exhibit richly varied responses at the single-trial level—including subpopulations with correlated trial-to-trial variations in their responses—underlining the need for such volumetric and mesoscopic calcium-imaging techniques to understand real-time neurocomputations performed by intercortical circuitry. Results Light bead generation with MAxiMuM MAxiMuM is a stand-alone unit that generates columns of light beads that can be interfaced with any given microscope.
Laser light is focused above a partially reflective mirror (PRM) at the entrance of the cavity and is subsequently re-imaged by a series of concave mirrors (Fig. 1b and Extended Data Figs. 1 and 2 ). An intentional offset Δ z between the nominal focus of the re-imaging mirrors and the input of the cavity results in an axial shift in the beam’s focus as the beam returns to the entrance. Before reaching the cavity entrance, the beam encounters the PRM, which reflects the majority of light back into the cavity for another round trip, with a small lateral offset relative to the first beam. The remaining fraction of the light couples out of the cavity and is sent toward the microscope. Owing to the axial offset, each beam exiting the cavity focuses to a shallower depth in the sample, with a relative decrease in optical power such that the power in the i th beam is given by P i ∝ T (1 − T )^ i , where T is the transmission of the PRM. Maintaining constant SNR over the imaged depth range requires an exponential increase in laser power for deeper foci due to loss of ballistic photons in scattering tissue. To achieve this, we adjusted the transmission of the cavity, T , such that the relative power increase between subsequent beams matches the increase required to offset additional tissue scattering due to the axial separation between adjacent planes, δ z , resulting in constant SNR for all multiplexed beams. This condition is given by: $$T = 1 - \exp\left( -\frac{\delta z}{l_{\mathrm{s}}} \right)$$ (1) where l s is the scattering mean free path of brain tissue (~200 μm at 960 nm 39 ), and the relationship between the axial separation of beams exiting the cavity (Δ z ) and those in the sample (δ z ) is given by Δ z = M ² × δ z , with M being the magnification of the microscope. Equation 1 provides a design rule for achieving a given axial sampling density with LBM (Supplementary Note 1 ).
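Equation 1 and the beam-power recursion can be checked numerically. The short sketch below (an illustration, using only parameter values quoted in the text) confirms that the ~16-μm plane spacing and ~200-μm scattering length imply the ~8% cavity transmission reported in the Methods:

```python
import math

# Numerical check of the cavity design rule (Eq. 1): T is chosen so that the
# beam-to-beam power ratio (1 - T) exactly offsets ballistic scattering over
# one plane-to-plane spacing, keeping SNR constant across depth.
l_s = 200.0     # scattering mean free path of brain tissue at 960 nm, μm
delta_z = 16.0  # plane-to-plane axial separation in the sample, μm

T = 1 - math.exp(-delta_z / l_s)
print(f"PRM transmission T = {T:.3f}")   # ~0.077, i.e. the ~8% in Methods

# Power of the i-th beam leaving the cavity: P_i ∝ T (1 - T)^i.
# Consecutive beams differ by a factor (1 - T) = exp(-delta_z / l_s),
# which is exactly the one-plane ballistic attenuation in tissue.
powers = [T * (1 - T) ** i for i in range(30)]
assert all(p1 > p2 for p1, p2 in zip(powers, powers[1:]))
```

The monotonically decreasing output powers are directed to progressively shallower planes, so the least attenuated beams image the shallowest tissue.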
This design flexibility represents a distinguishing feature of LBM and a key difference compared with reverberation microscopy 36 , where, owing to the fixed transmission of 50%, maintaining SNR requires an axial separation of ~100 μm, limiting multiplicity to a handful of beams within the penetration depth of 2pM. We integrated our LBM approach into an existing mesoscope (Extended Data Fig. 3 ) and characterized each light bead in sample space (Extended Data Fig. 4 and Supplementary Note 2 ) to ensure desired temporal and spatial characteristics. Through these calibrations, we confirmed fluorescence-lifetime-limited bead-to-bead delays (6.7 ns), minimal crosstalk between channels, linear sampling over the total axial range of ~500 μm, and lateral and axial diameters of ~1 μm and ~13 μm, respectively, for each light bead, which is sufficient for cellular-resolution imaging of densely labeled samples. Optimization of spatiotemporal sampling efficiency In order to maximize spatiotemporal sampling efficiency and record from the largest possible FOV, only the minimum amount of information necessary to faithfully extract features of interest—in our case, neuronal cell bodies—should be recorded. To explore this limit systematically, we conducted in vivo experiments in the neocortex of GCaMP6f-expressing mice. We recorded several high-resolution single-plane data sets and fed them into our analysis pipeline (Extended Data Fig. 5 and Supplementary Note 3 ), comparing the extracted footprints and time series to manually segmented ground truths for each data set. Using the F -score (defined as the harmonic mean of the true- and false-positive rates) as a metric for extraction fidelity, we evaluated how performance deteriorates with increasing sampling sparsity by removing pixels from the lateral image stacks. Consistent with our previous results 24 , 25 , we found that the F -score declines appreciably only for lateral spatial sampling >5 μm (Extended Data Fig. 6a ).
Thus, to maximize the imaging volume while maintaining extraction fidelity, we found an optimum sampling of ~5 μm in the lateral plane. Multiregional and multisensory imaging of activity from >200,000 neurons in mouse cortex We validated LBM in vivo by recording from the neocortex of awake and behaving mice that transgenically expressed GCaMP6s in glutamatergic neurons 6 , 38 . Using our optimized spatial sampling strategy, we could maintain a volume rate of ~5 Hz within a volume of ~3 × 5 × 0.5 mm, allowing us to resolve GCaMP transients. We chose placement of our v-FOV to encompass as many distinct regions as possible within a single cortical hemisphere including SSp and PTLp, as well as RSP and VISp (Fig. 2a ), at depths corresponding to layers I through IV (Fig. 2b ). We employed a dual-sensory stimulus paradigm in these recordings, consisting of perturbation of whiskers contralateral to the imaged hemisphere and presentation of high-contrast drifting gratings. We placed animals on a treadmill equipped with motion tracking and video-based behavioral tracking that allowed us to capture any movements of the hind or forelimbs that lacked correlation with the above controlled stimuli (Supplementary Video 1 ). We refer to any such movement as a spontaneous behavior in subsequent analyses. Fig. 2: Recording of 207,030 neurons at 4.7-Hz rate within a volume of ~3 × 5 × 0.5 mm in the cortex of a GCaMP6s-expressing mouse during whisker and visual stimulation. a , Three-dimensional (3D) rendering of extracted neuron spatial coordinates and maximum projected activity for a 9-minute recording. The transverse brain image reproduced from the Allen Brain Atlas, Brain Explorer 2 (ref. 51 ). b , Y – Z projection of the neuron density. Approximate boundaries between cortical layers are denoted with green lines. All depth values are displayed relative to the pia. c , Mean projection image from the recording in a at 344 μm depth; scale bar, 250 μm. 
Inset, zoomed-in boxed region of c ; scale bar, 100 μm. d , Subset of 50 example traces with whisker and visual stimuli denoted by red and blue markers, respectively. Offset, 0.5 × Δ F / F 0 . Source data Full size image In this modality, we could record from ~200,000 neurons distributed across a ~500 μm axial range within the cortical regions mentioned above (volume rendering in Fig. 2a and Supplementary Video 2 ). As expected (Extended Data Fig. 6 ), and consistent with the spatiotemporal characteristics of our light beads (Extended Data Fig. 4 ), we could indeed resolve individual neurons (Fig. 2c,d and Supplementary Videos 3 and 4 ), allowing for extraction of their time series (Extended Data Fig. 7 and Supplementary Video 5 ). The time scale of the observed calcium transients is consistent with the response time of GCaMP6s (Extended Data Fig. 8 ) and shows correlations with both visual and whisker stimuli. We computed the distribution of correlations between the time series of each neuron and the stimuli presentation, that is, the whisker trials and the visual trials, as well as the correlation with spontaneous animal behaviors (Extended Data Fig. 9a–c ). We found subpopulations of neurons spanning many regions of the cortex (Fig. 3a–e ) that were highly tuned to each stimulus ( R > 3 σ ), numbering 34,468 whisker-tuned neurons, 24,299 visually tuned neurons, and 64,810 neurons tuned to uninstructed animal behaviors. We performed hierarchical clustering on the correlation matrix of the population of all 123,577 stimulus-tuned neurons (Extended Data Fig. 9d ) and found four distinct clusters, each exhibiting a different axial distribution across depth (Extended Data Fig. 9e–i ). We subsequently mapped these clusters, and the neurons within them, back to their anatomical locations in the brain.
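As a concrete illustration of the tuning criterion used above (correlating each neuron's time series with the stimulus and keeping neurons with R > 3σ), the following sketch applies it to synthetic data. Using 1/√T as the null width of the correlation distribution is an assumption, since the text does not specify the null estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_t = 500, 1000
stim = (rng.random(n_t) < 0.05).astype(float)   # binary stimulus regressor
traces = rng.standard_normal((n_neurons, n_t))  # synthetic ΔF/F time series
traces[:50] += 2.0 * stim                       # make the first 50 "tuned"

# Pearson correlation of every neuron with the stimulus vector
z_tr = (traces - traces.mean(1, keepdims=True)) / traces.std(1, keepdims=True)
z_st = (stim - stim.mean()) / stim.std()
r = z_tr @ z_st / n_t

sigma_null = 1 / np.sqrt(n_t)          # width of r under the null (assumed)
tuned = np.flatnonzero(r > 3 * sigma_null)   # the R > 3σ criterion
print(f"{len(tuned)} tuned neurons recovered")
```

On this synthetic example, the criterion recovers the planted tuned population with few false positives; in the paper, the same thresholding is followed by hierarchical clustering of the tuned neurons' correlation matrix.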
For each population of neurons tuned to a stimulus modality (whisker or visual stimulation) or to uninstructed spontaneous behavior, we considered the relative size and location of the subpopulations corresponding to each cluster (Fig. 3b–d ). For comparison, we also considered the lateral distribution of 13,259 neurons uncorrelated with any stimulus condition or the uninstructed spontaneous behavior (| R | < σ , Fig. 3e ). For the majority of the stimulus conditions, we observed a distribution of correspondingly tuned neurons across multiple regions of the cortex (Fig. 3a ). For each condition, as well as for the uncorrelated population, we could faithfully extract the transients of single cells (Fig. 3f–i ). Fig. 3: Analysis of the activity of stimulus-tuned and behavior-correlated neurons in a single-hemisphere recording. a , Brain regions covered by the recording in Fig. 2 , reproduced from the Allen Brain Atlas, Brain Explorer 2 (ref. 51 ). Scale bar, 250 μm. b–e , Transverse spatial distributions of neurons tuned to a single stimulus condition. The correlation matrix for all tuned neurons was hierarchically clustered, generating four clusters colored in blue, green, yellow, and red. Maps correspond to whisker stimuli ( b ), visual stimuli ( c ), or behaviors ( d ), or a population of neurons not correlated with any stimuli ( e ). Scale bars, 250 μm. f – i , Example neuronal traces from populations tuned to whisker stimuli ( f ), visual stimuli ( g ), or spontaneous behaviors ( h ), or that were uncorrelated ( i ). Occurrence of stimuli denoted by markers in f–h . Offset, 1.0 × Δ F / F 0 . j , k , Trial-averaged activity of example whisker-tuned neurons with (orange in j , cyan in k ) and without (gray) the presence of a simultaneous visual trial. Solid lines denote mean of all trials, shaded regions denote 1 s.d. from mean. l , Lateral spatial distributions of the orange and cyan populations in j and k . Scale bar, 250 μm. 
m , n , o , Lateral spatial distributions of visually tuned neurons modulated by simultaneous whisker stimuli ( m ), whisker-tuned neurons modulated by simultaneous animal behaviors ( n ), and visually tuned neurons modulated by simultaneous animal behaviors ( o ). p , q , Single-trial activity for example neurons tuned to whisker stimuli for trials with simultaneous visual stimuli ( p ) and neurons tuned to spontaneous animal behaviors ( q ). Shaded regions denote 1 s.d. from the mean. Raw data are shown by markers, and black lines denote the deconvolved response. Horizontal and vertical scale bars, 1.0 × Δ F / F 0 , 5 seconds. r , Heat map of trial-averaged activity of behavior-tuned neurons with relative lag denoted by the overlaid black line. s , Lateral spatial distribution of behavior-tuned neurons color-coded by relative lag. Scale bar, 250 μm. t , Cumulative fraction of populations tuned to a given condition (whisker stimulus, visual stimulus, spontaneous behavior, uncorrelated) with significant mutual correlation ( R > 3 σ ) captured within a given neuron-to-neuron separation. Source data Full size image Cluster 1 (blue) was located primarily in the barrel field (SSp BFD) and PTLp (Fig. 3a ), and was thus highly represented in the whisker-tuned population (Fig. 3b ). This cluster was also highly represented in the population correlated with spontaneous behaviors (Fig. 3d ), indicating mixed responses of the neurons in this cluster to both stimuli. Cluster 2 (green) was only represented in behavior-tuned neurons (Fig. 3d ) and was primarily located in specialized regions of the SSp related to sensation in the lower limbs, upper limbs, and torso of the animal (SSp-LL, SSp-UL, and SSp-TR, respectively), as well as PTLp (Fig. 3a ). Cluster 3 (yellow) was located in VISp and PTLp, and represented neurons correlated with all stimulus conditions.
The final cluster 4 (red) was distributed across multiple regions, including SSp, VISp, PTLp, and a dense population within RSP, which is thought to be associated with spatial memory encoding 40 . This subset located within RSP was primarily tuned to spontaneous behaviors (Fig. 3d ). The spatial clustering analysis suggests that, although some of these functional clusters overlap with distinct anatomical regions of the brain, neurons in these regions can also jointly represent multiple stimulus conditions or may have stimulus-evoked activity that is modulated by the presence of additional stimuli. To further probe mixed representation, we analyzed the trial-averaged activity of stimulus-tuned neurons. First, we considered differences in activity for whisker-tuned neurons in trials in which only whisker stimuli were present and compared them with those in which we presented both whisker and visual stimuli together (Fig. 3j–l ). The presence of a coincident visual trial resulted in populations of neurons with both positively (Fig. 3j ) and negatively (Fig. 3k ) modulated activity relative to trials with only whisker stimulation. We found similar numbers of positively (3,703) and negatively modulated (4,166) neurons (significance defined by all neurons for which P < 0.05 determined by a two independent sample t -test); however, there was a clear distinction between the anatomical location of the two populations, with positively modulated neurons located primarily in SSp BFD and negatively modulated neurons located in VISp. Figure 3m shows a map of visually tuned neurons with activity that was significantly modulated (all neurons with P < 0.05, two independent sample t -test) by coincident presentation of whisker stimuli. Visually tuned neurons were primarily negatively modulated by the presence of whisker stimuli and located within VISp. 
Figure 3n,o shows the population of whisker-tuned and visually tuned neurons that were significantly modulated (all neurons with P < 0.05, two independent sample t -test) by coincident uninstructed spontaneous behaviors of the animal. In both cases, the majority of whisker-tuned and visually tuned neurons are positively modulated by spontaneous behaviors. Additionally, at the single-trial level, stimulus- and behavior-tuned populations showed both neuron-to-neuron and trial-to-trial variation. Figure 3p shows example traces from eight neurons tuned to whisker stimuli with coincident presentation of visual stimuli. In some instances, neurons anatomically separated by >1 mm exhibit variations in activity across trials that are correlated (neurons 1–4); in other instances, the variations in trial-to-trial activity do not covary with the above group nor with one another (neurons 5–8). At the population level, we found trial-to-trial correlations among whisker-tuned neurons (Extended Data Fig. 9j ) and visually tuned neurons (Extended Data Fig. 9k ) to be positively skewed ( r = 0.16 ± 0.24 and r = 0.26 ± 0.29, respectively) and consistent with previously measured values 41 , with neuron-to-neuron separations spanning millimeter scales (Extended Data Fig. 9l ) and multiple layers of the cortex (Extended Data Fig. 9m ). Such trial-to-trial covariations of neuronal responses, also referred to as noise correlations, have been suggested to represent distributed, higher-dimensional information encoding the underlying interaction of external stimuli and behavioral states with internal states 11 , 42 . Noise correlations occur on a trial-by-trial basis and any trial averaging will prevent their detection. Thus, while sequential single-plane and tiled-FOV recordings could potentially capture the same population of cells shown here, the trial-to-trial variability of their responses recorded by our method would be lost. 
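The noise-correlation analysis referred to above (trial-to-trial covariation that remains after removing each neuron's trial-averaged response) can be sketched on synthetic data as follows; all names and numerical values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials, n_bins = 4, 40, 20
shared = rng.standard_normal(n_trials)   # shared trial-by-trial fluctuation

# responses[i, j, :]: trial j of neuron i = mean response + shared + noise
mean_resp = rng.random((n_neurons, 1, n_bins))
responses = (mean_resp
             + 0.5 * shared[None, :, None]
             + 0.3 * rng.standard_normal((n_neurons, n_trials, n_bins)))

# Subtract each neuron's trial-averaged (signal) response, leaving residuals
residuals = responses - responses.mean(axis=1, keepdims=True)
trial_amp = residuals.mean(axis=2)       # one residual amplitude per trial

noise_corr = np.corrcoef(trial_amp)      # pairwise noise correlations
print(np.round(noise_corr[0, 1], 2))     # strongly positive (shared term)
```

Because the residuals are defined trial by trial, the shared fluctuation survives averaging over time bins within a trial but would vanish entirely under trial averaging, which is the point made in the text.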
Additionally, single-trial neuronal responses showed variability in the sequence of neuronal firing times across the brain (Fig. 3q ). We quantified this variability by calculating the lag between the occurrence of spontaneous animal behaviors and the onset of stimulus-evoked activity, finding a ~1.7-second delay between the timing of the earliest- and latest-firing neurons. Figure 3r shows a heatmap of the lag variation in behavior-tuned neurons, while Fig. 3s shows the lateral positions of this population color-coded by relative lag (Supplementary Video 6 ). The earliest-responding neurons are primarily located in the SSp-TR, SSp-LL, and SSp-UL regions, whereas neurons in regions farther from these sensory areas, including RSP, PTLp, SSp BFD, and VISp, respond later, in keeping with previous results 11 . Correlated neurons in these data can span mesoscale separations of 2–4 mm (Fig. 3t ), exhibit correlated trial-to-trial variations spanning different regions of the brain, and have spatiotemporal coding structure evident at calcium-scale time resolution, underscoring the need for high-speed, large-FOV, volumetric recording capability. Re-configurable multi-scale imaging with LBM LBM maintains the ability to navigate tradeoffs between lateral voxel sampling, FOV, and imaging speed to suit numerous applications. For example, Fig. 4a–c shows mean projection images and example neuronal traces from a volume of ~600 × 600 × 500 μm 3 in the PTLp of a jGCaMP7f 43 -expressing mouse (Supplementary Video 7 ) at ~10-Hz volume rate and 1-μm lateral voxel sampling, sufficient for resolving subcellular features such as the neuronal processes of active cells. Relaxing voxel sampling to an intermediate ~3-μm lateral sampling (Fig. 4d–g and Supplementary Videos 8 and 9 ) allows for increasing the v-FOV to ~2.0 × 2.0 × 0.5 mm, containing a population of ~70,000 neurons that can be recorded at 6.5 Hz. Fig. 
4: Multi-scale functional imaging with light beads microscopy during whisker stimulation. a – c , High-resolution volumetric (~600 × 600 × 500 μm 3 ) imaging of neuroactivity at 9.6 Hz in jGCaMP7f-expressing mice. Representative mean projection images of neurons at planes 440 μm ( a ) and 384 μm ( b ) deep, taken from the above volume during a 3-minute recording. Scale bars, 50 μm. Zoomed-in boxed regions are inset; scale bars, 10 μm. c , Representative time series of the nine neurons outlined in the zoomed-in region of the plane in b . Offset, 1.0 × Δ F/F 0 . d – g , Recording of 70,275 neurons within a volume of ~2 × 2 × 0.5 mm at 6.7 Hz and 2.8 μm lateral voxel sampling. d , 3D rendering of extracted neuron spatial coordinates and maximum projected activity for a 9-minute recording. Transverse brain image reproduced from the Allen Brain Atlas, Brain Explorer 2 (ref. 51 ). e , f , Mean projection images at 144 and 482 μm depths, respectively. Scale bars, 250 μm. Zoomed-in regions are inset; scale bars, 50 μm. g , Representative time series of 50 whisker-tuned neurons. Occurrences of the stimulus denoted by red marks. Offset, 0.5 × Δ F / F 0 . Source data Full size image Finally, by employing ~5-μm lateral voxel sampling, we could image a volume of ~5.4 × 6.0 × 0.5 mm encompassing both hemispheres of the mouse neocortex down to a depth of ~600 μm in tissue (Fig. 5 and Supplementary Videos 10 and 11 ). Experiments in this modality used up to ~450 mW of optical power; however, we confirmed through immunohistochemical labeling experiments that, owing to the large cranial windows and mesoscale v-FOVs employed in these experiments, the optical power used did not result in any heating-related damage to the brain (Extended Data Fig. 10 and Supplementary Note 5 ). Figure 5a,b shows a representative recording in this modality, capturing 1,065,289 neurons at 2.2-Hz volume rate. Even with the reduced acquisition rate in this modality, the F -score is preserved (Extended Data Fig.
6a ), and thus calcium transients can still be detected and extracted (Fig. 5c , Extended Data Fig. 7c,d , and Supplementary Video 12 ). The optical access, large degree of multiplexing, and efficient scanning approach employed by LBM opens the door to scaling 2pM from single-brain-region to cortex-wide recording, allowing for investigation of bihemispheric cognitive processing, as well as capturing the dynamics of populations of neurons more than two orders of magnitude larger than what can be captured by other techniques 10 , 23 , 25 , 29 . Fig. 5: Volumetric recording of 1,065,289 neurons within a volume of ~5.4 × 6 × 0.5 mm at 2.2 Hz in a GCaMP6s-expressing mouse with no external stimulation. a , 3D rendering of extracted neuron spatial coordinates and maximum projected activity for a 9-minute recording. Transverse brain image reproduced from the Allen Brain Atlas, Brain Explorer 2 (ref. 51 ). b , Y – Z projection of the neuron density. Approximate boundaries between cortical layers are denoted with green lines. All depth values are displayed relative to the pia. c , Mean projection image from the recording in a at 600 μm depth. Scale bar, 500 μm. Inset, zoomed-in boxed region of c . Inset scale bar, 200 μm. d , Subset of 50 traces from e . Offset, 0.5 × Δ F / F 0 . Source data Full size image Discussion Mesoscopic 2pM platforms are necessary for increasing the optical access of calcium imaging to multiregional recording from the mammalian brain. However, as we argue and demonstrate, a spatiotemporal-sampling approach that optimizes the tradeoffs between speed, volume size, and resolution is essential to maximize the number of neurons that can be recorded simultaneously from ever-increasing v-FOVs (Supplementary Note 1 ). 
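The sampling-budget argument above can be made concrete with a back-of-the-envelope calculation. The values below come from the text; the duty-cycle figure is an assumption introduced only to indicate why the achieved ~5-Hz volume rate sits below the ideal bound:

```python
# Sampling budget for the single-hemisphere recording configuration.
pulse_rate = 4.7e6          # laser repetition rate, Hz
n_beams = 30                # axial multiplexing (planes per pulse)
voxel_rate = pulse_rate * n_beams          # 1.41e8: the quoted 141 MHz

fov_x, fov_y = 3000, 5000   # lateral field of view, μm
dxy = 5                     # optimized lateral sampling, μm
voxels_per_volume = (fov_x // dxy) * (fov_y // dxy) * n_beams  # 18e6 voxels

ideal_rate = voxel_rate / voxels_per_volume   # ~7.8 Hz one-pulse-per-voxel bound
# With scanner turnaround/blanking overhead (assumed ~60% duty cycle here),
# the rate drops toward the ~5 Hz reported in the text.
print(round(ideal_rate, 1), round(0.6 * ideal_rate, 1))
```

The calculation shows how the one-pulse-per-voxel budget, rather than detector bandwidth, sets the ceiling on volume rate once FOV and sampling density are fixed.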
Such optimal sample acquisition requires one-pulse-per-voxel sample excitation, fluorescence-lifetime-limited excitation rates using the entire pulse-to-pulse interval, and spatial sampling at the minimum density required to resolve cells or other features of interest. This in turn frees up resources that can be used to further increase volume size, speed, or resolution. LBM represents the first realization of the optimal condition described above. MAxiMuM allows for scaling of the multiplicity beyond the previously shown few-beam regime, making full use of the pulse-to-pulse time interval of our laser to achieve the maximum possible voxel rate within the fluorescence lifetime limit of GCaMP. Thereby, compared with existing techniques, LBM allows for an effective increase by up to an order of magnitude in the total number of accessed voxels, and an increase of one to two orders of magnitude in recording volume, scaling the total number of recorded neurons by up to three orders of magnitude while maintaining calcium-imaging-compatible frame rates (Supplementary Table 1 ). In our current implementation of LBM, our depth reach is limited by the detected fluorescence signal rather than by the background surface fluorescence induced by 2p excitation. We envision that, by using a laser with shorter pulses as well as high-bandwidth amplification between the detector and digitizer, SNR could be enhanced to enable deeper imaging. Additionally, LBM could be combined with three-photon microscopy in a hybrid configuration 25 and thus could access sub-cortical regions. Finally, LBM is also compatible with employing an enlarged, temporally focused PSF, which would further increase the sensitivity of our method and the number of neurons detected. Enabled by the capabilities of LBM, we have observed evidence of mixed selectivity 44 in large populations of neurons distributed across many brain regions, as well as trial-to-trial variability of both stimulus- and behavior-tuned neurons.
Additionally, we found evidence for covariance of activity among subsets of the stimulus-tuned neuronal population across the brain at the single-trial level, which have been suggested to represent encoding of information about internal states and uncontrolled aspects of stimuli and behavior 11 , 42 . These observations highlight the need for both high-speed and large-scale neuronal-recording capability in order to identify and capture long-range functional cortical circuits and the variability of their response at the trial-to-trial level and single-neuron resolution. Furthermore, the volumetric nature of the LBM technique offers opportunities for investigation and validation of different models of cortical organization and computation based on multi-layer and multiregional processing of information 13 . Moreover, as suggested by our observations on the correlation distance for the simple sensory and behavioral paradigms used in this work (2–4 mm), and bolstered by other findings 9 , 16 , 42 , mesoscopic-scale volumetric imaging of populations on the order of 1 × 10 5 –1 × 10 6 neurons is necessary for revealing the full neuronal population code in individual cortical regions 9 , 16 , 42 , 45 , as well as identifying the structure and dynamics 11 , 13 , 46 , 47 , 48 of inter-regional brain activity critical for learning 12 , 49 , memory 14 , 50 , and other cognitive functions. As such, the size of the neuronal population recording enabled by our technique opens up a range of opportunities to understand how the neurocomputations underlying multiregional encoding and processing of sensory and behavioral information emerge from the dynamic interaction of brain-wide networks of neurons at the single-neuron level in the mammalian brain. 
Methods Laser source Our custom laser system comprised an ultrafast ytterbium-doped fiber master oscillator and chirped-pulse amplifier (Active Fiber Systems, 60 W average power, 4.7 MHz, 300 fs pulse duration, ~10 μJ pulse energy, λ = 1,030 nm), followed by an optical parametric chirped-pulse amplifier (OPCPA, White Dwarf dual, Class5 Photonics). The OPCPA operated at a wavelength of 960 nm with ~90 fs duration pulses up to ~0.8 μJ in energy at a repetition rate of 4.7 MHz. We employed an electro-optic modulator (Conoptics, 350-160-BK) to dynamically adjust laser power in the sample and blank the beam while the resonant scanner was reversing direction. We pre-compensated pulse-broadening using two pairs of chirped mirrors (Class5 Photonics) with −500 fs² per reflection, which imparted a total of −24,000 fs² of anomalous group delay dispersion to counteract the material dispersion of the multiplexing module, the mesoscope, and the other components in the system. The tradeoffs regarding the repetition rate of the laser and characteristics of the MAxiMuM cavity are discussed in detail in Supplementary Note 1 . Spatiotemporal multiplexing module To facilitate spatiotemporal multiplexing, we constructed a cavity comprising concave mirrors configured in an 8-f, non-inverting, re-imaging scheme (Extended Data Fig. 1a ). The input beam was focused by L 1 just above the aperture of the partially reflective mirror (PRM), M 1 , and in the front focal plane of M 2 . Mirrors M 2 −M 5 were concave mirrors ( f = 500 mm, 2″ diameter) with custom low-dispersion dielectric coatings (Layertec), which re-imaged the initial spot onto the turning mirror M 6 . M 6 provided a slight vertical tilt to the beam such that it intersected the PRM M 1 . M 1 was a low-dispersion ultrafast beam splitter (Thorlabs, UFBS9010) with a nominal transmission of ~10% at 45° incidence.
By adjusting the position of M 6 , we were able to change the angle of incidence at the PRM and tune the transmission to the desired value of ~8%. The majority of the light incident on M 1 underwent the next round trip through the cavity, and the rest of the light was transmitted. Each round trip through the cavity provided a temporal delay τ = 13.8 ns, as well as an offset in the focal plane of the beam, dictated by the distance between M 6 and M 1 (~145 mm). The vertical angle of M 6 , necessary to ensure the beam intersected the aperture of M 1 , caused a small lateral offset between subsequent round trips. This offset was minimized during alignment (Supplementary Note 4 ). Round trips in the primary cavity generated the first 15 multiplexed beams, and a subsequent single-pass cavity (Extended Data Fig. 1b ) increased the multiplicity to 30. After the primary cavity (Extended Data Fig. 1a ), the light was re-collimated by L 2 . L 3 and L 4 formed a unitary magnification telescope that ensured that the lowest power beams were directed to the shallowest depths in the sample. The distances between M 6 and L 2 , L 2 and L 3 , and L 3 and L 4 were iteratively optimized in order to position the last beam exiting cavity A in the nominal focal plane of the objective, while maintaining as uniform as possible magnification for each beam. The beams were transmitted through a half-wave plate (HWP) and onto a polarizing beam splitter (PBS). The reflected portion of the beam underwent a single round trip through another custom-mirror-based 8-f re-imaging cavity ( f = 250 mm, 2″ diameter, Layertec), before recombination with the transmitted portion of the beam (Extended Data Fig. 1b ). The beams coupled to the secondary cavity were delayed an additional 6.7 ns, interleaving them in time with the beams transmitted by the PBS (Extended Data Fig. 1c ).
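The delay structure of the two cavities can be verified numerically. The minimal sketch below (illustrative, using the delays stated above) checks that all 30 interleaved beads fit within one interpulse interval of the 4.7-MHz laser:

```python
# Primary cavity: 15 beams spaced by the 13.8-ns round-trip delay.
# Secondary cavity: a copy of each beam delayed by a further 6.7 ns,
# interleaving all 30 beams in time.
primary = [13.8 * i for i in range(15)]            # delays in ns
secondary = [t + 6.7 for t in primary]             # interleaved copies
delays = sorted(primary + secondary)

interpulse = 1e9 / 4.7e6                           # ~212.8 ns between pulses
assert max(delays) < interpulse                    # all beads fit in one period
gaps = [b - a for a, b in zip(delays, delays[1:])]
print(round(min(gaps), 1), round(max(delays), 1))  # smallest gap, latest bead
```

The smallest bead-to-bead gap (6.7 ns) just exceeds the GCaMP fluorescence lifetime, and the latest bead (~200 ns) arrives before the next pump pulse, consistent with the fluorescence-lifetime-limited design described in the Results.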
The focal planes of these delayed beams could be globally shifted by adjusting the position of M 9 and M 11 , and formed two sub-volumes that were spatially contiguous, such that all 30 beams provided continuous sampling along the optical axis (Extended Data Fig. 1d ). Manipulation of the HWP could be used to adjust the relative optical power of the sub-volumes in order to preserve matching to the scattering properties of the tissue. In total, 30 spatiotemporally multiplexed beams exited the secondary cavity, and the axial separation between imaging planes is ~16 μm, leading to a total axial sampling range of 465 μm. Integration with mesoscope The output of the multiplexing module was interfaced with a commercial mesoscope (Thorlabs, Multiphoton Mesoscope) 18 . The mesoscope layout and accompanying electronics are shown in Extended Data Fig. 3 . The configuration of the microscope followed normal operation conditions with the exception of some minor modifications. The remote focusing unit of the system, consisting of a PBS and objective mounting apparatus, was removed and replaced by a turning mirror to route beams directly to the first telescopic relay. This modification was necessary because light exiting MAxiMuM was split between two orthogonal polarization states and thus incompatible with the PBS in the remote focusing module. Furthermore, the axial range of MAxiMuM (~500 μm) makes remote focusing redundant for our intended axial imaging range and thus an unnecessary drain on the power and dispersion compensation budgets. Additionally, the electrical amplifier following the photo-multiplier tube (PMT) was removed, as the temporal response of the standard model amplifier used with the mesoscope was insufficient for multiplexed data. 
Given the power budget available from our custom laser source, we estimate that signals from each of the voxels in LBM can be up to about three times higher than those generated by a typical Ti:Sapphire laser (80 MHz, 2 W average power) coupled to the same mesoscope. During mouse imaging experiments, typical signals consisted of ~250 counts/voxel, which, when considering the bit depth (12 bits), digitization range (2 V peak-to-peak), and impedance of our digitizer, corresponds to ~5 mA of photocurrent from the PMT. Given the sensitivity of the PMT (176 mA/W) and a gain of ~1 × 10 6 , this suggests our signals are ~400 photons/voxel on average, with ~2 photons/voxel corresponding to dark counts. We can also estimate the photon number per voxel in a bottom-up fashion: assuming a 35 GM action cross-section and an intracellular concentration of 10 μM for GCaMP 52 and a total collection efficiency of ~10% 18 , our signals are on the order of ~250 photons/voxel, in reasonable agreement with our top-down estimation accounting for background and auto-fluorescence. Data acquisition Data were acquired using the commercial mesoscope-compatible version of the ScanImage software platform (Vidrio) with some additional customizations, as well as upgraded digitization hardware (Extended Data Fig. 3a ). We used an evaluation board (Analog Devices, AD9516-0) to multiply a trigger signal from the OPCPA laser to 1,614 MHz, which in turn was fed to the upgraded digitizer (National Instruments, NI 5772) and field programmable gate array (FPGA, National Instruments, PXIe-7975R) to serve as a sample clock. This clock signal was used within the customized version of ScanImage to synchronize the line trigger to the pulse repetition rate of the laser, thus ensuring a single laser pulse constituted one voxel of the recording. 
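The top-down photon estimate above can be reproduced as a back-of-envelope calculation. The 50 Ω digitizer input impedance and ~520 nm emission wavelength below are assumptions not stated in the text; with them the inferred photocurrent lands on the same order as the quoted ~5 mA, with the remaining factor plausibly attributable to peak-versus-average scaling not modeled here:

```python
# Hedged sketch of the counts -> photocurrent -> photon-rate chain.
counts = 250                      # typical counts per voxel (from the text)
bit_depth = 12                    # digitizer bit depth
v_range = 2.0                     # digitization range, V peak-to-peak
impedance = 50.0                  # ohms; assumed, not stated in the text
sensitivity = 0.176               # PMT sensitivity, A/W (176 mA/W)
gain = 1e6                        # PMT gain

volts = counts / 2**bit_depth * v_range        # ~0.12 V per voxel
amps = volts / impedance                       # photocurrent, ~2.4 mA
watts = amps / (sensitivity * gain)            # optical power at photocathode
photon_energy = 6.626e-34 * 3.0e8 / 520e-9     # J per photon at ~520 nm
photon_rate = watts / photon_energy            # photons per second
```

Dividing the photon rate by the effective voxel rate (which the text does not state explicitly) then yields the photons-per-voxel figure.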
Additionally, the ScanImage customization allowed the user to define channels by integrating temporal windows of the raw PMT signal (Hamamatsu H11706-40) with respect to a trigger from the laser. The window for each channel was set to integrate the fluorescence signal associated with each beam from the MAxiMuM system such that the channels constitute the de-multiplexed axial planes of the volumetric recording (see channel plots in Extended Data Fig. 3b ). The microscope recorded frames for each channel separately, in the same fashion as a two-color compatible microscope records separate channels from each PMT. Data streamed to disk consisted of 30 consecutive frames representing each channel, and thus each axial plane, repeated in sequence for each time point in the measurement. Data processing Extended Data Fig. 5 shows a schematic of the data processing pipeline. Data recorded by the microscope were reassembled from constituent ROIs into 30 stacks of frames ( x , y , t ) corresponding to each plane of the volume which were each processed separately. Motion correction of each plane was facilitated using the non-rigid version of the NoRMCorre algorithm 53 and neuronal footprints and time series were extracted using the planar, patched version of the CaImAn software package 54 , 55 . Due to the reduced spatial sampling density of the data, the elliptical search method was found to most accurately extract neuronal signals from soma. The algorithm was initialized with a number of components dictated by the physiological expectation from the given volumetric field of view, assuming a standard density of 9.2 × 10 4 neurons per cubic millimeter 51 , 56 . The spatial correlation threshold was held at the default value of 0.4, and the minimum signal-to-noise parameter was set to 1.4. In practice, we found that this value was consistent with only keeping transients with statistically significant ( Z > 3 σ ) transient activity (see statistics in the following section).
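In sketch form, the channel definition described earlier in this section (one integration window per beam within each laser period) reduces to binning a digitized trace; the sample counts and window widths below are placeholders, not the instrument's actual values:

```python
# Illustrative de-multiplexing: split one laser period of digitized PMT
# samples into 30 equal temporal windows and integrate each window into its
# own channel (axial plane).
samples_per_period = 120                             # assumed samples per laser period
trace = [i % 7 for i in range(samples_per_period)]   # stand-in PMT samples
win = samples_per_period // 30                       # samples per channel window
channels = [sum(trace[k * win:(k + 1) * win]) for k in range(30)]
```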
Neuropil subtraction was facilitated using the global background feature of CaImAn with three components. Extended Data Fig. 6d shows example local neuropil traces (magenta lines) from neurons in the data set shown in Fig. 2 as well as the resultant traces after subtraction (black lines). Finally, neuronal footprints were screened using the ‘mn’ and ‘mx’ options in CaImAn such that components larger than the area expected for a 20 μm diameter neuron, or smaller than that of a 10 μm diameter neuron, in the equivalent pixel space, were eliminated. The detected neurons from each plane in the volume were subsequently collated. The lateral positions of neuronal footprints were corrected for plane-to-plane offset using calibration values determined by recordings of pollen grains (Extended Data Fig. 4d ). For cases where components in adjacent planes were temporally correlated above the default CaImAn threshold (0.8) and also had any spatially overlapping voxels, the time series and footprints were merged into a single component. First order moments in the x , y , and z directions were used to determine the centroids of each neuronal component. The field curvature imposed by the microscope was corrected using a parabolic profile with a −158 μm offset at the periphery of the full FOV 18 . Data analysis Correlations between neuronal activity and stimuli were analyzed by correlating the time series of each neuron with the corresponding stimulus vector, generated by convolving a time series composed of the onset of each stimulus or behavior with the expected kernel of the calcium indicator (see the final panel of Extended Data Fig. 5 ). This kernel had an exponential rise time of 200 ms and a decay of 550 ms, in agreement with the literature values for GCaMP6s 6 . All correlations considered between stimulus vectors and neuronal time series were Pearson type and used the raw time-series data rather than the deconvolved traces from CaImAn.
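A minimal, pure-Python sketch of this stimulus-vector construction; the double-exponential kernel parameterization and the onset frames are assumptions, while the 200 ms rise, 550 ms decay, and ~4.7 Hz volume rate come from the text:

```python
import math

fs = 4.7                       # volume rate (Hz)
dt = 1.0 / fs
# Calcium kernel with a 200 ms exponential rise and 550 ms decay (GCaMP6s);
# the product-of-exponentials form is an assumption.
kernel = [(1 - math.exp(-k * dt / 0.2)) * math.exp(-k * dt / 0.55)
          for k in range(24)]
peak = max(kernel)
kernel = [v / peak for v in kernel]

onsets = [0.0] * 100           # hypothetical recording, 100 volumes long
for i in (10, 40, 70):         # hypothetical stimulus onset frames
    onsets[i] = 1.0

# Discrete convolution, truncated to the recording length.
stim_vec = [sum(onsets[n - k] * kernel[k]
                for k in range(min(n + 1, len(kernel))))
            for n in range(len(onsets))]
```

Each neuron's raw time series is then Pearson-correlated against a vector built this way.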
The lag between the neuronal time series and each stimulus vector was defined as the time for which the cross-correlation between each trace and vector was maximized. For determining stimulus-tuned populations (Fig. 3b–d and Extended Data Fig. 9a–c ), the median value of the distribution of lags was applied as an offset to each time series prior to determining correlation. For the temporal analysis in Fig. 3r,s , the relative lag values for each individual behavior-tuned neuron are shown with respect to the median lag value. Null-hypothesis testing was conducted by creating a time series with a number of randomly shuffled ‘stimuli’ equal to the number presented during a typical recording. For uninstructed behaviors, shuffling was achieved by circularly shifting each trace in the data set by a random value to remove temporal structure. The threshold for significant correlation with visual stimuli, whisker stimuli, or uninstructed animal behaviors was determined by fitting the shuffled correlations, r , to a normal distribution given by \(p\left( r \right) = e^{ - r^2/2\sigma ^2}\) . Correlations with stimuli for which r > 3 σ were considered highly correlated, while correlations below σ were deemed insignificant. Hierarchical clustering was performed using Ward’s method with the Euclidean distance metric via the MATLAB function ‘linkage’. For the mixed representation analysis in Fig. 3j–o , the activity of each trial was defined as the integration of the time series of each neuron in a 5-second window following the presentation of the stimulus. Significance of the change in a neuron’s activity was determined with a t -test comparing the activity of all trials with the stimulus presented alone to those where the stimulus was presented coincidently with another stimulus. Neurons with P < 0.05 were considered to have significant change in activity.
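A sketch of this shuffle-based thresholding; the synthetic null correlations and their spread are placeholders, while the 3σ and σ cutoffs come from the text:

```python
import random
import statistics

random.seed(0)

def circular_shuffle(trace):
    """Circularly shift a trace by a random offset to break temporal structure."""
    s = random.randrange(len(trace))
    return trace[s:] + trace[:s]

# Stand-in for correlations between shuffled traces and a stimulus vector.
null_r = [random.gauss(0.0, 0.05) for _ in range(10000)]

# For the zero-mean Gaussian p(r) = exp(-r**2 / (2 * sigma**2)), the fitted
# sigma is simply the standard deviation of the null correlations.
sigma = statistics.pstdev(null_r)
highly_correlated_above = 3 * sigma   # r > 3*sigma: highly correlated
insignificant_below = sigma           # r < sigma: insignificant
```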
Animal statistics and imaging power A total of n = 6 male and female animals transgenically expressing GCaMP6s in glutamatergic neurons (vGluT1-cre × fl-GCaMP6s, pCAG promoter, Jackson Labs stock numbers 031562 and 034422, respectively) 6 , 38 , and n = 3 male and female animals expressing jGCaMP7f 42 through viral transfection were imaged during experiments. We used 150 – 200 mW of power to image with sufficient signal-to-noise ratio in jGCaMP7f-expressing mice, and 200 – 450 mW in transgenic animals with large (8 mm diameter) cranial windows. The power required for imaging is, at least partially, a function of the labeling strategy employed: Sparser labeling strategies (that is, those based on local viral injection, cell-type specific promoters, or other targeted labeling methods) result in less out-of-focus background fluorescence and thus higher SNR can be achieved for lower imaging power relative to imaging in animals for which denser labeling strategies (that is, transgenic labeling) are utilized. All imaging conditions used were determined to be within safe limits for brain heating through immunohistochemical labeling experiments and brain temperature simulations (Supplementary Note 5 ). Extended Data Fig. 8b–d,h–j shows distributions of typical transient activity in GCaMP6s-expressing mice, including peak activity, baseline noise levels, and typical transient decay times. The characterizations are consistent with expectations for two-photon imaging. Extended Data Fig. 8d,j show the distributions of maximum Z -scores for the neuronal data sets shown in Figs. 2 and 5a . Z -scores were calculated by applying a three-point moving average to each neuron’s time series, finding the maximum value and normalizing by the baseline noise. 
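A sketch of that Z-score computation (the function name and inputs are illustrative):

```python
def max_zscore(trace, baseline_sigma):
    # 3-point moving average: an isolated single-sample spike is attenuated
    # threefold, while a real multi-sample transient keeps its amplitude.
    smoothed = [(trace[i - 1] + trace[i] + trace[i + 1]) / 3
                for i in range(1, len(trace) - 1)]
    return max(smoothed) / baseline_sigma
```

With these definitions an isolated one-frame spike of amplitude 9σ scores only 3, while a three-frame transient of the same amplitude scores the full 9, which is what makes the Z > 3σ cutoff robust to single-sample noise.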
The moving average in this instance ensures that we are measuring the robustness of all consecutive data points within the kernel of the indicator (200 ms rise time and 550 ms decay time for GCaMP6s sampled at a 4.7-Hz volume rate implies 3 samples within each transient) relative to noise, and not inflating the significance of isolated fluctuations in the data. At an SNR threshold of 1.4, the cutoff of the distribution is such that neurons in the data set have activity exceeding at least three standard deviations of the baseline, indicating low likelihood of false-positive ROIs being classified as neurons. Extended Data Fig. 8f,k shows the distributions of nearest neighbor separations between neurons in the data set shown in Figs. 2 and 5a . The majority of pair-wise distances occur between 10 and 20 μm, in agreement with expectations for cortical neuronal density 51 , 56 . Apparatus for stimulus delivery and behavioral tracking Visual and somatosensory stimuli were controlled via a synchronization script running in parallel to ScanImage implemented on a microcontroller (Arduino Uno). A portion of the voltage used to open the laser shutter was read in by the microcontroller, triggering the beginning of a recording epoch and synchronizing the microcontroller clock to the ScanImage frame clock. For whisker stimulation, a motor shield and servo motor were used to move a brush forward and backward over the animals’ whiskers at time intervals indicated by the stimulation protocol. The brush size and its proximity were chosen to stimulate all whiskers simultaneously (as opposed to stimulation of specific whiskers), and stimulation was applied contralaterally to the hemisphere being recorded by the microscope. For visual stimuli, the microcontroller sent a 5 V TTL trigger signal to the control computer. 
A parallel MATLAB program read in these trigger signals and generated a series of images on a secondary external monitor (Waveshare 11.6″ LCD) placed ~20 cm from the animal’s eyes. For each trigger signal, a 500-ms duration movie was displayed on the monitor consisting of a binary drifting grating pattern at the full dynamic range of the screen. The position of the screen was chosen to cover 72° of the animal’s field of view horizontally and 43° vertically. The grating spatial frequency was 0.07 cycles per degree, and the rate of drift was 1 cycle/second. The orientation of the grating followed a pattern of 0° (horizontal), 45°, 90° (vertical), and 135° and was repeated in this pattern for all stimuli during the recordings. All rodents were head-fixed on a home-built treadmill with a rotation encoder affixed to the rear axle (Broadcom, HEDS-5540-A02) to measure the relative position of the tread during the recordings. Treadmill position, the microcontroller clock value, and the onset of either a visual or whisker stimulus were streamed to the control computer via a serial port connection and logged with a separate data logging script. The data logging script also triggered a camera (Logitech 860-000451) in order to capture additional animal behavior during recordings. Motion tracking of the rodent’s left and right forelimbs and right hindlimb was facilitated using DeepLabCut 57 , 58 . An example recording of the animal with motion tracking superimposed on top is shown in Supplementary Video 1 . Stimulation was presented at 5-second intervals such that the calcium signal from correlated transients had sufficiently decayed before the onset of the next stimulus. Data visualization All time-series data are displayed with a moving average corresponding to a 1-second time interval along the temporal axis to improve transient visibility. Calcium trace heatmaps are individually normalized to improve visualization.
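The drifting-grating stimulus described above (binary, 0.07 cycles per degree, drifting at 1 cycle per second over a 72° × 43° extent) can be sketched as follows; the coordinate conventions and function name are illustrative:

```python
import math

def grating_value(x_deg, y_deg, t_s, orient_deg, sf=0.07, tf=1.0):
    """Binary drifting grating: sf in cycles/degree, tf (drift rate) in cycles/s."""
    theta = math.radians(orient_deg)
    # position along the drift axis, perpendicular to the grating bars
    u = x_deg * math.cos(theta) + y_deg * math.sin(theta)
    return 1.0 if math.sin(2 * math.pi * (sf * u - tf * t_s)) >= 0 else 0.0

# One frame at t = 0.25 s, covering the 72 x 43 degree extent of the display
frame = [[grating_value(x, y, 0.25, 45.0) for x in range(72)] for y in range(43)]
```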
For 3D visualization, equally sized spheres in the data set were rendered using the ‘scatter3’ function in MATLAB. For Supplementary Videos 2 , 8 , and 10 , the top ~57,000, ~33,000, and ~150,000 most active neurons, respectively, are visualized and their time series are individually normalized, with the opacity of the representative sphere increasing with transient activity. For the volume projection images in Figs. 2a , 4d , and 5a in the manuscript, the top ~207,000, ~70,000, and ~266,000 most active neurons are rendered, respectively, and the color of each sphere represents the maximum projection of the corresponding neuron’s time series with the color bar and opacity of each representative sphere adjusted for maximum visibility of the most active neurons. Imaging power and immunohistochemical validation We used 150 – 200 mW of power to image FOVs of 0.6 × 0.6 × 0.5 mm with sufficient signal-to-noise ratio in jGCaMP7f-expressing mice. In transgenic mice, power was restricted to <250 mW in small FOVs (0.6 × 0.6 × 0.5 mm, 2 × 2 × 0.5 mm) to remain within previously established thresholds for brain safety. For large FOV recordings (3 × 5 × 0.5 mm, 5.4 × 6 × 0.5 mm), power ranged from 200 to 450 mW. To further validate any possible neuropathological responses associated with these intensities and absolute power levels when delivered within the large volumetric FOVs and cranial windows, we employed immunohistochemical labeling. Brain sections were immunostained for the astrocyte activation marker GFAP following imaging under various laser intensities. All experiments were conducted at least 2 weeks after cranial window surgery. Awake head-fixed mice (wild-type) were subjected to various laser powers and scanning FOVs (Extended Data Fig. 10 ) for 9 minutes continuously at a depth of ~600 µm below the surface of the brain. 
For verification, we included a negative control condition corresponding to animals that had undergone cranial window implantation but had not been exposed to laser power in the region of the brain considered. As a positive control, we imaged with 360 mW of power in a FOV of 0.4 mm, exceeding previously established limits for brain safety 24 . To make full use of the 8 mm cranial window and each animal, both hemispheres of each mouse were used for separate experiments, with negative control and low exposure conditions contralateral to high exposure and positive control conditions. Sixteen hours after scanning, mice were deeply anaesthetized with isoflurane (4%, flow rate of 0.5–0.7 l/minute) and transcardially perfused with cold phosphate-buffered saline (PBS) followed by 4% PFA (VWR International, 15710). Brains were extracted and placed in PFA for 24 hours and then transferred to 30% sucrose/PBS solution at 4 °C. Coronal sections (30-µm thickness) were collected from within and around the scanning FOV site using a cryostat (Leica Biosystems). Brain sections were permeabilized using 0.2% Triton X-100/PBS (PBST) for a 1-hour incubation period, followed by a blocking solution of 5% normal goat serum (NGS) in PBST for 1 hour. Sections were then incubated in primary mouse GFAP antibody (Protein Tech, 60190-1-Ig) (1:800) in PBST + 2% NGS for 24 hours at 4 °C. Sections were then washed 3 times with PBS for 20 minutes per wash, followed by an incubation period in Alexa-594-conjugated goat anti-mouse antibody (Abcam, ab150116) (1:1,000) for 2 hours at room temperature. Sections were washed again 3 times with PBS for 20 minutes per wash with Hoechst 33342 (Invitrogen, H3570) (1:2,000) being added during the last wash, before being mounted on slides and coverslipped using anti-fade mounting medium (Invitrogen, P10144). Brain sections were imaged at ×20 magnification using a resonant-scanning confocal microscope (Caliber I.D, RS-G4). Images were analyzed using FIJI.
Relative fluorescence intensity was quantified by measuring the mean fluorescent intensity in a 1 × 1 mm axial area centered within the imaging FOV and dividing this measurement by the mean intensity of equivalently sized areas within the control hemispheres. Brain heating simulations We used a finite difference model 59 to simulate laser-induced heating, thermal conductivity, and homeostatic cooling through blood perfusion. Additionally, we used modifications 60 to account for a scanned focal plane and heat conduction through the cranial window and immersion water. All simulations used an 8-mm cranial window, with the exception of those in Extended Data Fig. 10j , where the window size varies from 3 to 8 mm. The boundary conditions of the model were adjusted to assume a constant temperature of 25 °C at a distance 1.5 mm above the surface of the cranial window. We used a voxel size of 0.01 mm for light diffusion and 0.03 mm for heat diffusion, a time step of 0.16 ms and an optical wavelength of 960 nm. Material constants for glass and water were obtained from published tables. Animal subjects and surgical procedures All surgical and experimental procedures were approved by the Institutional Animal Care and Use Committee of The Rockefeller University. Male and female adult C57BL/6J mice were supplied by Jackson Laboratory; VGlut-IRES-Cre × Ai162 crossed mice were bred in house. All mice were 28 – 70 days of age at the time of the first procedure, and were 49–291 days old during imaging experiments. Mice were allowed food and water ad libitum. In C57BL/6J mice, expression was achieved through injection of a genetically expressed calcium indicator adeno-associated virus (AAV9-syn-jGCaMP7f) at ~1–2 weeks prior to cranial window implantation following the procedure outlined in previous works 25 . 
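The heating simulations described above can be illustrated by a minimal 1-D finite-difference sketch in the spirit of a Pennes-type bioheat update (diffusion plus perfusion cooling toward arterial temperature). All material constants and the source term below are placeholders; only the 0.03 mm heat-diffusion voxel and 0.16 ms time step come from the text:

```python
# 1-D explicit finite-difference update: heat diffusion, a localized laser
# heating term, and homeostatic cooling through blood perfusion.
n = 50                          # grid points
dx = 0.03e-3                    # heat-diffusion voxel size (m)
dt = 0.16e-3                    # time step (s)
alpha = 1.4e-7                  # thermal diffusivity of tissue (m^2/s), assumed
w = 0.01                        # perfusion cooling rate (1/s), assumed
T_art = 37.0                    # arterial blood temperature (deg C)

T = [37.0] * n
q = [0.0] * n
q[n // 2] = 50.0                # localized heating rate (K/s), placeholder

for _ in range(1000):           # simulate 0.16 s
    Tn = T[:]
    for i in range(1, n - 1):   # boundaries held at 37 deg C
        lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
        Tn[i] = T[i] + dt * (alpha * lap + q[i] - w * (T[i] - T_art))
    T = Tn
```

The explicit scheme is stable here because alpha*dt/dx**2 ≈ 0.025 is well below the 1/2 limit; the real simulations additionally model light diffusion, the cranial window, and the immersion water.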
During cranial window implantation, mice were anesthetized with isoflurane (1–1.5% maintenance at a flow rate of 0.7–0.9 l/min) and placed in a stereotaxic frame (RWD Life Science). The scalp was removed, and the underlying connective tissue was cleared from the skull. A custom-made stainless-steel head bar was fixed behind the occipital bone with cyanoacrylate glue (Loctite) and covered with black dental cement (Ortho-Jet, Lang Dental). For smaller windows, circular craniotomies (4-mm diameter) were performed over the desired imaging site. For larger windows, either a D-shaped single-hemisphere 4 × 8 mm craniotomy, or a circular 8 mm diameter dual-hemisphere craniotomy was performed. A ~1 mm segment of the skull furthest posterior within the bounds of the diameter of the craniotomy was kept intact to avoid the junction of the sagittal and transverse sinus vessels while drilling. A circular 4 mm or 8 mm glass coverslip, or a D-shaped 4 × 8 mm glass coverslip, with 1 mm of the bottom removed (#1 thickness, Warner Instruments) was implanted in the craniotomy site and sealed in place with tissue adhesive (Vetbond). The exposed skull surrounding the cranial window was covered with a layer of cyanoacrylate glue and then dental cement. Post-operative care consisted of 3 days of subcutaneous delivery of meloxicam (0.125 mg/kg), antibiotic-containing feed (LabDiet no. 58T7), and meloxicam-containing (0.125 mg/tablet) food supplements (Bio-Serv no. MD275-M). After surgery, animals were returned to their home cages and were given at least one week for recovery before being subjected to imaging experiments. Mice with damaged dura or unclear windows were euthanized and were not used for imaging experiments. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The raw image data presented in this work is currently too large for sharing via typical public repositories. 
It is available from the corresponding author upon reasonable request. Source data are provided with this paper. Code availability Stimulus delivery and treadmill control were implemented with a combination of MATLAB, Python, and Arduino scripts. Neuronal segmentation and non-rigid motion correction were based on the CaImAn 53 , 54 and NoRMCorre 52 software packages, respectively, and implemented using MATLAB. All custom code, including pipelines based on CaImAn and NoRMCorre, is publicly available on the Vaziri lab GitHub repository ( ). Change history: 08 November 2021: A Correction to this paper has been published. | Capturing the intricacies of the brain's activity demands resolution, scale, and speed—the ability to visualize millions of neurons with crystal clear resolution as they actively call out from distant corners of the cortex, within a fraction of a second of one another. Now, researchers have developed a microscopy technique that will allow scientists to accomplish this feat, capturing detailed images of activity of a vast number of cells across different depths in the brain at high speed and with unprecedented clarity. Published in Nature Methods, the research demonstrates the power of this innovation, dubbed light beads microscopy, by presenting the first vivid functional movies of the near-simultaneous activity of one million neurons across the mouse brain. "Understanding the nature of the brain's densely interconnected network requires developing novel imaging techniques that can capture the activity of neurons across vastly separated brain regions at high speed and single-cell resolution," says Rockefeller's Alipasha Vaziri. "Light beads microscopy will allow us to investigate biological questions in a way that had not been possible before."
A focus on microscopy Whether it's whiskers that seek hazards by flicking to and fro, or hand-eye-coordination that helps a human hit a baseball, animals rely upon the call and response of the sensory, motor, and visual regions of the brain. Cells from far reaches of the cortex coordinate this feat through a web of neuroactivity that weaves distant regions of the brain into interconnected symphonies. Scientists are only now beginning to untangle this web, with the help of cutting-edge microscope technology. The combination of two-photon scanning microscopy and fluorescent tags is the gold standard when it comes to imaging the activity of neurons within less transparent brain tissues, which are prone to scattering light. It involves firing a focused laser pulse at a tagged target. A few nanoseconds after the pulse hits its mark, the tag emits fluorescent light that can be interpreted to give scientists an idea of the level of neuroactivity detected. But two-photon microscopy suffers from a fundamental limitation. Neurobiologists need to record simultaneous interactions between the sensory, motor, and visual regions of the brain, but it is difficult to capture the activity in such a broad swath of the brain without sacrificing resolution or speed. The neuroactivity of one million neurons in the mouse brain, at unprecedented resolution. Credit: Alipasha Vaziri. Designing an ideal microscope for visualizing interactions between far apart brain regions can feel like plugging holes in a sinking ship. In the interests of high resolution, scientists often must sacrifice scale—or zoom out to take in the larger structure, at the cost of resolution. This can be overcome by snapping a series of high-resolution images from distant corners of the brain separately, later stitching them together. But then speed becomes an issue. "We need to capture many neurons at distant parts of the brain at the same time at high resolution," Vaziri says. 
"These parameters are almost mutually exclusive." An innovative resolution Light beads microscopy offers a creative solution and pushes the limits of imaging speed to what's maximally obtainable, limited only by the physical nature of fluorescence itself. This is done by eliminating the "deadtime" between sequential laser pulses, during which no neuroactivity is recorded, as well as the need for scanning. The technique involves breaking one strong pulse into 30 smaller sub-pulses, each at a different strength, that dive into 30 different depths of scattering mouse brain but induce the same amount of fluorescence at each depth. This is accomplished with a cavity of mirrors that staggers the firing of each pulse in time and ensures that they can all reach their target depths via a single microscope focusing lens. With this approach, the only limit to the rate at which samples can be recorded is the time that it takes the fluorescent tags to flare. That means broad swaths of the brain can be recorded within the same time it would take a conventional two-photon microscope to capture a mere smattering of brain cells. Vaziri and colleagues then put light beads microscopy to the test by integrating it into a microscopy platform that allows for optical access to a large brain volume, enabling the recording of the activity of more than one million neurons across the entire cortex of the mouse brain for the first time. Because Vaziri's method is an innovation that builds on two-photon microscopy, many labs already have or can commercially obtain the technologies necessary to perform light beads microscopy, as described in the paper. Labs that are less familiar with these techniques could benefit from a simplified, self-contained module that Vaziri is currently developing for more widespread use. "There's no good reason why we didn't do this five years ago," he says. "It would have been possible—the microscope and laser technology existed. No one thought of it."
Ultimately, the goal is to complement rather than replace current techniques. "There are neurobiological questions for which the standard two-photon microscope is sufficient," Vaziri says. "But light beads microscopy allows us to address questions that existing methods cannot." | 10.1038/s41592-021-01239-8 |
Medicine | How intestinal bacteria use our dying cells as fuel | Christopher J. Anderson et al, Microbes exploit death-induced nutrient release by gut epithelial cells, Nature (2021). DOI: 10.1038/s41586-021-03785-9 Journal information: Nature | http://dx.doi.org/10.1038/s41586-021-03785-9 | https://medicalxpress.com/news/2021-08-intestinal-bacteria-dying-cells-fuel.html | Abstract Regulated cell death is an integral part of life, and has broad effects on organism development and homeostasis 1 . Malfunctions within the regulated cell death process, including the clearance of dying cells, can manifest in diverse pathologies throughout various tissues including the gastrointestinal tract 2 . A long-appreciated, yet elusively defined, relationship exists between cell death and gastrointestinal pathologies with an underlying microbial component 3 , 4 , 5 , 6 , but the direct effect of dying mammalian cells on bacterial growth is unclear. Here we advance a concept that several Enterobacteriaceae, including patient-derived clinical isolates, have an efficient growth strategy to exploit soluble factors that are released from dying gut epithelial cells. Mammalian nutrients released after caspase-3/7-dependent apoptosis boost the growth of multiple Enterobacteriaceae; this effect is observed using primary mouse colonic tissue, mouse and human cell lines, several apoptotic triggers, and in conventional as well as germ-free mice in vivo. The mammalian cell death nutrients induce a core transcriptional response in pathogenic Salmonella , and we identify the pyruvate formate-lyase-encoding pflB gene as a key driver of bacterial colonization in three contexts: a foodborne infection model, a TNF- and A20-dependent cell death model, and a chemotherapy-induced mucositis model.
These findings introduce a new layer to the complex host–pathogen interaction, in which death-induced nutrient release acts as a source of fuel for intestinal bacteria, with implications for gut inflammation and cytotoxic chemotherapy treatment. Main Apoptosis is the primary form of regulated cell death during development and homeostasis 7 , although pyroptosis and necroptosis have important roles in various disease contexts 8 , 9 . A fascinating, yet loosely defined, relationship exists between gastrointestinal pathologies, mammalian cell death and enteric bacteria 3 , 4 , 6 . Many foodborne bacterial pathogens, including the non-typhoidal serovars of Salmonella enterica ( Salmonella ), directly or indirectly induce mammalian cell death 5 . In addition, patients with inflammatory bowel disease (IBD) have increased levels of apoptosis 10 and ‘dysbiotic’ enrichment of Enterobacteriaceae 11 . Furthermore, chemotherapeutic agents cause gastrointestinal mucositis 12 that increases the risk of developing infections 13 . Although a clear relationship exists between increases in Proteobacteria (including Enterobacteriaceae) and diseases such as IBD and intestinal cancers 14 , a direct link between dying mammalian cells and bacterial outgrowth remains unexplored. Notably, dying mammalian cells release soluble factors that act as signals between mammalian cells 15 . Given the close relationship between mammalian cell death and bacteria-centric pathologies, we tested whether mammalian intercellular signalling molecules released from apoptotic cells might provide direct fuel for bacterial growth. Apoptosis promotes bacterial growth We established primary colonocyte culture explants and induced cell death via treatment with staurosporine or doxorubicin (Fig. 1a ). Cultured explants maintained general architecture and underwent apoptotic cell death after treatment with staurosporine (Fig. 1a , Extended Data Fig. 1a–d ). 
Apoptotic supernatants collected from these explants significantly enhanced Salmonella growth (Fig. 1b ). Notably, increased Salmonella growth after exposure to apoptotic supernatants compared with fresh medium suggests that dying cells actively ‘secrete’ caspase-dependent growth-promoting factors into the medium (Fig. 1b ). The heterogeneous explant system contained non-epithelial cells, including CD45 + myeloid cells; however, the total CD45 + fraction as well as the dying (CD45 + TUNEL + ) populations did not increase after staurosporine treatment (Extended Data Fig. 1b ). Fig. 1: Regulated mammalian cell death enhances bacterial growth. a , Schematic of ex vivo approach and cleaved caspase 3 images from n = 3 independent experiments. KD, kinase dead; KO, knockout. Scale bar, 200 μm. b , Colony forming units (CFU) of Salmonella in fresh medium ( n = 7), live supernatants ( n = 5) or staurosporine (Stauro)-treated supernatants ( n = 5). Explants were treated with the pan-caspase inhibitor quinoline-Val-Asp-difluorophenoxymethylketone (QVD) ( n = 8), staurosporine ( n = 11), or QVD and staurosporine ( n = 8). c , Salmonella CFU in supernatants of Vil-cre +/− Casp3/7 fl/fl explants ( n = 7 live, n = 8 stauro). d , Salmonella CFU in supernatants of Vil-cre +/− Casp3/7 fl/fl ( n = 7) or Casp3/7 fl/fl control (Ctrl) ( n = 6) explants. Middle, n = 10 medium, n = 12 control, n = 7 RIPK1 KD, n = 4 Mlkl −/− . Right, gasderminD (GasD) knockout ( n = 6), Casp1/11 −/− ( n = 7), control ( n = 9). e , Left, Salmonella CFU ( n = 9 medium, apop. + QVD; n = 10 live, apop. (FADD)). Middle, bacterial CFU ( n = 4). Right, bacterial CFU ( n = 4). Box and whiskers ( b – e ) show minimum to maximum values with all independent replicates, centre denotes median, and the bounds denote the 25th to 75th percentiles. NS, not significant ( P > 0.05). 
* P ≤ 0.05, ** P ≤ 0.005, *** P ≤ 0.0005, one-way ANOVA with Tukey’s multiple comparisons test ( b , d (middle and right), e (left)), unpaired two-tailed Student’s t -test ( c , d (left)), multiple two-tailed Student’s t -tests ( e (middle and right)). Source data . To more directly test epithelial cell apoptosis, we used colonic explants from Vil-cre +/− Casp3/7 fl/fl mice ( Vil is also known as Vil1 ), in which executioner caspase-3 and caspase-7 are both deleted specifically in epithelial cells via the Vil-cre construct 16 . In contrast to control mice, staurosporine-treated Vil-cre +/− Casp3/7 fl/fl colonocytes were unable to enhance Salmonella growth (Fig. 1c ). Doxorubicin treatment also induced apoptosis, and these supernatants enhanced Salmonella outgrowth despite the direct antimicrobial action of doxorubicin (Extended Data Fig. 1e–g ). As recent studies have highlighted the interconnectivity of different regulated cell death pathways 17 , we also tested mice that lack specific components of lytic cell death pathways. Doxorubicin-treated colonic explants from mice defective for necroptotic or pyroptotic cell death did not restrict Salmonella growth (Fig. 1d ), which demonstrates the necessity and specificity for caspase-dependent intestinal epithelial cell apoptosis in promoting Salmonella growth. We next induced apoptotic cell death in colonic epithelial cell lines, collected the cell-free supernatants, and tested bacterial growth (Extended Data Fig. 2a–c , Supplementary Figs. 1 , 2 ). Supernatants from cells undergoing apoptosis via FADD dimerization (CT26:FADD cell clones) 18 , UV irradiation or staurosporine treatment significantly enhanced Salmonella growth (Fig. 1e , Extended Data Fig. 2d ). Apoptotic supernatants similarly enhanced the growth of a human intestinal commensal strain of Escherichia coli 19 , the systemic pathogen Klebsiella pneumoniae 20 and clinical E.
coli isolates obtained from patients with Crohn’s disease (LF82) 21 , colorectal cancer (CCR20) 22 and urinary tract infection (UTI189) 23 (Fig. 1e ). Increased bacterial growth was observed over a time course, not influenced by culture medium, replicated after treatment with the caspase-3 activator PAC-1, and conserved in human colonic epithelial cells (Extended Data Figs. 3 a–g, 4a, b ). Apoptotic colonic cell lines display heterogeneity in membrane permeability (Extended Data Figs. 2 b, c, 4a, b ), whereas Jurkat cells, an immortalized human T cell line, do not (Extended Data Fig. 4c ). Apoptotic Jurkat supernatants stimulated increased Salmonella growth (Extended Data Fig. 4c ), which suggests that factors involved in promoting bacterial growth are conserved across mammalian cell types, and are less dependent on membrane permeability. Notably, forced necrosis and complete lysis of colonic cells did not enhance Salmonella growth (Extended Data Fig. 4d ), which indicates the requirement of regulated apoptosis. We then asked what type of factors within apoptotic supernatants promoted bacterial growth. Supernatants from apoptotic cells contained similar levels of total protein, and a rigorous filtration strategy to remove extracellular vesicles and most supernatant proteins demonstrated that soluble factors less than 3 kDa in size were sufficient to boost Salmonella growth (Extended Data Fig. 5a–c ). The soluble mammalian factors were insensitive to proteinase K treatment, insensitive to temperature denaturation, and independent of serum or phenol red (Extended Data Fig. 5d–g ). Thus, small molecules or metabolites less than 3 kDa in size released from apoptotic colonocytes act as nutrients that augment pathogenic, commensal and opportunistic Enterobacteriaceae growth; we term this process death-induced nutrient release (DINNR). 
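The growth comparisons above are evaluated with one-way ANOVA (see the Fig. 1 legend); the underlying F statistic is simply the between-group mean square divided by the within-group mean square. A minimal pure-Python sketch of that calculation, using invented log10 CFU values rather than data from this study:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: the between-group mean square
    divided by the within-group mean square."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented log10 CFU values for three conditions (not data from this study):
medium = [6.1, 6.0, 6.2]
live = [6.2, 6.1, 6.3]
apoptotic = [7.4, 7.6, 7.5]
f_stat = one_way_anova_f(medium, live, apoptotic)
print(f_stat)  # a large F: group means differ far more than replicates scatter
```

In practice the F statistic would be passed to an F distribution (and followed by Tukey's post hoc test, as in the figure legends) to obtain the reported P values.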
DINNR induces a specific transcriptome We performed RNA sequencing (RNA-seq) analysis of Salmonella to address how apoptotic nutrients may promote bacterial growth. We identified eight Salmonella genes that were similarly regulated across in vitro systems (Extended Data Fig. 6a ). Three of the target genes are annotated as genes of unknown function and three additional genes ( fljA , adiY and soxS ) are annotated as transcriptional regulators (Extended Data Fig. 6a ). By contrast, two target genes had a high potential to directly contribute to growth. The first, cadB (SL1344_2520), encodes a lysine/cadaverine antiporter involved in the regulation of bacterial cytosolic pH 24 . The second, pflB (SL1344_0910), is a pyruvate formate-lyase that produces acetyl-CoA and formate from pyruvate 25 . Apoptotic supernatants significantly increased the expression of cadB (Extended Data Fig. 6b ) and pflB (Fig. 2a ) in Salmonella but not in E. coli (Extended Data Fig. 6d ), which suggests that there is species variation in the transcriptional response to apoptotic supernatant. Although deletion of the cadBA operon did not affect Salmonella growth (Extended Data Fig. 6c ), growth of the Δ pflB strain was significantly reduced in apoptotic supernatants (Fig. 2b , Extended Data Fig. 6e ), which suggests a direct role for PflB in promoting growth. Fig. 2: Mammalian cell death nutrients promote pflB expression and growth in Salmonella . a , Salmonella pflB expression. n = 8 (medium, apop. FADD), n = 7 (apop. QVD), n = 3 (medium, live, apop. UV), n = 4 (medium, live, apop. stauro). b , CFU of wild-type or Δ pflB mutant Salmonella (CJA071). n = 4. c , Pyruvate concentration in medium, live, apoptotic or necrotic supernatants. n = 4 (medium), n = 6 (medium + fetal bovine serum (FBS)), n = 8 (live), n = 13 (apop. stauro, apop. UV), n = 4 (freeze−thaw). d , Schematic for pyruvate production in mammalian cells.
e , Pyruvate concentration in live, UV or UV plus shikonin supernatants, shown as the fold change relative to live supernatants. n = 4. f , CFU of wild-type Salmonella . n = 7 (apop. UV), n = 9 (apop. UV + shikonin). g , CFU of wild-type Salmonella . n = 12 (apop. stauro), n = 10 (apop. stauro + shikonin), n = 8 (apop. stauro + shikonin + pyruvate). h , CFU of wild-type or Δ pflB mutant Salmonella (CJA071). n = 5. Box plots are as in Fig. 1 . Data are mean ± s.e.m. NS, P > 0.05. * P ≤ 0.05, ** P ≤ 0.005, *** P ≤ 0.0005, one-way ( a , c , g ) or two-way ( b ) ANOVA with Tukey’s multiple comparisons test, one-way ANOVA with Dunnett’s multiple comparisons test ( d ), unpaired two-tailed t -test ( f ), two-way ANOVA with Sidak’s multiple comparisons test ( g ). Source data . PflB is a pyruvate formate-lyase, and we observed that apoptotic cells released significantly higher levels of pyruvate (Fig. 2c ) but not formate (Extended Data Fig. 6f ). Notably, necrotic cells did not release increased amounts of pyruvate (Fig. 2c ), which suggests that pyruvate synthesis and release during cell death is probably a regulated process. Treatment of mammalian cells with the pyruvate kinase M2 (PKM2) inhibitor shikonin 26 (Fig. 2d ) did not inhibit cell death (Extended Data Fig. 6g ), yet significantly reduced the amount of secreted pyruvate (Fig. 2e ). Apoptotic cell supernatants from shikonin-treated cells significantly reduced the growth of wild-type Salmonella (Fig. 2f ). Furthermore, PflB-dependent Salmonella growth was ablated in shikonin-treated apoptotic supernatants, but could be restored with exogenous pyruvate (Fig. 2g, h ). Together, these results directly link mammalian-produced pyruvate during apoptosis to PflB-dependent growth in Salmonella . Apoptotic cells release a core group of six metabolites via the pannexin-1 (Panx1) membrane channel 15 .
Jurkat cells that express a dominant-negative form of Panx1 27 undergo comparable apoptosis, but fail to release Panx1-dependent metabolites 15 (Extended Data Fig. 7a ). Notably, PflB-dependent growth of Salmonella was significantly blunted in apoptotic supernatants from cells expressing dominant-negative Panx1 (Extended Data Fig. 7a ), which suggests a link between Panx1-dependent metabolites and Salmonella PflB. Indeed, a mixture of the six Panx1-dependent metabolites potently promoted Salmonella growth (Extended Data Fig. 7b ). Specifically, the combination of two metabolites—UDP-glucose and fructose-1,6-bisphosphate (FBP)—was sufficient to promote PflB-dependent outgrowth (Extended Data Fig. 7b, c ). Pathogenic and commensal strains of E. coli displayed similar enhanced growth in the presence of Panx1-dependent metabolites, highlighting a conserved utilization of these metabolites across members of the Enterobacteriaceae at concentrations detected in apoptotic supernatants (Extended Data Fig. 7d, e ). Death responsive genes drive colonization To test the relevance of the cadBA operon and pflB in a model of foodborne Salmonella infection, we used the streptomycin pretreatment mouse model 28 . After competitive infections, in which mice are infected with equal numbers of wild-type and mutant strains of Salmonella , the relative ratio (colonization fitness) of each Salmonella strain was compared (Fig. 3a ). The Δ cadBA strain was less fit than wild-type Salmonella (Extended Data Fig. 8a ), consistent with previous work 29 . In addition, the Δ pflB strain was markedly less fit in colonization compared with wild-type Salmonella , which correlated with a significant increase in pyruvate concentrations in infected mice (Fig. 3b, c ). Even at earlier stages of infection, wild-type Salmonella was significantly more fit than the Δ pflB strain in germ-free mice (Extended Data Fig. 8b ). 
The reduced colonization in germ-free large intestines suggests that PflB is most critical for bacterial competition in the presence of microbial competitors. This notion is supported by our findings that the pflB mutant colonizes equally during single strain infections (Extended Data Fig. 8c–f ). Fig. 3: PflB promotes Salmonella fitness during foodborne infection. a , Schematic of infection model. SPF, specific pathogen-free. b , Pyruvate concentrations. n = 4 uninfected, n = 7 infected mice from 3 cohorts. c , Salmonella burden of wild-type (black) or Δ pflB (CJA057, blue) at day 4. n = 8 mice from 2 cohorts. d , Competitive index in ileum at day 4 of either wild-type Salmonella compared with Δ pflB (CJA057), or ΔSPI-1ΔSPI-2 (CJA077) compared with ΔSPI-1ΔSPI-2Δ pflB (CJA081). n = 14 mice from 4 cohorts. e , Cleaved caspase 3 in ileal sections from uninfected, wild-type Salmonella infected, or ΔSPI-1ΔSPI-2 (CJA077) infected mice. Scale bars, 500 μm (top), 100 μm (bottom). Representative images from n = 3 mice. f , Salmonella burden of wild-type (black) or Δ pflB (CJA057, blue) in Casp1/11 −/− mice, 4 days after infection. n = 8 mice from 2 cohorts. g , Salmonella burden of wild-type (black) or Δ pflB (CJA057, blue) in Vil-cre +/− Casp3/7 fl/fl mice. n = 5 mice from 2 cohorts. h , Competitive index of wild-type compared with Δ pflB (CJA057). n = 8 ( Casp3/7 fl/fl control), n = 5 ( Vil-cre +/− Casp3/7 fl/fl ) from two cohorts. In c , f and g , connected wild-type and mutant Salmonella come from the same mouse. The median value is listed below. In d – h , data are from ileum. Box plots are as in Fig. 1 . * P ≤ 0.05, two-tailed Mann–Whitney U -test ( b , d , h ) or two-tailed Wilcoxon signed rank test with theoretical median of 1 ( c ). Source data .
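The competitive index plotted in Fig. 3d, h is conventionally the mutant-to-wild-type CFU ratio recovered from tissue, normalized to the same ratio in the inoculum, so values below 1 indicate a fitness defect. A minimal sketch of that calculation; the function name and CFU counts are illustrative, not data from this study:

```python
def competitive_index(wt_output, mut_output, wt_input, mut_input):
    """Competitive index (CI): mutant/wild-type ratio recovered from
    tissue (output) normalized to the mutant/wild-type ratio in the
    inoculum (input). CI < 1 means the mutant is less fit."""
    return (mut_output / wt_output) / (mut_input / wt_input)

# Hypothetical CFU counts from an equal-ratio inoculum (illustrative only):
ci = competitive_index(wt_output=1e7, mut_output=1e5,
                       wt_input=5e7, mut_input=5e7)
print(ci)  # 0.01 -> the mutant was outcompeted 100-fold
```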
Salmonella uses two type-3 secretion systems encoded within the Salmonella pathogenicity island (SPI)-1 and SPI-2 that are largely responsible for the inflammation and morbidity in this model 28 (Extended Data Fig. 8c–f ). PflB-mediated fitness was significantly reduced in the ΔSPI-1ΔSPI-2 (Δ invA Δ ssaV ) strain within the ileum (Fig. 3d ), which suggests a link between PflB and Salmonella virulence. Salmonella induces SPI-1- and SPI-2-dependent cell death, including apoptosis and caspase-1- and caspase-11-dependent pyroptosis, in mammalian cells 30 , 31 . We observed increased levels of apoptotic cleaved caspase-3 in wild-type, but not ΔSPI-1ΔSPI-2, infected mice in the ilea (Fig. 3e ) but not the colon (Extended Data Fig. 8g, h ). Elegant research has focused on the pro-inflammatory response generated against Salmonella infection 32 , 33 , 34 . In agreement with ileum-restricted cleaved caspase-3, Vil-cre +/− Casp3/7 fl/fl mice display similar levels of morbidity and bacterial burden compared with littermate controls (Extended Data Fig. 8i–m ). PflB-dependent growth and expression were independent of the pyroptotic caspase-1 and caspase-11 in vitro and in vivo (Fig. 3f , Extended Data Fig. 9a–e ). By contrast, the ability of wild-type Salmonella to outcompete the Δ pflB mutant strain was completely ablated in the ilea of Vil-cre +/− Casp3/7 fl/fl mice (Fig. 3g, h ). These data connect Salmonella virulence factor-dependent epithelial cell apoptosis to a tissue-specific niche for PflB-driven fitness. TNF-linked cell death sustains bacteria A close association exists between microbial dysbiosis and IBD 10 , with Enterobacteriaceae outgrowth linked to exacerbated symptoms 35 . TNF is a crucial cytokine that promotes intestinal inflammation and epithelial apoptosis in patients with IBD 36 .
A20 is an IBD susceptibility gene that functions as a ‘brake’ on TNF-induced inflammation, and targeted deletion of A20 in mouse gut epithelial cells ( Vil-cre +/− A20 fl/fl mice) results in increased TNF-induced ileal pathology and apoptosis 37 . A20-knockout cells were susceptible to TNF-driven apoptosis, which significantly promoted Salmonella growth, induced Salmonella pflB and cadB expression, and promoted growth of Enterobacteriaceae in vitro (Extended Data Fig. 10a–f , Supplementary Figs. 3 , 4 ). Salmonella infection induces significant TNF expression and provides an endogenous source of TNF within the intestinal tract 38 . Vil-cre +/− A20 fl/fl mice showed a marked correlation between an increase in caspase-3 activation, an increase in total Salmonella colonization and an increase in PflB-dependent Salmonella colonization in the ileum (Extended Data Fig. 11a–d ). Moreover, consistent with increased disease severity, A20-deficient mice lost significantly more weight and exhibited higher Salmonella colonization in the large intestine and increased splenic dissemination (Extended Data Fig. 10e–g ). These findings provide a new experimental link between the clinically described TNF-induced apoptosis and the expansion of Enterobacteriaceae, and suggest that disease settings associated with increased intestinal epithelial cell apoptosis could leave the host more susceptible to infections. Gut apoptosis fuels enterics in vivo Cytotoxic chemotherapies are also linked to the enrichment of Proteobacteria 39 and a notably higher risk of infection 13 . To test whether gastrointestinal apoptosis increases susceptibility to infection, we treated mice with doxorubicin before oral infection (Fig. 4a , Extended Data Fig. 12a ). Treatment with doxorubicin induced intestinal pathology, including a significant reduction in caecal weight and a shortening of the colon (Extended Data Fig. 12b, c ). 
Concurrently, doxorubicin treatment led to staggering increases in bacterial burden after infection with Salmonella (Fig. 4b ) or E. coli (Extended Data Fig. 12d ), which demonstrates a considerable increase in general susceptibility to exogenous Enterobacteriaceae infection. Notably, apoptosis-deficient Vil-cre +/− Casp3/7 fl/fl mice were significantly protected from doxorubicin-induced intestinal damage and Salmonella burden (Fig. 4c , Extended Data Fig. 12b ), which demonstrates that intestinal epithelial cell apoptosis enhances susceptibility to infection in vivo. Fig. 4: Intestinal epithelial cell apoptosis fuels Salmonella growth in vivo. a , Schematic of in vivo model of doxorubicin-induced mucositis. b , Salmonella CFU. n = 8 mice from 2 cohorts. c , Salmonella CFU. n = 16 control Casp3/7 fl/fl , n = 9 Vil-cre +/− Casp3/7 fl/fl from 3 cohorts. d , Competitive index of wild-type Salmonella versus Δ pflB (CJA057). n = 14 C57BL/6 and Casp3/7 fl/fl control (Ctrl) mice, n = 23 C57BL/6 and Casp3/7 fl/fl (Ctrl + Doxo) mice, n = 9 Vil-cre +/− Casp3/7 fl/fl + doxorubicin (Doxo) from 3 cohorts. e , Salmonella CFU. n = 8 control Panx1 +/+ , n = 11 Panx1 −/− from 6 cohorts. f , Competitive index of wild-type Salmonella versus Δ pflB (CJA057). n = 7 control Panx1 +/+ , n = 6 Panx1 −/− from 4 cohorts. Box plots are as in Fig. 1 . NS, P > 0.05. * P ≤ 0.05, *** P ≤ 0.0005, two-tailed Mann–Whitney U -test ( b , c , e , f ) or Kruskal–Wallis test ( d ) with each tissue analysed separately. Source data . Doxorubicin treatment itself induced damage to the large intestine, caspase-3 activation in the ilea, and a potent expansion of the endogenous Enterobacteriaceae in otherwise uninfected mice (Extended Data Fig. 12e–h ). These findings are in line with patient data and highlight that endogenous or exogenous Enterobacteriaceae expand in response to apoptosis.
Finally, to test the link between doxorubicin-induced apoptosis and the death-dependent growth response of Salmonella , we performed competitive infections with wild-type and Δ pflB mutant strains. Wild-type Salmonella was significantly enriched in control mice after doxorubicin treatment, but the loss of intestinal epithelial apoptosis in Vil-cre +/− Casp3/7 fl/fl mice significantly reduced PflB-dependent fitness (Fig. 4d ). Similarly, despite developing similar magnitudes of doxorubicin-induced disease (Extended Data Fig. 12b ), Panx1 −/− mice had significantly reduced Salmonella burden and PflB-dependent fitness (Fig. 4e, f ). These data implicate the Panx1-dependent metabolites released in response to doxorubicin-induced apoptosis, rather than chemotherapy-induced damage per se, as being crucial drivers in the susceptibility to infection. The data presented advance a concept that the soluble factors involved in mammalian apoptosis-dependent intercellular communication are exploited by intestinal bacteria, with DINNR directly fuelling growth. Enterobacteriaceae can directly induce cell death and inflammation, which is subsequently used as a competitive strategy for expansion 11 , 14 . Here, we provide examples in which Enterobacteriaceae either directly induce intestinal apoptosis or expand in response to exogenous triggers of apoptosis such as chemotherapy. After initial chemotherapy-induced apoptosis, intestinal bacteria drive subsequent intestinal pathology 40 . Apoptosis-dependent expansion of Enterobacteriaceae coincides with considerable intestinal pathology, although defining a direct cause-and-effect relationship will require further studies. Within the context of clinical chemotherapy treatment, intestinal mucositis increases the risk of infection from Gram-positive and Gram-negative organisms 13 . 
Although this work focuses on gastrointestinal epithelial cells and Enterobacteriaceae, our data reveal a conceptual conservation across mammalian cell types suggesting a broader relevance. Mechanistically, Salmonella pflB is a crucial gene for adaptation in response to epithelial cell apoptosis, while differences in the transcriptional responses between members within the Enterobacteriaceae reflect an added layer of specificity within the bacterial response. Finally, these data add a new dimension to the intricate relationship between mammalian cell death and microbial complications, that is, soluble factors released by apoptotic intestinal epithelial cells leave the host primed for subsequent exogenous infection or promote the ‘dysbiotic’ outgrowth of the endogenous Enterobacteriaceae. These findings open the conceptual window for developing supportive therapeutics that might help to restrain bacterial growth in patients undergoing cytotoxic chemotherapy or during TNF-associated flare ups. Methods All animal work was approved by the University of Virginia Institutional Animal Care and Use Committee (IACUC), the VIB-UGent Center for Inflammation Research Ethical Committee, and the University of Ghent Animal Ethics Committee. 
Reagents The reagents used for different parts of this work were obtained from the indicated suppliers as follows: doxycycline (Sigma D-9891); B/B homodimerizer (Clontech AP20187); QVD (Sigma SML0063); staurosporine (Abcam ab120056); TUNEL (Sigma 12156792910); CD45 antibody (Abcam ab10558); caspase 3 activity kit (Sigma APT131, AssayGenie RG BN00018); caspase 8 activity kit (Sigma APT129); annexin V-APC (Biolegend 640941); annexin V-Pac Blue (Biolegend 640917); TO-PRO-3 Iodide (Thermo Fisher T3605); 7AAD (Invitrogen A1310); Yoyo-1 (Thermo Fisher Y3601); Sytox blue (Thermo Fisher S34857); PAC-1 (Selleckchem S2738); doxorubicin (Sigma D-1515); proteinase K (GC Biotech BIO-37037); RiboPure RNA Purification Kit (Thermo Fisher AM1925); SensiFast cDNA Synthesis Kit (GC Biotech BIO-650504); recombinant human TNF (VIB Protein Core); inosine monophosphate (IMP; Sigma 57510); dihydroxyacetone phosphate (DHAP; Sigma 37442); guanosine monophosphate (GMP; Sigma G8377); UDP-glucose (Abcam ab120384); spermidine (Sigma S2626); FBP (Sigma F6803); In Situ Cell Death Detection Kit, TUNEL (Sigma 12156792910); annexin V binding buffer (BD 556454); shikonin (MedChemExpress HY-N0822); pyruvate detection kit (Merck MAK071); formate detection kit (Sigma MAK059); FBP detection kit (Biovision K2036). Antibodies for western blot The following antibodies were used: caspase-1 (rabbit, anti-mouse) 41 , caspase-3 (CST 9662), cleaved caspase-3 (CST 9664), caspase-7 (CST 8438), cleaved caspase-7 (Abcam ab255818), caspase-8 (Abnova MAB3429), cleaved caspase-8 (CST 9429), caspase-8 (CST 9746), P-MLKL Ser345 (Abcam ab196436), MLKL (Sigma-Aldrich MABC604), P-RIPK3 Thr231/232 (CST 91702), tubulin HRP (Abcam ab21058), sheep anti-mouse IgG HRP (Cytiva NA931), goat anti-rat IgG HRP (Cytiva NA935), goat anti-rabbit IgG HRP (Cayman Chemical 10004301). Mammalian cell culture All mammalian cell culture work was performed using FBS from Summerlin Scientific.
Serum-free conditions, when applicable, are indicated in the figure legends. CT26 cells (ATCC CRL-2638) were routinely cultured in DMEM (4.5 g l −1 glucose) supplemented with 10% FBS, 1× sodium pyruvate and 1× glutamine or RPMI-1640 containing 10% FBS. CT26:FADD cell clones were generated previously 18 . HCT116 cells (ATCC CCL-247) were routinely cultured in McCoy’s 5A supplemented with 10% FBS, 1× sodium pyruvate and 1× glutamine. Jurkat cells (ATCC TIB-152) were cultured in RPMI-1640 containing 10% FBS. A20-deficient HCT116 cells were generated via CRISPR–Cas9 gene targeting. In short, a sgRNA was designed using the CRISPRscan tool (A20-1:GGAGCTTGTCAGTACATGTG) and cloned into the px458 vector (Addgene plasmid, 48138). The day before transfection, 2 × 10 6 cells were seeded in a 10-cm cell culture dish. The next day, cells were transfected (jetPEI) with 20 μg plasmid according to the manufacturer’s instructions. Two days after transfection, GFP-positive single cells were sorted in a 96-well plate. Finally, 7 days after transfection, clones were expanded and screened using western blot analysis, to select the desired A20-knockout clones. Mammalian cell death induction assays All mammalian cells were cultured at 5 × 10 5 cells per ml. Cells were washed with 1× PBS before the induction of cell death in the indicated medium. Jurkat cells were cultured in suspension at 5 × 10 5 cells ml −1 . After induction of cell death, supernatant was collected and spun at 330 g for 5 min to remove cellular debris. Unless stated otherwise, the resulting supernatant was filtered using a 0.2-μm syringe filter and either used immediately or frozen at −20 °C for later use. Cells were stained with annexin V (AV) conjugated to APC or Pac Blue and either Sytox blue, Yoyo-1 or 7-AAD (DNA-binding dyes) for 15 min at room temperature in annexin V binding buffer (BD). Flow cytometry was performed using the Attune NxT (Invitrogen), the FACS Calibur (BD), the LSR Fortessa (BD) or the LSRII (BD). 
Data were analysed using FlowJo v.10 software. All independent experiment values shown are the average values of technical duplicates. ‘Percentage annexin V + ’ cells include annexin V + DNA dye − and annexin V + DNA dye + cell populations. FADD-dependent apoptosis CT26:FADD clones were incubated with 1 μg ml −1 doxycycline for 16 h to induce expression of the construct. Doxycycline was washed away before the addition of 10 nM B/B homodimerizer for 5 h to induce death. For caspase-inhibition studies, cells were incubated with 30 μM QVD for 1 h before B/B administration and QVD was maintained in the medium. UV-induced apoptosis HCT116 or CT26 cells were exposed to 600 mJ cm −2 UV-C irradiation (Stratalinker). HCT116 or CT26 cells were incubated for 24 h after UV irradiation. Jurkat cells were exposed to 150 mJ cm −2 and then incubated for 4 h after UV irradiation. For medium controls, fresh DMEM was exposed to 600 mJ cm −2 UV or left unexposed. Staurosporine-induced apoptosis HCT116 or CT26 cells were incubated with 1 μM staurosporine or DMSO vehicle control for 24 h. PAC-1-induced apoptosis CT26 cells were incubated with 50 μM PAC-1 or DMSO vehicle control for 24 h. TNF-induced apoptosis HCT116 cells were treated with 100 ng ml −1 of recombinant human TNF for 24 h. Salmonella -induced cell death HCT116 cells were infected with a ratio of 100 wild-type Salmonella to 1 HCT116 cell, spun at 300 g for 1 min to maximize bacterial:epithelial cell contact, and infected for 1 h. After 1 h, cells were washed twice with 1× PBS before incubation with medium containing 100 μg ml −1 gentamicin for 30 min to kill any extracellular bacteria. Medium was then replaced with a lower dose of gentamicin (50 μg ml −1 ) for 24 h before cell death was quantified via flow cytometry. Freeze–thaw CT26 cells were submitted to three cycles of freeze–thaw. For each cycle, cells were frozen solid on dry ice and then thawed in a 37 °C water bath. Control samples remained at room temperature. 
For supernatant collection, cellular debris was removed, and supernatant was filtered as described above. Shikonin CT26 cells were pre-treated with 5 μM shikonin (or DMSO vehicle control) for 1 h. Cells were then washed once with PBS before replacing with fresh medium containing the indicated trigger of cell death with or without 5 μM shikonin. Immunoblotting CT26 or HCT116 cells were seeded in 6-well plates at a concentration of 400,000 cells per well. Cell death was induced as described above with or without pre-treatment with 30 μM QVD. After the indicated time of cell death induction (2, 5 or 24 h), cells were collected and lysed directly in sample buffer. After protein denaturation, SDS–PAGE was performed using 8% or 4–12% gradient Bis-Tris gels (caspase-1 detection on 12% Tris-glycine gels). Primary antibodies were used for overnight incubation, followed by 1-h incubation with secondary antibody and chemiluminescence detection. Antibodies were used at the following dilutions: 1:1,000 (caspase-3, cleaved caspase-3, caspase-7, cleaved caspase-7, caspase-8, cleaved caspase-8, P-MLKL, MLKL and P-RIPK3); 1:2,000 (caspase-1); 1:5,000 (tubulin HRP, sheep anti-mouse IgG HRP, goat anti-rat IgG HRP, goat anti-rabbit IgG HRP). Collection and treatment of supernatants After the induction of cell death with the indicated trigger, cell-free supernatant was collected and spun at 330 g for 5 min to remove cellular debris. The resulting supernatant was filtered using a 0.2-μm syringe filter. Medium controls underwent matching spins and 0.2-μm filtration. Supernatants or medium controls were subsequently directly inoculated with bacteria for growth studies, or frozen at −20 °C for later use. Metabolite detection Pyruvate, formate or FBP concentrations were determined using the appropriate detection kit as per the manufacturer’s instructions. For these experiments, mammalian cells were cultured in DMEM without phenol red supplemented with 10% FBS.
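Colorimetric detection kits of this kind typically convert absorbance to concentration by interpolating on a linear standard curve, which amounts to a least-squares fit followed by inversion. A generic sketch of that calculation; the standard values below are invented for illustration, not taken from any kit:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def concentration_from_standard_curve(abs_sample, abs_standards, conc_standards):
    """Invert the linear standard curve (absorbance vs. concentration)
    to convert one sample absorbance into a concentration."""
    slope, intercept = fit_line(conc_standards, abs_standards)
    return (abs_sample - intercept) / slope

# Invented standards: nmol per well vs. absorbance (background included):
conc_std = [0, 2, 4, 6, 8, 10]
abs_std = [0.05, 0.15, 0.25, 0.35, 0.45, 0.55]
print(concentration_from_standard_curve(0.30, abs_std, conc_std))  # ~5.0 nmol
```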
Although the supernatants of apoptotic cells were often used as such for their effects on bacterial growth, in specific instances, the supernatants were treated as indicated below. Sequential filtration After staurosporine-induced apoptosis, CT26 supernatants were collected and filtered (‘> 300-kDa’). Supernatants were subjected to sequential filtrations with the indicated size Amicon Ultra (Merck Millipore) filters by spinning at maximum speed for 10–30 min according to the manufacturer’s instructions. Aliquots were removed in between each filtration step and either used fresh or frozen at −20 °C for subsequent use. Proteinase K treatment Supernatants were sequentially filtered to <10 kDa as described above. The <10-kDa supernatants were then treated with 50 μg ml −1 proteinase K and incubated at 37 °C for 1 h. Control samples were left untreated at 37 °C. After treatment, proteinase K was removed by subsequent filtration at <3 kDa. Supernatants were used immediately or frozen at −20 °C for subsequent use. Protein quantification Total protein in medium, live cell or apoptotic supernatants was determined using the Pierce BCA Protein Assay Kit (Thermo) according to the manufacturer’s instructions. Protein concentration (μg ml −1 ) was calculated from albumin standard dilutions. Concentrations were determined with or without FBS in the medium, with or without 3-kDa filtration (as described above), or with or without proteinase K treatment (as described above). Temperature denaturation The 0.2-μm-filtered supernatants were left at room temperature or incubated at 100 °C for 15 min. After incubation, supernatants were cooled to room temperature and used immediately or frozen at −20 °C for subsequent use. Primary colonocyte extraction and killing Full-length colons (from the caecum to the rectum) were excised from the indicated genotype of the mice and cut open longitudinally to flay the colon open. 
Opened colons were vortexed two or three times in sterile 1× PBS to remove faecal content. Colons were then cut horizontally into 5–7-mm-long pieces and incubated in DMEM plus 10% FBS and an antibiotic cocktail containing ampicillin (100 μg ml −1 ), chloramphenicol (20 μg ml −1 ), kanamycin (50 μg ml −1 ) and streptomycin (100 μg ml −1 ) for 2 h at 37 °C with 5% CO 2 . Colonic pieces were then washed three times in sterile 1× PBS to remove any traces of the antibiotic cocktail. Individual colonic pieces were then incubated in 1.5-ml Eppendorf tubes containing 500 μl of DMEM plus 10% FBS with or without 2 μM staurosporine for 8 h or 20 μg ml −1 doxorubicin for 6 h. For caspase inhibition, 30 μM QVD was added to the 2-h antibiotic treatment step and was maintained throughout death induction. After death induction, colonic explants were spun at maximum speed for 5 min using a table-top centrifuge and supernatant was collected. Unfiltered supernatant was used immediately or frozen at −20 °C for subsequent use. Each colon typically yielded 9–10 individual explants. All subsequent experiments (cell death, bacterial growth) used explants extracted from a minimum of three mice. Caspase-3 activation After removal of the supernatant, explant pieces were assessed for caspase-3 activity via the Caspase-3 Colorimetric Activity Assay Kit DEVD (Sigma) according to the manufacturer’s instructions. Colorimetric values were obtained at 405 nm using the iMark microplate reader (BioRad) and arbitrary units were calculated from a standard curve using the provided p-nitroaniline (pNA) standard. Microscopy Colonic explant samples were fixed overnight in 4% paraformaldehyde, embedded in paraffin, and cut at 5-μm thickness. Subsequently, sections were stained with haematoxylin and eosin or a TUNEL, CD45 and DAPI co-staining. 
The TUNEL assay was performed according to manufacturer’s instructions (In situ cell death detection kit, TMR red, Roche) and followed by an overnight incubation at 4 °C with DAPI and a rabbit CD45 antibody (Abcam, ab10558). CD45 was visualized with an anti-rabbit secondary antibody coupled to Dylight633. For cleaved caspase-3 staining, tissues were fixed overnight in 4% neutral buffered formalin, embedded in paraffin and cut into 5-μm sections. Slides were deparaffinized, rehydrated with an alcohol series and processed for antigen retrieval with Dako citrate buffer (Agilent, S169984-2) for 25 min in a pressure cooker. Endogenous peroxidase activity was blocked using Dako REAL Peroxidase-Blocking Solution for 20 min (Agilent, S202386-2), followed by three washes in PBS (5 min each). Sections were blocked for 30 min in Dako REAL Antibody Diluent (Agilent, S202230-2) supplemented with 5% goat serum (Agilent, X090710-8). Incubation with cleaved caspase-3 antibody (Cell Signaling Technology, 9664S, 1:1,500) was carried out overnight at 4 °C in blocking buffer. Sections were washed three times with PBS (5 min each) and incubated with SignalStain Boost IHC Detection Reagent (Cell Signaling Technology, 8114S) for 30 min. Signal was developed using ImmPACT DAB Substrate (Vector Laboratories, SK-4105) followed by haematoxylin counterstaining, dehydration and mounting with Entellan new (Merck Millipore, 107906). Slides were imaged with an Axio Scan.Z1 (Zeiss), using a 10× Plan-Apochromat 0.45 NA (0.650 μm pixel −1 ) and a Hamamatsu Orca Flash camera. With an HXP illumination source, the following were used in the acquisition: DAPI (BP 445/50), HE GFP (BP 525/50), HE DsRed (BP 605/70) and Cy5 (BP 690/50). Image analysis was performed using QuPath (version 0.1.2). Bacterial growth experiments Overnight cultures of the indicated organism were routinely grown at 37 °C with 200 rpm agitation (aerobic) in LB broth. 
After overnight culture (approximately 16 h), 1 ml of bacterial culture was pelleted and media removed. Cultures were re-suspended in 1× PBS. A list of organisms used can be found in Supplementary Table 2 . OD 600 measurements Four millilitres of supernatant or medium controls were inoculated with 1 × 10 7 –2 × 10 7 CFU ml −1 of the indicated bacterial species. Cultures were grown at 37 °C with 200 rpm agitation (aerobic) and bacterial growth was quantified by OD 600 measurements using the Ultrospec 10 (VWR) at the indicated time points. Growth of gentamicin-resistant Salmonella in infected supernatants used a starting inoculum of 8 × 10 7 CFU ml −1 . All independent experiment values are the average values of technical duplicates. CFU ml −1 values Five hundred microlitres of supernatant or medium controls were inoculated with 1 × 10 3 CFU of the indicated strain. Cultures were grown at 37 °C under anaerobic conditions using anaerobic incubating jars (Merck, 28029-1EA-F), anaerobic pouches (Merck, 68061-10SACHETS-F) and anaerobic indicator strips (Fisher, BR0055B), and growth was assessed at the indicated time by serially diluting with 1× PBS and plating onto LB agar (in vitro supernatants) or MacConkey agar (ex vivo primary colonocyte supernatants) to obtain CFU ml −1 values. MeMix6 supplementation The metabolite mixture ‘MeMix 6’ was composed of spermidine, FBP, DHAP, GMP, IMP and UDP-glucose 15 . A single mixture containing the indicated concentrations (Extended Data Fig. 7b ) of each metabolite was prepared fresh in 1× PBS and diluted to the indicated final concentrations in DMEM with 10% FBS. Indicated concentrations of each metabolite were based on previously published targeted metabolomic studies 15 . Bacterial mutagenesis Salmonella mutant strains are listed in Supplementary Table 2 and were constructed using lambda red homologous recombination as previously described 42 using the LR primers and plasmids listed in Supplementary Tables 2 and 3 . 
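The serial-dilution plating used above (colony counts converted back to CFU ml−1 of the undiluted culture) reduces to simple arithmetic; a minimal Python sketch, where the function name and example numbers are illustrative rather than taken from the paper:

```python
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml):
    """Back-calculate CFU per ml of the undiluted culture.

    colony_count: colonies counted on the plate at this dilution
    dilution_factor: total fold-dilution of the plated sample (e.g. 1e4)
    plated_volume_ml: volume spread on the plate, in ml
    """
    return colony_count * dilution_factor / plated_volume_ml

# e.g. 50 colonies from 0.1 ml of a 10^4-fold dilution:
print(cfu_per_ml(50, 1e4, 0.1))  # -> 5000000.0 CFU/ml
```

In practice only plates in a countable range (roughly 30–300 colonies) are used, which is why several dilutions are plated in parallel.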
Correct polar insertions, antibiotic-resistance profiles, and subsequent nonpolar deletions after pCP20 transformation and flippase activity were verified using primers listed in Supplementary Table 1 . A similar approach was taken to generate gentamicin-resistant Salmonella with slight modifications. Instead of helper plasmid pKD4, the gentamicin-resistance gene was amplified from plasmid pRGD (Addgene, 74106) with flanking regions of homology to the downstream region of the essential gene glmS . Chromosomal insertion here has been shown to ensure sufficient expression 43 . The pflB mutant was complemented with plasmid pLK003. Plasmid-based complementation was achieved using the arabinose-inducible pBAD24 vector, amplification of Salmonella genomic DNA using the primers listed in Supplementary Table 1 , and EcoRI and HindIII restriction enzymes. As controls, wild-type and Δ pflB strains were transformed with empty pBAD24 vector. Bacterial gene expression Primer sequences are listed in Supplementary Table 1 . For all gene expression studies, bacterial cultures were grown to mid-logarithmic growth phase (OD 600 = 0.4–0.6), spun down at maximum speed, and then resuspended in TRIzol reagent (ThermoFisher) for immediate extraction or frozen at −80 °C. Bacterial RNA was extracted and DNA removed as described using the RiboPure RNA Purification Kit (Thermo Fisher). For RNA-seq experiments, ribosomal RNA was removed using Ribo-zero kits (Illumina). For all RNA-seq data, sample groups were analysed in quadruplicate and reads were mapped to the Salmonella Typhimurium SL1344 reference genome (ASM21085v2, EnsEMBL 39). RNA-seq and analysis For the HCT116 ± UV data, production and analysis were performed by the VIB Nucleomics Core ( ) on the Illumina NextSeq platform.
An mRNA library was constructed using the TruSeq Stranded mRNA kit (Illumina), sequencing was conducted using the NextSeq High Output, and differential gene expression and statistical significance were determined by edgeR (Bioconductor). The CT26:FADD data were produced and analysed by Novogene. An mRNA library was constructed following rRNA depletion using the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB), sequencing was performed on the Illumina PE150 platform, and differential gene expression and significance were determined using HTSeq software. Blue dots indicate Salmonella genes with a log 2 -transformed fold change >1 or <−1 and an uncorrected P value <0.001. qPCR cDNA synthesis was performed using the SensiFast cDNA synthesis kit (GC Biotech). Quantitative PCR (qPCR) primers were designed using NCBI Primer Blast. The ΔΔ C t values for each independent biological replicate were calculated twice, first using the Salmonella housekeeping gene gmk 44 and then the housekeeping gene strB 45 , and the two ΔΔ C t values were averaged. Relative fold expression was calculated such that the average of the indicated controls was equal to 1. The endogenous controls for HS E. coli samples were gmk and rpoA 46 . All independent experiment values shown are the average values of technical duplicates. Animal studies Wild-type C57BL/6 mice were purchased from Janvier Labs. Global caspase-1/11 knockouts were a gift from M. Lamkanfi. Vil-cre +/− Casp3/7 fl/fl , Vil-cre +/− A20 fl/fl , Ripk1 knockdown , Mlkl −/− , Gsdmd −/− and Panx1 −/− mice have been described previously and were housed at the VIB-UGent Center for Inflammation Research 16 , 37 , 47 , 48 , 49 . For all mutant mouse genotypes, fl/fl Cre-negative mice served as littermate controls where appropriate. Vil-cre +/− mice were co-housed with Vil-cre −/− controls during development and genotypes were only separated at the beginning of each experiment.
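The two-housekeeping-gene ΔΔCt averaging described above can be sketched as follows. This is a hedged illustration: the Ct values are invented, and the conversion to fold expression assumes the standard 2^−ΔΔCt formula, which the Methods do not spell out explicitly:

```python
def delta_delta_ct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """DDCt = (Ct_target - Ct_reference)_sample - (Ct_target - Ct_reference)_control."""
    return (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)

def fold_expression(ct_target, ct_gmk, ct_strB, ct_target_ctrl, ct_gmk_ctrl, ct_strB_ctrl):
    """Compute DDCt against each housekeeping gene (gmk, strB), average the two,
    then convert to relative fold expression (control averages to 1) via 2^-DDCt."""
    ddct_gmk = delta_delta_ct(ct_target, ct_gmk, ct_target_ctrl, ct_gmk_ctrl)
    ddct_strB = delta_delta_ct(ct_target, ct_strB, ct_target_ctrl, ct_strB_ctrl)
    avg_ddct = (ddct_gmk + ddct_strB) / 2
    return 2 ** (-avg_ddct)

# A target Ct dropping by 2 cycles relative to both references corresponds
# to roughly 4-fold higher expression than control:
print(fold_expression(20, 18, 19, 22, 18, 19))  # -> 4.0
```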
Panx1 −/− mice were similarly co-housed with control Panx1 +/+ mice until experimentation. Germ-free experiments were done in the germ-free facility at the University of Ghent. All mice were 7–14 weeks of age at the time of infection, and both male and female mice were used as indicated. Salmonella -induced colitis Littermate-, sex- and age-matched mice were infected using the Salmonella model of colitis 28 . SPF mice were given a single dose of 20 mg streptomycin via oral gavage (100 μl of 200 mg ml −1 streptomycin dissolved in water) one day before infection. Mice were infected with 1 × 10 7 CFU per mouse resuspended in 1× PBS via oral gavage. For competitive infections, polar deletions containing the chromosomally inserted kanamycin-resistance gene were used 45 . Mice were infected with 1 × 10 7 CFU per strain per mouse, for a 2 × 10 7 CFU per mouse total inoculum in 100 μl. Germ-free mice were infected with 2 × 10 6 CFU per mouse (1 × 10 6 CFU per strain) and did not receive streptomycin. The input ratio of each strain was calculated by plating the infective dose on agar plates containing streptomycin (total inoculum) and agar plates containing streptomycin plus kanamycin (mutant strain). Wild-type Salmonella was thus calculated as (total inoculum) − (mutant inoculum), and the input ratio was calculated as (WT Salmonella )/(mutant Salmonella inoculum) with a desired input equal to 1. Mouse body weight was measured daily and the percentage body weight was calculated as (daily weight)/(day 0 starting weight). At the indicated day after infection, intestinal tissue (ileum, caecum and colon) and the spleen were collected. For bacterial burden measurements and cleaved caspase-3 staining, luminal contents of the ileum (the last 5–6 cm of the distal end of the small intestine) and the colon were removed. Luminal content was retained in all caecal samples. Any attaching lymphatic tissue was removed from intestinal tissue before homogenization in 1 ml of 1× PBS.
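The competitive-inoculum arithmetic above (wild-type CFU obtained by subtracting the kanamycin-plate count from the total-inoculum count) can be sketched in a few lines. The `competitive_index` helper applies the standard output-ratio/input-ratio definition, which is an assumption here rather than a formula stated in the text, and all numbers are illustrative:

```python
def input_ratio(total_cfu, mutant_cfu):
    """Input ratio of a competitive inoculum.

    total_cfu: CFU on streptomycin plates (wild type + mutant)
    mutant_cfu: CFU on streptomycin + kanamycin plates (mutant only)
    Wild-type CFU is obtained by subtraction; a perfect 1:1 mix gives 1.0.
    """
    wild_type_cfu = total_cfu - mutant_cfu
    return wild_type_cfu / mutant_cfu

def competitive_index(output_total, output_mutant, input_total, input_mutant):
    """Standard competitive index: output WT/mutant ratio normalized to the input ratio."""
    return input_ratio(output_total, output_mutant) / input_ratio(input_total, input_mutant)

print(input_ratio(2e7, 1e7))                   # -> 1.0 (desired 1:1 inoculum)
print(competitive_index(9e8, 1e8, 2e7, 1e7))   # -> 8.0 (WT outcompetes mutant 8:1)
```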
CFU per gram of tissue were calculated by plating serial dilutions of tissue homogenates on MacConkey agar containing streptomycin and MacConkey agar containing streptomycin and kanamycin. Tissue samples were excluded if either the total or mutant strain burden fell below the limit of detection. Single strain infection tissue homogenates were plated solely on MacConkey agar containing streptomycin. Germ-free mice Axenic/germ-free mice were housed in positive-pressure flexible film isolators (North Kent Plastics). One week before the start of the infection experiment, axenic mice were transferred to individually ventilated Isocage-P cages (positive pressure Isocages Techniplast). All experiments were performed on mice of C57BL/6J genetic background. All experiments on axenic mice were performed according to institutional (ethical committee for animal experimentation Ghent University Faculty Medicine and Health Sciences), national and European animal regulations. Doxorubicin treatment Doxorubicin was given as a single intraperitoneal injection at 15 mg kg −1 of mouse body weight while vehicle control (water) was given at similar volumes (approximately 300 μl volumes of water or 1 mg ml −1 doxorubicin solution). The next day, mice were infected with 1 × 10 9 CFU of Salmonella (either single strain of competitive infections, as described above) or 1 × 10 9 CFU of E. coli . Tissues were collected at day 1 after infection and weight loss, total Salmonella burden, and competitive indices were calculated as above. Total Enterobacteriaceae burden, including E. coli , was assessed by plating tissue lysates on MacConkey agar without antibiotics. Uninfected controls were given 1× PBS via oral gavage instead of bacteria, and endogenous Enterobacteriaceae burden was assessed by plating tissue lysates on MacConkey agar without antibiotics. Endogenous Enterobacteriaceae were identified as genus Escherichia (probable E. coli or E. 
albertii ) following 16S rRNA sequencing on purified genomic DNA using the primers listed in Supplementary Table 1 . In addition, E. coli identification was confirmed via MALDI–TOF mass spectrometry (Bruker) performed on colonies from MacConkey agar plates. Statistical analysis All box and whiskers plots show minimum to maximum with all independent replicates included, the horizontal line within the box depicting the median, and the box extending from the 25th to 75th percentile. Statistical tests were performed using GraphPad Prism version 8 as indicated in the figure legends. Outlier data points were identified using the ROUT method with Q = 1%. No statistical tests were used to determine sample size. For in vitro, ex vivo and in vivo experiments, sample sizes were determined based on the numbers required to achieve statistical significance using indicated statistics, but with a minimum of three independently performed experiments to ensure data reproducibility. The investigators were not blinded to allocation during the experiments and outcome assessment. All experiments required known injections of substances, including cytotoxic chemotherapeutics and infectious organisms. Therefore, it was not possible to blind the investigator for such experiments. Allocation of mice was random in all in vivo experiments, taken from littermates. Studies were performed in different vivaria (University of Virginia, VIB-UGent Center for Inflammation Research, and UGent germ-free facility). All in vitro experimental treatment group allocation was random. Cartoon schematics All cartoon schematics were created using the software from BioRender.com. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability RNA-seq data have been submitted to the Gene Expression Omnibus (GEO) under accession numbers GSE175947 and GSE178167 .
Other data or unique materials generated that support the findings of this study are available from the corresponding author upon request. Source data are provided with this paper. | As part of a trans-Atlantic collaboration, researchers from the team of Kodi Ravichandran (VIB-UGent Center for Inflammation Research) and the University of Virginia School of Medicine (Charlottesville, Virginia, U.S.) have discovered how intestinal bacteria can exploit nutrients that are released from dying cells as fuel to establish intestinal infections. Researchers have long been investigating two seemingly distinct fields of study: how certain bacteria colonize our intestinal tracts, and how our own cells die. But do those processes interact? Kodi Ravichandran (VIB-UGent Center for Inflammation Research), says: "We have known for a few decades that the cell death process itself can indirectly influence bacterial infections by changing the body's immune response. At the same time, we have also been studying how dying cells can communicate with their neighbors. What CJ (CJ Anderson, postdoctoral fellow) set out to ask was: if these dying cells are secreting factors that can be recognized and sensed by healthy neighboring cells, what is to stop other organisms like intestinal bacteria from recognizing these same molecules?" Using cell culture and healthy mouse tissue systems, the authors discovered that certain molecules are actively produced and shed by intestinal epithelial cells when they start to die. Interestingly, these molecules are directly sensed and used by intestinal bacteria such as Salmonella and E. coli. The researchers showed that this relationship between dying cells and gut bacteria, which they term death-induced nutrient release (DINNR), takes place in several disease contexts. 
Intestinal bacteria can use cell death-related molecules to assist in colonization during food poisoning, inflammatory conditions (resembling Inflammatory Bowel Disease (IBD) or Crohn's Disease), and chemotherapy-induced mucositis. Gastrointestinal toxicity is one of the leading side effects in cancer patients undergoing chemotherapy, and it can necessitate dose alterations that can reduce efficacy. Additionally, cancer patients receiving chemotherapy have a significantly greater risk of developing subsequent infections. "This relationship between chemotherapy and bacterial infections was particularly interesting to us," says Anderson. "Unlike foodborne infections or flareups of IBD or Crohn's Disease where a patient does not know when he or she will be 'under attack,' physicians know exactly when they are administering chemotherapeutic drugs to cancer patients. This means we have a therapeutic window where we can try to develop some sort of combination therapy to limit some of this fuel for bacteria." The researchers also revealed that restricting the release of certain nutrients from dying cells, without having to stop the cell death process itself, was able to protect animals from infection. According to Anderson, this could have major implications for developing future therapeutics. "What was exciting to us was that we do not need to cheat death itself to still see some protection. We do not need to go searching for the Holy Grail. If we can change or restrict some of these nutrients that are released during death, we can try to improve patient care. As with any piece of fundamental research, there is more work to be done, but the opportunity is there for future therapies." | 10.1038/s41586-021-03785-9
Medicine | Scientists discover novel genes responsible for regulating muscle cells | Soma Tripathi et al, Smad7:β-catenin complex regulates myogenic gene transcription, Cell Death & Disease (2019). DOI: 10.1038/s41419-019-1615-0 | http://dx.doi.org/10.1038/s41419-019-1615-0 | https://medicalxpress.com/news/2019-05-scientists-genes-responsible-muscle-cells.html | Abstract Recent reports indicate that Smad7 promotes skeletal muscle differentiation and growth. We previously documented a non-canonical role of nuclear Smad7 during myogenesis, independent of its role in TGF-β signaling. Here further characterization of the myogenic function of Smad7 revealed β-catenin as a Smad7 interacting protein. Biochemical analysis identified a Smad7 interaction domain (SID) between aa575 and aa683 of β-catenin. Reporter gene analysis and chromatin immunoprecipitation demonstrated that Smad7 and β-catenin are cooperatively recruited to the extensively characterized ckm promoter proximal region to facilitate its muscle restricted transcriptional activation in myogenic cells. Depletion of endogenous Smad7 and β-catenin in muscle cells reduced ckm promoter activity indicating their role during myogenesis. Deletion of the β-catenin SID substantially reduced the effect of Smad7 on the ckm promoter and exogenous expression of SID abolished β-catenin function, indicating that SID functions as a trans dominant-negative regulator of β-catenin activity. β-catenin interaction with the Mediator kinase complex through its Med12 subunit led us to identify MED13 as an additional Smad7-binding partner. Collectively, these studies document a novel function of a Smad7-MED12/13-β-catenin complex at the ckm locus, indicating a key role of this complex in the program of myogenic gene expression underlying skeletal muscle development and regeneration. 
Introduction Developmental myogenesis, the process of terminal differentiation of skeletal muscle progenitor cells, consists of a series of well-characterized highly regulated steps that has become a paradigm for lineage acquisition and cellular differentiation 1 , 2 , 3 , 4 , 5 . During embryogenesis, pluripotent mesodermal stem cells commit to become myogenic progenitor cells 6 . Commitment to the myogenic lineage results in a binary state of either maintenance of proliferative potential and multipotency or, on appropriate cues, withdrawal from the cell cycle, activation of a battery of structural, contractile, and metabolic genes and ultimately formation of multi-nucleated, electrically excitable myofibers 1 , 7 , 8 . Also, in adult skeletal muscle, resident stem cells (satellite cells) located between the basal lamina and the plasma membrane of mature muscle fibers 9 recapitulate this "myogenic program" of differentiation in response to injury or for routine maintenance of the muscle tissue 10 . How networks of transcriptional regulators exert molecular control over developmental and adult myogenesis has been a prevalent theme in understanding the molecular control of ontogeny, physiology, and pathology of the striated muscle 10 , 11 , 12 . Extensive biochemical and genetic evidence has implicated a family of DNA-binding transcriptional regulatory proteins encoded by the myogenic regulatory factor (MRF) genes, myf5 , myod1 , myogenin ( myog ), and mrf4 , in myogenesis 10 , 13 . In conjunction with the proteins encoded by the myocyte enhancer factor 2 ( mef2a-d ) gene family, the MRFs activate an evolutionarily conserved program of gene expression, which leads to the generation of terminally differentiated skeletal muscle cells 14 , 15 , 16 .
Understanding the trans -acting factors contributing to this process has been aided in tandem by extensive analysis of the cis -regulatory elements of muscle-restricted, differentiation-induced genes such as the muscle creatine kinase ( ckm ) gene 17 , 18 , 19 . In addition to the central role played by the MRF/MEF2 axis in myogenesis 20 , other transcription factors have been implicated in the control of myogenic differentiation such as Six 1 and 4 21 , 22 , 23 , AP-1 24 , 25 , β-catenin 26 , 27 , 28 , and Smad7 29 , 30 , 31 . Since our initial observations of the pro-myogenic role of Smad7 in cultured muscle cells 29 , we have documented a novel nuclear role for Smad7 in muscle that is essentially independent of its “canonical” role as a repressor of transforming growth factor (TGF)-β signaling 31 . In addition, other groups have documented an in vivo role for Smad7 in the skeletal muscle 30 . Understanding the nature of the ancillary role played by Smad7 and other transcriptional regulators at muscle-restricted genes is therefore of some importance for our overall understanding of the molecular programming of myogenic identity. In view of the “non-canonical” nuclear role of Smad7 alluded to above, the mechanistic basis by which it contributes to the myogenic differentiation program is so far incompletely understood. We were intrigued by a report identifying a protein–protein interaction (PPI) between Smad7 and β-catenin in human prostate cancer cells 32 since both are known, independently, to be key regulators of muscle gene expression in a variety of contexts. We therefore assessed this putative PPI in cultured muscle cells. Our data support a robust interaction between Smad7 and β-catenin that contributes to the transcriptional control of a key myogenic promoter ( ckm ). 
We further extended these observations in characterizing the recruitment of Mediator components (Med12/13) by the β-catenin/Smad7 complex, thus connecting the muscle transcriptosome assembled on enhancers such as ckm to the basal transcription machinery. Integration of the Smad7-β-catenin complex into the network of proteins regulating the “myogenic program” expands our understanding of the unique molecular wiring encoding myogenic differentiation, growth and repair. Materials and methods Cell culture C2C12 myoblasts were obtained from the American Type Culture Collection. Cells were cultured in growth medium (GM) consisting of high-glucose Dulbecco’s modified Eagle’s medium (DMEM, Gibco), 10% fetal bovine serum (FBS), and L-Glutamine (HyClone) supplemented with 1% penicillin–streptomycin (Invitrogen, ThermoFisher). Myotube formation was induced by replacing GM with differentiation medium (DM), consisting of DMEM supplemented with 2% horse serum (Atlanta Biologicals) and 1% penicillin–streptomycin. Cells were maintained in an incubator at 95% humidity, 5% CO 2 , and 37 °C. Transfections For ectopic protein expression, cells were transfected using the calcium phosphate precipitation method for transcription reporter assays. Cells were re-fed 16 h post-transfection and harvested. For small interfering RNA (siRNA) experiments, cells were transfected with Lipofectamine 2000 (Life Technologies) using instructions provided by the manufacturer and harvested 48 h later, unless otherwise indicated. Gene silencing MISSION siRNA (Sigma-Aldrich) for rat and mouse ctnnb1 siβ-catenin#1 (SASI_Rn01_00099925), siβ-catenin#2 (SASI_Rn01_00099923), siβ-catenin#3 (SASI_Rn01_00099924), siSmad7#1 (SASI_Mm02_00290887), siSmad7#2 (SASI_Mm02_00290886), siSmad7#3 (SASI_Mm02_00290885) and universal scrambled siRNA (SIC001) were used at 75 nM concentrations. 
Plasmids Expression plasmids for Myc-His-tagged full-length Smad7, β-catenin-myc, transcription reporter assay constructs pckm-luc have been described previously 29 , 31 , 33 . β-catenin mutant expression plasmids were constructed by the ligation of PCR-amplified nucleotides corresponding to the indicated amino acid (aa) regions (aa575–683, aa1–574) at Hind III and Xho I sites of pcDNA3-EYFP or pcDNA3-3Xflag-8XHis, respectively. Constructs for expression of glutathione-S-transferase (GST)-fused β-catenin fragments were described previously 34 . Transcription reporter gene assays Transcriptional reporter assays were performed using luciferase reporter plasmids along with expression constructs (indicated in the figure legends) and a Renilla plasmid (pRL-Renilla, Promega) as an internal control. Cells were washed with 1× phosphate-buffered saline and harvested in Luciferase Lysis Buffer (20 mM Tris pH 7.4 and 0.1% Triton X-100). Enzymatic activity was measured in each sample on a luminometer (Lumat LB, Berthold) using Luciferase assay substrate (E1501; Promega) or Renilla assay substrate (E2820; Promega). Luciferase activity values obtained were normalized to Renilla activity in the same cell extracts and expressed as fold activation to the control. Nuclear and cytoplasmic extraction Nuclear and cytoplasmic extraction was obtained using the NE-PER Kit (78833; Thermo Scientific), as per the instructions provided by the manufacturer. Immunoblotting of extracellular signal-regulated kinase and c-Jun was used as the positive control for cytoplasmic and nuclear fractions, respectively. Western blot analysis Total cellular protein extracts were prepared in NP-40 lysis buffer (0.5% (vol/vol)), 50 mM Tris-HCl (pH 8),150 mM NaCl, 10 mM sodium pyrophosphate, 1 mM EDTA (pH 8), and 0.1 M NaF supplemented with 1× protease inhibitor cocktail (P-8340; Sigma) and 0.5 mM sodium orthovanadate. Protein concentrations were determined by a standard Bradford assay. 
Equivalent amounts of protein were denatured in sodium dodecyl sulfate (SDS) loading buffer at 100 °C for 5 min and then run in SDS-polyacrylamide gels, followed by electrophoretic transfer to an Immobilon-FL polyvinylidene difluoride membrane (Millipore) as directed by the manufacturer. Blots were incubated with blocking buffer that consisted of 5% milk in Tris-buffered saline (TBS)-T (10 mM Tris-HCl, pH 8.0, 150 mM NaCl, 0.1% Tween 20) prior to the incubation with primary antibody at 4 °C overnight with gentle agitation. After three washes with TBS-T, the appropriate horseradish peroxidase-conjugated secondary antibody (BioRad, 1:2000) was added for 2 h at room temperature. Protein/antibody immune-complexes were detected with Enhanced Chemiluminescence western blotting substrate (Pierce, ThermoFisher). Antibodies Rabbit monoclonal for αSmad7 (ab124890) and polyclonal for αMED13 (ab76923) and αMED12 (ab70842) were purchased from Abcam. A rabbit polyclonal antibody was raised against GST-Smad7 according to the protocol approved by the York University Animal Care Committee. This was used for endogenous Smad7 immunoprecipitation (IP) and detection in cellular and nuclear extracts. αβ-Catenin (pAb9562) and ChIP-grade αFlag antibody (mAb 14793S) were purchased from Cell Signaling. Monoclonal αFlag antibody (F1804) was from Sigma. αMyc (9E10), αMyHC (MF20), and αMyogenin (F5D) were purchased from DSHB. αActin (Sc1616), αMyoD (sc304), and αCKM (sc-69878) were purchased from Santa Cruz. Co-immunoprecipitation (Co-IP) For Co-IP, cells were harvested, and proteins were extracted as described above. IP was performed using an ImmunoCruz Optima Kit (Santa Cruz Biotechnology) according to the manufacturer's instructions. Eluates were analyzed by western blotting as described above. Live-cell imaging C2C12 cells were seeded onto glass-bottom dishes (MatTek Corp). The cells were transfected for the expression of fluorescent-tagged proteins.
Before imaging the cells, the medium was replaced with FluoroBrite DMEM (Thermo Fisher Scientific) supplemented with 10% FBS. To mark nuclei, Hoechst 33342 (Sigma-Aldrich) was added to the medium at 2.5 µM. After 30 min, the stained cells were visualized by a Carl Zeiss Spinning disc system (Zeiss Observer Z1 with Yokogawa CSU-X1 and AxioCam MRm camera) in the environment chamber (37 °C, 5% CO 2 ). The raw images were processed by ZEN (Carl Zeiss) to obtain pseudo-colored micrographs. Chromatin immunoprecipitation (ChIP) C2C12 myoblasts or Smad7-Flag ectopically expressing myoblasts were crosslinked with 1% formaldehyde followed by sonication to shear DNA strands attached to the protein. Smad7/β-catenin was immunoprecipitated with Flag or β-catenin antibody to obtain the protein–DNA complex. IP with IgG antibody served as controls. Immunoprecipitated DNA was reverse crosslinked, purified, and subjected to quantitative PCR (qPCR) using primers specific for the ckm promoter. Primers specific to gapdh were used as controls for ckm enrichment. GST pull down assay For GST pull down assays, GST-Smad7 and 6XHis-β-catenin fusion proteins were produced in bacteria using standard protocol. Briefly, GST-Smad7- and 6XHis-β-catenin-expressing cells were sonicated and the proteins were purified with glutathione-agarose beads (Sigma-Aldrich G-4510). Protein concentrations were estimated by SDS-polyacrylamide gel electrophoresis followed by Coomassie blue staining, using BSA for comparative estimation. GST-β-catenin fragments corresponding to aa (full length (FL), 1–100, 120–683, 422–683, 575–696, 575–683, 1–574) were utilized for the mapping study. Five micrograms of GST FL Smad7, β-catenin (or the molar equivalent of the smaller GST, GST-β-catenin fragments), and 25 µl glutathione-agarose beads (50% slurry) were incubated in 600 µl NETN buffer (100 mM NaCl, 20 mM Tris-HCl (pH 8) 0.5 mM EDTA, 0.5% (vol/vol) NP-40) overnight at 4 °C.
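The ChIP-qPCR readout described above (ckm promoter signal in the specific IP relative to the IgG control, with gapdh as a non-target locus) is commonly expressed as fold enrichment computed from Ct values; a minimal sketch under that assumption, since the exact formula is not given in the text and all Ct values are invented:

```python
def fold_enrichment(ct_ip, ct_igg):
    """Fold enrichment of a locus in the specific IP over the IgG control,
    assuming perfect doubling per qPCR cycle: 2^(Ct_IgG - Ct_IP)."""
    return 2 ** (ct_igg - ct_ip)

def relative_enrichment(ct_ip_ckm, ct_igg_ckm, ct_ip_gapdh, ct_igg_gapdh):
    """ckm enrichment corrected for non-specific signal at the gapdh control locus."""
    return fold_enrichment(ct_ip_ckm, ct_igg_ckm) / fold_enrichment(ct_ip_gapdh, ct_igg_gapdh)

# ckm amplifies 3 cycles earlier in the IP than in IgG, while gapdh does not shift:
print(relative_enrichment(27, 30, 30, 30))  # -> 8.0
```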
In experiments where GST-β-catenin was used to co-precipitate Smad7, GST-Smad7 was thrombin (GE Healthcare-27-0846-01) digested to yield the immune-complex without the GST tag (10 µl thrombin per mg fusion protein was incubated at room temperature for 17 h), and eluates were analyzed by western blotting. Results Smad7 and β-catenin are co-expressed and interact in myogenic cells A time course analysis of β-catenin and myogenic markers during C2C12 differentiation indicated that β-catenin expression is enhanced along with myosin heavy chain (MyHC), muscle creatine kinase (CKM), and myogenin levels during differentiation (Fig. 1a ). Endogenous Smad7 protein levels increased during differentiation as compared to myoblasts in growth conditions (Fig. 1b ). Immunofluorescence analysis of Smad7- and β-catenin-expressing C2C12 cells indicates that both Smad7 and β-catenin are localized in the nucleus; interestingly, two different patterns of localization were observed. β-Catenin exhibited a diffuse pattern of localization within the nucleus in some cells while in others a very defined co-localization occurred in nuclear puncta (Fig. 1c ). A cytoplasmic and nuclear fractionation demonstrated that Smad7 and β-catenin are abundant in both fractions (Fig. 1d ). Based on a previous report indicating a PPI between β-catenin and Smad7 in human prostate cancer cells (PC-3U) 32 , we assessed whether Smad7 and β-catenin form a complex under these cellular conditions in cultured muscle cells and muscle tissue (mouse tibialis anterior (TA)). Total protein lysates from mouse TA and C2C12 muscle cells were subjected to Co-IP with β-catenin and Smad7 antibodies. Interestingly, Smad7 and β-catenin were precipitated together in muscle tissue (TA) and in muscle cells (Fig. 1e ). The interaction was further confirmed by ectopically expressing Smad7-Flag followed by detection of β-catenin in the Co-IP (Fig. 1f ). Fig. 
1: Smad7 and β-catenin interaction and expression during myogenic differentiation. a C2C12 myoblasts were cultured in growth media (GM) for 24 h, followed by differentiation media (DM) for designated times. Lysates were collected and assessed for the expression of muscle markers by western blot analysis. b Lysates were combined from several plates for immunoblotting detection of Smad7. Actin was used as a loading control. c Co-localization of Smad7 and β-catenin was analyzed by immunofluorescence in ectopically expressing EYFP-Smad7-NLS and mCherry-β-catenin C2C12 cells. d Cytoplasmic and nuclear extraction was done to determine the endogenous localization of Smad7 and β-catenin in C2C12 cells. Extracellular signal-regulated kinase was utilized as cytoplasmic control and c-Jun as nuclear control. e Co-immunoprecipitation (Co-IP) assays were performed using Smad7 antibody to detect an interaction between endogenous β-catenin and Smad7 in extracts from mouse tibialis anterior (TA) muscle and C2C12 myoblasts. f Alternatively, in another separate experiment, C2C12 myoblasts were transiently transfected with Smad7-Flag. Smad7-Flag lysates were utilized for Co-IP with β-catenin or Flag antibody to detect interaction with β-catenin. IP with IgG served as controls. Smad7 physically interacts with β-catenin As the Co-IP analysis indicated Smad7 and β-catenin were in the same complex, we next clarified whether this was a direct interaction by utilizing GST pull down assays with bacterially expressed purified proteins. GST-Smad7 and 6xHistidine (6xHis)-β-catenin fusion protein were produced to conduct the assay. A Coomassie stained blot confirmed the purification of the fusion proteins (Fig. 2a ). β-catenin immunoprecipitated with GST-Smad7 but not with GST-conjugated beads, indicating that this interaction was direct (Fig. 2b ). Next, we determined the Smad7 interaction domain (SID) on β-catenin.
The 781 aa sequence of β-catenin gives rise to a structure consisting of several characteristic repeats, termed armadillo repeats, each approximately 40 aa long. N-terminal (NTD) and C-terminal (CTD) domains flank either end of the armadillo repeats. Helix-C denotes a conserved helix located adjacent to the last ARM repeat, proximal to the CTD 35 (Fig. 2c ). GST-β-catenin protein fragments corresponding to aa1–106, aa120–683, aa422–683, aa575–696, and the FL were utilized for determining the interacting region with Smad7. Smad7 interacted with FL β-catenin (as we previously identified) and with β-catenin aa120–683, aa422–683 and aa575–696. Smad7 did not interact with β-catenin aa1–106; together, these results indicate that the region between aa575 and aa683, common to all interacting fragments, is required for this interaction (Fig. 2d ). To further refine this, we used additional GST-β-catenin fragments, GST-β-catenin aa575–683 and GST-β-catenin aa1–574, and conducted the interaction assay. In agreement with the above results, we observed that Smad7 interacted with β-catenin aa575–683 and not with aa1–574, leading us to conclude that the SID on β-catenin lies between aa575 and aa683 (Fig. 2e ). This region spans the 10th armadillo repeat of β-catenin and a partial region extending into the CTD. Previously, this CTD domain has been associated with β-catenin trans activation properties and has been shown to interact with MED12, TBP (TATA-binding protein), CBP (CREB-binding protein/p300), and proteins associated with chromatin regulation 35 . Fig. 2: β-catenin C terminal domain (aa575–683) comprises a Smad7 interaction domain (SID). a Coomassie blue staining of the purified proteins GST (26 kDa), GST-Smad7 (72 kDa), and β-catenin His tag (98 kDa). b Glutathione-S-transferase (GST) pull down assays were performed with purified GST-Smad7 and 6× Histidine (6×His)-β-catenin fusion protein expressed in bacteria.
c The β-catenin protein (781 aa) consists of a central region (aa 141–664) made up of 12 armadillo repeats that are flanked by distinct N- and C-terminal domains, respectively. A specific conserved helix (Helix-C) is located proximally to the C-terminal domain, adjacent to the last ARM repeat. d , e Interaction between Smad7 and β-catenin was assessed by GST pull down assay. Purified Smad7 was incubated with GST or GST-β-catenin fusion protein fragments along with glutathione-agarose beads; detection of Smad7 protein complexed with GST-β-catenin fragments was analyzed by immunoblotting with anti-Smad7 polyclonal antibody. Smad7 and β-catenin are enriched on the muscle-specific ckm promoter proximal region The regulatory region of the ckm gene ( ckm ) has been extensively characterized and has served as a paradigm for tissue-restricted transcriptional control during myogenesis 17 , 36 . Our laboratory previously documented that Smad7 associates with the promoter proximal elements of ckm 29 . We therefore used this as a test promoter to assess the function of the β-catenin:Smad7 interaction and to interrogate the role of this complex during myogenic differentiation. First, ChIP coupled with qPCR for ckm and gapdh (control) was utilized to assess whether Smad7 and β-catenin are enriched on the ckm promoter proximal region. In the absence of a Smad7 antibody efficacious for ChIP, Smad7 enrichment was confirmed by Flag-Smad7 recruitment to the ckm promoter (Fig. 3a ). Additionally, endogenous β-catenin is enriched at the ckm promoter in myogenic cells (Fig. 3b ). Next, we utilized reporter gene assays in which Smad7 and β-catenin were exogenously expressed in C2C12 cells along with a ckm (−1082 to −1262) promoter fragment driving a firefly luciferase reporter gene. These data indicate that Smad7 and β-catenin trans activate this promoter region both alone and in combination (Fig. 3c ).
Based on the interaction mapping, a mammalian expression vector for β-catenin without the SID (deletion of aa575–683) was constructed. Initially, we performed reporter gene assays of the activity of these β-catenin constructs using the TOP flash luciferase reporter gene, which acts both as a β-catenin reporter and a general Wnt signaling pathway reporter, as it contains seven TCF/LEF consensus sites to which TCF/LEF proteins bind but cannot activate the reporter gene without β-catenin. Our data indicated that the FL β-catenin activated TOP flash, whereas β-catenin 1–574 and β-catenin ΔSID could only minimally activate TOP flash compared to the FL β-catenin (supplementary Fig. S1 ). Further, when β-catenin was ectopically expressed with Smad7, there was an enhancement in TOP flash promoter activity, whereas there was no change in TOP flash activity in conditions where Smad7 was expressed in combination with β-catenin 1–574 or β-catenin ΔSID. These data revealed that, while Smad7 was able to enhance the function of the intact β-catenin, it did not co-operate with β-catenin 1–574 or β-catenin ΔSID in reporter gene assays (supplementary Fig. S1 ). We next tested whether deleting the SID would prevent the cooperative trans activation mediated by Smad7 and β-catenin on the ckm promoter. To test this idea, Smad7, FL β-catenin (β-catenin-FL), β-catenin aa1–574 or β-catenin ΔSID, along with a ckm luciferase construct, were ectopically expressed in C2C12 cells alone or in combination. These data indicate that, while Smad7 and β-catenin-FL enhanced the ckm promoter, this activity was substantially reduced in cells expressing Smad7 in combination with β-catenin aa1–574 or β-catenin ΔSID (Fig. 3d ). These data support the conclusion that the β-catenin SID is crucial for the cooperative interaction of Smad7 and β-catenin on the ckm promoter.
To address this cooperativity in a different way, we next investigated the effect of perturbing Smad7 expression on the function of β-catenin on the ckm promoter using siRNA technology. Endogenous Smad7 levels were depleted using three independent siRNAs (supplementary Fig. S2 ) and reporter gene analysis indicated a pronounced decrease in ckm promoter activity when compared to the control (Fig. 3e ). Ectopic expression of β-catenin in Smad7-depleted cells resulted in reduced ckm promoter induction, consistent with the interpretation that β-catenin requires Smad7 in order to affect promoter activation (Fig. 3f ). To further test this idea, we correspondingly depleted β-catenin expression using three independent siRNAs. Western blot analysis verified a considerable reduction in protein levels of β-catenin using two different siRNAs (si#2 and #3) as compared to controls. Under conditions in which endogenous β-catenin was depleted, we observed that Smad7-mediated ckm promoter activation was markedly reduced (Fig. 4a ). These data indicate that both Smad7 and β-catenin cooperatively activate the ckm promoter. Previously, it was shown that MyoD, the archetypal MRF 37 , interacts with β-catenin 38 . To test whether β-catenin is also required for MyoD function on this promoter, endogenous β-catenin levels were depleted in C2C12 using two different siRNAs. MyoD and Smad7-myc, either alone or in combination, were then transfected along with the ckm reporter gene in β-catenin-depleted cells. This analysis revealed that the increase in ckm promoter activity driven by Smad7 and/or MyoD was reduced by depletion of β-catenin (Fig. 4b ). These results further support the conclusion that β-catenin enhances Smad7- and MyoD-driven potentiation of ckm promoter activity. These observations, in combination with previously reported data, indicate that the Smad7:β-catenin interaction may be tethered by MyoD and is required for full ckm promoter activation. Fig. 3: Smad7 and β-catenin enhance ckm proximal promoter activity.
a , b For chromatin immunoprecipitation (IP), myoblasts expressing Smad7-Flag or un-transfected myoblasts were crosslinked with 1% formaldehyde followed by sonication to shear DNA. IP with anti-Flag or β-catenin antibody was carried out to precipitate protein–DNA complexes. Comparable IP with IgG antibody served as control. Recovered DNA was reverse crosslinked, purified, and subjected to quantitative PCR (qPCR) using primers specific for the ckm promoter to determine enrichment. Primers specific to gapdh were used as controls. c Combinations of Smad7-myc and β-catenin-myc were ectopically expressed in C2C12 cells along with a ckm luciferase reporter gene. Renilla luciferase served as transfection control. C2C12 cells transfected with empty vector (pcDNA) and reporter genes served as controls for ectopic expression. Cells were harvested for luciferase determination at 48 h after changing to differentiation media (DM) post-transfection. Normalized luciferase activity was compared to the control to determine fold changes. d C2C12 cells were transfected with Smad7-myc, β-catenin-FL(full-length)-myc, β-catenin ΔSID, and β-catenin 1-574 alone or in combination with the ckm luciferase reporter construct. e Three small interfering RNAs (siRNAs) specific for Smad7 were used to deplete the endogenous Smad7 levels in C2C12; scrambled siRNA was used as a control. f β-Catenin was transfected along with ckm luciferase. Empty vector (pcDNA) was used as a control for ectopic expression. Renilla was used as a control reporter to monitor transfection efficiency. Lysates were collected at 48 h after changing to DM post-transfection. The firefly luciferase activity under each condition was measured independently and normalized to Renilla luciferase values. Each condition is compared to the control for the three individually transfected samples to determine fold change.
Each dot represents one biological replicate, which corresponds to the mean of three technical replicates. N = 3 biological replicates per condition. The error bars represent standard error of the mean (SEM). Dunnett’s multiple comparisons test in one-way analysis of variance using GraphPad Prism 8.0 was utilized to test for statistical significance. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001 Fig. 4: Smad7 and β-catenin co-operativity on ckm is mediated through MyoD. a C2C12 cells were transfected with small interfering RNAs targeting β-catenin to deplete the endogenous β-catenin levels. Smad7-myc was transfected along with the ckm luciferase reporter gene. b MyoD and Smad7-myc, alone or in combination, were transfected along with the ckm luciferase reporter gene under β-catenin-depleted conditions. Empty vector (pcDNA) was used as a control for ectopic expression; Renilla luciferase was used to normalize for transfection efficiency. Lysates were collected at 48 h after changing to differentiation media (DM) post-transfection. Each condition is compared to the control for the three individually transfected samples to determine fold change. Each dot represents one biological replicate, which corresponds to the mean of three technical replicates. N = 3 biological replicates per condition. The error bars represent standard error of the mean (SEM). Dunnett’s multiple comparisons test in one-way analysis of variance using GraphPad Prism 8.0 was utilized to test for statistical significance. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.001 The minimal SID on β-catenin functions as a trans dominant inhibitor of β-catenin activity Since our data suggested that the SID is crucial for the function of Smad7 and β-catenin on the ckm promoter, we next considered the possibility that this domain of β-catenin might function as a more general trans dominant repressor (dominant negative) of endogenous β-catenin activity.
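The normalization and fold-change procedure described in the figure legends above (firefly activity divided by the Renilla transfection control, technical replicates averaged, each condition expressed relative to the empty-vector control) can be sketched in a few lines. This is an illustrative reconstruction only: the function name and the numbers are invented, and the paper ran its Dunnett's multiple comparisons test in GraphPad Prism rather than in code.

```python
import numpy as np

def fold_changes(firefly, renilla, control_idx=0):
    """Normalize firefly luciferase to the Renilla transfection control,
    average technical replicates per condition, and express each
    condition as fold change over the control condition."""
    ratios = np.asarray(firefly, float) / np.asarray(renilla, float)
    means = ratios.mean(axis=1)          # mean of technical replicates
    return means / means[control_idx]    # fold change vs. control

# Illustrative numbers only (three technical replicates per condition).
firefly = [[100.0, 110.0, 90.0],   # empty-vector (pcDNA) control
           [300.0, 330.0, 270.0]]  # hypothetical Smad7 + beta-catenin
renilla = [[10.0, 10.0, 10.0],
           [10.0, 10.0, 10.0]]
print(fold_changes(firefly, renilla))  # -> [1. 3.]
```

Each such fold-change value corresponds to one biological replicate (one dot in the figures); significance testing is then performed across the three biological replicates.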
To test this idea, we constructed a mammalian expression plasmid for tagged β-catenin aa575–683. Smad7 and β-catenin were ectopically expressed in C2C12 cells along with increasing concentrations of either EYFP-β-catenin aa575–683 (EYFP-SID) or Flag β-catenin aa575–683 (Flag-SID). Reporter gene analysis revealed that ckm promoter activity was significantly reduced in Smad7- and β-catenin-expressing cells in the presence of either EYFP-SID or Flag-SID (Fig. 5a ). These data indicate that the β-catenin SID can function as a dominant-negative inhibitor of β-catenin function. A similar analysis using a sprr1a promoter luciferase reporter gene that is not regulated by MyoD (data not shown) was unaffected by SID expression, suggesting that the effect of SID on ckm is specific (supplementary Fig. S3 ). We subsequently assessed the effect of the SID domain on myogenesis by analyzing the protein levels of myogenic markers with and without SID expression. Immunoblot analysis of C2C12 cells ectopically expressing the SID showed reduced protein levels of CKM and Myogenin as compared to controls, indicating an overall repression of myogenesis by exogenous SID expression (Fig. 5b ). Collectively, these data suggest that the β-catenin SID can inhibit the activity of endogenous β-catenin and the myogenic differentiation program. Fig. 5: β-Catenin aa575–683 (Smad7-interacting domain (SID)) functions as a dominant-negative inhibitor of endogenous β-catenin activity. a Smad7-myc and β-catenin-myc, alone or in combination, were transfected along with increasing amounts of either EYFP-β-catenin SID or Flag β-catenin SID and the ckm luciferase construct. Empty vector (pcDNA) was used as a control. Lysates were collected at 48 h after changing to differentiation media (DM) post-transfection. Each condition was compared to the control for the three individually transfected samples to determine fold change.
Each dot represents one biological replicate, which corresponds to the mean of three technical replicates. N = 3 biological replicates per condition. The error bars represent standard error of the mean (SEM). Dunnett’s multiple comparisons test in one-way analysis of variance using GraphPad Prism 8.0 was utilized to test for statistical significance. ** p ≤ 0.01, *** p ≤ 0.001. b Total cell lysates of cells ectopically expressing Flag β-catenin SID were collected at 48 and 72 h after changing to DM post-transfection. Un-transfected cells served as controls. Cell lysates were analyzed for endogenous β-catenin, muscle markers (Ckm, Myogenin), and Flag (β-catenin SID) by western blot analysis. An actin blot indicated protein loading of samples. The Smad7:β-catenin complex interacts with the Mediator kinase module subunits (MED13 and MED12) Previous studies 39 and our unpublished observations confirm that β-catenin directly interacts with MED12. Since it is established that MED12 and MED13 form an integral part of the Mediator kinase module 39 , 40 , we hypothesized that the composite function of the Smad7:β-catenin interaction might be to recruit the MED kinase module. This idea is supported by our Co-IP analysis, in which we observed that Smad7 and MED13 are in the same precipitated protein complex (Fig. 6a ). Furthermore, we also validated the previously reported MED12:β-catenin interaction (Fig. 6b ). Finally, we observed that the Smad7:MED13 interaction was disrupted when endogenous β-catenin was depleted (Fig. 6c, d ). The Smad7:MED13 interaction thus constitutes a novel observation characterizing Smad7 as a component of the transcriptional machinery linking promoter activity to the Mediator kinase complex. Fig. 6: Mediator subunit 13 (MED13) associates with Smad7 in a β-catenin-dependent manner. a C2C12 cells were transiently transfected with Smad7-Flag. Lysates were collected at 24 h after replenishing media post-transfection.
Co-immunoprecipitation (Co-IP) assays were performed using anti-Flag antibody to test an interaction between endogenous MED13 and Smad7. b Co-IP assays were performed using β-catenin antibody to test for an interaction between endogenous MED12 and β-catenin. IP with IgG served as controls. c Small interfering RNAs (siRNAs) specific for β-catenin were used to deplete the endogenous β-catenin levels in C2C12; scrambled siRNA served as a control. Lysates were collected at 48 h post-transfection and immunoblotted for β-catenin, anti-Flag (for Smad7-Flag), and Actin (loading control). d Co-IP with anti-Flag was repeated under β-catenin-depleted conditions to assess the interaction between endogenous MED13 and Smad7. Discussion Smad7:β-catenin as an essential component of muscle enhanceosomes Despite a considerable body of literature implicating Wnt-β-catenin signaling in muscle differentiation 26 , 41 , 42 , 43 , there is a surprising lack of mechanistic insight into how it fulfills this role. Moreover, recent in vitro and in vivo data implicating Smad7 in the control of muscle gene expression are also not understood mechanistically. Interestingly, one study 44 investigated the relationship between TGF-β and Wnt signaling and observed that TGF-β1 controls the differentiation of fibroblasts to myofibroblasts by upregulating Wnt signaling. Here we provide evidence of a direct functional interaction between Smad7 and β-catenin that serves a fundamental role in recruiting Mediator components to the well-characterized ckm gene. Our data indicate that β-catenin is inefficiently recruited to the ckm gene enhancer in muscle cells when Smad7 is depleted, and the corresponding loss of β-catenin recruitment renders this gene incapable of responding to essential differentiation cues (Fig. 7 ).
Here we present evidence indicating that a Smad7:β-catenin complex is a critical part of the transcriptional machinery at a key muscle promoter proximal element, being intrinsically necessary as a co-regulator for the MRFs in the myogenic gene expression program. Fig. 7: Proposed model of Smad7:β-catenin integration into the transcriptional holocomplex on the ckm proximal promoter region. The core ckm promoter proximal region contains essential binding sites for MEF2 proteins and MyoD/E protein complexes. β-catenin has also been previously implicated in binding to MyoD and MEF2 transcription factors. The trithorax group protein Ash2L was previously established as one of the interacting partners of MEF2D/2A. Based on the published work and the Smad7:β-catenin:MED12/13 contribution identified herein, an integrated model of the transcriptional regulation of the ckm gene is proposed. Mediator recruitment to ckm by β-catenin/Smad7 The Mediator complex was initially characterized in budding yeast 45 and has since been established as fulfilling an essential function in RNA polymerase II-mediated gene transcription from flies to mammals 46 . The multi-subunit compositional complexity of the Mediator holocomplex has made it challenging to define the full extent of its properties. What is apparent, however, is that its fundamental role is to provide a functional bridge between transcriptional regulatory proteins bound to gene enhancers and the general transcription machinery assembled at core promoters 47 , 48 , 49 , 50 , 51 , 52 , 53 . The interaction of different Mediator subunits with a variety of transcription factors thus allows the myriad cellular signaling events that converge on these transcription factors to be relayed to the transcriptional machinery and, ultimately, to programs of gene transcription 54 .
Thus, this essential activity in promoting signal-dependent transcriptional pre-initiation complex (PIC) assembly and stabilization renders many Mediator subunits essential for life: gene targeting in mice has revealed that deletion of many subunits is embryonic lethal, and in yeast all Pol II-regulated genes are dependent on Mediator 52 , 55 , 56 , 57 . The experimental dissection of Mediator has been aided by the characterization of four relatively stable sub-complexes that have been designated as the head, middle, tail, and kinase modules. In our study, we have documented that the Smad7:β-catenin complex specifically associates with the MED12 and MED13 subunits of the Mediator kinase module. The kinase module is speculated to fulfill a transient regulatory function in Mediator by promoting recruitment of all four Mediator domains to enhancers, which then transitions to a core promoter-bound Mediator complex in which the kinase module is absent. We therefore propose that Mediator recruitment to the ckm enhancer is mediated by the Smad7:β-catenin complex in muscle cells (Fig. 7 ). Implications of the β-catenin:Smad7 interaction in Rhabdomyosarcoma (RMS) In view of the role played by β-catenin:Smad7, it is perhaps worth considering the implications of this in the context of the pathology of RMS, a soft tissue pediatric cancer with features of muscle 58 . Previously, we reported that constitutive glycogen synthase kinase 3 (GSK3) activity is a feature of the embryonal form of RMS (there are two general categories: alveolar (ARMS) and embryonal (ERMS)). The result of this GSK3 activity is perpetual proteosomal degradation of β-catenin, since this is the primary function of the APC-GSK3 complex in canonical Wnt signaling 59 .
Upon Wnt signaling stimulation, GSK3 activity is substantially reduced and proteosomal degradation of β-catenin ceases, resulting in its cytoplasmic accumulation and translocation to the nucleus where it activates transcription in combination with other transcriptional regulators, such as TCF/LEF. Based on these previous observations and the current study, it is worth speculating that the absence of nuclear β-catenin function due to constitutive GSK3 activity contributes to the differentiation defect in ERMS. Promoting terminal differentiation of the myo-like cells in RMS is seen as a potentially effective therapeutic target, since differentiation, by nature, results in cell cycle withdrawal and cessation of proliferation which would cause tumor regression. There are several lines of indirect evidence supporting this idea; we recently reported that Myogenin is expressed at high levels in RMS but, despite the role of this MRF as a key terminal effector of the myogenic differentiation program, it is functionally inactive in these cells. It is therefore possible that the lack of nuclear β-catenin underlies this functional impairment due to the failure to recruit Mediator to connect Myogenin function to PIC assembly in these ERMS cells. In support of this possibility, we observed that Smad7:β-catenin is tethered to muscle promoters by its interaction with MRFs (such as MyoD and Myogenin) bound to E boxes on muscle promoter/enhancer regions. Another line of circumstantial evidence supporting this notion is that pharmacological GSK3 inhibitors have been reported to force RMS cells into a more differentiated cellular phenotype, suggesting that re-constitution of β-catenin function in RMS cells promotes their differentiation and could potentially be antitumorigenic. Further clarification of the role of β-catenin:Smad7 as a therapeutic target in RMS is therefore warranted. 
Further implications of p38 mitogen-activated protein kinase (MAPK) signaling on ckm regulation Phospho-dependent protein interactions are a signature of β-catenin function. There is ample evidence of kinase-mediated phosphorylation modulating the affinity of PPIs in the canonical Wnt pathway, resulting in important outcomes for Wnt-dependent target gene activation 33 . Specifically, we recently reported that a p38 MAPK-dependent interaction with the MEF2 transcription factor enhances β-catenin nuclear retention and activity 33 . During differentiation, MyoD and MEF2 bind muscle-specific promoters and enhancers, leading to the recruitment of co-activators (including p300) and the basal transcriptional machinery to establish a transcriptionally poised promoter. The previously implicated mechanism is that p38 MAPK activates ckm expression by phosphorylation-dependent recruitment of the histone methyltransferase Ash2L by MEF2D 60 . In view of the current observations, it is also possible that p38 MAPK may promote β-catenin recruitment to ckm through its interaction with MEF2. In summary, we have documented a PPI between Smad7 and β-catenin that may serve a fundamental role in the control of myogenic gene transcription. These observations may have implications for our understanding of the molecular control of myogenic differentiation during embryonic development and adult muscle regeneration. | York University scientists have uncovered a unique set of genes that play a role in muscle cellular gene expression and differentiation which could lead to new therapeutic targets to prevent the spread of muscle cancer. The researchers analyzed gene networks in muscle cells and found that the Smad7 and β-catenin proteins work cooperatively inside the body to regulate muscle cell differentiation, growth and repair. When these regulatory proteins work in harmony, they control the pathway for normal gene expression, resulting in normal skeletal muscle cells. 
The study, published in the journal Cell Death & Disease, indicates that a dysfunctional relationship between the Smad7 and β-catenin complex can lead to impaired muscle cell differentiation, a hallmark of some soft tissue cancers such as Rhabdomyosarcoma (RMS). This rare cancer, which most often affects children, forms in soft tissue, mostly skeletal muscle tissue, and sometimes in hollow organs like the bladder or uterus. "What happens in those rhabdomyosarcoma cells is that they have a muscle cell-like character, but the difference is that normal muscle cells stop dividing," said John McDermott, a professor in the Department of Biology in the Faculty of Science, who supervised the study and is a contributing author. McDermott said these cells look like muscle cells, in terms of the way they function and their phenotype, but they don't stop dividing, which is why they form tumors at various sites in the body. "Our idea is that part of the reason why those cells are defective in the differentiation program, which would mean that they would stop dividing, is that the β-catenin complex is being degraded in those cells because of an anomaly in the signaling pathway that controls that," said McDermott. "If we can stabilize the β-catenin and Smad7 complex in those cells, you could potentially encourage them to differentiate and stop proliferating, which would mean that you'd stop those cells from growing in the tumor." The research was conducted in York's Muscle Health Research Centre, the first of its kind in Canada, which focuses on the importance of skeletal muscle to the overall health and well-being of Canadians. This new molecular genetic finding could lead to strategies for cancer treatments that target these specific molecules. The study also defines new molecular targets for therapeutic interventions in muscle wasting and cancer.
"Until you know how things work normally, it's very hard to target anything specific, so identifying the normal function of molecules is essential before assessing abnormal function in cancer cells," said McDermott. "This then allows therapeutic targeting of specific molecules in order to develop pharmacology to treat the condition, or in some cases pre-existing pharmacology would be used." The research team, led by Ph.D. student Soma Tripathi and including research associate Tetsuaki Miyake, focused on understanding the role of transcription factors in orchestrating tissue-specific gene expression and differentiation. They did this by identifying DNA binding proteins that are involved in transcriptional regulation during muscle development. The study also identified new regulators of muscle regeneration, which could open doors for the pharmaceutical industry to develop new treatments to address the normal but debilitating loss of muscle in the aging population. "Muscle regeneration is a highly complex process and is regulated by a variety of transcription factors, which are essentially proteins that help turn genes on or off by binding to specific genes within the genome," said Tripathi. "We believe two such transcription factors, Smad7 and β-catenin, play a key role in the specific pattern of gene expression required for muscle development and repair." | 10.1038/s41419-019-1615-0
Earth | Clouds may be less climate-sensitive than assumed | Raphaela Vogel, Strong cloud-circulation coupling explains weak trade cumulus feedback, Nature (2022). DOI: 10.1038/s41586-022-05364-y. www.nature.com/articles/s41586-022-05364-y Observations refute the idea that warming strongly reduces cloudiness, Nature (2022). DOI: 10.1038/d41586-022-03640-5 , doi.org/10.1038/d41586-022-03640-5 Journal information: Nature | https://dx.doi.org/10.1038/s41586-022-05364-y | https://phys.org/news/2022-11-clouds-climate-sensitive-assumed.html | Abstract Shallow cumulus clouds in the trade-wind regions cool the planet by reflecting solar radiation. The response of trade cumulus clouds to climate change is a key uncertainty in climate projections 1 , 2 , 3 , 4 . Trade cumulus feedbacks in climate models are governed by changes in cloud fraction near cloud base 5 , 6 , with high-climate-sensitivity models suggesting a strong decrease in cloud-base cloudiness owing to increased lower-tropospheric mixing 5 , 6 , 7 . Here we show that new observations from the EUREC 4 A (Elucidating the role of cloud-circulation coupling in climate) field campaign 8 , 9 refute this mixing-desiccation hypothesis. We find the dynamical increase of cloudiness through mixing to overwhelm the thermodynamic control through humidity. Because mesoscale motions and the entrainment rate contribute equally to variability in mixing but have opposing effects on humidity, mixing does not desiccate clouds. The magnitude, variability and coupling of mixing and cloudiness differ markedly among climate models and with the EUREC 4 A observations. Models with large trade cumulus feedbacks tend to exaggerate the dependence of cloudiness on relative humidity as opposed to mixing and also exaggerate variability in cloudiness. Our observational analyses render models with large positive feedbacks implausible and both support and explain at the process scale a weak trade cumulus feedback. 
Our findings thus refute an important line of evidence for a high climate sensitivity 10 , 11 . Main Earth’s climate strongly depends on the abundance and behaviour of its smallest clouds. Shallow trade-wind cumulus clouds are rooted in the turbulent sub-cloud layer and form when thermals rise above the lifting condensation level 12 . They may grow only a few hundred metres high in dry environments or become positively buoyant and rise up to the trade-wind inversion, where they detrain condensate into stratiform cloud layers. Trade cumuli populate most of the subtropical oceans and cool the planet by reflecting the incoming solar radiation. Owing to their large geographical extent, small errors in predicting the way trade cumuli respond to warming can have a large effect on the global radiative budget. This explains why shallow cumuli in the trades are a main source of spread in the estimates of climate sensitivity of climate models 1 , 2 , 3 , 4 . Cloudiness near the base of the cumulus layer makes up two-thirds of the total cloud cover in the trades 13 and its change with warming governs the strength of the trade cumulus cloud feedback in climate models 5 , 6 . Reductions in cloud-base cloudiness in climate models are tightly coupled with increases in lower-tropospheric mixing owing to convective and large-scale circulations 5 , 6 , 7 . On the basis of this strong negative coupling between mixing and cloudiness, the hypothesis emerged that enhanced convective mixing might desiccate the lower cloud layer and reduce cloudiness in the trades 7 . This mixing-desiccation hypothesis suggests that the moisture transported by convection from the sub-cloud layer to the trade inversion is compensated by downward mixing of drier air and evaporation of clouds near cloud base. 
The mechanism—which is expected to become more pronounced with warming owing to the nonlinear Clausius–Clapeyron relationship—is consistent with idealized high-resolution simulations of nonprecipitating trade cumuli 14 and with the behaviour of climate models that have a strongly positive trade cumulus feedback 5 , 7 , 15 . However, the mixing-desiccation hypothesis has never been tested with observations. Using the convective mass flux at cloud base, M , as a proxy for lower-tropospheric convective mixing, the hypothesis can be tested by analysing the relationship between M and the mean relative humidity ( \({\mathcal{R}}\) ) and cloud fraction ( C ) at cloud base in observations, with \(C\propto {\mathcal{R}}\propto {M}^{\beta }\) and β < 0 suggesting the mixing-desiccation mechanism to be present in nature (Fig. 1a ). Fig. 1: Illustration of two mechanisms for the coupling of mixing and cloudiness. a , The mixing-desiccation mechanism contends that E increases in response to an increase in M , which leads to a reduction in \({\mathcal{R}}\) and cloud-base cloudiness C , and a relationship \(C\propto {\mathcal{R}}\propto {M}^{\beta }\) with β < 0. b , The mesoscale motion control of cloudiness instead suggests that M is equally controlled by both E and W , such that M is uncorrelated to \({\mathcal{R}}\) and β > 0. Full size image The mixing-desiccation mechanism is based on several assumptions that might not be operating in nature. M is commonly defined as the product of the cloud fraction and the in-cloud vertical velocity, and its variability is mostly governed by the area coverage of active clouds 16 , 17 , defined as saturated and buoyant updrafts that ventilate the sub-cloud layer. If variability in the in-cloud vertical velocity near cloud base is small, a positive relationship between C and M is expected ( β > 0; Fig. 1b ). 
This was demonstrated for nonprecipitating trade cumuli using Doppler radar data 17 , 18 and seems at odds with the mixing-desiccation hypothesis. Yet active clouds represent only half of the total C (refs. 19 , 20 ) and the lifetime and variability of passive clouds, such as the detritus of decaying clouds, might be more sensitive to \({\mathcal{R}}\) and mixing-induced drying of their environment than to M . The sub-cloud-layer mass budget provides a theoretical basis for interpreting the mixing-desiccation mechanism. It can be expressed as a budget of the sub-cloud-layer height h , $$\frac{\partial h}{\partial t}+{V}_{h}\cdot \nabla h=E+W-M,$$ (1) in which the entrainment rate, E , representing the mass source owing to the entrainment of dry and warm cloud layer air, and the mesoscale vertical velocity, W , are balanced by the mass export owing to the convective mass flux, M (ref. 20 ). Note that we define M as the (mass) specific mass flux, which has units of velocity (see Methods ). E is the only term directly affecting the sub-cloud-layer moisture and heat budgets 21 , 22 . If an increase in M is mostly balanced by an increase in E , a drying and warming of the sub-cloud layer and a reduction in \({\mathcal{R}}\) and C is expected (Fig. 1a ). The trades, however, exhibit strong mesoscale convective organization, which is linked to the presence of mesoscale circulations and substantial variability in W (refs. 20 , 23 , 24 , 25 ). This variability in W could contribute to variability in M without directly affecting \({\mathcal{R}}\) (Fig. 1b ). An increase in M could also produce increased inversion cloudiness and thus increased total cloud cover, compensating the radiative effects of a potential decrease in C . The diversity of cloud types and the large variability in W in the trades thus call into question the mixing-desiccation mechanism as the dominant control of C and trade cumulus feedbacks. 
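Rearranged for M, the budget in equation ( 1 ) gives the residual estimate used in the analysis; a minimal numerical sketch using the campaign-mean terms reported in the paper:

```python
def mass_flux_residual(E, W, dh_dt=0.0, adv_h=0.0):
    """Convective mass flux at the sub-cloud-layer top from eq. (1):
    dh/dt + Vh.grad(h) = E + W - M  =>  M = E + W - dh/dt - Vh.grad(h).
    All terms in mm/s."""
    return E + W - dh_dt - adv_h

# Campaign-mean values reported in the paper: E = 18.3 mm/s, W = -0.9 mm/s,
# with the tendency and advection terms near zero on average.
M = mass_flux_residual(E=18.3, W=-0.9)  # 17.4 mm/s, the reported mean M
```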
The EUREC 4 A field campaign was conceived to test the mixing-desiccation hypothesis 8 , 9 . EUREC 4 A took place in January and February 2020 near Barbados, a region selected as a source of data because clouds in its vicinity are representative for the entire trade-wind belt 26 . During EUREC 4 A, we made measurements designed to quantify the magnitude and (co-)variability of M , C and \({\mathcal{R}}\) over one month, which was characterized by substantial variability in the mesoscale convective organization 27 and the large-scale circulation 9 (see Methods ). With the help of these measurements, we are able to test the mixing-desiccation hypothesis with observations for the first time. Observations of M , C and \(\boldsymbol{\mathcal{R}}\) co-variations During EUREC 4 A, we dropped more than 800 dropsondes from the HALO aircraft flying at about 10 km altitude along 1-h circles of 220 km diameter 28 , 29 . We use the dropsonde data to estimate M at the sub-cloud-layer top as a residual of the mass budget (equation ( 1 )) on the 3-h scale of three consecutive circles (see Methods ). Figure 2a shows a large day-to-day variability of M , with higher values at the beginning and end of the campaign, and a campaign mean of 17.4 ± 7.5 mm s −1 (mean ± standard deviation σ ). M shows a pronounced diurnal cycle (Extended Data Fig. 1 ), with larger values around sunrise and smaller values in the afternoon (consistent with refs. 20 , 30 ). The mass budget estimates are robust to changes in the estimation procedure and consistent with independent data ( Methods and Extended Data Fig. 2 ). Fig. 2: Time series of mixing and cloudiness during EUREC 4 A. a – d , Measurements of M ( a ), E and W ( b ), C ( c ) and \({\mathcal{R}}\) ( d ), with filled symbols representing the 3-h scale and open symbols representing the 1-h scale. The vertical bars in a – c show the estimation uncertainty at the 3-h scale (see Methods section ‘Uncertainty estimation’). 
The \({\mathcal{R}}\) in d is shown for both the HALO (blue) and ATR (green) aircraft, with the ‘X’ markers representing the data points that are excluded in the correlations owing to inconsistent sampling of the mesoscale cloud patterns between the two aircraft. The campaign mean ± 1 σ is shown on the left side of each panel. The entrainment rate E is computed as the ratio of the scaled surface buoyancy flux and the buoyancy jump across h (equation ( 2 ) and Extended Data Fig. 3 ). E averages to 18.3 ± 6.4 mm s −1 across the campaign (Fig. 2b ) and also shows a pronounced diurnal variability (Extended Data Fig. 1 ). E is mostly controlled by variability in the surface buoyancy flux (Extended Data Fig. 4b ). It is the strengthening of winds and surface fluxes that contributes most to the increase in E and M in the second half of EUREC 4 A. W is, with −0.9 ± 6.7 mm s −1 , on average nearly zero. Variability in W , however, is substantial and contributes slightly more to variability in M compared with E (Extended Data Fig. 4a ). So although M ≈ E holds on average, consistent with the mixing-desiccation hypothesis (Fig. 1a ), variability in M is controlled by both E and W . Figure 2c shows the new measurements of the cloud-base cloud fraction C from combined horizontally staring lidar and radar on board the ATR aircraft flying near cloud base 31 . C is, with 5.4 ± 3.1%, both small and highly variable. The variability of C on the 3-h scale is substantially larger than variability on synoptic and longer timescales 13 . The robustness of C is demonstrated by the internal consistency among complementary and independent measurements in terms of measurement techniques and spatial sampling 31 . The \({\mathcal{R}}\) at cloud base is robustly around 86% (Fig. 2d ), except for a few outliers. Three data points with much lower \({\mathcal{R}}\) for ATR compared with HALO (marked with ‘X’ in Fig.
2d ) are excluded in the following analyses, as these situations were associated with air masses that were sampled differently by the two aircraft (see Methods and Fig. A2 in ref. 31 ). Despite being fundamental quantities to understand climate sensitivity, the challenging nature of observing M and C so far prevented an observational analysis of the relationship between mixing and cloud-base cloudiness. With the EUREC 4 A observations presented here, we are now able to test the mixing-desiccation hypothesis with data. Data refute mixing-desiccation hypothesis The cloud-base cloud fraction is suggested to be controlled both dynamically through M and thermodynamically through \({\mathcal{R}}\) . We can therefore express C as a multiple linear regression \(\hat{C}={a}_{0}+{a}_{M}\widetilde{M}+{a}_{{\mathcal{R}}}\widetilde{{\mathcal{R}}}\) , in which \(\widetilde{(\,)}\) represents standardized values (for example, \(\widetilde{M}=M/{\sigma }_{M}\) ). Figure 3a shows that the observed C and the reconstructed \(\hat{C}\) agree very well ( r = 0.83 [0.80, 0.91], with values in the square brackets representing the 25th and 75th quartiles of bootstrapped correlations, respectively), demonstrating that M and \({\mathcal{R}}\) dominate variability in C . Fig. 3: Relationships among M , \(\boldsymbol{\mathcal{R}}\) and C . a – d , The relationships between the observed C and the reconstructed \(\hat{C}\) from the regression \(\hat{C}={a}_{0}+{a}_{M}\widetilde{M}+{a}_{{\mathcal{R}}}\widetilde{{\mathcal{R}}}\) ( a ), M and \({\mathcal{R}}\) ( b ), M and C ( c ) and \({\mathcal{R}}\) and C ( d ) are shown at the 3-h scale. The error bars represent the estimation uncertainty for M and C and the sampling uncertainty for \({\mathcal{R}}\) (see Methods ). The dotted line in a is the 1:1 line. The size of the markers in b represents C . The shading in c represents the scaling for C ∝ 2 M / w * using the mean ± 2 σ of the velocity scale w *. 
The grey ‘X’ markers represent data that are excluded in the correlations owing to inconsistent sampling between the two aircraft (see Fig. 2d and Methods ). The mixing-desiccation mechanism contends that, as M increases, E increases and leads to a reduction in \({\mathcal{R}}\) . The anticorrelation of E and \({\mathcal{R}}\) is confirmed by the observations ( \({r}_{E,{\mathcal{R}}}=-\,0.47\,[-0.62,-\,0.32]\) ; Extended Data Fig. 4d ). But W is also correlated to \({\mathcal{R}}\) ( \({r}_{W,{\mathcal{R}}}=0.48\,[0.29,0.62]\) ; Extended Data Fig. 4e ). W does not directly affect the thermodynamic properties of the sub-cloud layer 22 , as it transports mass with the same properties of the well-mixed sub-cloud layer. The positive correlation between W and \({\mathcal{R}}\) is thus probably connected to a self-aggregation feedback leading to a net convergence of moisture into areas that are already moist 25 , 32 , 33 . The opposing correlations of E and W with \({\mathcal{R}}\) lead to a negligible anticorrelation of M and \({\mathcal{R}}\) ( r = −0.08 [−0.26, 0.10]; Fig. 3b ). Although this makes M and \({\mathcal{R}}\) independent predictors of C , it contrasts with the expected desiccation effect of increased mixing. The basic premise of the mixing-desiccation hypothesis thus breaks down in the presence of strong variability in W . Figure 3c further shows a pronounced positive correlation between C and M ( r = 0.72 [0.64, 0.81]), demonstrating that M explains more than 50% of variability in C . The EUREC 4 A data are therefore in line with a more direct relation C ∝ M β and a β > 0 (Fig. 1b ). The tight connection between C and M is also consistent with physical understanding represented in the scaling \(C\approx 2{C}_{{\rm{c}}{\rm{o}}{\rm{r}}{\rm{e}}}\propto 2M/{w}^{\ast }\) , in which \({C}_{{\rm{core}}}\) is the area fraction of active cloud cores and w * is the Deardorff vertical velocity scale (see Methods and ref. 24 ).
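The scaling \(C\approx 2{C}_{{\rm{core}}}\propto 2M/{w}^{\ast }\) can be checked against the campaign-mean M; the w* value below is an assumed, typical sub-cloud-layer magnitude, not a number taken from the paper.

```python
def cloud_fraction_from_scaling(M, w_star):
    """Cloud-base cloud fraction implied by C ≈ 2*C_core ≈ 2*M/w*.
    M and w_star must share units (here m/s)."""
    return 2.0 * M / w_star

# Campaign-mean M = 17.4 mm/s; w* ≈ 0.65 m/s is an assumed typical value.
C = cloud_fraction_from_scaling(0.0174, 0.65)  # ≈ 0.054, i.e. about 5%
```

With these inputs the scaling lands close to the observed mean cloud-base cloud fraction of 5.4%.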
The correlation of C with \({\mathcal{R}}\) is weaker ( r = 0.36 [0.16, 0.56]; Fig. 3d ). These conclusions are robust to changes in the estimation procedure of M and to independent estimates of C (Extended Data Fig. 5 ). The relationships exposed by the EUREC 4 A data are thus in opposition to the mixing-desiccation hypothesis, which contends that increasing mixing (larger M ) leads to a desiccation of the lower cloud layer (smaller \({\mathcal{R}}\) ) and a reduction in cloud-base cloudiness (smaller C ). We also find a positive relationship between C and another indicator of lower-tropospheric mixing (Extended Data Fig. 4f ) and a weak positive correlation between M and the total projected cloud cover (Extended Data Fig. 6 ). Hence, the EUREC 4 A data emphasize dynamic factors—the convective mass flux M and the mesoscale vertical velocity W —as dominant controls of C , rather than thermodynamic factors related to the mixing-desiccation mechanism. Models underestimate strong cloud–circulation coupling How consistent is the present generation of climate models with our observations? To assess how climate models represent the relationship between mixing and cloudiness, we use ten models from the Cloud Feedback Model Intercomparison Project (CFMIP) 34 that provide the necessary pointwise M , C and \({\mathcal{R}}\) output at high temporal resolution near the EUREC 4 A domain (see Methods ). In contrast to the consistency among many independent EUREC 4 A observations, Fig. 4a shows that the models strongly differ in their magnitude and variability of M and C . Although some models predict unrealistically low M (CanAM4, MIROC6 and MPI-ESM), the IPSL-CM6A has a five times larger mean M compared with the EUREC 4 A observations. Except for IPSL-CM6A, all models strongly overestimate variability in C (see also ref. 35 ) and 8 of 10 models also overestimate the magnitude of C .
This is partly owing to the tendency of models to produce stratocumulus clouds in this shallow cumulus regime 36 , 37 (evident in the strong increases in C (up to 50–100%) above a critical \({\mathcal{R}}\) of about 94%; see Extended Data Fig. 7 ). By contrast, the observations indicate no occurrence of C > 13% or \({\mathcal{R}}\) > 94%. The models that produce such more stratocumulus-like conditions with \({\mathcal{R}}\) > 94% more than 15% of the time (Extended Data Fig. 8a ) are labelled with open symbols in Fig. 4 . Fig. 4: Relationships in climate models and link to trade cumulus feedback. a , Mean ± σ /2 of M and C . b , Correlation coefficients r between M and C ( r M , C ) and \({\mathcal{R}}\) and C ( \({r}_{{\mathcal{R}},C}\) ). c , Ratio of the standardized multiple linear regression coefficients \({a}_{M}/{a}_{{\mathcal{R}}}\) and the thermodynamic component of the trade cumulus radiative feedback. The models are coloured in bins of feedback strength. Open symbols refer to models with frequent stratocumulus (defined as having \({\mathcal{R}}\) > 94% more than 15% of the time; see Extended Data Fig. 8a ). The grey shading represents the 25th to 75th quartile and the grey bars the 95% confidence interval of bootstrapped observational values. For plotting purposes, a shows the mean \(\bar{M}\) -30 for IPSL-CM6A and c shows the ratio \({a}_{M}/{a}_{{\mathcal{R}}}+3\) for BCC-CSM2. In c , the upper end of the observational 95% confidence interval (at 6.75) is cropped. Only the BCC-CSM2 model represents the pronounced positive correlation between C and M observed during EUREC 4 A at the 3-h scale (Fig. 4b ). Six of the other models have a correlation coefficient r < 0.05, of which three models even show a negative correlation. Most models thus strongly underestimate the tight coupling between clouds and convection observed in EUREC 4 A.
Instead, these six models are more in line with the mixing-desiccation mechanism and a β < 0 (Fig. 1a ), even though this is not mediated by a pronounced negative correlation between M and \({\mathcal{R}}\) (Extended Data Fig. 8c ). All the models also strongly underestimate variability in W (Extended Data Fig. 8b ), as they do not represent the sub-grid processes leading to the observed variability in the mesoscale vertical velocity (for example, shallow circulations driven by differential radiative cooling 38 or local sea-surface temperature (SST) gradients 39 ). The relationships between C and \({\mathcal{R}}\) are more consistent among most models (Fig. 4b ) and are also more consistent with the observations compared with the relationships between C and M . In contrast with the observations, clouds as parameterized by climate models are more thermodynamically than dynamically controlled. The misrepresentation of the relative sensitivity of C to changes in M or \({\mathcal{R}}\) by all models is encapsulated in the ratio of the standardized regression coefficients \({a}_{M}/{a}_{{\mathcal{R}}}\) from the regression \(\hat{C}={a}_{0}+{a}_{M}\widetilde{M}+{a}_{{\mathcal{R}}}\widetilde{{\mathcal{R}}}\) . The model samples lie completely outside the EUREC 4 A data (Fig. 4c ). All models, with one exception, substantially underestimate the value of \({a}_{M}/{a}_{{\mathcal{R}}}\) compared with the observations, highlighting that, in the climate models, variability in C is primarily controlled by variations in \({\mathcal{R}}\) rather than variations in M . Although BCC-CSM2 seems credible in terms of the magnitude and relationship of C and M , its credibility is eroded by its unrealistic relationship between C and \({\mathcal{R}}\) (Extended Data Fig. 7 ), and thus an implausible \({a}_{M}/{a}_{{\mathcal{R}}}\) of −5.2. 
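The standardized regression \(\hat{C}={a}_{0}+{a}_{M}\widetilde{M}+{a}_{{\mathcal{R}}}\widetilde{{\mathcal{R}}}\) and the \({a}_{M}/{a}_{{\mathcal{R}}}\) diagnostic can be sketched as below. Note the paper's standardization divides by the standard deviation without centring ( \(\widetilde{M}=M/{\sigma }_{M}\) ); the synthetic data and the chosen coefficients are invented for the example.

```python
import numpy as np

def standardized_fit(M, R, C):
    """Least-squares fit of C = a0 + aM*(M/std(M)) + aR*(R/std(R));
    returns (a0, aM, aR). The ratio aM/aR compares the dynamic (M)
    and thermodynamic (R) control of C."""
    X = np.column_stack([np.ones_like(M), M / M.std(), R / R.std()])
    coef, *_ = np.linalg.lstsq(X, C, rcond=None)
    return coef

# Synthetic check: build C with known coefficients and recover them.
rng = np.random.default_rng(1)
M = rng.normal(17.4, 7.5, 200)          # mm/s, paper's mean ± σ
R = rng.normal(86.0, 3.0, 200)          # %, spread assumed for illustration
C = 2.0 + 3.0 * M / M.std() + 1.0 * R / R.std()
a0, aM, aR = standardized_fit(M, R, C)  # recovers (2.0, 3.0, 1.0)
```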
At odds with the observations, in most models, M and \({\mathcal{R}}\) are only weak predictors of C , as evident in the low coefficient of determination ( r 2 ) of the multiple linear regression of \(\hat{C}\) (Extended Data Fig. 8c ). The cloud parameterizations of the models thus fail in capturing the key relationships between C and the dynamic and thermodynamic environment observed in nature. Implications for trade cumulus feedbacks The EUREC 4 A observations provide robust estimates of the mean, variability and coupling of M , C and \({\mathcal{R}}\) in contrasted trade cumulus environments. Although the observed variability is substantial, the variability simulated by climate models is unrealistic, as are the drivers of this variability. The EUREC 4 A data thus provide a physical test of the capacity of models to represent the interplay of the processes active in regulating trade-wind cloud amount and may guide future model development. Moreover, the fact that the relationships at the 3-h process scale are consistent with the relationships at the monthly timescale ( r ≥ 0.84; Extended Data Fig. 8e,f ) suggests that the underlying fast physical processes that couple M , \({\mathcal{R}}\) and C in the models are largely invariant with the timescale. The relationships derived from the EUREC 4 A observations can therefore also be used to evaluate the credible range of trade cumulus feedbacks in the climate models. Figure 4b demonstrates that all models with a strong trade cumulus feedback represented by a change in the cloud radiative effect (ΔCRE) with warming exceeding 0.37 W m −2 K −1 (reddish colours in Fig. 4c ) represent the refuted mixing-desiccation mechanism with a negative (or very weak) correlation between M and C . Also, these four models exaggerate both the coupling of C to \({\mathcal{R}}\) (small \({a}_{M}/{a}_{{\mathcal{R}}}\) ; Fig. 4c ) and the variability in C ( σ C ; Extended Data Fig. 8d ). 
By contrast, the models that are closer to the observations tend to have a weaker positive ΔCRE with warming. The EUREC 4 A observations of the physical processes that drive the short-term variability of C thus rule out the mechanism that leads to the largest positive trade cumulus feedbacks in current climate models. By showing that mesoscale motions inhibit the mixing-desiccation mechanism, we refute an important physical hypothesis for a large trade cumulus feedback. In the spirit of the storyline approach for constraining equilibrium climate sensitivity 10 , our findings thus refute an important line of evidence for a strong positive cloud feedback and thus a large climate sensitivity. The EUREC 4 A observations therefore support recent satellite-derived constraints from observed natural variability 37 , 40 and climate-change experiments using idealized high-resolution simulations 41 , 42 , which suggest that a weak trade cumulus feedback is more plausible than a strong one. Moreover, for the first time, we take into account all types of cloud present in the trades, including the optically thinnest ones that are usually missed in satellite observations 43 , and consider the full range of mesoscale variability that was not represented in idealized simulations of cloud feedbacks. We also provide an explanation for the inconsistency of models with large positive feedbacks: in these models, the observed tight coupling between convective mixing and cloudiness is absent; instead, C is primarily controlled thermodynamically by \({\mathcal{R}}\) , which exaggerates variability in C and feedbacks to warming. By not representing the variability in mesoscale circulations, the models miss an important process regulating trade cumulus clouds. Future research should focus on better understanding the processes controlling these mesoscale circulations and how they might change in a warmer climate. 
Methods EUREC 4 A field campaign We use data from the EUREC 4 A field campaign, which took place in January and February 2020 and was anchored in Barbados 8 , 9 . We focus on measurements made by the HALO 29 and ATR aircraft 31 , which flew coordinated patterns in the approximately 220-km diameter EUREC 4 A circle centred at 13.3° N, 57.7° W. The HALO aircraft flew three circles at 10.2 km altitude in 200 min (about 60 min per circle plus a 15-min break between circles) and launched dropsondes every 30° of heading (about 12 sondes per circle) to characterize the large-scale environment 28 . At the same time, the ATR aircraft flew 2–3 50-min rectangle patterns inside the circle near cloud base and measured the cloud fraction with horizontally staring cloud radar and backscatter lidar, and with several in situ probes and sensors 31 . Observations from the Barbados Cloud Observatory (BCO) 15 and the RV Meteor 44 provide further context at the western and eastern boundaries of the EUREC 4 A circle. A typical flight day of HALO consisted of two sets of three consecutive circles lasting about 3 h and comprising 30–36 sondes (sometimes defined as circling 9 , 22 , 29 ). The 3-h circle sets are separated by a 1.5-h break to refuel the ATR. The circle patterns were flown from 22 January to 15 February with different starting times between 04:30 and 19:30 local time (LT) to sample the diurnal cycle. Four more single-dropsonde circles are also used, three of which were flown by the P3 aircraft 45 during nighttime (starting at 00:15 LT on 9 and 10 February, and at 01:30 LT on 11 February). In total, the dataset comprises 73 circles (1-h scale) and 24 sets of three consecutive circles (3-h scale), for which 16 have coincident ATR data. We assume that HALO and ATR sample comparable conditions on the 3-h scale. This is confirmed by the similar cloud-base \({\mathcal{R}}\) of the aircraft during most flights (Fig. 
2d ), except for the first 3-h circle set on 2 February and the second 3-h circle set on 7 and 13 February, in which the spatial scale of the cloud organization was larger than the scale of the domain sampled by the ATR. These three 3-h circle sets are marked in the figures and excluded from the calculated correlations. The spatial scale of the observations represents the lower end of Orlanski’s 46 meso- α scale and is comparable in size with a climate model grid box. The 200–300-km scale is the relevant scale of the cloud processes for a trade cumulus ensemble and also the scale that convective parameterizations target. It lies in between the \({\mathcal{O}}\) (1 km) scale of individual clouds and the synoptic scale of \({\mathcal{O}}\) (1,000 km), and is associated with the emergence of the prominent trade cumulus cloud organization patterns 47 . As the air masses are advected by about 30 km per hour (at the campaign mean wind speed of about 9 m s −1 at 1 km height), the spatial sampling of the 220-km diameter circle does not differ substantially between the 1-h and 3-h timescales, which motivates our nomenclature focus on the time rather than space scale. Using the measurements, model and reanalysis data, we would not expect our results to change substantially if the analysis domain were increased or reduced by a factor of two or more (see Methods section ‘Mass flux estimation’ for a discussion of the scale sensitivity of the results). The Barbados region was chosen as the location of EUREC 4 A because shallow trade cumulus clouds are the dominant cloud type in the area during winter 13 . Furthermore, clouds in the Barbados region are similar to clouds across the trade-wind regions in both observations and models 26 . The mean meteorological conditions during the EUREC 4 A campaign, as sampled by the dropsondes, also correspond well to the average January–February conditions from 12 years of data from the ERA-Interim reanalysis 48 (their Fig. 
5), albeit with a 10% larger 850 hPa relative humidity during EUREC 4 A (the EUREC 4 A dropsondes also have an approximately 8% larger relative humidity compared with the 2013–2022 average in ERA5, not shown). Also, all four prominent patterns of mesoscale cloud organization 47 were present during the campaign 27 . The conclusions drawn from the EUREC 4 A data are thus relevant across the tropics and for climate timescales. Observations For estimating the cloud-base mass flux M , \({\mathcal{R}}\) and many other variables, we use dropsonde data from the JOANNE dataset 28 , namely Level 3 (gridded quality-checked sondes) and Level 4 (circle products) vertical profiles of thermodynamic quantities, wind and mesoscale vertical velocity, W . The HALO dropsondes are corrected for a dry bias by multiplying the relative humidity by 1.06 (ref. 28 ). For the cloud-base cloud fraction C , we use the BASTALIAS lidar-radar synergy product 31 , which includes both cloud and drizzle (but not rain) and constitutes an upper bound on C . We also test the relationships for three further estimates of C : The non-drizzling cloud product from the radar-lidar synergy ( C only ), which excludes drizzle and constitutes a lower bound on C . In situ estimates from a microphysical probe defined on the basis of thresholds of liquid water content plus particle size ( C pma ). In situ high-frequency (25-Hz) humidity sensor, with cloud defined as relative humidity ≥98% ( C turb ). The in situ sensors measure the along-track C , whereas the lidar-radar synergy samples clouds inside the rectangle at a distance up to 8 km from the aircraft 31 . Despite pronounced differences in the measurement principles and sampling, Fig. 18 of ref. 31 demonstrates the internal consistency and robustness among the independent C estimates. 
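The in situ \({C}_{{\rm{turb}}}\) definition above (cloud wherever the 25-Hz relative humidity reaches at least 98%) reduces to a simple threshold mask; a minimal sketch with invented sample values:

```python
import numpy as np

def cloud_fraction_rh(rh_percent, threshold=98.0):
    """Along-track cloud fraction from a high-rate humidity record:
    the fraction of samples with relative humidity >= threshold (%)."""
    rh = np.asarray(rh_percent, dtype=float)
    return float(np.mean(rh >= threshold))

# Hypothetical 25-Hz record: two of the four samples count as 'cloudy'.
c_turb = cloud_fraction_rh([97.2, 98.4, 99.1, 85.0])  # -> 0.5
```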
The ATR turbulence measurements also include measurements of vertical updraft and downdraft velocities 49 , from which an in-cloud mass flux M turb is computed by multiplying C turb by the in-cloud vertical velocity. Further HALO aircraft measurements used are total projected cloud cover (CC) estimates from the differential absorption lidar WALES, the hyperspectral imager specMACS and the cloud radar HAMP 29 . From these cloud masks, we derive the CC along the 1-h circle. For specMACS and HAMP, the cloud detection is ambiguous and we consider both the ‘probably cloudy’ and the ‘most likely cloudy’ flags in our CC estimates. We also use ceilometer and cloud radar data from the BCO and the RV Meteor to test the robustness of the sub-cloud-layer height definition (not shown). Radar cloud fraction profiles are obtained by correcting the hydrometeor fraction profiles with ceilometer data during periods of rain (see ref. 30 for a description of the correction applied). The BCO cloud radar data also demonstrate that missing the level of maximum cloud-base cloud fraction in 3-h averages by, say, 60 m does not affect the variability of C (correlations of r = 0.99 and r = 0.93 with the maximum C when 60 m above and below the peak level, respectively) and only marginally affects its magnitude (18% and 33% smaller relative to the maximum C for being 60 m above or below the peak level, respectively). So only if the ATR flight level deviated from the height of maximum cloudiness in ways that co-varied with M would we expect such a height difference to influence our analysis. As the ATR aircraft usually flew slightly above h (Extended Data Fig. 3a ) and because it sampled many more clouds in 3 h compared with the stationary BCO, a potential influence of missing the peak level is deemed not to bias our findings. 
Surface buoyancy flux To estimate the surface buoyancy flux ( \(\overline{{w}^{{\prime} }{\theta }_{{\rm{v}}}^{{\prime} }}{| }_{{\rm{s}}}\) , needed to compute M ), we use dropsonde humidity, temperature and wind data at 20 m height and apply the Coupled Ocean-Atmosphere Response Experiment (COARE) bulk flux algorithm version 3.6 (refs. 50 , 51 ). For the SST, we extrapolate the 2-m-depth SST of the RV Meteor (thermosalinograph primary backboard temperature), or alternatively from the AutoNaut Caravela 52 , to the dropsonde location based on a fixed zonal and meridional SST gradient of −0.14 K per degree. A gradient of −0.14 K per degree corresponds to the median zonal and meridional gradient (−0.145 K per degree and −0.135 K per degree, respectively) across the EUREC 4 A circle over the period from 19 January to 15 February in the ERA5 reanalysis 53 and in two satellite SST products (from the Advanced Baseline Imager on board the Geostationary Operational Environmental Satellite (GOES-16 ABI) and the Collecte Localisation Satellites (CLS). The sonde-derived surface buoyancy flux on the 3-h scale compares favourably with bulk fluxes from the RV Meteor mast, with a correlation coefficient r = 0.83 and a mean offset of 0.1% relative to RV Meteor. The sonde-derived flux has a comparable magnitude with the flux measured at the RV Ronald Brown 54 further upstream and is also well correlated ( r = 0.81) with ERA5. The ERA5 fluxes, however, overestimate the surface buoyancy flux compared with the sonde-derived flux by 25%, which is mostly because of the overestimation of the sensible heat flux by 64% relative to the observations (9.8 W m −2 and 6.0 W m −2 for ERA5 and dropsondes, respectively). A strong overestimation of the sensible heat flux compared with buoy measurements in the region is also present in the predecessor ERA-Interim reanalysis 55 . 
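The SST extrapolation described above, with the fixed −0.14 K per degree gradient, can be sketched as follows. The sign convention (SST decreasing toward the east and north) and the example positions and SST are assumptions of this sketch.

```python
def extrapolate_sst(sst_ship, lon_ship, lat_ship, lon, lat, grad=-0.14):
    """Extrapolate a ship SST (deg C) to a dropsonde location using the
    fixed zonal and meridional gradient of -0.14 K per degree.
    Assumes SST decreases toward the east and north."""
    return sst_ship + grad * (lon - lon_ship) + grad * (lat - lat_ship)

# Hypothetical ship SST near the EUREC4A circle centre (13.3 N, 57.7 W),
# extrapolated 1 degree east and 0.5 degrees north.
sst = extrapolate_sst(27.0, -57.7, 13.3, -56.7, 13.8)  # 26.79 deg C
```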
Overall, the good correspondence of our sonde-derived surface buoyancy flux with the independent data lends credibility to our estimation procedure. The sonde-derived surface buoyancy flux is also used to compute the Deardorff sub-cloud-layer vertical velocity scale \({w}^{* }={\left(h\frac{{\rm{g}}}{{\theta }_{{\rm{v}}}}\overline{{w}^{{\prime} }{\theta }_{{\rm{v}}}^{{\prime} }}{| }_{{\rm{s}}}\right)}^{1/3}\) shown in Fig. 3c , in which g is the gravitational acceleration. Mass flux estimation Vogel et al. 20 developed a method to estimate the shallow-convective mass flux at the sub-cloud-layer top as a residual of the sub-cloud-layer mass budget and tested it in real-case large-eddy simulations over the tropical Atlantic. Here the method is applied to EUREC 4 A observations, in parallel with Albright et al. 22 , who close the sub-cloud-layer moisture and heat budgets and provide an independent constraint on the entrainment rate E . Except for the surface buoyancy flux estimate (see the previous section), all data for the budgets come entirely from the dropsondes. Equation ( 1 ) expresses the budget of the sub-cloud-layer height h per unit area and constant density. \(\frac{\partial h}{\partial t}\) represents the temporal fluctuation of h and V h · ∇ h its horizontal advection, E is the entrainment rate, W the mesoscale vertical velocity (positive upwards) and M the convective mass flux at h . The sub-cloud-layer height h is defined as the height at which the virtual potential temperature ( θ v ) first exceeds its density-weighted mean from 100 m up to h by a fixed threshold ϵ = 0.2 K (refs. 22 , 56 ). Extended Data Fig. 3a confirms that our h is usually close to the ATR flight altitude and h is also well within the range of independent BCO and RV Meteor observations of the maximum radar cloud-base cloud fraction and the peak frequency of the first ceilometer cloud-base height (not shown). 
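The sub-cloud-layer height criterion (the first level where θ v exceeds its mean from 100 m up to that level by ϵ = 0.2 K) can be sketched as below. For simplicity this uses an unweighted mean, whereas the paper uses a density-weighted mean; the idealized profile is invented.

```python
import numpy as np

def subcloud_layer_height(z, theta_v, eps=0.2, z0=100.0):
    """First height (m) at which theta_v exceeds the mean of theta_v
    between z0 and that height by eps (K). Unweighted-mean sketch of
    the criterion; the paper density-weights the mean."""
    for i in range(len(z)):
        mask = (z >= z0) & (z <= z[i])
        if mask.any() and theta_v[i] > theta_v[mask].mean() + eps:
            return float(z[i])
    return float("nan")

# Idealized sounding: well mixed up to 700 m, stable above (10 K/km).
z = np.arange(0.0, 1200.0, 50.0)
theta_v = np.where(z <= 700.0, 300.0, 300.0 + 0.01 * (z - 700.0))
h = subcloud_layer_height(z, theta_v)  # -> 750.0
```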
This confirms that our h agrees well with the level of maximum near-base cloud fraction, which was set as the target height for the ATR flight level and thus for evaluating the mass budget 31 . The entrainment rate E represents the deepening of h owing to small-scale mixing at the sub-cloud-layer top. We use a modified version of the classical flux-jump model 57 , 58 that accounts for the finite thickness of the transition layer, the approximately 150-m-thick stable layer separating the mixed layer from the cloud layer (see ref. 22 for details). The buoyancy flux at h is modelled as a fixed fraction A e of the surface buoyancy flux, \(\overline{{w}^{{\prime} }{\theta }_{{\rm{v}}}^{{\prime} }}{| }_{{\rm{s}}}\) , in which A e is the effective entrainment efficiency. The buoyancy jump at the sub-cloud-layer top is computed as \(\Delta {\theta }_{{\rm{v}}}=\Delta \theta +0.61(\overline{\theta }\Delta q+\overline{q}\Delta \theta )\) , with \(\Delta \theta ={C}_{\theta }({\theta }_{{\rm{h+}}}-\overline{\theta })\) and \(\Delta q={C}_{q}({q}_{h+}-\overline{q})\) . q is the specific humidity, C q and C θ are scaling coefficients accounting for uncertainty in the depth over which the jumps are computed, the subscript h + refers to the value of q or θ above h (computed as the average from h to h + 100 m) and \(\overline{q}\) and \(\overline{\theta }\) are averages from 50 m to the mixed-layer top (defined as the height of maximum relative humidity below 900 m). Finally, E is computed as $$E=\frac{{A}_{{\rm{e}}}\overline{{w}^{{\prime} }{\theta }_{{\rm{v}}}^{{\prime} }}{| }_{{\rm{s}}}}{\Delta {\theta }_{{\rm{v}}}}$$ (2) The uncertain parameters A e , C q and C θ are estimated through a joint Bayesian inversion to close the moisture and heat budgets by ref. 22 , yielding maximum-likelihood estimates of A e = 0.43 ± 0.06 (mean ± 1 σ ), C q = 1.26 ± 0.34 and C θ = 1.15 ± 0.31. 
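A minimal sketch of two budget ingredients: equation ( 2 ) for E with the maximum-likelihood parameters quoted above, and the mesoscale vertical velocity W obtained by integrating the horizontal wind divergence from the surface to h (mass continuity, dW/dz = −D). All numerical input values below are illustrative, not observed.

```python
import numpy as np

def entrainment_rate(buoy_flux_s, d_theta_raw, d_q_raw, theta_bar, q_bar,
                     A_e=0.43, C_theta=1.15, C_q=1.26):
    """E from eq. (2): E = A_e * w'thv'|s / d_theta_v, with
    d_theta = C_theta*(theta_h+ - theta_bar), d_q = C_q*(q_h+ - q_bar),
    d_theta_v = d_theta + 0.61*(theta_bar*d_q + q_bar*d_theta).
    Flux in K m/s, q in kg/kg -> E in m/s."""
    d_theta = C_theta * d_theta_raw
    d_q = C_q * d_q_raw
    d_theta_v = d_theta + 0.61 * (theta_bar * d_q + q_bar * d_theta)
    return A_e * buoy_flux_s / d_theta_v

def mesoscale_w(z, divergence, h):
    """W at h (m/s) from mass continuity dW/dz = -D: minus the trapezoidal
    integral of the horizontal divergence D (1/s) from the surface to h."""
    mask = z <= h
    zz, dd = z[mask], divergence[mask]
    return -float(np.sum(0.5 * (dd[1:] + dd[:-1]) * np.diff(zz)))

# Illustrative values: a 0.35-K theta jump, slightly drier air above h,
# and a constant divergence of 1e-5 s^-1 below h = 700 m.
E = entrainment_rate(0.005, 0.35, -0.001, 300.0, 0.015)   # ~0.012 m/s
z = np.linspace(0.0, 1000.0, 101)
W = mesoscale_w(z, np.full_like(z, 1.0e-5), h=700.0)      # -0.007 m/s
M = E + W  # equilibrium mass flux M = E + W (m/s), the paper's default M
```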
The mesoscale vertical velocity W at h is computed by vertically integrating the divergence of the horizontal wind field measured by the dropsondes 23 from the surface up to h . W is at the lower end of the meso- α scale of ref. 46 , which climate modellers often associate with the ‘large scale’. The terms h , E and W are computed at the 1-h scale of a single circle and then aggregated to the 3-h scale (three circles). The temporal fluctuation of h is estimated as the linear regression slope of h computed from the 30–36 soundings available per 3-h circle set. Similarly, the horizontal advection of h is estimated as the sum of the linear regressions of the eastward (∂ h /∂ x ) and northward (∂ h /∂ y ) gradients of the individual h , multiplied by the wind speed at the 3-h mean h . Both ∂ h /∂ t and V h · ∇ h are only available on the 3-h scale. The default M shown in the paper is the equilibrium mass flux M = E + W , which reproduces well the mass flux diagnosed directly from cloud-core area fraction and vertical velocity in large-eddy simulations 20 . This equilibrium M is also available on the 1-h scale of an individual circle. Taking into account ∂ h /∂ t and V h · ∇ h in the mass flux estimate leads to \({M}^{{\prime} }=M-\frac{\partial h}{\partial t}-{V}_{h}\cdot \nabla h\) , which shows very similar characteristics compared with M (Extended Data Fig. 3 ). This is mainly because both the advection (−1.3 ± 2.7 mm s −1 ) and temporal fluctuation (0.5 ± 6.8 mm s −1 ) terms are on average about zero, and the advection term is also nearly invariant. The inclusion of advection and \(\frac{\partial h}{\partial t}\) in M ′ slightly enhances variability on the diurnal timescale (Extended Data Fig. 1a ). Cold pools formed by evaporating precipitation destroy the structure of the sub-cloud layer and make the estimation of h less robust. We thus exclude soundings that fall into cold pools in the analysis using the criterion of h < 400 m developed by ref. 
56 based on the EUREC 4 A soundings. The influence of these and other assumptions on the magnitude and variability of M is discussed in the Methods section ‘Robustness of observational estimates’. Also note that our M is defined as the (mass) specific mass flux and has units of velocity. It differs from the more familiar mass flux (in units of kg m −2 s −1 ) by the air density, which is usually assumed to be constant 18 , 59 , and which is justified here given the small density variations across the measurements (mean ± σ of 1.104 ± 0.0077 kg m −3 , that is, less than 0.7% of the mean). Although the 1-h scale variability of M can be substantial (for example, second 3-h circle sets on 26 January and 13 February; Fig. 2 ), the median estimation uncertainty is only 20% at the 3-h scale (see section below). Also, M has a similar magnitude and reassuring correlation ( r = 0.77) to the independent M turb estimate from in situ turbulence measurements on the ATR aircraft (Extended Data Fig. 2d ). The mass budget terms show different degrees of scale sensitivity (see also discussion in ref. 20 ). Extended Data Figs. 2c and 4a show that the correlation between W and M is slightly larger at the 1-h scale compared with the 3-h scale ( r W , M 3h = 0.60 and r W , M 1h = 0.67), whereas they are essentially the same for E and M ( r E , M 3h = 0.54 and r E , M 1h = 0.55). The scale sensitivity of the W variance is in line with radiosonde data from the EUREC 4 A ship array, which show that the divergence amplitudes at equivalent radii of 100–300 km scale inversely with radius 60 (as in ERA5 and ICON, consistent with ref. 23 ). In ERA5, the scale sensitivity of the surface buoyancy flux, which contributes most to variability in E (Extended Data Fig. 4b ), is much smaller compared with the scale sensitivity of W (not shown). This is probably because variability in the surface buoyancy flux is mostly controlled by the surface wind speed (Extended Data Fig. 
4h ) and radiative cooling 61 , both of which are large scale. The surface wind speed has autocorrelation coefficients of 0.74 for a 2-day and 0.48 for an 8-day lag (Fig. 3d of ref. 22 ). Although weaker compared with the synoptic variability, the surface wind also has a distinct diurnal cycle 62 , 63 , which causes a diurnal cycle of the surface buoyancy flux (Extended Data Fig. 1c and ref. 20 ). Some of the diurnal variability in E is thus lost for longer temporal averaging. Also, the variability in the temporal fluctuation and horizontal advection of h (equation ( 1 )) decreases on larger scales 20 . In summary, M variability decreases on larger averaging scales. The scale sensitivity of W is larger compared with E , such that the contribution of W to M variability tends to become smaller compared with the contribution of E on much larger scales. As noted above, E describes the net effect of local processes and must be inferred from the statistics of other quantities (that is, the mean sub-cloud-layer growth rate or the dilution of sub-cloud-layer properties). This raises the question whether the E estimate itself might depend on the mesoscale environment and therefore introduce spurious co-variabilities between M , W and C . The Bayesian estimation of the uncertain parameter estimates A e , C q and C θ is a priori independent of M and W . Also, the synoptic variability during EUREC 4 A can be well explained by keeping them constant 22 . Reference 22 also explored to what extent other factors correlated with residuals in their Bayesian fits and found no evidence of a systematic effect of other factors, including wind speed and shear 64 . As discussed above, the variability in E tends to be less scale-sensitive than W and mostly controlled by larger-scale factors, such as the surface wind speed (through the surface buoyancy flux; Extended Data Fig. 4b,h ). Furthermore, E and W are anticorrelated ( r E , W = −0.35; Extended Data Fig. 4g ). 
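The W and equilibrium-M calculations described above (vertically integrating the measured divergence up to h , then adding the entrainment rate) can be sketched as follows. The constant-convergence profile, the value of h and the E value are illustrative assumptions:

```python
import numpy as np

def mesoscale_w(z, divergence, h):
    """W(h) = -integral_0^h D(z) dz: mesoscale vertical velocity from the
    divergence D of the horizontal wind, assuming constant density
    (mass continuity). Positive W is upward."""
    mask = z <= h
    zz, d = z[mask], divergence[mask]
    # trapezoidal rule, written out to avoid version-specific numpy helpers
    return -float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(zz)))

# Constant convergence of -5e-6 s^-1 below h = 700 m (illustrative profile).
z = np.arange(0.0, 1500.0, 50.0)
div = np.full_like(z, -5.0e-6)
W = mesoscale_w(z, div, h=700.0)   # m s^-1
E = 0.012                          # entrainment rate from equation (2), assumed
M = E + W                          # equilibrium mass flux at h
```

Uniform convergence of 5e-6 s^-1 over 700 m gives W = 3.5 mm s^-1, so M = E + W comes out at 15.5 mm s^-1, the same order as the estimates discussed in the text.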
So both statistically from the anticorrelation and physically through the scale argument, we believe that our parameterization of E does not induce spurious co-variability. Uncertainty estimation For the M , \({\mathcal{R}}\) and C estimates, we distinguish two sources of uncertainty: sampling uncertainty and estimation (or retrieval) uncertainty. For all terms, the sampling uncertainty is computed at the 3-h scale as the standard error, \({\rm{SE}}=\sigma /\sqrt{n}\) , of the three individual 1-h circle values (each representing about 50 min of flight or up to 12 sondes), in which σ is the standard deviation and n the number of circles. The estimation uncertainty is computed differently for every term according to the underlying assumptions and choices. For \({\mathcal{R}}\) , the manufacturer-stated uncertainty (that is, repeatability) is 2% and some extra uncertainty stems from the correction of the dry bias of the HALO dropsondes (see ref. 28 ). Because this uncertainty is the same for all data points, the estimation uncertainty of \({\mathcal{R}}\) is not shown in the figures. For C , the estimation uncertainty is computed for every 3-h circle set as the SE of the four different estimates of C , namely C itself, C only , C turb and C pma . The uncertainty estimate therefore represents uncertainty in measurement principles and spatial sampling 31 . Further uncertainties of the individual C estimates (for example, owing to the choice of thresholds) are neglected, as sensitivity tests suggest that they are smaller than the uncertainty among the different C estimates 31 . For W , the advection term V h · ∇ h and the temporal fluctuation ∂ h /∂ t , the estimation uncertainty is taken as the SE of the respective regression used to compute the term. Because ∂ h /∂ t is computed from individual sondes, it contains both temporal and spatial variability of h on the 3-h scale and its SE is inflated. 
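The two uncertainty rules used above, the standard error over the three 1-h circles and the quadrature combination of fractional errors, are simple to state in code. The three mass-flux values and the second and third fractional errors below are hypothetical placeholders; only the A_e uncertainty (0.06/0.43) comes from the stated Bayesian estimates:

```python
import numpy as np

def standard_error(values):
    """Sampling uncertainty at the 3-h scale: SE = sigma / sqrt(n),
    with sigma the (sample) standard deviation over the n 1-h circles."""
    v = np.asarray(values, dtype=float)
    return float(v.std(ddof=1) / np.sqrt(len(v)))

def propagate_fractional(value, fractional_errors):
    """Combine fractional uncertainties in quadrature, as done for the
    three factors entering E = A_e * flux / delta_theta_v."""
    f = np.asarray(fractional_errors, dtype=float)
    return float(value * np.sqrt(np.sum(f ** 2)))

# Three hypothetical 1-h equilibrium mass fluxes (m s^-1):
se_M = standard_error([0.010, 0.014, 0.012])

# Estimation uncertainty of E: fractional errors of A_e (from the Bayesian
# inversion), the surface buoyancy flux and delta_theta_v (both assumed here).
sigma_E = propagate_fractional(0.012, [0.06 / 0.43, 0.05, 0.28])
```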
The estimation uncertainty of the surface buoyancy flux is a combination of uncertainty in the underlying SSTs and in the COARE bulk flux algorithm. We estimate the uncertainty in the underlying SSTs by computing the SE of five different versions of the flux (three with different fixed SST gradients (the default median value and the median ± interquartile range, that is, −0.14 K per degree, −0.21 K per degree and −0.07 K per degree), one with a temporally varying gradient (not shown) and one with a different baseline SST (from the AutoNaut Caravela 52 instead of the RV Meteor)) and adding a 5% uncertainty of the COARE algorithm given in ref. 50 as the 1-h uncertainty in the 0–10 m s −1 wind speed range. For A e and Δ θ v , we use the relative uncertainties of the Bayesian inversion as the estimation uncertainty (that is, σ ( A e )/ A e for A e and the average of σ ( C q )/ C q and σ ( C θ )/ C θ for Δ θ v ). Uncertainties in the three individual terms of E are propagated by adding their fractional uncertainties in quadrature to yield the estimation uncertainty of E . In the same spirit, the estimation uncertainty is propagated from the 1-h scale to the 3-h scale and from the individual terms of equation ( 1 ) to M . The uncertainties of the correlations and the multiple linear regression are estimated with bootstrapping (10,000 repetitions). We communicate these uncertainties by mentioning the 25th and 75th quartiles in the text and by showing both the quartiles and the 2.5% and 97.5% quantiles (representing the 95% confidence interval) in Fig. 4 and Extended Data Fig. 8 . Apart from the uncertainty quantification described here, we assess the robustness of the M and C observations to several other choices and assumptions in the Methods section ‘Robustness of observational estimates’. 
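The bootstrap with 10,000 repetitions mentioned above for the correlation uncertainties might look like the sketch below; the correlated M and C samples are synthetic, generated purely for illustration:

```python
import numpy as np

def bootstrap_correlation(x, y, n_boot=10_000, seed=0):
    """Pearson r with bootstrap percentile quantiles (2.5%, 25%, 75%, 97.5%),
    from n_boot resamples with replacement."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample pairs with replacement
        rs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    r = float(np.corrcoef(x, y)[0, 1])
    return r, np.percentile(rs, [2.5, 25.0, 75.0, 97.5])

# Synthetic, correlated mass-flux and cloud-fraction samples (illustrative):
rng = np.random.default_rng(1)
M = rng.normal(0.015, 0.004, 40)
C = 2.0 * M + rng.normal(0.0, 0.004, 40)
r, quantiles = bootstrap_correlation(M, C)
```

Reporting the 25th/75th quartiles alongside the 2.5%/97.5% quantiles mirrors how the text communicates the regression and correlation uncertainties.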
Other mixing indicators Other proxies for lower-tropospheric mixing were used in previous studies 5 , 7 , 65 that can be estimated from the dropsonde data and compared with the variability in C . Here we compute the boundary-layer vertical advection (BVA) diagnostic from ref. 65 defined as \({\rm{BVA}}={\int }_{0}^{{Z}_{\min }}W(z)\frac{\partial {\rm{MSE}}}{\partial z}\rho {\rm{d}}z\) , in which MSE is the moist static energy, \({Z}_{\min }\) the level of minimum MSE that marks the top of the trade-wind layer (on average at 2,900 m) and ρ the density. Note that a lower (more negative) BVA value indicates stronger mixing. Reference 65 found a pronounced positive relationship between changes in BVA and changes in C from a series of single-column model experiments with the IPSL-CM5A model, which is characterized by a strong positive low-cloud feedback and the presence of the mixing-desiccation mechanism (Fig. 4 ). Extended Data Fig. 4f shows a pronounced negative correlation between BVA and M in the EUREC 4 A data, indicating good agreement in their complementary definitions of mixing. Smaller BVA (stronger mixing) is also associated with larger C (not shown), which is at odds with the IPSL-CM5A model. The absolute correlation between BVA and C ( r = 0.34), however, is considerably smaller than the correlation between M and C ( r = 0.72). General circulation models The cloud fraction, net mass flux (upward and downward) and relative humidity at cloud base are calculated for ten Coupled Model Intercomparison Project (CMIP) models: four from the fifth phase, CMIP5 (ref. 66 ): CanAM4 (ref. 67 ), MPI-ESM-LR 68 , IPSL-CM5A-LR 69 and HadGEM2-A 70 ; and six from the sixth phase, CMIP6 (ref. 71 ): BCC-CSM2-MR 72 , CNRM-CM6-1 (ref. 73 ), IPSL-CM6A-LR 74 , MIROC6 (ref. 75 ), MRI-ESM2-0 (ref. 76 ) and HadGEM3-GC31-LL 77 . We use the sub-hourly vertical profiles at selected sites (named cfSites in CMIP5 and CF subhr in CMIP6) provided by CFMIP 34 . 
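The BVA integral defined above is straightforward to evaluate numerically. The profiles below are idealized assumptions (uniform subsidence, piecewise-linear MSE with a minimum near 2,900 m, constant density) chosen only to exercise the formula:

```python
import numpy as np

def boundary_layer_vertical_advection(z, W, mse, rho):
    """BVA = integral_0^Zmin W(z) * d(MSE)/dz * rho dz, with Zmin the level
    of minimum moist static energy (top of the trade-wind layer).
    Units: W m^-2. Lower (more negative) BVA means stronger mixing."""
    i_min = int(np.argmin(mse))
    integrand = W * np.gradient(mse, z) * rho
    zz, f = z[: i_min + 1], integrand[: i_min + 1]
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz)))

# Idealized profiles: MSE decreasing up to ~2,900 m, increasing above.
z = np.arange(0.0, 4000.0, 100.0)                  # m
mse = np.where(z < 2900.0,
               340e3 - 5.0 * z,
               340e3 - 5.0 * 2900.0 + 8.0 * (z - 2900.0))  # J kg^-1
W = np.full_like(z, -0.005)                        # mean subsidence, m s^-1
rho = np.full_like(z, 1.1)                         # kg m^-3
bva = boundary_layer_vertical_advection(z, W, mse, rho)
```

With subsidence acting on a negative MSE gradient, the integrand is positive and this idealized case yields a positive BVA (weak mixing in the sign convention above).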
Note that the M from the models is not computed using equation ( 1 ) but is defined according to the respective convective parameterization scheme of the models (see references above). We use the atmosphere-only amip configuration from 1979 to 2008, selecting data from December, January, February and March to be broadly consistent with the winter conditions sampled during EUREC 4 A. For each model, between two and six sites are available in the North Atlantic trades between 60–50° W and 12–16° N, namely the BOMEX, NTAS, EUREC 4 A, BCO and RICO sites. All profiles with clouds above 600 hPa (about 4.2 km) are dropped to ensure a focus on shallow convection. We verified that, in terms of the large-scale environment, the cfSites fall into the climatological trade cumulus regime as defined by ref. 40 . The cloud-base level is defined as the level of maximum cloud fraction between 870 and 970 hPa (between about 400 and 1,300 m). If the maximum cloud fraction is smaller than 0.25% for a given profile, the cloud-base level is taken at the climatological level of maximum cloud fraction. The hourly cloud-base data are aggregated to a 3-h timescale, which corresponds to the 3-h scale of the EUREC 4 A data, as well as a monthly timescale. The values computed are insensitive to (1) averaging across the sites before aggregating to the 3-h timescale, (2) removing the site near the Northwest Tropical Atlantic Station buoy upstream of the EUREC 4 A circle (near 51° W and 15° N), (3) focusing only on January and February, and (4) excluding nighttime values outside the hours sampled during EUREC 4 A (not shown). We use the thermodynamic component of the change in the cloud radiative effect at the top of the atmosphere (ΔCRE) with warming under given dynamical conditions to quantify the strength of the trade cumulus radiative feedback. Reference 2 showed that the ΔCRE with warming is a good approximation of the cloud feedback computed with radiative kernels 78 . 
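The cloud-base detection rule described above (maximum cloud fraction between 870 and 970 hPa, with a climatological fallback when the maximum is below 0.25%) can be sketched as follows; the pressure grid and Gaussian cloud-fraction profile are illustrative:

```python
import numpy as np

def cloud_base_level(p_hpa, cloud_fraction, clim_level, cf_min=0.0025):
    """Cloud-base pressure level: level of maximum cloud fraction between
    870 and 970 hPa; falls back to the climatological level of maximum
    cloud fraction if the profile maximum is below cf_min (0.25%)."""
    idx = np.flatnonzero((p_hpa >= 870.0) & (p_hpa <= 970.0))
    cf = cloud_fraction[idx]
    if cf.max() < cf_min:
        return float(clim_level)
    return float(p_hpa[idx[np.argmax(cf)]])

p = np.arange(1000.0, 800.0, -10.0)                         # hPa
cf = 0.05 * np.exp(-0.5 * ((p - 930.0) / 15.0) ** 2)        # 5% peak at 930 hPa
base = cloud_base_level(p, cf, clim_level=930.0)
```

Passing an all-zero cloud-fraction profile triggers the climatological fallback, mirroring the rule for nearly cloud-free model profiles.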
The CRE is defined as the difference between all-sky (all, including clouds) and clear-sky (clr, clouds assumed to be transparent to radiation) net downward radiative fluxes, CRE = R all − R clr = (LW clr − LW all ) + (SW all − SW clr ) = CRE LW + CRE SW , with R being the total radiative flux and LW and SW its longwave and shortwave components, respectively. The radiative fluxes are defined positive downward. The ΔCRE with warming is then simply the difference in CRE between the warmer amip4K (4-K uniform increase in SST) and the amip (control) simulations, normalized by the 4-K temperature difference (that is, ΔCRE/Δ T s = (CRE amip4K − CRE amip )/4K). To restrict the feedback estimation to the trade cumulus regime, we focus on ocean-only grid points between 35° S and 35° N and use the regime partitioning of ref. 40 with trade cumulus regimes in each simulation (amip or amip4K) defined as having a climatological annual mean estimated inversion strength smaller than 1 K and a vertical velocity at 700 hPa between 0 and 15 hPa day −1 . Robustness of observational estimates Applying the mass budget formulation to the EUREC 4 A dropsonde data involves several choices for definitions and thresholds. These choices are guided by constraints from independent data and from closure of the moisture and heat budgets in ref. 22 , which provides justification for the default configuration described in the Methods section ‘Mass flux estimation’. Nevertheless, it is important to assess and understand the sensitivity of the mass budget estimates and the key relationships to different estimation choices. We focus first on the influence of different definitions of the sub-cloud-layer height h and the entrainment rate E on the mean and standard deviation ( σ ) of M and E , the respective correlations of M with E and W , and the correlation and mean difference to the independent M turb estimate from turbulence measurements onboard the ATR aircraft (see Extended Data Fig. 2 ). 
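The CRE decomposition and the ΔCRE/ΔTs feedback measure defined above reduce to two one-line functions. The flux values are illustrative placeholders chosen so the low-cloud CRE is negative (shortwave cooling dominates), not model output:

```python
def cloud_radiative_effect(lw_all, lw_clr, sw_all, sw_clr):
    """CRE = (LW_clr - LW_all) + (SW_all - SW_clr), following the paper's
    convention with net radiative fluxes defined positive downward."""
    return (lw_clr - lw_all) + (sw_all - sw_clr)

def cre_feedback(cre_amip4k, cre_amip, d_ts=4.0):
    """Thermodynamic feedback estimate: dCRE/dTs = (CRE_amip4K - CRE_amip) / 4 K."""
    return (cre_amip4k - cre_amip) / d_ts

# Hypothetical top-of-atmosphere fluxes (W m^-2) for a trade cumulus regime:
cre_ctl = cloud_radiative_effect(lw_all=55.0, lw_clr=60.0, sw_all=300.0, sw_clr=320.0)
cre_4k = cloud_radiative_effect(lw_all=56.0, lw_clr=62.0, sw_all=302.0, sw_clr=320.0)
feedback = cre_feedback(cre_4k, cre_ctl)   # W m^-2 K^-1
```

Here the control CRE is -15 W m^-2 (net cooling by the low clouds) and the warmer climate weakens that cooling, giving a positive feedback of 0.75 W m^-2 K^-1.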
For the h definition, we compare our default h to an alternative definition, ‘h.parcel’, which defines h as the level of neutral buoyancy of a surface-lifted parcel (with density-weighted θ v averaged from 30 to 80 m) plus 0.2 K θ v excess. Using ‘h.parcel’ leads to a 16 m shallower mean h compared with the default. The slightly shallower h decreases Δ θ v (the denominator of the E formulation in equation ( 2 )) from 0.36 K to 0.34 K, which slightly increases E and M by around 1.5 mm s −1 . Although W is unaffected by this small change in h , the resulting M has a slightly reduced correlation to the independent M turb compared with the default M ( r = 0.69 versus r = 0.77). The same chain of arguments holds for increasing and decreasing the threshold ϵ in the h definition by ±0.05 K. With ϵ = 0.25 K instead of 0.2 K (case ‘h.eps = 0.25’), h increases by 31 m and, through the larger Δ θ v , decreases E and M by about 3.3 mm s −1 . Owing to the presence of a thin transition layer 22 , the response to ϵ ± 0.05 K is nonlinear and a reduction of ϵ to 0.15 K (‘h.eps = 0.15’) leads to a disproportionately smaller Δ θ v and roughly 6 mm s −1 larger E and M . The 35 m shallower mean h with ϵ = 0.15 K also strongly increases σ E , which increases the correlation between E and M at the expense of a decreased correlation between the unaffected W and M (Extended Data Fig. 2c ). The next set of choices involves the entrainment rate estimate. We test the influence of the different surface buoyancy flux estimates from ERA5 and RV Meteor. As the ERA5 flux is 25% larger than the other fluxes, we scale it to have the same mean as the dropsonde-derived flux (case ‘sbf = ERA5.sc’). For ‘sbf = ERA5.sc’, the variability in E and M is substantially larger compared with the default dropsonde flux, increasing their correlation. For the case ‘sbf = Meteor’, the differences from the default estimates are smaller (Extended Data Fig. 
2a,b ) and the correlation with M turb is slightly larger than in the other configurations. The estimates are also unaffected by changing the three coefficients A e , C q and C θ estimated by Bayesian inversion in ref. 22 to close the moisture and energy budgets during EUREC 4 A when cold pool soundings (defined as having h < 400 m following ref. 56 ) are excluded (‘diffEpars’). We further compare four different ways of computing Δ θ v . Computing the value at h + as averages from h to h + 50 or h + 150 m (instead of to h + 100 m) has a similar (but more linear) influence as increasing ϵ ± 0.05 K (see discussion above). Using two different heights for averaging θ v across the mixed layer (up to h in ‘tvbar = h’ and up to the level at which q first falls below its mean by a threshold of 0.3 g kg −1 in ‘tvbar = qgrad’) hardly influences the estimates. Last, we show the influence of computing the mass budget including the cold pool soundings for two sets of surface buoyancy flux estimates, case ‘withCP’ for the default dropsonde-derived flux and ‘withCP_sbf = ERA5.sc’ for the scaled ERA5 flux. In both cases, the mean and σ of both M and E are increased when cold pools are included (matching the mean E of ref. 22 , who included cold pools). However, especially for the default surface fluxes (case ‘withCP’), the correlation with M turb is strongly reduced. Extended Data Fig. 2a,d also shows the influence of selected choices on the total mass flux M ′, which includes the contribution of the temporal fluctuation and horizontal advection of h . Because these extra terms are on average nearly zero (Extended Data Fig. 3c ), their inclusion does not affect \(\overline{M}\) . σ M instead increases by about 1.5 mm s −1 owing to the pronounced variability in the temporal fluctuation term. As this term is not very robust, we use the more reliable equilibrium M as our best estimate. The equilibrium M is also robust at the 1-h scale of an individual circle (case ‘1h-scale’). 
Overall, Extended Data Fig. 2 makes us very confident in the robustness of our mass budget estimates because they only show a modest sensitivity to the various choices and because we can explain these sensitivities physically. Also, the independent ATR M turb estimates (Extended Data Fig. 2d ) and the extra constraints on E from our complementary analyses of the moisture and heat budgets in ref. 22 (dashed lines in Extended Data Fig. 2b ) lend further credibility to our default estimation choices. Next, we focus on the sensitivity of the key relationships between M , C and \({\mathcal{R}}\) to a selected set of plausible estimation choices of M and the different C estimates from the ATR aircraft. Extended Data Fig. 5a shows that the positive correlation between M and C is notable for all parameter choices, and both the equilibrium M and total M ′. Furthermore, the negligible correlation between M and \({\mathcal{R}}\) is also very robust. Extended Data Fig. 5b further confirms that the default M also has strong correlations with the three independent estimates of C from the ATR aircraft. The same is true for the other estimation choices of M , with a small overall range of correlations of 0.52 < r M , C < 0.73. Correlations between C and \({\mathcal{R}}\) are more variable between the different C estimates and are in the range \(0.12 < {r}_{{\mathcal{R}},C} < 0.63\) . It is not surprising that the C only estimate that neglects contributions from drizzle has the strongest correlation with \({\mathcal{R}}\) , as it mostly features passive clouds that are more affected by ambient humidity than the more active clouds that also include drizzle. Note that there is also a slight dependency of r ( \({\mathcal{R}}\) , C ) on the M estimates, as the cases ‘h.parcel’ and ‘h.eps = 0.25’ result in different h and thus different heights at which \({\mathcal{R}}\) is evaluated. The bottom panels of Extended Data Fig. 
5 also confirm the robustness of the correlation coefficient of the multiple linear regression \(\hat{C}={a}_{0}+{a}_{M}\widetilde{M}+{a}_{{\mathcal{R}}}\widetilde{{\mathcal{R}}}\) and the ratio of the standardized regression coefficients \({a}_{M}/{a}_{{\mathcal{R}}}\) to the M estimation choices (Extended Data Fig. 5c ) and the different C estimates (Extended Data Fig. 5d ). There is no configuration with \({a}_{M}/{a}_{{\mathcal{R}}} < 1\) , indicating that C is always more strongly coupled to M than to \({\mathcal{R}}\) in the observations. Slightly larger values of \({a}_{M}/{a}_{{\mathcal{R}}}\) and smaller correlations are evident for the total M ′. Also, the standard deviation of C ( σ C ) is very similar for the different C estimates that include drizzle (between 2.1% and 3.7%, with 3.1% being the σ C of the default BASTALIAS lidar-radar synergy product) and only slightly lower for the C only estimate (1.6%) when using the full sample. Variability is slightly reduced in the smaller sample that overlaps with the HALO flights, because it excludes two night flights with larger cloudiness and two flights in dry environments with very small cloudiness ( σ C of 1.7–2.4% for the C estimates that include drizzle). Overall, Extended Data Fig. 5 demonstrates the insensitivity of the observed relationships to a wide range of configurations. We therefore conclude that the relationships between mixing and cloudiness observed during EUREC 4 A are very robust. Data availability All data used in this study are published in the EUREC 4 A database of AERIS ( , last accessed: 28 July 2022). We use v2.0.0 of the JOANNE dropsonde data 28 ( ). The specific ATR datasets 31 used are the BASTALIAS product ( ), the turbulence measurements 49 ( ) and the PMA/Cloud composite dataset ( ). The specific HALO datasets 29 used are cloud masks derived from WALES cloud-top height estimates ( ), HAMP Cloud Radar ( ) and specMACS ( ), and the flight segmentation product ( ). 
From the BCO 15 , we used ceilometer ( ) and cloud radar data ( ). From the RV Meteor 44 , we used standard dship meteorological data for the EUREC 4 A Meteor cruise M161 (retrieved from , last accessed: 28 June 2022), surface heat fluxes ( ), ceilometer measurements ( ) and cloud radar data (v1.1, ). We further used data from AutoNaut Caravela 52 ( ) and 10-min air–sea flux data (v1.3, ) from the RV Ronald Brown 54 . Also, we used CLS Daily High Resolution Sea Surface Temperature maps (retrievable through the AERIS operational centre , last accessed: 28 June 2022, or directly from ), GOES-16 ABI SSTs from the ABI_G16-STAR-L3C-v2.7 product ( ) and ERA5 (ref. 53 ) reanalysis data. The CMIP5 and CMIP6 climate model outputs are available for download at . Source data are provided with this paper. Code availability The scripts used for the analyses and other supporting information that may be useful for reproducing this study can be obtained from . | In a major field campaign in 2020, Dr. Raphaela Vogel who is now at Universität Hamburg's Center for Earth System Research and Sustainability (CEN) and an international team from the Laboratoire de Météorologie Dynamique in Paris and the Max Planck Institute for Meteorology in Hamburg analyzed observational data they and others collected in fields of cumulus clouds near the Atlantic island of Barbados. Their analysis revealed that these clouds' contribution to climate warming has to be reassessed. "Trade-wind clouds influence the climate system around the globe, but the data demonstrate behavior differently than previously assumed. Consequently, an extreme rise in Earth's temperatures is less likely than previously thought," says Vogel, an atmospheric scientist. "Though this aspect is very important for more accurately projecting future climate scenarios, it definitely doesn't mean we can back off on climate protection." 
To date, many climate models have simulated a major reduction in trade-wind clouds, which would mean much of their cooling function would be lost and the atmosphere would consequently warm even more. The new observational data shows that this isn't likely to occur. What is certain is that, as global warming progresses, more water on the ocean's surface evaporates and the moisture near the base of trade-wind clouds increases. In contrast, the air masses in the upper part of the clouds are very dry and only become slightly moister. This produces a substantial difference in moisture above and below. In the atmosphere, this is dispelled when the air masses mix. The previous hypothesis: drier air is transported downward, causing the cloud droplets to evaporate more rapidly and making it more likely that the clouds will dissipate. The observational data from Barbados now offers the first robust quantification as to how pronounced the vertical mixing actually is, and how this affects moisture and cloud cover as a whole. As such, it is the first data to shed light on a process that is essential to understanding climate change. In brief: more intensive mixing does not make the lower layers drier or make the clouds dissipate. Rather, the data shows that the cloud cover actually increases with increasing vertical mixing. "That's good news, because it means that trade-wind clouds are far less sensitive to global warming than has long been assumed," says Vogel. "With our new observations and findings, we can now directly test how realistically climate models portray the occurrence of trade-wind clouds. In this regard, a new generation of high-resolution climate models that can simulate the dynamics of clouds around the globe down to scales of one kilometer are particularly promising. Thanks to them, future projections will be more accurate and reliable." 
The month-long field campaign EUREC4A (2020) was designed by the team members around extended flights with two research aircraft, which were equipped with different instruments and operated at different altitudes, and shipboard measurements from the R/V Meteor—a German research vessel managed by the University of Hamburg. One plane was used to drop hundreds of atmospheric probes from an altitude of nine kilometers. As they fell, the probes gathered atmospheric data on the temperature, moisture, pressure and wind. The other plane surveyed clouds at their base, at an altitude of 800 meters, while the ship performed surface-based measurements. The result: an unprecedented database that will help to understand the unclear role of clouds in the climate system—and to more accurately predict their role in future climate change. Whether clouds have a cooling or warming effect depends on how high they are. With a maximum altitude of two to three kilometers, the trade-wind clouds examined here are comparatively low, reflect sunlight, and cool the atmosphere in the process. In contrast, higher clouds amplify the greenhouse effect, warming the climate. The work is published in the journal Nature. | 10.1038/s41586-022-05364-y 
Biology | Hippo dances with hormones: Hints from fly research for study of cancer, stem cells | Recent summary of Hippo pathway in Nature Reviews Drug Discovery. www.nature.com/nrd/journal/v13 … n1/full/nrd4161.html Journal information: Developmental Cell , Nature Reviews Drug Discovery | http://www.nature.com/nrd/journal/v13/n1/full/nrd4161.html | https://phys.org/news/2015-07-hippo-hormones-hints-cancer-stem.html | Abstract The Hippo signalling pathway is an emerging growth control and tumour suppressor pathway that regulates cell proliferation and stem cell functions. Defects in Hippo signalling and hyperactivation of its downstream effectors Yes-associated protein (YAP) and transcriptional co-activator with PDZ-binding motif (TAZ) contribute to the development of cancer, which suggests that pharmacological inhibition of YAP and TAZ activity may be an effective anticancer strategy. Conversely, YAP and TAZ can also have beneficial roles in stimulating tissue repair and regeneration following injury, so their activation may be therapeutically useful in these contexts. A complex network of intracellular and extracellular signalling pathways that modulate YAP and TAZ activities have recently been identified. Here, we review the regulation of the Hippo signalling pathway, its functions in normal homeostasis and disease, and recent progress in the identification of small-molecule pathway modulators. Main The Hippo signalling pathway is an emerging growth control pathway that is conserved throughout the animal kingdom. Growing interest in the Hippo pathway is driven by studies demonstrating the fundamental role of this pathway in organ growth control, stem cell function, regeneration and tumour suppression 1 , 2 , 3 , 4 , 5 , 6 . Indeed, the Hippo pathway is deregulated with a high frequency in many diverse cancers, which suggests that altered Hippo signalling is tightly linked to tumour initiation and/or progression 7 . 
Hence, there is considerable excitement and speculation about targeting the Hippo pathway to treat a variety of human malignancies 8 , 9 . The main function of the Hippo pathway is to negatively regulate the activity of Yes-associated protein (YAP) and transcriptional co-activator with PDZ-binding motif (TAZ) — two homologous transcriptional co-activators that are the main downstream mediators of the Hippo pathway 10 . When activated, YAP and TAZ promote cell proliferation and inhibit cell death; YAP and TAZ are hyperactivated in many human malignancies 7 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 . Therapeutic intervention for cancer would thus involve reducing or inhibiting the oncogenic function of YAP and/or TAZ. However, to date, few small-molecule inhibitors have been discovered that target the Hippo pathway, and a prevalent view is that most Hippo pathway signalling components are not conventional drug targets. Indeed, YAP and TAZ are transcriptional co-activators with no known catalytic activity. Moreover, no known upstream regulators that specifically promote YAP and TAZ activity have demonstrated enzymatic activity 6 , 7 . Thus, inhibiting the function of YAP and TAZ may require targeting protein–protein interactions. A further complication is that YAP and TAZ are required for tissue repair and regeneration in some contexts 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , raising questions as to whether systemic and chronic manipulation of Hippo signalling might have potential deleterious side effects on normal tissue function and homeostasis. However, transient activation of YAP and TAZ may help to promote tissue repair and regeneration in the context of injury 28 , 29 . These two faces of the Hippo signalling pathway suggest that the identification and proper application of small-molecule modulators of Hippo signalling may provide exciting new approaches for cancer therapy and for regenerative medicine. 
Overview of the Hippo pathway The Hippo pathway relays signals from the plasma membrane into the nucleus, where it regulates the expression of a series of target genes that control diverse cellular processes such as proliferation, survival and differentiation 1 , 2 , 3 , 4 , 5 , 6 . In this regard, Hippo signalling is similar to other well-known signal transduction pathways such as the epidermal growth factor (EGF), transforming growth factor-β (TGFβ) or WNT signalling pathways. However, in contrast to these pathways, the Hippo pathway does not appear to have dedicated extracellular signalling peptides and receptors, but is instead regulated by a network of upstream components and mechanisms, many of which are also involved in regulating cell adhesion and cell polarity 3 , 6 , 31 . Nevertheless, the Hippo pathway bears considerable resemblance to other canonical signal transduction pathways as many upstream regulators feed into the core of the pathway, which is composed of two serine/threonine kinases known as the Hippo and Warts kinases in fruitflies 32 , 33 , 34 , 35 , 36 , 37 , 38 ; Hippo kinase is known as mammalian STE20-like protein kinase 1 (MST1) and MST2 in humans, whereas Warts kinase is known as large tumour suppressor homolog 1 (LATS1) kinase and LATS2 kinase in humans 38 , 39 , 40 , 41 . These kinases and their essential roles in growth control were first discovered in Drosophila melanogaster and they function together in a novel signalling pathway that has been named the 'Hippo pathway' after one of its founding members 32 . Since the discovery of the Hippo and Warts kinases, many additional components of the Hippo pathway have been identified, and a complex signalling network has emerged that integrates multiple upstream inputs from the plasma membrane into the nucleus ( Figs 1 , 2 ). In this Review, we largely refer to Hippo signalling components using the mammalian nomenclature, and the D. melanogaster components are listed in Table 1 . 
Figure 1: The core of the Hippo signalling pathway and its mode of action. Schematics of the core pathway components and how they interact are depicted. a | When the Hippo pathway is on, mammalian STE20-like protein kinase 1 (MST1) or MST2 phosphorylate Salvador homolog 1 (SAV1), and together they phosphorylate and activate MOB kinase activator 1A (MOB1A), MOB1B, large tumour suppressor homolog 1 (LATS1) kinase and LATS2 kinase, which then phosphorylate Yes-associated protein (YAP) and transcriptional co-activator with PDZ-binding motif (TAZ). Phosphorylated YAP and TAZ are sequestered in the cytoplasm by the 14-3-3 protein and shunted for proteasomal degradation. As a result, the TEA domain-containing sequence-specific transcription factors (TEADs) associate with the transcription cofactor vestigial-like protein 4 (VGL4) and suppress target gene expression. b | When the Hippo pathway is off, the kinases MST1, MST2, LATS1 and LATS2 are inactive, so YAP and TAZ are not phosphorylated and instead accumulate in the nucleus where they displace VGL4 and form a complex with TEADs, which promotes the expression of target genes. Figure 2: The Hippo pathway network. An outline of a cell is depicted, showing the nucleus and the Hippo pathway network. Mammalian Hippo pathway components that promote the activity of Yes-associated protein (YAP) and transcriptional co-activator with PDZ-binding motif (TAZ) are shown in green, whereas those that inhibit YAP and TAZ activity are shown in red. Pointed and blunt arrowheads indicate activating and inhibitory interactions, respectively.
AMOT, angiomotin; β-TRCP, β-transducin repeat-containing E3 ubiquitin protein ligase; CSNK1, casein kinase 1; CRB, Crumbs homolog; DLG, discs large homolog; FRMD6, FERM domain-containing protein 6; GPCR, G protein-coupled receptor; HIPK, homeodomain-interacting protein kinase; KIBRA, kidney and brain protein; LATS, large tumour suppressor homolog; LGL, lethal giant larvae protein homolog; MARK, MAP/microtubule affinity-regulating kinase; MASK, multiple ankyrin repeats single KH domain-containing protein; MOB1A, MOB kinase activator 1A; MPP5, membrane protein, palmitoylated 5 (also known as PALS1); MST, mammalian STE20-like protein kinase; NF2, neurofibromin 2 (also known as Merlin); PATJ, PALS1-associated tight junction protein; PP2A, protein phosphatase 2A; PTPN14, protein tyrosine phosphatase, non-receptor type 14; RASSF, RAS association domain-containing family protein; SAV1, Salvador homolog 1; SCRIB, Scribble homolog; SIK, salt-inducible kinase; TAO, thousand and one amino acid protein kinase; TEAD, TEA domain-containing sequence-specific transcription factor; VGL4, vestigial-like protein 4; WBP2, WW domain-binding protein 2; ZO, zona occludens protein; ZYX, Zyxin protein. Table 1 Hippo pathway components in humans and D. melanogaster . The Hippo signalling pathway. The core of the Hippo pathway comprises a highly conserved signalling module that functions similarly in mammals and in D. melanogaster .
This pathway contains the serine/threonine kinases MST1, MST2, LATS1 and LATS2 (Refs 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 ), the scaffolding protein Salvador homolog 1 (SAV1; which interacts with MST1 and MST2) 42 , 43 and the scaffolding proteins MOB kinase activator 1A (MOB1A) and MOB1B (which interact with LATS1 and LATS2, respectively) 44 , the transcriptional co-activators YAP and TAZ 45 , and the TEA domain-containing sequence-specific transcription factors TEAD1 to TEAD4 (Refs 46 , 47 , 48 , 49 , 50 ) ( Fig. 1 ). YAP and TAZ are transcriptional co-activators that are not able to bind to DNA themselves but form complexes with TEADs 46 , 47 , 48 , 49 , 50 as their main partners; they also bind to other transcription factors such as SMADs 51 , 52 , 53 , T-box transcription factor 5 (TBX5) 54 , 55 , RUNT-related transcription factor 1 (RUNX1) and RUNX2 56 as well as p73 (Ref. 57 ) to regulate gene expression. The Hippo pathway is considered to be in the active state when the MST and LATS kinases are active ( Fig. 1a ). The MST kinases — in complex with SAV1 — phosphorylate and activate the LATS kinases and MOB1 cofactors 35 , 58 , 59 , 60 , 61 , which in turn phosphorylate their downstream targets YAP and TAZ 11 , 21 , 45 , 62 , 63 , 64 , 65 . Phosphorylation of YAP and TAZ results in their nuclear export, cytoplasmic retention and β-TRCP (β-transducin repeat-containing E3 ubiquitin protein ligase)-dependent degradation by the proteasome 11 , 21 , 62 , 63 , 66 , 67 , 68 , 69 . Instead of binding to YAP and TAZ, the TEADs then form complexes with the transcription cofactor vestigial-like protein 4 (VGL4), which represses target gene expression 70 , 71 . Thus, when the Hippo pathway is activated, YAP and TAZ activity is inhibited, and YAP- and TAZ-driven gene expression is suppressed. 
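The ON-state logic just described (MST1/2–SAV1 activates LATS1/2–MOB1, which phosphorylates YAP/TAZ and shunts them out of the nucleus) can be caricatured as a toy boolean sketch. This is purely illustrative shorthand for the components named in the text, not a quantitative model from the review:

```python
# Toy boolean sketch of the core Hippo ON/OFF switch described in the text.
# All names are illustrative simplifications of the pathway components;
# this is not a quantitative or mechanistic model.

def yap_taz_output(mst_lats_active: bool) -> str:
    """Return the fate of YAP/TAZ given the state of the core kinase cassette.

    Pathway ON (MST1/2-SAV1 -> LATS1/2-MOB1 active): YAP/TAZ are
    phosphorylated, retained in the cytoplasm by 14-3-3 and degraded via
    beta-TRCP, so TEADs pair with VGL4 and target genes stay repressed.
    Pathway OFF: unphosphorylated YAP/TAZ accumulate in the nucleus,
    displace VGL4 and drive TEAD target gene expression.
    """
    if mst_lats_active:
        return ("YAP/TAZ phosphorylated -> cytoplasmic retention and "
                "degradation; TEAD-VGL4 represses target genes")
    return "YAP/TAZ nuclear -> YAP/TAZ-TEAD complex activates target genes"

print(yap_taz_output(True))   # Hippo pathway active
print(yap_taz_output(False))  # Hippo pathway inactive
```

The point of the sketch is only that the pathway acts as an inverter: kinase cassette ON means YAP/TAZ output OFF, and vice versa.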
Conversely, when the Hippo pathway is deactivated, YAP and TAZ accumulate in the nucleus, where they drive gene expression by forming complexes with TEAD and other transcription factors 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 72 , 73 , 74 , 75 , 76 ( Fig. 1b ). Thus, the Hippo pathway acts primarily by inhibiting the nuclear functions of YAP and TAZ. Regulation of activity. A key question concerning the Hippo pathway relates to the signals and mechanisms that regulate its activity and, ultimately, that of YAP and TAZ. To date, over 20 regulators have been identified that intersect with the core of the Hippo pathway at different levels 1 , 2 , 3 , 4 , 5 , 6 , 31 ( Fig. 2 ). In mammals, at least four interconnected upstream branches regulate the Hippo pathway: the Crumbs homolog (CRB) complex; regulators that act upstream of the MST kinases; the actin cytoskeleton; and the adherens junction. Each of these inputs is described below. The CRB complex contains CRB proteins, which are transmembrane proteins that localize to apical junctions and are required to specify the apical plasma membrane domain 77 . CRB proteins have short intracellular domains with protein docking sites that assemble multiprotein complexes that function in cell polarity and also regulate the Hippo pathway 78 , 79 , 80 , 81 , 82 . Most prominently for its regulation of the Hippo pathway in mammals, the CRB complex recruits members of the angiomotin (AMOT) family of adaptor proteins that directly bind to components of the Hippo pathway. Initial reports showed that AMOT inhibits the nuclear localization of YAP by directly binding to YAP and by activating LATS kinases to promote YAP phosphorylation and nuclear exclusion 78 , 83 , 84 , 85 , 86 , 87 , 88 . 
However, a recent report has shown, using knockout mice, that in vivo AMOT is required for the YAP-dependent overgrowth of neurofibromin 2 ( Nf2 ; also known as Merlin)-mutant mouse livers, that AMOT promotes the nuclear localization of YAP and that it forms a functional complex with YAP and TEADs on target gene DNA 89 . AMOT may thus have both growth-promoting and growth-suppressing functions depending on the cellular and molecular context. However, the reasons for these seemingly paradoxical results are not known and further analysis is required. A second branch of the Hippo pathway consists of kinases and other regulators that modulate the activity of the MST kinases. These include the TAO (thousand and one amino acid protein) kinases and the cell polarity kinase MAP/microtubule affinity-regulating kinase 1 (MARK1; also known as PAR1), which directly phosphorylate MST kinases and regulate their activity 90 , 91 , 92 . In addition, MSTs are regulated by the adaptor protein KIBRA (kidney and brain protein; also known as WWC1) 93 , 94 , 95 , and in D. melanogaster the Hippo kinase is also regulated by the adaptor protein Expanded 96 . A third branch of the Hippo pathway is defined by an as yet poorly understood signalling mechanism that is mediated by the actin cytoskeleton. Here, the mechanical properties of the extracellular matrix and cell matrix attachment regulate the localization and activity of YAP and TAZ, through a process that requires F-actin and that is also present in D. melanogaster 97 , 98 , 99 , 100 , 101 . In addition, G protein-coupled receptors (GPCRs) that relay signals from soluble extracellular cues such as lysophosphatidic acid and sphingosine-1-phosphate regulate YAP and TAZ activity through RHO GTPases, which are likely to affect YAP and/or TAZ via modulation of the actin cytoskeleton 102 , 103 , 104 . Thus, pathways that regulate the structure of the actin cytoskeleton — for example, by activating RHO signalling — affect the Hippo pathway. 
However, although it is generally appreciated that F-actin intersects the pathway downstream of the MST kinases, the exact mechanism by which this occurs is not known and it may involve LATS-dependent and -independent regulation of YAP and TAZ 97 , 98 , 99 , 100 , 101 , 105 . A fourth branch of the inputs to the Hippo pathway emanates from the adherens junction. Engagement of E-cadherin at adherens junctions suppresses the nuclear localization and activity of YAP by regulating MST activity 106 . In addition, the E-cadherin-associated protein α-catenin regulates YAP directly by sequestering YAP–14-3-3 protein (also known as YWHAQ) complexes in the cytoplasm 107 , 108 . Furthermore, members of the junction-associated Ajuba protein family directly inhibit LATS kinase activity 109 . In D. melanogaster , the related Zyxin protein also regulates Warts levels 110 . Whether these adherens junction-associated regulators of the Hippo pathway act independently of each other or whether they function coordinately to regulate Hippo signalling is not yet known. In addition, crosstalk among the different regulatory branches is very likely to exist. For example, CRB and adherens junctions regulate the structure of the actin cytoskeleton, and polarity proteins and junction proteins regulate each other 111 , 112 , 113 , 114 . In addition to these four major branches of the upstream regulators of Hippo signalling, there are several other proteins that modulate the activity of the pathway, most of which have conserved functions in D. melanogaster ( Fig. 2 ). 
These include proteins that directly interact with or affect YAP and TAZ, such as WW domain-binding protein 2 (WBP2) 115 , 116 , 117 , multiple ankyrin repeats single KH domain-containing protein (MASK) 118 , 119 , zona occludens protein 1 (ZO1), ZO2 (Refs 120 , 121 ), homeodomain-interacting protein kinase 2 (HIPK2) 122 , 123 , 14-3-3 protein, protein tyrosine phosphatase non-receptor type 14 (PTPN14) 124 , 125 , 126 , 127 , 128 , casein kinase 1 (CSNK1) and β-TRCP 68 , 69 . They also include proteins that interact with the upstream kinase complexes, such as the RAS association domain-containing family proteins (RASSFs) 129 , 130 , 131 , protein phosphatase 2A (PP2A) 132 , salt-inducible kinases (SIKs) 133 , Merlin 96 , 134 , Scribble and the Scribble-associated proteins discs large (Dlg) and lethal giant larvae (Lgl) 81 , 135 , 136 , 137 , 138 . In D. melanogaster , Scribble, Dlg and Lgl form a module that regulates cell polarity and establishes the basolateral domain, which indicates that the Hippo pathway is independently modulated by several junctional complexes 3 , 31 , 111 . The D. melanogaster Hippo pathway is also regulated by a signalling axis from the atypical cadherin FAT, which regulates the levels and activity of Warts kinases 2 , 3 , 31 . However, it is not clear whether FAT homologs are involved in the regulation of the vertebrate Hippo pathway 139 . Tissue-specific pathway regulation. Although the net effect of deregulated YAP and TAZ activity in many tissues is similar, YAP and TAZ activity appears to be controlled by different regulatory mechanisms in different tissues. For example, whereas the MST kinases are essential for inhibiting YAP and TAZ in the liver 140 , 141 , 142 , 143 , intestine 144 , pancreas 145 , 146 and heart 147 , they appear to be dispensable in the skin 107 . 
Presumably, other negative regulatory mechanisms predominate in the skin, and sequestration via α-catenin and AMOT have been suggested to circumvent the need for phosphorylation by LATS kinases 107 , 108 . Further evidence of cell-type specificity in Hippo pathway regulation comes from the observations that the MST kinases are largely dispensable in mouse embryonic fibroblasts (MEFs) 140 , 142 , whereas the LATS kinases are essential for inhibiting the activities of YAP and TAZ 148 . Tissue-specific requirements for upstream components are also observed in D. melanogaster , in which FAT signalling is dominant in imaginal discs but has no significant effect in the ovarian follicle cell epithelium 149 , 150 , 151 , 152 , 153 , 154 , 155 . Likewise, Merlin has minor effects in imaginal discs but is a major regulator of the Hippo pathway in follicle cells 149 , 150 , 156 , 157 , 158 , 159 , 160 . The Hippo pathway in growth control and cancer The dramatic overgrowth phenotypes caused by the loss of Hippo pathway activity in D. melanogaster first led to the idea that this pathway may be important in the control of organ growth and that it acts as a tumour suppressor in vertebrates. Indeed, many studies have now shown that loss of Hippo signalling or hyperactivation of YAP and TAZ promotes growth and cell pluripotency depending on the tissue context. Thus, loss of Hippo pathway activity in mouse models causes overgrowth of various organs such as the liver and heart, and it can lead to the development of cancer in the liver, skin and intestine. Below, we highlight the phenotypes associated with loss of pathway activity in genetically engineered animal models, discuss cellular events that are controlled by YAP and TAZ, and summarize the evidence for a role of hyperactive YAP and TAZ in human cancer. Hippo signalling in growth control. A predominant function of the Hippo pathway is in regulating progenitor cell proliferation and organ size. In D. 
melanogaster , loss of function of the Hippo or Warts kinases, or overexpression of Yorkie (the D. melanogaster homolog of YAP and TAZ), during development results in severely overgrown imaginal discs, which leads to dramatic overgrowth of the corresponding adult structures 32 , 33 , 34 , 35 , 36 , 37 , 38 , 45 ( Fig. 3a,b ). These overgrowths arise because mutant cells have two defects. First, cell proliferation is dysregulated; they proliferate faster than wild-type cells and continue to proliferate when wild-type cells stop proliferating, after tissues have reached their proper size. Second, these mutant cells are resistant to the apoptotic signals that are induced to eliminate the extra cells. The combination of these defects therefore results in the production of excess cells that cannot be eliminated, leading to increased organ size. Figure 3: Hippo-mutant phenotypes in fruitflies and mice. Scanning electron micrographs of a wild-type fruitfly (panel a ) and a fruitfly with patches (clones) of cells that are homozygous mutant for the Hippo ( hpo ) gene (panel b ) are shown. The hpo -mutant tissues exhibit overgrowth of the adult cuticle. Panel c shows a mouse liver from a wild-type animal at 2 months of age, and panel d shows a mouse liver from a mouse mutant at 2 months of age in which the two hpo homologs mammalian STE20-like protein kinase 1 ( Mst1 ) and Mst2 have been conditionally deleted in the developing liver. Panel e shows a normal mouse liver at 6 months, whereas panel f shows a double mutant Mst1 −/− Mst2 −/− mouse liver at 6 months; the mouse liver is not only overgrown but has also developed foci of hepatocellular carcinoma. Panels a and b are reproduced, with permission, from Ref. 1 © (2011) The Company of Biologists Ltd. Panels c , d and f are reproduced, with permission, from Ref. 141 © (2010) National Academy of Sciences. 
Similarly, in mice, YAP overexpression or loss of MST or LATS kinase activities increases liver and heart size by increasing cell number 11 , 140 , 141 , 142 , 143 , 147 , 161 , 162 ( Fig. 3c,d ). However, the relationship between Hippo signalling and organ size is not absolute. In some tissues such as the skin and intestine, overexpression or activation of YAP causes an enlargement of the stem cell compartment, in part owing to a block in differentiation, but it does not lead to an overall increase in organ size 12 , 107 , 163 . This may be due to the cellular composition of these tissues as they continuously undergo a stem-cell-driven renewal programme. Thus, the general conclusion from genetic studies of the Hippo pathway in mice and fruitflies is that YAP and Yorkie drive cell proliferation and tissue growth. However, they may not drive growth in every tissue, and they also have other functions apart from regulating tissue growth, although these functions appear to be minor and related to cellular differentiation 1 , 164 , 165 . Several direct downstream target genes of the Hippo pathway have been identified and include genes such as cyclins, growth factors and inhibitors of apoptosis that are involved in cell proliferation, cell survival and stem cell functions, among others. In D. melanogaster , a combination of chromatin immunoprecipitation followed by sequencing (ChIP–seq) and RNA sequencing (RNA–seq) for Yorkie and the TEAD protein Scalloped showed that Yorkie and Scalloped regulate most of the genes that are expressed in imaginal discs, making them more akin to general transcriptional activators 166 .
At present, however, it is not completely understood how YAP, TAZ and Yorkie drive cell proliferation and tissue growth; this is likely to involve the regulation of many direct target genes, some of which directly affect growth and survival, whereas others indirectly affect these processes via global regulation of metabolism and other cellular processes 21 , 166 , 167 , 168 . Hippo signalling in cancer. The dramatic effects of YAP overexpression and hyperactivation on organ size and progenitor cell pools demonstrate that YAP has potent growth-promoting activity. Indeed, there is considerable evidence that abnormal Hippo signalling is associated with several human cancers 7 . Elevated levels and nuclear localization of YAP and, in some cases, TAZ have been reported in the majority of solid cancers, which suggests that there is widespread deregulation of Hippo signalling in human neoplasia 7 , 11 , 12 , 15 , 16 , 17 , 18 , 19 , 20 , 21 . Abnormally elevated YAP levels and nuclear localization occur in various human cancers including liver, lung, breast, skin, colon and ovarian cancer 7 , 11 , 12 , 15 , 16 , 17 , 18 , 19 , 20 , 21 . Mouse models have shown that in the liver, skin and colon the attenuation of Hippo signalling or overexpression of YAP is sufficient to promote tumour formation 11 , 12 , 15 , 107 , 140 , 141 , 142 , 143 ( Fig. 3e,f ). The exact mechanisms involved in the transformation of normal cells to malignant tumour cells by deregulated YAP and TAZ are not known but are likely to involve enhanced cell proliferation and survival coupled with the acquisition of additional cancer cell phenotypes such as cancer stem cell characteristics, epithelial to mesenchymal transition (EMT), drug resistance and inhibition of senescence, as these are all YAP- and TAZ-promoted activities that are abnormally regulated in tumour cells ( Fig. 4 ). Figure 4: Cellular functions of YAP and TAZ. 
YAP and TAZ regulate several cellular properties that are important for the development of cancer and the regulation of stem cell behaviour and regeneration. Some of these, such as the promotion of stemness and proliferation, are important for cancer development and in regeneration, whereas others — such as the regulation of epithelial to mesenchymal transition (EMT) — may be important only for the development of cancer. However, the functions of YAP and TAZ in the reprogramming of mature and differentiated cells during regenerative behaviour may be exploited during the development of cancer to help drive EMT and other phenotypes. The association of Hippo signalling with stem cell properties has recently been extended to include cancer stem cells. In breast cancer, TAZ has been shown to be a potent stimulator of cancer stem cells in vitro 137 . Overexpression of TAZ increases the ability of MCF10A cells to form mammospheres , which indicates that TAZ enhances the proportion of MCF10A cells with stem cell properties. Conversely, knockdown of endogenous TAZ inhibits mammosphere formation. TAZ is overexpressed in approximately 85% of high-grade human breast cancers, and there is evidence for gene amplification in 15–20% of these cases 137 . Taken together, these results indicate that the activation of TAZ is a major event associated with the initiation and/or progression of breast cancer. By contrast, according to a report by Cordenonsi et al . 137 , YAP does not appear to be effective in promoting cancer stem cell characteristics. However, other studies support a role for YAP in promoting cancer-like properties in non-transformed mammary epithelial cells 15 . Given that YAP and TAZ promote pluripotency in other stem cell contexts, it is likely that their hyperactivation also contributes to the expansion, survival and self-renewal of cancer stem cells in other malignancies.
In addition to their self-renewal capacity, cancer stem cells are associated with EMT — a feature that is commonly observed in high-grade tumours and metastases. Overexpression of YAP and/or TAZ can lead to the acquisition of a mesenchymal phenotype in mammary epithelial cells, which suggests that Hippo signalling may have important roles in the suppression of EMT 15 , 62 , 137 . Indeed, the loss of cell polarity determinants such as Scribble leads to reduced Hippo pathway activity, activation of YAP or TAZ, EMT and the expansion of cancer stem cells 137 . Thus, YAP and/or TAZ activation via loss of cell polarity may engage a positive feed-forward loop in which YAP and/or TAZ promote EMT and the loss of cell polarity, which then further activates YAP and TAZ. Importantly, hyperactivation of TAZ has been associated with EMT and high-grade tumours in human breast cancer and in glioma 137 , 169 . Cancer stem cells are commonly resistant to chemotherapeutic agents, and several studies have recently indicated that hyperactivation of YAP and/or TAZ contributes to this resistance 15 , 170 , 171 , 172 . Increased cell survival mediated by the suppression of apoptosis is one feature of Hippo pathway inactivation as well as YAP and/or TAZ hyperactivation 11 , 15 , 42 , 43 . Several mechanisms contribute to this suppression, including transcriptional upregulation of pro-survival factors such as B cell lymphoma 2 (BCL-2) family members 173 . In addition, it has been reported that connective tissue growth factor (CTGF) and cysteine-rich angiogenic inducer 61 (CYR61) — targets of YAP and TAZ — inhibit apoptosis in liver cells 174 , 175 and are responsible for taxol resistance in breast cancer cells 172 . Other mechanisms are likely to contribute to YAP- and/or TAZ-mediated drug resistance, including their ability to promote EMT and stem cell properties 176 , both of which are associated with increased resistance to chemotherapy in various tumours and cell lines 177 .
In summary, deregulated YAP and TAZ activity promotes multiple cancer cell phenotypes beyond simply driving cell proliferation, which suggests that targeting YAP and TAZ may inhibit the phenotypes of cancer cells at many levels ( Fig. 4 ). Mechanisms of Hippo pathway deregulation in cancer. The levels and nuclear localization of YAP and TAZ are elevated in many human cancers, and YAP and TAZ promote the acquisition of several important cancer cell phenotypes. However, the mechanism of Hippo pathway deregulation as well as YAP and TAZ activation in human cancers is not well understood. This knowledge gap is important because insight into those molecular mechanisms that are altered and responsible for elevated YAP and/or TAZ activity in cancer would be beneficial for the development of therapies targeting the Hippo pathway. Few germline or somatic mutations in components of the Hippo pathway have been discovered in targeted and whole-genome sequencing efforts to date, which suggests that these may be generally uncommon events 7 . One exception to this observation is the NF2 locus. NF2 encodes the tumour suppressor protein Merlin, which regulates the Hippo pathway by modulating the activity of the LATS kinases to promote their localization to the plasma membrane 96 , 134 , 178 . NF2 is mutated with a high frequency in neurofibromatosis, a condition that is characterized by malignant peripheral nerve sheath tumours, and it is thus a bona fide tumour suppressor gene 179 , 180 . Loss of Merlin as well as the Hippo pathway components LATS2 or SAV1 is also frequently observed in malignant mesothelioma 181 . It is not understood why NF2 and LATS2 mutations are common in neurofibromatosis and mesothelioma but uncommon in other human cancers, despite the widespread deregulation of YAP and TAZ in human cancers. 
Additionally, activating mutations in YAP or TAZ have not been reported in human cancers 7 , 182 , despite the ability of such mutations to drive tumour formation in mouse models of cancer and to confer tumour properties onto non-tumorigenic human cell lines 11 , 12 , 15 . One possibility is that mutant YAP or TAZ may not confer a growth advantage in human cancers because mutant forms of these proteins may also initiate as yet unknown growth-inhibitory mechanisms, or because they may still be kept in check by mechanisms that act independently of LATS kinases — for example, by actin-mediated regulation of YAP and TAZ 105 . In support of the latter hypothesis (that is, mutant YAP and TAZ are kept in check by mechanisms other than those involving LATS kinases), YAP and TAZ are found to be genomically amplified in several human cancers including oral cancers, intracranial ependymomas, hepatocellular carcinomas, gliomas and mammary tumours 15 , 16 , 19 , 183 , 184 , 185 . The frequency of amplification is variable, with high occurrence observed in oral cancers and ependymomas and relatively low frequencies observed in hepatocellular carcinomas and breast cancer. By contrast, approximately 50% of human liver cancers display elevated YAP levels 20 , and TAZ overexpression is observed in over 80% of breast cancers 18 , 137 . These findings imply that amplification is a common mechanism in some human cancers but that other mechanisms predominate in other cancers. The exact nature of these other mechanisms is currently not known, but may include promoter methylation and epigenetic silencing of the MST and LATS kinases 186 , 187 , 188 , 189 , or the expression and/or upregulation of proteins that control YAP transcription 190 , 191 , 192 , 193 or stability 192 . Hence, current evidence suggests that multiple mechanisms contribute to the deregulation of YAP and TAZ in human cancers, including promoter hypermethylation, mutation and amplification. 
Importantly, although specific defects that deregulate the Hippo pathway in many cancers are not known, the core components of the pathway are largely unaffected by irreversible mutations or genetic aberrations 7 . Hence, at least in principle, these observations provide the opportunity for pharmaceutical interventions to reactivate or restore Hippo pathway function, thereby inhibiting the oncogenic activities of YAP and TAZ. Modulating the Hippo pathway for cancer therapy. Given the association of elevated and hyperactive YAP and TAZ with many cancers, direct or indirect attenuation of either of these proteins represents a rational and novel targeted approach for the treatment and prevention of human malignancies. In this section, we discuss recent preclinical findings that support efforts directed towards the development of new therapies targeting the Hippo pathway for the prevention and treatment of malignancies. Several lines of evidence from mouse and in vitro models show that reducing YAP activity is effective in limiting the growth of tumour cells and preventing tumour formation. First, Nf2 deletion in mice induces liver tumour formation; however, simultaneous deletion of Yap specifically in the liver completely suppresses the formation of liver tumours 178 . Importantly, reducing — but not eliminating — Yap gene dosage to 50% results in a similar level of suppression, which indicates that partial inhibition of YAP is effective in preventing tumour formation in this model 178 . In addition, expression of a dominant negative version of TEAD2 (that lacks the DNA binding domain but can still bind to YAP) abolished hepatomegaly and the development of liver cancer in an Nf2 -mutant mouse model 194 . A dosage effect for YAP was also seen in a mouse model of colon cancer in which the kinases Mst1 and Mst2 were conditionally deleted in the intestinal epithelium with villin-Cre 144 . 
By 3 months of age, these mice developed colonic adenomas, which are associated with disruption of the normal colonic tissue architecture resulting from the hyperactivation of YAP. Both of these phenotypes were effectively suppressed by the removal of a single Yap allele 144 . Hence, these two studies show that reducing YAP dosage is effective in preventing tumour formation in situations that are predisposed to YAP activation. Thus, therapeutics that target YAP may not need to fully block YAP activity for efficacy, thereby reducing the potential for negative side effects. Although these results are encouraging, it has not yet been established whether reducing YAP dosage after tumour formation will cause a reduction in tumour burden. In addition, these studies activated YAP by mechanisms that have not been documented in the corresponding human cancers. Hence, it will be important to test whether reducing the activity of YAP or TAZ is effective using relevant genetically engineered mouse models that more closely recapitulate human cancer. Another issue of concern is the lack of a complete understanding of the long-term consequences of YAP and/or TAZ inhibition on normal and cancerous tissues. It has been shown that deletion of YAP in the mouse intestine can lead to WNT hypersensitivity with subsequent enhanced stem cell expansion and hyperplasia induced upon injury, which suggests that there is a potential for YAP inhibition to elicit unexpected consequences 195 . The WNT hypersensitivity and enhanced stem cell expansion and hyperplasia induced by YAP inhibition were attributed to YAP-dependent cytoplasmic sequestration of the WNT signalling component DVL — a function of YAP that is independent of the transcriptional activity of YAP–TEAD. The growth-promoting effect of YAP deletion may be due to further intestinal differentiation, resulting in an increased number of Paneth cells, which define the stem cell niche and are a major source for WNT ligands 196 .
Thus, complete inhibition of YAP may have unintended consequences and result in increased colorectal tumour growth. Desirable effects may thus be best achieved by aiming to reduce the transcriptional activity of the YAP–TEAD complex — for example, by disrupting the YAP–TEAD interaction or by promoting cytoplasmic localization of YAP, rather than completely removing YAP. Indeed, the expression of a dominant negative TEAD2 does not cause overt liver abnormalities 22 . Hence, available evidence supports the notion that selective inhibition of YAP–TEAD function may have therapeutic efficacy with minimal side effects. Studies using cancer cell assays have provided additional evidence that reducing YAP activity is effective in suppressing tumour growth 190 , 197 , 198 , 199 , 200 . Results obtained with human cancer cell lines have demonstrated the efficacy of reducing YAP dosage (by small interfering RNA (siRNA) or short hairpin RNA (shRNA) depletion) in slowing or arresting the growth of tumour cells in vitro or in xenograft assays 190 , 197 , 198 , 199 , 200 . YAP knockdown also affects other properties of cancer cells; for example, it makes cancer cells more sensitive to chemotherapeutic reagents 127 , inhibits cancer stem cell formation 137 and reduces their metastatic potential 198 , 201 , which suggests that targeting YAP may be effective in modulating various properties of cancer cells that are required for their expansion, survival and spread to distant tissues. Notably, a recent study has extended the potential for targeting YAP in human cancer to cancers that do not exhibit elevated levels of YAP 55 . Genome-wide screens for shRNA-induced synthetic lethality in a large panel of human cancer cells revealed that many tumour cell lines with activated WNT signalling are particularly sensitive to knockdown of YAP 55 . 
In this case, YAP forms a transcriptional complex with β-catenin (the nuclear effector of canonical WNT signalling) and TBX5 (a sequence-specific DNA-binding protein); this transcriptional complex is necessary for cell survival 55 . Interfering with the assembly of this complex inhibits tumour cell survival and growth as effectively as knockdown of YAP itself, which suggests that it is the β-catenin–YAP–TBX5 complex that is driving survival, rather than the YAP–TEAD complex (which is presumably also present in these cells). In these cancer cell lines, YAP is constitutively nuclear and is not regulated by cell density, which indicates that there is a non-canonical mode of YAP regulation. Indeed, LATS-independent phosphorylation of YAP by the SRC family kinase YES1 is required for YAP–β-catenin–TBX5 complex formation and for the oncogenic properties of this complex 55 . Thus, this study highlights that sensitivity to YAP inhibition may not always correlate with the levels or activity of YAP, and that TEAD-independent interactions of YAP may be important in some cancer contexts. In summary, these studies imply that inhibiting the activity of YAP and TAZ may be effective in treating a variety of cancers, that complete inhibition is not required to show therapeutic efficacy, and that the sensitivity of a cancer cell to YAP or TAZ inhibition may not always correlate with levels of YAP or TAZ expression. The Hippo pathway in repair and regeneration Tissue repair and regeneration often involves stem cell activation and progenitor cell expansion, and an emerging picture is that YAP and TAZ regulate the balance between stem, progenitor and differentiated cells ( Fig. 4 ). In general, enhanced YAP and/or TAZ activity is associated with stem and progenitor cell expansion that is coupled with inhibition of differentiation, whereas attenuation of YAP and/or TAZ activity tends to have the opposite effect. 
Examples include stem and progenitor cells of the liver, intestine, pancreas, heart, skin and central nervous system 12 , 28 , 107 , 147 , 161 , 162 , 163 , 202 . At present, there is little mechanistic insight into how YAP and TAZ regulate stem cell properties and inhibit their differentiation. However, a recent study by Lian et al . 203 suggests that in murine embryonic stem cells YAP directly binds to the promoters of genes that enhance pluripotency, which suggests that YAP may be a key component of the core pluripotency machinery. It remains to be determined whether this result can be generalized to tissue-specific stem cells in vivo . In situations where progenitor cell expansion has been observed in vivo by YAP activation, it is not clear whether YAP functions to inhibit differentiation and expand stem cell pools or whether it acts by reprogramming differentiated cells to more primitive and stem cell-like progenitor states. Indeed, in the case of murine embryonic stem cells and induced pluripotent cells, YAP appears to be capable of doing both 203 . Enhanced YAP activity inhibits embryonic stem cell differentiation, whereas inhibition of YAP leads to loss of pluripotency. Similarly, in induced pluripotent cells derived from murine fibroblasts, YAP activity increases during the reprogramming process, and negative regulators of YAP — such as the LATS kinases — act as barriers to reprogramming 204 . The situation in human induced pluripotent stem cells and embryonic stem cells is similar to that observed in mice; LATS kinases function as negative regulators of reprogramming 204 . However, unlike in the murine setting, in human stem cells this effect appears to be mediated by TAZ 52 . In human embryonic stem cells, TAZ is important for the maintenance of the pluripotent state, whereas YAP is dispensable for this activity. 
Differences between mouse and human systems are not fully understood but are likely to derive from distinct signalling pathways that regulate pluripotency and reprogramming in murine versus human cells. Taken together, these results suggest that YAP and possibly TAZ promote the reprogramming of differentiated cells as well as the expansion of stem and progenitor cell populations, which are both features that have important applications to regenerative medicine. An evolutionarily conserved essential function for YAP in tissue regeneration has been described through genetic studies of intestinal epithelial regrowth following injury. Elevated YAP expression is seen in mice treated with dextran sodium sulphate (DSS), a chemical that results in injury and inflammation of the large intestine and initiates a regenerative response 22 . Mice that lack YAP in the colonic epithelium do not show overt defects in intestinal homeostasis but are unable to efficiently undergo a regenerative response following DSS treatment 22 , which is consistent with a role for Hippo signalling in repressing latent regenerative responses that involve stem cell activation. Another study, however, found that conditional elimination of YAP in the intestine promoted tissue regeneration after irradiation and caused hypersensitivity to WNT ligands 195 . The reasons behind the different outcomes in the two studies are unclear at present but may be due to differences in the experimental set-up and the time point of analysis. In any case, these two studies suggest that YAP has growth-promoting as well as growth-suppressing functions, potentially because it directly promotes progenitor cell expansion but also suppresses Paneth cell differentiation, which promotes the survival and self-renewal of intestinal stem cells by producing WNT ligands. The D. melanogaster intestine also harbours stem cells that are triggered to proliferate in response to tissue damage induced by feeding toxins or pathogens. The D. 
melanogaster YAP homolog Yorkie is required for this response 24 , 25 , 26 , 27 ; however, unlike in mammals, Yorkie is activated in differentiated enterocytes, and this drives the expression of cytokines that then signal to stem cells and promote their proliferation 24 . Thus, although Yorkie may not be directly involved in controlling intestinal stem cell proliferation in D. melanogaster , it is activated upon tissue damage. Therefore, regulation of the Hippo pathway and activation of YAP in the intestine is involved in driving the regeneration of damaged tissue. A similar situation may be occurring in the liver, where loss of Hippo signalling results in the expansion of oval cells 141 — a facultative progenitor cell population that contributes to liver repair following hepatocyte injury. In addition, YAP is required for neonatal heart regeneration in mice 28 , 29 . Notably, forced overexpression of an activated S127A phospho-site-mutant form of YAP in the adult mouse heart promoted heart regeneration after myocardial infarction 28 . Thus, therapeutic elevation of YAP activity might prove to be beneficial for the restoration of gut, heart, liver and potentially other tissues following injury. An important effect of the experimental elevation of YAP activity with regard to regeneration is its involvement in promoting the entry of non-dividing cells into the cell cycle. This effect is seen in both the liver 11 , 12 and heart 28 , 29 , 147 , 161 . These are tissues that are composed of mostly quiescent cells with cellular turnover rates estimated to be in the order of 1 year for human hepatocytes 205 and once in a lifetime for human cardiomyocytes 206 . Although both cell types are normally quiescent, hepatocytes can be readily induced to re-enter the cell cycle and progress through cytokinesis following injury, whereas adult cardiomyocytes do not efficiently undergo cell cycle re-entry and rarely — if at all — undergo cytokinesis. 
Because of this, the adult liver can regenerate, whereas the heart cannot. Potentially relevant to these observations are the findings that in mouse mutants of core Hippo pathway components (including conditional deletion of Sav1 or combined deletion of Mst1 and Mst2 ) there is unscheduled hepatocyte proliferation in adult mice 140 , 141 , 142 , 143 and enhanced embryonic cardiomyocyte proliferation 147 . Moreover, overexpression of YAP in adult murine cardiomyocytes induces cell cycle entry and cell division 28 , 29 , 161 . Hence, transient elevation of YAP activity might prove to be useful for the repair of tissues that do not normally undergo regeneration — such as the heart — and for augmenting the proliferative capability of damaged tissues that normally do undergo regeneration, such as the liver. Altogether, although sustained activation of YAP or TAZ has clear oncogenic potential in several tissues, transient upregulation of YAP or TAZ activity by pharmacological intervention might be useful in situations that require mobilization of stem and progenitor cell populations or even the reprogramming of adult differentiated cells. Therapeutically targeting the Hippo pathway The studies described above suggest that manipulation of Hippo pathway activity might be beneficial in cancer prevention and treatment as well as in the expansion of stem cell populations for use in regenerative medicine. Targeting the Hippo pathway as an anticancer therapeutic strategy would aim to suppress YAP and TAZ activity, whereas targeting the Hippo pathway to facilitate the regeneration and reprogramming of adult cells and tissues would aim to elevate YAP and TAZ activity. However, although evidence is mounting to demonstrate that pharmacological manipulation of Hippo signalling as well as YAP and/or TAZ activity would be beneficial, there is a need to develop effective means of manipulating Hippo signalling as well as YAP and TAZ activity in both a sustained and transient manner. 
In addition, tissue-specific control of YAP and TAZ may become important if chronic and/or long-term whole-body inhibition of YAP and/or TAZ results in deleterious side effects. This may be achieved by targeting individual branches of upstream regulators. Although there is interest and potential in targeting genes and pathways that are downstream of Hippo, YAP and TAZ, such as the AXL receptor kinase 207 , CTGF 208 , epidermal growth factor receptor (EGFR) signalling 209 , 210 and others (reviewed in Ref. 211 ), below we focus on strategies and small molecules ( Table 2 ) that are designed to target Hippo signalling itself by manipulating components of the Hippo pathway. Table 2 Small-molecule modulators of the Hippo pathway Full size table Kinases. Decades of targeted drug development efforts suggest that kinases and other proteins with enzymatic activity are attractive targets for small-molecule therapeutics 212 . Within the core of the Hippo signalling pathway are two pairs of kinases — MST1 and MST2, and LATS1 and LATS2 — that restrain YAP and TAZ activity. Small-molecule inhibitors of these kinases would thus be predicted to upregulate YAP and TAZ function, which might prove to be beneficial in regenerative medicine applications such as ex vivo stem and progenitor cell expansion and in vivo tissue repair. A small-molecule inhibitor of MST1, called 9E1, has been developed 213 . This molecule inhibits MST activity in vitro and in cultured HeLa cells, as measured by histone H2B phosphorylation in response to apoptosis; however, its effects on YAP and TAZ activity have not yet been reported. 9E1 shows significant — but not complete — selectivity for MST1 over other kinases, and it provides a starting point for the development of more selective MST1 inhibitors. In addition, rational strategies to target kinases are generally effective at identifying lead compounds that can be further developed and refined by traditional medicinal chemistry approaches 214 , 215 . 
Conversely, targeting the MST or LATS kinases for anticancer therapy is more challenging, as small-molecule agonists would be most desirable but there are few options available for the rational design of such agonists. Alternatively, YES1 or HIPK2 may be targeted for cancer therapy in some contexts as these kinases promote YAP activity in some assays; YES1 is required for cell survival and the activation of YAP in β-catenin-active cancer cell lines 55 , and HIPK2 promotes YAP activity in in vitro transcriptional assays 122 . Although there are no reported inhibitors of HIPK2, there is evidence that YES1 inhibition can selectively target cancer cell survival and growth. YES1 phosphorylates YAP, thereby promoting the formation of a YAP–β-catenin–TBX5 transcriptional complex, which is essential for the proliferation of β-catenin-dependent colon cancer cell lines 55 . Notably, the broad-spectrum tyrosine kinase inhibitor dasatinib, which inhibits YES1, was effective in inhibiting the growth of these β-catenin-dependent cell lines in vitro and in inhibiting tumour formation in xenografts, by interfering with the assembly of the YAP–β-catenin–TBX5 complex independently of Hippo pathway modulation 55 . Thus, inhibition of YES1 may be effective in a subset of β-catenin- and YAP-dependent cancers. The YAP–TEAD and TAZ–TEAD complexes. The most attractive anticancer targets in the Hippo pathway are YAP and TAZ, as they are the key downstream mediators of this pathway. However, pharmacological inhibition of YAP and TAZ is challenging, as these proteins have no known catalytic activity and function by engaging domains that facilitate protein–protein interactions with the upstream kinases LATS1 and LATS2, the 14-3-3 proteins, the polarity proteins AMOT, ZO1 and ZO2, and the TEAD transcription factors. In many cases, the oncogenic properties of YAP and TAZ depend on their interaction with the TEAD proteins 72 , 198 . 
Genetic disruption of this interaction by mutating amino acid residues that are crucial for the formation of the YAP–TEAD complex or the TAZ–TEAD complex abolishes the transforming ability of YAP and TAZ in vitro 49 ; in a mouse model of liver cancer, the expression of a dominant negative form of TEAD2 that sequesters YAP was also shown to abolish the transforming ability of YAP 194 . As the co-crystal structure of the YAP–TEAD complex has been recently determined 216 , 217 , 218 , it is possible that this detailed structural information can be leveraged to rationally design small-molecule inhibitors of the YAP–TEAD complex by targeting residues that line the YAP–TEAD polypeptide interaction interface. A small-molecule inhibitor of the YAP–TEAD interaction was identified through high-throughput screening, as described below 194 . MASK, WBP2 and Ajuba proteins. Additional components of the Hippo pathway that could be envisioned for targeted anticancer therapy are proteins that are required for YAP and TAZ activity, such as the MASK proteins 118 , 119 and WBP2 115 , 116 , which directly interact with YAP and TAZ, and Ajuba family proteins, which stimulate LATS activity and thus indirectly inhibit YAP and TAZ 109 . However, all of these are scaffold or regulatory proteins lacking enzymatic activity, which would require targeting a protein–protein interface. Nevertheless, because YAP and TAZ are regulated by multiple inputs, which often show tissue-specific requirements, these and other proteins may provide the means to inhibit YAP and TAZ in specific tissues, thereby aiding in the development of inhibitors that show organ and/or disease-type specificity. EGFR–PI3K pathway. Recently, small-molecule modulators of the EGFR–phosphoinositide 3-kinase (PI3K) pathway have been shown to affect Hippo signalling 219 . 
Administration of mitogenic growth factors contained in serum as well as EGF treatment stimulates the nuclear localization of YAP by inhibiting YAP phosphorylation 219 , 220 . Based on this observation, Fan et al . 219 screened small-molecule inhibitors of kinases and phosphatases that act downstream of EGF signalling and found that inhibitors of PI3K and pyruvate dehydrogenase kinase 1 (PDK1), but not inhibitors of AKT, were effective in blocking the nuclear localization of YAP induced by EGF and lysophosphatidic acid. Mechanistically, PDK1 physically associates with the core Hippo pathway kinase complex and this complex dissociates in response to EGF signalling, thereby resulting in YAP activation 219 . Thus, inhibitors of PI3K and PDK1 may be effective in reducing YAP activity in cells with an intact Hippo signalling pathway that have elevated EGFR and/or reduced PTEN function. GPCR signalling. Three recent reports have linked GPCRs to Hippo pathway regulation 102 , 103 , 104 . Lysophosphatidic acid, sphingosine-1-phosphate and thrombin were found to signal through Gα 12/13 -coupled GPCRs to stimulate YAP nuclear localization and activity 102 , 103 , 104 . Importantly, representatives of many different GPCR subgroups affect YAP, which demonstrates a general function of GPCRs in YAP regulation 102 . GPCRs that act through Gα 12/13 , Gα q/11 or Gα i/o proteins stimulate YAP, whereas Gα s -coupled receptors have the opposite effect 102 . GPCRs regulate YAP via RHO GTPases and the actin cytoskeleton to inhibit LATS kinases independently of MST kinases, causing dephosphorylation and nuclear accumulation of YAP 102 , 103 , 104 . Agonists of Gα s -coupled receptors — such as adrenaline, glucagon and the dopamine receptor agonist dihydrexidine — result in enhanced YAP phosphorylation and inactivation 102 , 221 . 
Importantly, GPCR agonists such as adrenaline also affect YAP phosphorylation in vivo , causing an increase in YAP phosphorylation in the heart of injected mice 102 . These results therefore indicate that YAP activity can be modulated with drugs that affect GPCR signalling, although it remains to be determined which agonists or antagonists will be effective in therapeutic applications. F-actin. The identification of F-actin as an important regulator of YAP and TAZ localization and activity uncovers new opportunities for the modulation of Hippo signalling with small molecules. The F-actin destabilizers cytochalasin D, latrunculin A and latrunculin B, as well as treatment with the non-muscle myosin II inhibitor blebbistatin, the myosin light-chain kinase inhibitor ML-7, the RHO inhibitor botulinum toxin C3 and the RHO kinase inhibitor Y27632 all cause nuclear export of YAP and TAZ 97 , 99 , 100 , 101 . These results suggest that drugs targeting F-actin and its modulators may also be effective in modulating YAP and TAZ activity in vivo . This approach is complicated, however, by the fact that the actin cytoskeleton is not a Hippo pathway-specific signal transduction component but is instead required for many basic cellular functions. Nevertheless, non-lethal levels of actin modulators may enhance the effects of other drugs that target the Hippo pathway via different mechanisms. High-throughput screening approaches. An attractive approach to identify small-molecule inhibitors and activators of Hippo signalling is through the application of cell-based high-throughput screening. Indeed, when approximately 3,300 US Food and Drug Administration (FDA)-approved drugs were screened for inhibitors of the transcriptional activity of YAP, 71 hits were identified including several porphyrin compounds 194 . 
One of these compounds, verteporfin, which is currently used as a photosensitizer in the treatment of macular degeneration, was effective in vivo in delaying tumour progression in an Nf2 -depleted mouse model of liver cancer and in suppressing liver overgrowth caused by the overexpression of YAP; these results were observed following repeated administration of verteporfin during the development of the cancer phenotype 194 . Verteporfin was found to bind to YAP in vitro and to inhibit the interaction of YAP with TEAD 194 . Although these data are exciting, further studies will be needed to determine whether verteporfin is effective in other in vitro and in vivo cancer models and also whether it is effective in the treatment of established cancers. In addition, verteporfin has a relatively low affinity for YAP (at the micromolar level), and so higher-affinity derivatives may be required. In another cell-based screen of 48 drugs for the identification of inhibitors of YAP nuclear localization, dobutamine — a G protein-coupled β-adrenergic receptor agonist that is clinically used to treat acute heart failure — was identified as being effective in preventing the nuclear accumulation of YAP and YAP-mediated transcriptional activation in osteoblastoma and HEK293 cells 222 . Dobutamine treatment induced cytoplasmic translocation of YAP and phosphorylation of Ser127, the main LATS phosphorylation site. Phosphorylation of this site was required for the effect of dobutamine, although it appeared to be phosphorylated by a kinase other than LATS 222 . Similarly to other GPCRs, dobutamine-mediated activation of the β-adrenergic receptor is likely to act through a pathway that involves RHO GTPases and F-actin 102 , 103 , 104 . 
Open questions and future challenges The Hippo pathway — and in particular the YAP–TEAD and TAZ–TEAD complexes — is an emerging anticancer target, and mounting evidence from mouse models as well as tissue culture assays indicates that targeting the Hippo pathway is effective in preventing disease and in counteracting the cellular mechanisms that promote oncogenic transformation. Although less appreciated, there is also considerable potential for the application of small molecules that transiently activate the YAP–TEAD and TAZ–TEAD complexes to promote stem cell expansion and tissue repair following injury. Research over the past decade has identified a complex network of regulatory components within the Hippo pathway and established robust assays through which the activity of this pathway can be measured. These findings now provide a rich landscape for the identification of small-molecule modulators of the Hippo pathway; various pharmacological modulators of Hippo signalling have already been discovered. The challenges now are to determine whether these drugs will be therapeutically useful, which combinations will be the most effective and in what disease contexts they should be applied. Reflecting the complex regulation of the Hippo signalling pathway, these small molecules affect several pathway components and branches. However, many of these compounds do not specifically target components of the Hippo pathway and therefore another challenge is to develop novel modulators that specifically target the Hippo pathway, particularly the activity of the YAP–TEAD and TAZ–TEAD complexes. As many of the pathway components are adaptor proteins that may be difficult to target, it will be important to discover additional members of the Hippo pathway, especially those that directly affect YAP and TAZ, which may lead to the identification of better drug targets. 
For example, it has recently been found that SET7-dependent lysine monomethylation of YAP is important for cytoplasmic retention 223 , which suggests that the identification and selective inhibition of YAP protein demethylases could be a novel approach for modulating YAP activity. Similarly, p300/CBP (CREB-binding protein) mediates the acetylation of YAP and sirtuin 1 (SIRT1) mediates the deacetylation of YAP 224 , 225 , which indicates that modulators of the acetylation cycle may be used to modulate YAP activity. Another challenge is to decipher how YAP and TAZ are deregulated in cancer as this will be crucial for determining which components in the pathway should be targeted. In principle, targeting the Hippo pathway for applications in regenerative medicine may be more easily accomplished as the core of the pathway contains two kinases. It will thus be interesting to develop and test inhibitors of the MST and LATS kinases — such as the compound 9E1 — for transient stimulation of growth and regeneration of tissues following injury and in applications where stem and progenitor cell expansion is desired. Although most attention has focused on small-molecule-mediated manipulation of YAP and TAZ activity by controlling the subcellular localization of YAP and TAZ or their ability to form complexes with TEADs, there are other avenues for the pharmacological manipulation of these proteins that warrant further investigation. For example, YAP and TAZ stability is controlled by phosphorylation-induced protein degradation 68 , 69 , and small molecules that enhance the turnover of YAP and TAZ would be expected to reduce the nuclear transcriptional activities of these proteins. Finally, not all components of the Hippo pathway are required in all tissues, and the pathway has tissue-specific regulatory mechanisms. 
Targeting such tissue-specific regulators provides opportunities to manipulate the Hippo pathway in specific cells, which may help in reducing toxicity and increasing therapeutic value. In conclusion, although studies aimed at targeting the Hippo pathway for therapeutic purposes are still in their infancy, promising preclinical genetic and pharmacological results have already been documented in the literature. We anticipate that the full potential of harnessing the Hippo pathway in the prevention and treatment of human disease will soon be realized. | Although fruit flies don't develop cancer, cancer and stem cell researchers have been learning a great deal from fruit flies - in particular, mutant flies with overgrown organs that resemble hippopotamuses. A fly gene called Hippo and its relatives in mammals normally block cell proliferation and limit organ size. When flies have mutations in Hippo or other genes (together dubbed the Hippo pathway), the resulting overgrowth distorts their tissues into hippopotamus-like bulges. In humans, the Hippo pathway is involved in forming embryonic stem cells, suppressing cancerous growth, and also in regenerative growth and wound healing. Working with flies, researchers at Emory have found that the abnormal growth induced by Hippo pathway disruption depends on genes involved in responding to the steroid hormone ecdysone. Their results are scheduled for publication in Developmental Cell. "Ecdysone is, to some degree, the fly version of estrogen," says senior author Ken Moberg, PhD, associate professor of cell biology at Emory University School of Medicine. In fly larvae, ecdysone triggers metamorphosis, in which adult structures such as wings and eyes emerge from small compartments called imaginal discs. Ecdysone has a chemical structure like that of estrogen, testosterone and other steroid hormones found in humans. 
Ecdysone is not sex-specific, but it acts with the same mechanism as other steroid hormones, diffusing into cells and binding proteins that bind DNA and regulate gene activity. Postdoctoral fellow Can Zhang, PhD, and MD/PhD student Brian Robinson are co-first authors of the Developmental Cell paper. Collaborators at University of Massachusetts, Boston led by Alexey Veraksa, PhD, contributed to the paper. The research team discovered that when the Hippo pathway's control is broken, the resulting excess growth in fly imaginal discs depends on proteins involved in the ecdysone response. The genes that are activated by this combination of Hippo and ecdysone signals in imaginal disc 'tumors' include genes that are usually turned on only in germline stem cells, found in flies' reproductive organs. Activation of these germline stem cell factors by the Hippo pathway requires ecdysone response genes, the researchers found. The researchers concentrated on a protein in flies called Yorkie, which is usually restrained by "upstream" parts of the Hippo pathway. When unleashed, Yorkie travels into the cell nucleus and turns on growth-related genes. "We found that Yorkie does not just engage a developmental growth program in disc tumors," Moberg says. "It is capable of turning on an ectopic program only seen in germline stem cells." The researchers were able to detect a physical interaction between Yorkie and Taiman, a protein important in ecdysone response. Taiman's closest human relative is named Amplified in Breast Cancer-1 or AIB1. These findings point to possible connections in human biology between AIB1 and the Yorkie homologs Yap1 and Taz, Moberg says. "Since both groups of these proteins are often overexpressed in human cancers, we think our findings may have implications for the study of proliferative mechanisms in cancers and possibly cancer stem cells," he says. | www.nature.com/nrd/journal/v13 … n1/full/nrd4161.html |
Biology | Most European men descend from a handful of Bronze Age forefathers | The study 'Large-scale recent expansion of European patrilineages shown by population resequencing' is published in Nature Communications. DOI: 10.1038/ncomms8152 Journal information: Nature Communications | http://dx.doi.org/10.1038/ncomms8152 | https://phys.org/news/2015-05-european-men-descend-bronze-age.html | Abstract The proportion of Europeans descending from Neolithic farmers ∼ 10 thousand years ago (KYA) or Palaeolithic hunter-gatherers has been much debated. The male-specific region of the Y chromosome (MSY) has been widely applied to this question, but unbiased estimates of diversity and time depth have been lacking. Here we show that European patrilineages underwent a recent continent-wide expansion. Resequencing of 3.7 Mb of MSY DNA in 334 males, comprising 17 European and Middle Eastern populations, defines a phylogeny containing 5,996 single-nucleotide polymorphisms. Dating indicates that three major lineages (I1, R1a and R1b), accounting for 64% of our sample, have very recent coalescent times, ranging between 3.5 and 7.3 KYA. A continuous swathe of 13/17 populations share similar histories featuring a demographic expansion starting ∼ 2.1–4.2 KYA. Our results are compatible with ancient MSY DNA data, and contrast with data on mitochondrial DNA, indicating a widespread male-specific phenomenon that focuses interest on the social structure of Bronze Age Europe. Introduction Controversy has surrounded the origins and antiquity of the people of Europe, focused on the proportions descending from Neolithic farmers originating ∼ 10 thousand years ago (KYA), or from earlier Palaeolithic hunter-gatherers. Early studies observed a European SE–NW cline in classical gene frequency data which was ascribed to demic diffusion of farmers 1 , or, in an alternative view, to the first Palaeolithic colonization 2 . 
More recent autosomal genome-wide SNP data sets reflect current population structure 3 , 4 and admixture during the last 3,000 years 5 , but have provided little insight into older population processes. Most debate on European prehistory has been stimulated by analyses of uniparentally-inherited markers. Spatial patterns in maternally-inherited mitochondrial DNA (mtDNA) are non-clinal, with age estimates of haplogroups (hg) taken to suggest a major Palaeolithic contribution 6 . Analyses of diversity in the male-specific region of the Y chromosome (MSY) show significant frequency clines in major lineages 7 , and geographical distributions and dates based on short-tandem repeats (STRs) have led to interpretations of both Palaeolithic 8 and Neolithic 9 major components. The most frequent western European lineage, hg R1b-M269, was originally believed to have originated in the Palaeolithic 10 , but in more recent analysis was assigned a Neolithic origin 11 , a claim challenged in turn 12 on the basis of STR choice and sample ascertainment. In general, dates based on STRs are problematic because of uncertainty about appropriate mutation rates, and possible long-term mutation saturation due to their stepwise mutation processes 13 . Palaeolithic dates for the major lineages are challenged by scanty ancient MSY DNA data, which suggest a marked discontinuity between 5–7 KYA and the present 14 . A major cause of the controversy about MSY evidence is that unbiased estimates of diversity and time depth have until recently been impossible to obtain in large samples. Next-generation sequencing (NGS) generally offers unbiased ascertainment of MSY SNPs, providing phylogenies in which topologies inform about past demography, and branch lengths are in principle proportional to time, avoiding dating problems associated with STRs. Some insights have emerged from recent work 15 , 16 , but no systematic population-based NGS study across Europe has yet been undertaken. 
Here, we use targeted NGS of European and Middle Eastern populations to show that Europe was affected by a major continent-wide expansion in patrilineages that post-dates the Neolithic transition. Resequencing at high coverage of 3.7 Mb of MSY DNA, in each of 334 males comprising 17 population samples, defines an unbiased phylogeny containing 5,996 high-confidence single-nucleotide polymorphisms (SNPs). Dating indicates that three major lineages (I1, R1a and R1b), accounting for 64% of the sampled chromosomes, have very recent coalescent times, ranging between 3.5 and 7.3 KYA. In demographic reconstructions 17 a continuous swathe of 13/17 populations from the Balkans to the British and Irish Isles share similar histories featuring a minimum effective population size ∼ 2.1–4.2 KYA, followed by expansion to the present. Together with other data on maternally inherited mtDNA 16 , 18 and autosomal DNA 19 , our results indicate a recent widespread male-specific phenomenon that may point to social selection, and refocuses interest on the social and population structure of Bronze Age Europe. Results Samples and approach To address the lack of a systematic population-based NGS study of European MSY diversity, we assembled a collection of 20 randomly chosen male DNA samples from each of 17 populations from Europe and the Middle East ( Supplementary Table 1 ). We used a sequence-capture approach to successfully generate 3.7 Mb of unique MSY sequence from 334 of the total set of 340 males. Mean read-depth of 51 × allowed us to call a set of 5,996 high-confidence SNPs ( Supplementary Data 1 ), with a minimum 6 × read-depth. SNP calls were validated using publicly available Y-chromosome sequence information from genome-wide data ( Supplementary Table 2 ). SNPs have been deposited in NCBI dbSNP database and ss numbers can be found in Supplementary Data 1 . 
Phylogeography of European MSY lineages We constructed a maximum-parsimony tree displaying the phylogenetic relationships between SNP haplotypes ( Fig. 1a ; Supplementary Fig. 1 ), rooted by reference to two MSY sequences 13 from the basal haplogroups A and B. Our sequenced regions cover many previously known SNPs, which allowed us to apply established haplogroup names 20 to clades. Figure 1b shows the geographical distribution of these haplogroups in our samples, which is consistent with previous studies of specific SNPs using larger per-population sample sizes 10 . As expected, the commonest haplogroup is R1b-M269 (43.1%), with highest frequency in the north-west, followed by I1-M253 (13.8%), I2-P215 (9.0%), R1a-M198 (7.5%) and J2-M172 (7.5%). Some clades show geographically-restricted distributions, with hg N1c-M178 being most frequent in the Saami, and sub-lineages of haplogroups E, G and J prevalent in the Mediterranean area. Figure 1: Phylogeny and geographical distribution of European MSY lineages. ( a ) Maximum-parsimony tree of European MSY lineages defined here by resequencing. Branch lengths are proportional to molecular divergence among haplotypes. Key mutation names are given next to some branches, and haplogroup names 20 in the coloured bar below. Three sporadic haplogroups are coloured in black. The grey box within hg R1b-M269 shows the star phylogeny referred to in the text. ( b ) Map with pie-charts showing frequencies of Y-chromosome haplogroups (defined and coloured as in part a ) in 17 populations from Europe and the Near East. Population abbreviations are as follows: bas: Basque; bav: Bavaria; CEU: Utah residents with Northern and Western European ancestry from the CEPH collection (France); den: Denmark; eng: England; fri: Frisia; gre: Greece; hun: Hungary; ire: Ireland; nor: Norway; ork: Orkney; pal: Palestinians; saa: Saami; ser: Serbia; spa: Spain; TSI: Toscani in Italia (Italy); tur: Turkey. 
Full size image The shapes of different clades within the tree ( Fig. 1a ) vary greatly. Haplogroups E1b-M35, G2a-L31, I2-P215, J2-M172, L-M11 and T-M70 contain long branches with deep-rooting nodes, whereas I1-M253, N1c-M178, R1a-M198 and R1b-M269 show much shallower genealogies. Haplogroup R1b-M269 is particularly striking, containing a remarkable star phylogeny within which 44 terminal branches (13.2% of the total), found in 13 of the 17 sampled populations, descend as a multifurcation from a single node without any sub-structure whatsoever, despite the extensive nature of the sequencing carried out. These qualitative features of the phylogeny are supported by values of the average number of mutations from the ancestral node to branch tips, and also by estimates of time-to-most-recent-common-ancestor (TMRCA) ( Table 1 ) derived by two different methods. Considering haplogroups R1b-M269, R1a-M198 and I1-M253, and the 95% highest posterior density intervals of their TMRCAs, 64% of the MSY sequences sampled in our study descend from three ancestors who each lived more recently than ∼ 7.3 KYA. Table 1 TMRCAs of major haplogroups in Europe estimated using two methods. Full size table Inferences on demographic history Although it offers a tool to formulate hypotheses, the phylogeographic analysis above is limited in its power to illuminate demographic history. To further understand past demography, we applied a population approach: Table 2 shows diversity parameters for the 17 populations. When we consider the diversity from a molecular perspective (as the number of polymorphic sites) the highest diversity is in Turkey and Greece, closely followed by other southern populations (including Palestinians), and the lowest in Saami and Orkney. 
Consistent with this, there is a significant correlation of decreasing diversity both from south to north, and east to west ( Supplementary Table 3 ), a clinal pattern that might be compatible with a model of demic diffusion from the Middle East. Considering instead the distribution of haplotypes, assessed as median number of singletons (here defined as variants that appear only once within a given population), by far the highest diversity is seen in Turkey and Greece, with the lowest diversity in the Saami and Palestinians—probably reflecting the effect of recent isolation and drift. Neither this measure, nor its standard deviation, shows any significant correlation with either latitude or longitude ( Supplementary Table 3 ). Table 2 Diversity parameters for the 17 populations. Full size table Bayesian skyline plots (BSPs) ( Fig. 2 ) reveal the variation of effective population size with time 21 . The plots are consistent with patterns seen in the relative numbers of singletons, described above, in that the Saami and Palestinians show markedly different demographic histories compared with the rest, featuring very recent reductions, while the Turks and Greeks show evidence of general expansion, with increased growth rate around 14 KYA. A different pattern is seen in the remaining majority (13/17) of populations, which share remarkably similar histories featuring a minimum effective population size ∼ 2.1–4.2 KYA (considering the 95% confidence intervals (CIs) reported in Supplementary Table 4 ), followed by expansion to the present. Considering only these 13 populations, the only significant geographical correlation is of decreasing diversity in the number of polymorphic sites from east to west ( Supplementary Table 3 ); notably, there is no significant correlation between the age at which effective population size was at a minimum before expansion ( Supplementary Table 4 ), and either latitude or longitude ( Supplementary Table 3 ). 
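The singleton measure used above (a variant observed in exactly one individual of a given population) is straightforward to compute from per-sample variant sets. Below is a minimal illustrative sketch, not the Vcftools-based procedure actually used in the study; the sample names and site IDs are invented:

```python
from collections import Counter

def singletons_per_sample(genotypes):
    """Count, for each sample, the variants it alone carries in its population.

    genotypes: dict mapping sample name -> set of derived-allele site IDs.
    A singleton is a variant seen in exactly one sample of the population.
    Returns a dict mapping sample name -> number of singleton sites it carries.
    """
    carrier_counts = Counter()
    for sites in genotypes.values():
        carrier_counts.update(sites)
    return {
        sample: sum(1 for s in sites if carrier_counts[s] == 1)
        for sample, sites in genotypes.items()
    }

# toy population of three males: site 2 is shared, so it is not a singleton
pop = {"s1": {1, 2, 3}, "s2": {2, 4}, "s3": {2, 5}}
counts = singletons_per_sample(pop)   # {"s1": 2, "s2": 1, "s3": 1}
```

The per-population median and standard deviation reported in Table 2 would then be summary statistics over these per-sample counts.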
Taken together, the very recent age of the demographic shift and its lack of geographical pattern suggest that its origin is distinct from that of the diffusion of agriculture. Figure 2: Bayesian skyline plots. Thick black lines indicate the median for effective population size ( N e ) and thinner grey lines show 95% highest posterior density intervals. Grey shading indicates the confidence intervals of the time estimate of the minimum effective population size before the expansion, based on the limits of the 95% CI of the mutation rate. Population abbreviations are as follows: bas: Basque; bav: Bavaria; CEU: Utah residents with Northern and Western European ancestry from the CEPH collection (France); den: Denmark; eng: England; fri: Frisia; gre: Greece; hun: Hungary; ire: Ireland; nor: Norway; ork: Orkney; pal: Palestinians; saa: Saami; ser: Serbia; spa: Spain; TSI: Toscani in Italia (Italy); tur: Turkey. Full size image In contrast to BSPs, approximate Bayesian computation (ABC) offers an alternative model-based approach to understanding complex population histories, based on coalescent simulations 22 , 23 . We analysed several demographic models including single population expansion, reduction or combinations of the two (see Supplementary Fig. 2 ). Despite the high molecular resolution of our data, the fact that they originate from a single locus did not allow precise estimation of demographic parameters, but the ABC generally confirmed the population size dynamics reconstructed by the BSP analysis (with some exceptions: Supplementary Tables 5–7 ). The proportion of the parameter variance explained by the summary statistics ( R 2 ) is in most cases higher than 10% (and hence generally considered a good estimate), but the 95% credible intervals are wide.
This is particularly true for T1, the time of the start of the demographic change (reduction or expansion), thus preventing us from drawing any conclusion about the timing of these events (whether post Neolithic, or more ancient) from the ABC analysis. In general, the non-parametric genealogical approach represented by BSPs better explores the variation found in our data, compared with a more conservative ABC analysis based on a single locus. We note that both analyses assume panmixia, and that population structure might influence effective population size estimates. However, it seems improbable that such an effect would extend to so many populations. Discussion Our approach has led to the confident identification of many MSY sequence variants in European population samples and a highly resolved phylogeny, but our conclusions are also influenced by a more contentious factor, the choice of mutation rate. We chose a rate (1.0 (0.92–1.09) × 10 −9 per bp per year, considering a 30-year generation time) based on the observation of 609 MSY mutations (excluding palindromic regions) in Icelandic deep-rooting pedigrees 24 . The point estimate of this rate is the same as an earlier pedigree-based estimate in which only four mutations were observed 25 , and which we applied in our broader MSY-phylogeny study 13 . We note that the rate we have used is higher than the estimate of 0.76 (0.67–0.86) × 10 −9 per bp per year based on counting the ‘missing’ mutations in the genome of the Ust’-Ishim male 26 , radiocarbon dated to ∼ 45,000 YBP. Other studies 27 , 28 have inferred slower mutation rates (0.62 or 0.64 × 10 −9 ) based on scaling the genome-wide de novo rate to account for male-specific transmission, though this has been criticized 29 , and is not consistent with phylogenetic mutation rates estimated from human–chimpanzee MSY comparisons 30 , 31 . 
Some have chosen to calibrate the pedigree mutation rate against external (for example, archaeological) data 15 , 32 , but we have rejected this idea, firstly because of uncertainty over how archaeological date estimates correlate with demographic changes, and secondly because we have used a coalescent-based dating method that itself models genealogies. Despite recent advances, mutation rate remains a difficult issue, and more data are needed. The recent and rapid continent-wide demographic changes we observe suggest a remarkably widespread transition affecting paternal lineages. This picture is confirmed in an independent analysis of MSY diversity in the pooled HGDP CEPH panel European samples 16 , and is compatible with current ( n =98) ancient DNA data for MSY ( Fig. 3 ; Supplementary Table 8 ), in which hgs R1a, R1b and I1 are absent or rare in sites dating before 5 KYA, whereas hgs G2a and I2 are prevalent. Figure 3: Timeline of MSY ancient DNA data. The graph shows stacked frequencies of MSY haplogroups in ancient European DNA samples, based on data from 98 individuals, and binned into 500-year intervals. ‘other’ includes C1, F*, H2, R*, R1, R1b(xR1b1a). Below the timeline are indicated BEAST point estimates and highest posterior density intervals for three relevant haplogroups. Full size image Analyses of ancient autosomal sequence data 19 demonstrate discontinuity 7–5 KYA between western European hunter-gatherers, tending to have genetic affinity to northern Europeans, and farmers, resembling southern Europeans. Consideration of the genomic ancestry of modern Europeans 33 reveals ancestry from these two groups, but also from north Eurasian hunter-gatherers 33 . 
Recent analysis 34 better defines this latter component, supporting a two-migration model into a hunter-gatherer substrate, involving an early-Neolithic (7–8 KYA) arrival of farming populations from the Near East, followed by a late-Neolithic (4.5 KYA) migration of pastoralists from the steppe region north of the Caspian Sea, whose genomic contribution is ubiquitous in modern Europeans. Ancient MSY sequences 34 show that hgs R1a and R1b are present in the steppe much earlier than observed in any European sites ( Supplementary Table 8 ), making this region a likely source for these MSY expansion lineages. Ancient mtDNA data 18 also indicate large-scale population discontinuity since the Neolithic transition, with a massive shift in haplogroup composition ∼ 7.5 KYA between Central European hunter-gatherers (carrying exclusively hg U lineages) and farmers (a much broader range of hgs), followed by later fluctuations. Demographic inference from whole mtDNA sequences 16 , however, does not show recent and sudden expansion. This suggests that the recent events responsible for shaping modern MSY variation were male specific. The period 4–5 KYA (the Early Bronze Age) is characterized by rapid and widespread change, involving changes in burial practices that might signify an emphasis on individuals or kin groups, the spread of horse riding, and the emergence of elites and developments in weaponry 35 . In principle male-driven social selection 36 associated with these changes could have led to rapid local increases in the frequencies of introgressing haplogroups 34 , and subsequent spread, as has been suggested for Asia 37 . However, cultures across Europe remain diverse during this period; clarifying the temporal and geographical pattern of the shift will rely heavily on additional ancient DNA data. Methods Samples DNA donors were recruited with informed consent (University of Leicester Research Ethics Committee reference: maj4-cb66). 
DNA was extracted from various sources including lymphoblastoid cell lines, peripheral blood and saliva. A total of 340 samples were included in the design, from 17 populations (20 males each) across Europe and the Near East. Samples from Greece, Serbia, Hungary, Germany (Bavaria), Spanish Basque country, central Spain, Netherlands (Frisia), Denmark, Norway, Finland (Saami), England 38 (Herefordshire and Worcestershire), Orkney 38 , Ireland and Turkey were collected by the authors. Twenty random Palestinian male samples were purchased from the National Laboratory for the Genetics of Israeli Populations. Finally, samples from two HapMap 39 populations were used, both to supplement the population data set and to provide data on externally analysed samples for validation purposes: the Centre d'Etude du Polymorphisme Humain (CEPH) collection in Utah, USA, with ancestry from Northern and Western Europe (CEU) and the Toscani in Italia (TSI). After the initial analyses, one English and one Spanish individual were identified as females and therefore removed from all downstream analyses in this study, reducing the final number of samples to 338. For further details on samples see Supplementary Table 1 . Bait design for target enrichment For target enrichment Agilent SureSelect (Agilent Technologies Inc., CA, USA) hybridization capture was used. RNA baits were designed using Agilent eArray with default parameters for Illumina Paired-End Long Read sequencing (bait length: 120 bp; design strategy: centred; tiling frequency: 1×; avoid overlap: 20 bp; strand: sense; avoid regions: repeat masker) and human reference sequence hg19/GRCh37 (February 2009). Boosting was used for ‘orphan’ (located >20 bp from flanking baits) and GC-rich ( ⩾ 63%) baits by direct replication (orphans 2×, GC-rich 3×). In this study we focus on the eight X-degenerate regions 31 of the Y chromosome which are likely to yield interpretable sequence data; other captured regions are discussed elsewhere 13 .
The total length of targeted regions was ∼ 8.6 Mb, and following capture design and the necessary repeat masking, the designed baits covered 2.3 Mb. Coordinates of the eight targeted regions can be found in Supplementary Data 2 . Sequencing and data processing Genomic DNA (3–5 μg) was used for library preparation and target enrichment using Agilent SureSelect XT Target Enrichment System for Illumina Paired-End Sequencing Library kit (version 1.3). In order to obtain larger insert sizes, DNA samples were fragmented to ∼ 250–600 bp without size selection. This resulted in a mean insert size of 330 bp, which increases recovery of sequence data from bait-adjacent regions. Sequencing was done on an Illumina HiSeq 2000 instrument (Illumina, CA, USA) with paired-end 100-bp run to high coverage. Library preparation, target enrichment and sequencing were carried out at the High-Throughput Genomics Centre at the Wellcome Trust Centre for Human Genetics, University of Oxford, UK. Base calling was done using Illumina Bustard 40 and quality control with FastQC 41 . Sequence data were mapped to the human genome reference (GRCh37) using Stampy v1.0.20 (ref. 42 ). Local realignment was done using The Genome Analysis Toolkit (GATK) v2.6-5 (ref. 43 ), followed by duplicate read marking with Picard v1.86 (ref. 44 ) and base quality score recalibration also with GATK. The individual steps and parameters used are listed in Supplementary Table 9 . Variant calling and filtering Owing to larger insert sizes (see above) and high efficiency of sequence capture, high sequence coverage was obtained not only at baited regions but also at ∼ 300 bp flanking the enrichment baits. Therefore, the original bait coordinates were modified by adding 300 bp to either side of each bait followed by merging the overlapping coordinates and increasing the size of the analysed region to 4,433,580 bp. 
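The coordinate adjustment described above (padding each bait interval by 300 bp on both sides, then merging any intervals that come to overlap) is a standard interval-merge operation. A minimal sketch, assuming half-open coordinates; the bait positions below are invented for illustration:

```python
def extend_and_merge(baits, pad=300):
    """Extend each (start, end) bait interval by `pad` bp on both sides,
    then merge any intervals that overlap or touch after padding.

    Coordinates are treated as half-open [start, end); this convention is
    an assumption for the sketch, not taken from the paper.
    """
    padded = sorted((max(0, s - pad), e + pad) for s, e in baits)
    merged = []
    for s, e in padded:
        if merged and s <= merged[-1][1]:          # overlaps/touches previous
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

# first two 120-bp baits merge once padded; the distant third stays separate
regions = extend_and_merge([(1000, 1120), (1500, 1620), (5000, 5120)])
# -> [(700, 1920), (4700, 5420)]
```

Summing the merged interval lengths genome-wide is what yields the total analysed-region size quoted in the text.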
Data on the 338 male samples described above were co-analysed with simultaneously generated data on an additional 117 samples, described elsewhere 13 . Variant calling was done using the samtools mpileup v0.1.19 multi-sample option, calling all samples simultaneously ( Supplementary Table 9 ). In total 19,276 raw SNPs were called from 455 male samples. Raw variants were filtered using vcftools v0.1.11 (ref. 45 ) and in-house Perl scripts. Filters used for the final data are listed in Supplementary Table 9 . As well as the two females, seven samples (four from the European population set) were removed from the final data set due to missing >5% of calls. The filtered data set included 13,474 variant sites from 448 samples, from 0 to 643 missing calls per individual, with an average call rate of 99.8%. To recover as many missing genotypes as possible for subsequent analyses, they were divided into three groups based on read-depth: DP 0—the genotype call was discarded; DP 2–6—the raw call was accepted; all other cases—the sites were re-called using a single-sample approach to obtain the DP4 field in the vcf, the bam file was checked manually, and the most probable allele was inferred by comparing the bam file with the information contained in the DP4 field. After this procedure, 213/13,474 sites still lacked genotype calls, leading to a final number of 13,261 sites for further analyses. Having applied the above filters to variant sites, it was necessary to apply the same criteria to non-variant sites. We calculated the depth per sample per site using the GATK DepthOfCoverage tool, filtered for base quality 20 and mapping quality 50, and then applied the criterion of ⩾ 6 × coverage in ⩾ 95% of samples. This led to a reduction in the figure of base pairs sequenced from 4,433,580 to 3,724,156 bp. The corresponding coordinates ( Supplementary Data 2 ) were used for all downstream analysis. 
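The non-variant-site criterion above ( ⩾ 6× coverage in ⩾ 95% of samples) can be expressed as a simple per-site filter. An illustrative sketch, assuming a plain site-by-sample depth matrix rather than the GATK DepthOfCoverage output actually used:

```python
def passing_sites(depth_matrix, min_depth=6, min_fraction=0.95):
    """Return indices of sites covered at >= min_depth in >= min_fraction
    of samples.

    depth_matrix[site][sample] holds per-sample read depth at that site.
    Mirrors the >=6x-in->=95%-of-samples rule from the text; the data
    layout is a simplifying assumption for this sketch.
    """
    keep = []
    for i, depths in enumerate(depth_matrix):
        covered = sum(1 for d in depths if d >= min_depth)
        if covered / len(depths) >= min_fraction:
            keep.append(i)
    return keep

# toy example: 3 sites x 20 samples
site_good = [10] * 20                 # all samples covered -> passes
site_edge = [10] * 19 + [2]           # exactly 95% covered -> passes
site_bad  = [10] * 18 + [2, 2]        # 90% covered -> fails
kept = passing_sites([site_good, site_edge, site_bad])   # [0, 1]
```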
Mean raw sequence coverage per sample across the 3,724,156 bp of analysed regions was calculated using Picard v1.93. Sequence depth for the 448 samples varied from 25 × to 85 × per sample, with the average of 51 × . Supplementary Table 1 shows sequence depth information for the 338 male samples described here. The final European data set used here, excluding cases with >5% missing calls, includes 334 individuals and 5,996 SNPs ( Supplementary Data 3 ). Validation In silico validation of the 13,261 filtered SNP calls was done using two previously published data sets: genomes sequenced to high-coverage with self-assembling DNA nanoarrays by Complete Genomics 46 , and Omni2.5 BeadChip genotype data produced at the Broad Institute as part of the 1,000 Genomes Project 47 . Our samples included 4 and 39 HapMap individuals overlapping with these two data sets, respectively. A Perl script was written to compare the SNP calls in our variant set to overlapping samples and positions in the control sets. Of the 888 variant sites shared between our data and the Complete Genomics data across four overlapping samples, the false positive and false negative error rates were both 0%. When compared with the Omni data across 241 variant sites and 39 overlapping samples, the error rates were 0.13% for false positives and 1.82% for false negatives. However, all the false calls originated from only 19 variant sites. To shed light on these comparatively high error rates, we also compared Complete Genomics and Omni data for regions corresponding to our final analysed regions. Across 263 variant sites and 49 overlapping samples, we obtained false positive and false negative rates of 2.85 and 2.36%, respectively. The false calls originated from 30 sites and 15 of those overlap with the sites producing high error rates when comparing our data with Omni. 
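The false-positive and false-negative rates above can be framed as set differences between two collections of (sample, site) variant calls. A sketch with invented calls; the paper does not spell out its exact denominators, so the conventions below (FP rate over test calls, FN rate over truth calls) are an assumption:

```python
def error_rates(test_calls, truth_calls):
    """Compare two sets of (sample, site) variant calls, treating
    truth_calls as the reference set.

    Returns (false_positive_rate, false_negative_rate) as fractions.
    The choice of denominators is an illustrative convention only.
    """
    fp = len(test_calls - truth_calls)   # called by us, absent from truth
    fn = len(truth_calls - test_calls)   # in truth, missed by us
    return fp / len(test_calls), fn / len(truth_calls)

test_calls = {("s1", 101), ("s1", 202), ("s2", 101), ("s2", 303)}
truth_calls = {("s1", 101), ("s2", 101), ("s2", 303), ("s2", 404)}
fp_rate, fn_rate = error_rates(test_calls, truth_calls)   # 0.25, 0.25
```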
Since the Complete Genomics data set is generally considered to have very high quality, this seems to indicate problems in making correct calls from Omni genotyping data. More detail is provided in Supplementary Table 2 . Phylogenetic inference PHYLIP v3.69 was used to create a maximum parsimony phylogenetic tree 48 . Three independent trees were constructed with dnapars using randomization of input order (seeds: 4941985, 62529981 and 38185313), each 10 times. Output trees of these runs were used to build a consensus tree with the consense programme included in the PHYLIP package. The tree was rooted using two Y-chromosomes belonging to haplogroups A and B which were sequenced in the complete data set 13 . FigTree v1.4.0 49 was used to visualize the tree ( tree.bio.ed.ac.uk/software/figtree/ ). Haplogroup prediction The presence of known markers was checked using AMY-tree v1.2 (refs 50 , 51 ). This software was developed to predict MSY haplogroups from whole-genome sequence data using a list of known markers. Since our data do not cover the whole MSY but only a proportion of it, the software lacks sufficient information for haplogroup prediction. However, it can be used to deduce the presence and allelic states of known MSY markers present in sequence data. The AMY-tree v1.2 conversion file contains a list of 1,453 known Y-SNPs, of which 490 are present in our data. These 490 sites were used to assign a standard haplogroup to all our samples according to the Y Chromosome Consortium phylogenetic tree 20 and its subsequent updates ( Supplementary Table 1 ). TMRCA and ages of nodes The TMRCA of the tree and of nodes of interest were estimated via two approaches: BEAST v1.8 (refs 17 , 52 ): Markov chain Monte Carlo (MCMC) samples were based on 25,000,000 generations, logging every 1,000 steps, with the first 2,500,000 generations discarded as burn-in. Three runs were combined for analysis using LogCombiner.
We used an exponential growth coalescent tree prior (growth rate prior: uniform(0–0.002)), HKY substitution model, and a strict clock with a substitution rate of 1.0 (95% CI: 0.92–1.09) × 10 −9 mutations/nucleotide/year 24 . TMRCAs were estimated in a single run including all 17 European populations and assigning samples to specific clades in agreement with the MP tree shown in Fig. 1 . Rho: A Perl script was written to calculate TMRCA and its standard deviation for any given clade within a PHYLIP outfile, using the rho statistic 53 , 54 . A scaled mutation rate of 268.5 (246.3–291.9) years per mutation was used, based on a published rate of 1.0 (95% CI: 0.92–1.09) × 10 −9 mutations/nucleotide/year 24 and the number of nucleotides in our regions of interest (3,724,156). Bayesian skyline plots Bayesian skyline plots were generated using BEAST v1.8 (refs 17 , 52 ). MCMC samples were based on 100,000,000 generations, logging every 1,000 steps, with the first 10,000,000 generations discarded as burn-in. We used a piecewise linear skyline model with 10 groups, a HKY substitution model, and a strict clock with a mean substitution rate of 1.0 × 10 −9 mutations/nucleotide/year 24 and a generation time of 30 years, consistent with 55 . For the 13 populations showing a recent expansion in the BSP, the limits of the 95% CI of mutation rate 24 (0.92–1.09 × 10 −9 ) were used to define the CI of the time estimate of the minimum effective population size before the expansion (grey shading in Fig. 2 ). Intrapopulation diversity and geographical correlation The number of polymorphic sites per population, Tajima’s D 56 , and Fu’s FS 57 were calculated using Arlequin 3.5 (ref. 58 ). The number of singletons was calculated using Vcftools v0.1.11. Correlation tests between measures of genetic diversity and latitude and longitude were run in R 59 with the function cor.test of the package stats. 
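The rho-based dating reduces to a short calculation: rho is the mean number of mutations from the ancestral node to each sampled tip, and one mutation corresponds to 1 / (1.0 × 10^-9 per bp per year × 3,724,156 bp) ≈ 268.5 years, the scaled rate quoted above. A sketch of that arithmetic (not the authors' Perl script); the per-tip mutation counts are invented:

```python
def rho_tmrca(mutations_to_tips, rate_per_bp_year=1.0e-9, n_bp=3_724_156):
    """Estimate the TMRCA of a clade with the rho statistic.

    mutations_to_tips: number of mutations from the ancestral node to each
    sampled tip. rho is their mean, and each mutation represents
    1 / (rate_per_bp_year * n_bp) years (~268.5 years for these regions).
    """
    rho = sum(mutations_to_tips) / len(mutations_to_tips)
    years_per_mutation = 1.0 / (rate_per_bp_year * n_bp)
    return rho * years_per_mutation

years_per_mut = 1.0 / (1.0e-9 * 3_724_156)   # ~268.5 years per mutation
tmrca = rho_tmrca([20, 22, 18, 24])          # rho = 21 -> roughly 5.6 KYA
```

The standard deviation reported alongside rho-based TMRCAs would follow from the usual rho variance formula, omitted here for brevity.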
Approximate Bayesian computation We generated one million simulated datasets for each tested model ( Supplementary Fig. 2 ) with the programme FastSimcoal2 (ref. 60 ), simulating a single haploid locus of 3,724,156 bp. We summarized the data by means of the derived site frequency spectrum (-s -d flags in the command line) considering only categories with at least one observed polymorphic site. Ancestral states in the observed data were defined elsewhere 13 using a custom script. To compare models we applied the Logistic Regression procedure 61 . Model parameters were estimated by a locally weighted multivariate regression 22 after a logtan transformation 62 of the 10,000 best-fitting simulations from a specific model. To calculate the posterior probabilities for models and parameters we used R scripts from , modified by AB and SG. We also estimated the power of our ABC procedure to correctly recognize the true model calculating for each model the proportion of true positives and false positives. We evaluated 1,000 pseudo-observed data sets generated under each model, counting the number of times a specific model is correctly identified by the ABC procedure (true positives), and the number of times the same model is incorrectly selected as the true model (false positives). Demographic models and priors We considered five models depicting different demographic histories, testing each model separately for each population ( Supplementary Fig. 2 ). M1 is the simplest, in which the effective population size remains constant over time (uniform prior: 20–20,000). In M2 an ancient constant-sized population (uniform prior: 1,001–20,000) starts an exponential reduction T1 generations ago. The reduction spans LEX generations (uniform prior: 5–634), then the population returns to constant size (uniform prior: 20–1,000) T2 generations ago (uniform prior: 0–30). 
In M3 the reduction is followed by an expansion that starts T2 generations ago (with the effective population size, NER, drawn from an uniform prior: 20–1,000) until the present (uniform prior for the current effective population size, NC: 1,001–20,000). M4 is parameterized in the same way as M2, with an expansion instead of a reduction (NA uniform prior: 20–1,000; NC uniform prior: 1,001–20,000). In M5, the expansion ends at T2 followed by a reduction until present time (NEE uniform prior: 1,001–20,000; NC uniform prior: 20–1,000). We considered a generation time of 30 years. In all the models the Last Glacial Maximum ( ∼ 20,000 years ago) represents the upper bound of the time for the first demographic change (T1). In each simulation the per-generation, per-site mutation rate 24 is drawn from a normal distribution with mean 3.01 × 10 −8 and 95% confidence intervals 2.77–3.26 × 10 −8 . DNA sequences were generated under a finite sites mutational model with no transition/transversion bias. Perl scripts used in the analysis are available upon request. Additional information How to cite this article: Batini, C. et al . Large-scale recent expansion of European patrilineages shown by population resequencing. Nat. Commun. 6:7152 doi: 10.1038/ncomms8152 (2015). | Geneticists from the University of Leicester have discovered that most European men descend from just a handful of Bronze Age forefathers, due to a 'population explosion' several thousand years ago. The project, which was funded by the Wellcome Trust, was led by Professor Mark Jobling from the University of Leicester's Department of Genetics and the study is published in the prestigious journal Nature Communications. The research team determined the DNA sequences of a large part of the Y chromosome, passed exclusively from fathers to sons, in 334 men from 17 European and Middle Eastern populations. 
This research used new methods for analysing DNA variation that provide a less biased picture of diversity, and also a better estimate of the timing of population events. This allowed the construction of a genealogical tree of European Y chromosomes that could be used to calculate the ages of branches. Three very young branches, whose shapes indicate recent expansions, account for the Y chromosomes of 64% of the men studied. Professor Jobling said: "The population expansion falls within the Bronze Age, which involved changes in burial practices, the spread of horse-riding and developments in weaponry. Dominant males linked with these cultures could be responsible for the Y chromosome patterns we see today." In addition, past population sizes were estimated, showing that a continuous swathe of populations from the Balkans to the British Isles underwent an explosion in male population size between 2000 and 4000 years ago. This contrasts with previous results for the Y chromosome, and also with the picture presented by maternally inherited mitochondrial DNA, which suggests much more ancient population growth. Previous research has focused on the proportion of modern Europeans descending from Paleolithic (Old Stone Age) hunter-gatherer populations or from more recent Neolithic farmers, reflecting a transition that began about 10,000 years ago. Chiara Batini from the University of Leicester's Department of Genetics, lead author of the study, added: "Given the cultural complexity of the Bronze Age, it's difficult to link a particular event to the population growth that we infer. But Y-chromosome DNA sequences from skeletal remains are becoming available, and this will help us to understand what happened, and when." | 10.1038/ncomms8152
Biology | Model suggests Indian Ocean Dipole changes are reducing wheat yields in Australia | Puyu Feng et al, Increasing dominance of Indian Ocean variability impacts Australian wheat yields, Nature Food (2022). DOI: 10.1038/s43016-022-00613-9 Journal information: Nature Food | https://dx.doi.org/10.1038/s43016-022-00613-9 | https://phys.org/news/2022-10-indian-ocean-dipole-wheat-yields.html | Abstract The relationships between crop productivity and climate variability drivers are often assumed to be stationary over time. However, this may not be true in a warming climate. Here we use a crop model and a machine learning algorithm to demonstrate the changing impacts of climate drivers on wheat productivity in Australia. We find that, from the end of the nineteenth century to the 1980s, wheat productivity was mainly subject to the impacts of the El Niño Southern Oscillation. Since the 1990s, the impacts from the El Niño Southern Oscillation have been decreasing, but those from the Indian Ocean Dipole have been increasing. The warming climate has brought more occurrences of positive Indian Ocean Dipole events, resulting in severe yield reductions in recent decades. Our findings highlight the need to adapt seasonal forecasting to the changing impacts of climate variability to inform the management of climate-induced yield losses. Main Rapid increases in global population and affluence call for increased quantity in food supply. Although the impacts of climate warming on agriculture may vary in different regions 1 , overall, global agriculture will need to double its production of major cereal crops to feed 10 billion people by the 2050s, translating to a ~2.4% growth in crop production per year (ref. 2 ). As food systems are increasingly globalized and interdependent, it is expected that a large portion of this growth will come from an increase in crop yields in the countries that are currently the primary crop-producing and crop-exporting countries. 
However, maintaining stable crop yields in these countries (especially with a Mediterranean climate) is challenging as year-to-year climate variability often subjects crops to unfavourable climate conditions, including floods, droughts and heat stress. The climate variability can be managed by optimizing farm inputs (for example, seed and fertilizer) in potentially ‘good’ or ‘bad’ seasons if weather anomalies can be reliably forecast in advance. However, skilful weather forecasts are reliable only within a 2-week horizon, which is usually not long enough for cultivation plans for seasonal crops to be made. Instead, producers are increasingly being informed by seasonal forecasts that rely on the behaviour of large-scale climate drivers. Weather conditions (for example, rainfall and temperature) in many regions can be regularly influenced by large-scale climate drivers from surrounding oceans 3 , 4 , such as the El Niño Southern Oscillation (ENSO), the most prominent inter-annual phenomenon in the tropical Pacific. Consequently, there is a causal chain in which climate drivers modulate local weather, which in turn affects crop productivity. Furthermore, scientific advances have improved the skill in forecasting the dynamics of climate drivers, with lead times ranging from several months up to a year 5 . Farmers can benefit from the routine availability of climate driver forecasts by changing their strategies to adapt to the upcoming season. The relationships between crop productivity and climate drivers are generally region specific. However, ENSO has been shown to be a dominant driver, with over 28% of global cropland subjected to considerable impacts of ENSO anomalies during 1961–2010 (ref. 6 ). ENSO-based seasonal climate or crop yield forecasting approaches for different regions were developed decades ago 7 , 8 .
These are mostly derived from traditional linear regression or correlation analyses that do not allow for potential non-linear changes in the impacts of ENSO or changes in the relative importance of different climate drivers that could develop under a background of climate warming. Previous studies have revealed that climate drivers are non-stationary phenomena, and their characteristics (for example, spatial patterns, variance, duration and frequency) and impacts on regional climate conditions can change over time. For example, ENSO characteristics can change over time 9 , and its impacts can be modified by modes of climate variability operating on inter-decadal and longer timescales 10 . Australia is one of the world’s largest producers and exporters of wheat, accounting for 10–15% of annual global wheat exports. Consequently, Australia plays a key role in global food supply at present, and is expected to continue to do so in the future. It is well established that inter-annual variability of Australia’s climate is heavily influenced by climate drivers from the three surrounding oceans: ENSO in the Pacific Ocean, the Indian Ocean Dipole (IOD, an east–west gradient of sea surface temperature (SST) across the tropical Indian Ocean), and the Southern Annular Mode (SAM, a north–south movement of the mid-latitude circumpolar westerly winds) over the Southern Ocean 3 , 11 , 12 . IOD variability is being altered by climate change 13 and has played a more important role in several severe drought events in Indian-Ocean-rim countries during recent decades 14 , 15 . Similarly, climate change is resulting in changes in SAM that are outside the range of pre-industrial natural variability of the last millennium 16 . Therefore, it is likely that the impacts of large-scale climate drivers on crop yields are not stable. In this Article, we use the Australian wheatbelt as a case region to explore the potential changing impacts of climate drivers on crop yields. 
Improving our understanding of any changes in the impacts of these climate drivers on Australian wheat productivity has important implications for increasing the resilience of global food production. Results Descriptive statistics of wheat productivity and climate drivers Climate drivers can change from decade to decade. To sample this variation, long-term datasets of climate drivers and crop yields are needed to evaluate potential changes in climate variability and its impacts on crop yields. Long-term climate driver data are relatively easy to obtain, but observed crop yield datasets at century scale are rare. An additional challenge is that it is difficult to isolate the impacts of climate fluctuations on crop productivity in such datasets, as change in observed crop yields is also driven by many other non-climatic factors, including changes in agronomic management practices and technology development. We address these challenges by simulating wheat yield data with rainfall, solar radiation, and maximum and minimum temperature from a climate dataset starting in 1889, with all other non-climatic factors kept constant through time. Figure 1 presents the mean and coefficient of variation (CV) of simulated annual wheat yields during 1889–2020 for 0.5° grid cells across the Australia wheatbelt. The average simulated wheat yield of the wheatbelt is around 2.0 t ha −1 , with yields in the eastern and south-eastern fringe of the belt being higher than in other areas. The mean CV of wheat yield across the Australian wheatbelt is 0.38. Australian wheat is generally grown under rainfed conditions and is highly sensitive to rainfall variability (Supplementary Fig. 1 ). Hence, low yields and larger-than-average variation are found in more inland areas, on the fringe of the arid interior of the Australian continent. Fig. 1: Simulated wheat yield across Australian wheatbelt. a , b , Mean ( a ) and CV ( b ) of wheat yield during years 1889–2020. 
WA, Western Australia; SA, South Australia; VIC, Victoria; NSW, New South Wales; QLD, Queensland. Wheat yield was simulated by a well-calibrated biophysical crop model, APSIM ( ), for 0.5° grid cells throughout the wheatbelt. The model was driven by the SILO gridded climate dataset, including daily rainfall, solar radiation, and maximum and minimum air temperatures from 1889 to 2020. Source data Full size image We use the Southern Oscillation Index (SOI) 17 , the Dipole Mode Index (DMI) 18 and the SAM index 19 to characterize the occurrence and magnitude of ENSO, IOD and SAM phases, respectively. The SOI measures the difference in sea level pressure (SLP) between the western and central tropical Pacific. Sustained negative (positive) SOI values indicate El Niño (La Niña) events, typically bringing lower (higher) than average winter–spring rainfall across eastern and northern regions of Australia. The DMI is defined as the difference between the SST anomalies of western and eastern regions of the equatorial Indian Ocean. A positive IOD event (sustained positive DMI values) typically results in below-average winter–spring rainfall over western and southern Australia, but above-average winter–spring rainfall during a negative event. The SAM index is the difference in zonal mean SLP between 40° S and 65° S. The effects of the SAM on Australia’s winter rainfall vary greatly depending on region (that is, more (less) rainfall in the east and less (more) rainfall in the south during a positive (negative) phase). Australian wheat is generally grown from late autumn to late spring. It is therefore anticipated that there are associations between climate drivers and wheat yields. Figure 2 shows the temporal variations of wheat yield as well as growing season (May to November) mean climate driver indices during 1889–2020. Wheat yield varied greatly over the study period, ranging from 1.1 t ha −1 in 1914 to 2.6 t ha −1 in 1973. 
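The per-grid summary statistics discussed above (mean yield, coefficient of variation, yield range) are straightforward to compute. The array below is a synthetic stand-in for the APSIM output, with invented gamma-distribution parameters chosen only so that the numbers are of a plausible magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the APSIM output: simulated annual yields (t/ha)
# for the 132 seasons 1889-2020 at five 0.5-degree grid cells. The gamma
# parameters are invented; they are not the paper's data.
years = np.arange(1889, 2021)
yields = rng.gamma(shape=7.0, scale=0.3, size=(years.size, 5))

mean_yield = yields.mean(axis=0)            # per-grid mean over all seasons
cv_yield = yields.std(axis=0) / mean_yield  # coefficient of variation

print("grid means (t/ha):", np.round(mean_yield, 2))
print("grid CVs:         ", np.round(cv_yield, 2))
print(f"belt-average range: {yields.mean(axis=1).min():.1f}"
      f"-{yields.mean(axis=1).max():.1f} t/ha")
```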
The responses of yield to climate drivers are generally consistent with previous results suggesting that positive-phase IOD and El Niño events negatively impact Australian national wheat yield 20 . For example, national wheat yield was low in 1914, 1940, 1982 and 2019, years experiencing a positive IOD or El Niño phase. As for SAM, it does not show a consistent effect on national yield. While there are no apparent long-term trends in the SOI over the study period, there is a clear long-term signal in both DMI and SAM, with many positive-phase years in recent decades but few during the first half of the study period. It should be noted that nearly every positive IOD phase year witnesses a yield reduction, even with neutral phases of ENSO (for example, 2019). This suggests that a shift in the relative contributions of climate drivers to yield variations may have occurred. Fig. 2: Annual mean Australian national wheat yield (black lines) and growing season mean climate driver indices (bars) during 1889–2020. The left y axis represents the normalized values of three climate driver indices, the SOI, the DMI and the SAM index. The right y axis represents simulated wheat yields. Source data Full size image Changes of dominant climate drivers We identify the dominant climate driver for each grid over different time periods across the wheatbelt, and assess the stationarity of the local-scale impacts of these climate drivers over time. We split the 1889–2020 dataset equally into four 33-year subperiods, namely 1889–1921, 1922–1954, 1955–1987 and 1988–2020. Our hypothesis is that the impacts of the different drivers on crop productivity differ between subperiods. We restrict our analysis to four 33-year subperiods as this is close to the accepted 30-year convention to define climatology and maintains enough samples of different phases of climate drivers in each period for subsequent analysis.
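The equal split of the record into the four subperiods quoted above can be sketched in a few lines; only the boundary years come from the text:

```python
# A minimal sketch of the equal split of the 132 seasons 1889-2020 into
# four 33-year subperiods, matching the boundaries quoted in the text.
years = list(range(1889, 2021))
n_sub = 4
width = len(years) // n_sub              # 132 / 4 = 33 years per subperiod

subperiods = [years[i * width:(i + 1) * width] for i in range(n_sub)]
labels = [f"{s[0]}-{s[-1]}" for s in subperiods]
print(labels)  # ['1889-1921', '1922-1954', '1955-1987', '1988-2020']
```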
Given that IOD can co-occur with ENSO (Supplementary Table 1 ), we remove the influence of ENSO from IOD using a simple linear regression 21 . The dominant climate driver has varied over time in most grids. During the first subperiod (1889–1921), ENSO was the dominant climate driver in most grids of the wheatbelt (Fig. 3a ). The IOD and the SAM were only dominant in sporadic grids throughout the wheatbelt. During the second subperiod (1922–1954), the importance of the SAM increased, exhibiting greater impacts on wheat yield in many southern grids (Fig. 3b ). Eastern and north-eastern areas continued to be dominated by ENSO. In the following subperiod (1955–1987), ENSO presented enhanced effects on wheat yield and recaptured many grids that were previously dominated by SAM (Fig. 3c ). During the most recent subperiod (1988–2020), the dominant climate drivers over the wheatbelt shifted greatly (Fig. 3d ). The IOD became the dominant driver across southern parts of the wheatbelt. Only north-eastern areas and some grids in the west were dominated by ENSO or SAM. This phenomenon was rarely seen in the past century (Supplementary Fig. 2 ). Fig. 3: Dominant climate driver of wheat yield at each grid as identified by the RF model. a – d , Dominant climate driver of wheat yield for the years 1889–1921 ( a ), 1922–1954 ( b ), 1955–1987 ( c ) and 1988–2020 ( d ). The grey grids indicate where no single driver is dominant. Dominance is determined when an importance value is more than 50% for a specific driver based on the RF model. Source data Full size image Shifting impacts of climate drivers on wheat productivity The results for the first three subperiods (Fig. 3 ) show that, between 1889 and 1987, ENSO was consistently the most widespread influence on variability in wheat yield, particularly in eastern regions. However, since 1988, influences from the Indian Ocean far exceeded those from the Pacific.
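The dominance rule used for Fig. 3 (a driver is dominant at a grid only when its RF importance exceeds 50%, otherwise no driver is assigned) can be sketched as follows; the importance values below are invented for illustration:

```python
import numpy as np

# Hypothetical per-grid importance values (fractions summing to 1) for the
# three drivers, as a random-forest analysis might produce; values invented.
drivers = ["ENSO", "IOD", "SAM"]
importance = np.array([
    [0.62, 0.25, 0.13],   # grid 0 -> ENSO dominant
    [0.30, 0.55, 0.15],   # grid 1 -> IOD dominant
    [0.40, 0.35, 0.25],   # grid 2 -> no driver exceeds 50%
])

def dominant_driver(row, threshold=0.5):
    """Return the dominant driver for one grid, following the rule in the
    text: a driver is dominant only if its importance exceeds 50%."""
    i = int(np.argmax(row))
    return drivers[i] if row[i] > threshold else None

labels = [dominant_driver(row) for row in importance]
print(labels)  # ['ENSO', 'IOD', None]
```

Grids returning `None` correspond to the grey "no single driver is dominant" cells in the figure.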
This is consistent with the study of Yuan and Yamagata 20 , who demonstrated stronger negative impacts of IOD than ENSO on wheat yields in recent decades. Severe drought and bushfire events during the past three decades have also been attributed to more occurrences of positive IOD events rather than to ENSO 15 , 22 , 23 . Interestingly, the mean and CV of simulated wheat yield have been relatively stable from one climatological period to the next (though not on shorter timescales), despite the great changes of dominant climate drivers (Supplementary Fig. 3 ). It is likely that the magnitude of variability in wheat yields due to the influence of different climate drivers is similar in different subperiods, but their relative contributions change. The changing dominance of the climate drivers may be due to changes in the relationships between the drivers and yields. Non-linear relationships are detected between wheat yields and climate drivers (Fig. 4 ). In general, the effects of the climate drivers on wheat yields are non-stationary. The effects of ENSO on wheat yields were slightly stronger during 1889–1921 and 1955–1987, as denoted by larger average slopes. In these two subperiods, more grids were also identified as being dominated by ENSO (Supplementary Fig. 4 ). The sensitivity of yields to the IOD was also non-stationary over time, being greater during 1988–2020 than in the earlier periods. This, plus a high degree of variability in the IOD (that is, more occurrences of positive IOD events; Fig. 2 ), explains why the IOD was identified as the dominant climate driver across most of the wheatbelt in the fourth subperiod. The effect of SAM was also enhanced in the fourth subperiod, but it was still relatively weaker than that of IOD. In addition, the curves in Fig. 4 suggest asymmetric impacts of positive and negative events of climate drivers.
Specifically, the impact of a positive event on wheat yield is not necessarily the precise opposite of the impact of a negative event. This is mainly because the relationships between climate drivers and Australia’s rainfall are non-linear 24 , 25 . Fig. 4: Partial dependence of wheat yield change on large-scale climate drivers in four subperiods as derived from the RF model. a–d , Partial dependence of wheat yield change on large-scale climate drivers for the years 1889–1921 ( a ), 1922–1954 ( b ), 1955–1987 ( c ) and 1988–2020 ( d ). Values of climate driver indices are shown as growing season mean values and are normalized. The lines in each panel are smoothed representations of the responses, with averaged fitted values (model predictions based on the data) for 0.5° grid cells over the Australian wheatbelt. The trend of the line, rather than the actual values, describes the nature of the dependence between response and predictor variables. The grey shading shows 95% confidence intervals. Source data Full size image Contributions of oceanic warming to the shift An increase was observed in both the mean and the magnitude of variability in DMI (Supplementary Fig. 5 ), resulting in more occurrences of positive IOD events during the recent subperiod. We note that the Indian Ocean has been warming since 1950 at a higher rate than the other tropical basins 26 , and a component of this warming resembles the SST pattern associated with the positive phase of the IOD 27 . A question arises here: is the increasing influence of IOD on yields related to this long-term warming trend? Next, we study the effects of oceanic warming on the shifting influence of IOD by re-running the random forest (RF) model with detrended DMI series. Spatial distributions of dominant climate drivers in the first three subperiods changed slightly, but changed greatly in the fourth subperiod, with more grids dominated by ENSO or SAM rather than IOD (Supplementary Fig. 6 ). 
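Re-running the analysis with a detrended DMI series, as described above, requires removing the long-term trend before the RF step. A minimal least-squares detrend of a synthetic DMI-like series might look like this (the series, trend size and noise level are invented; the paper does not specify the detrending method, so a linear fit is assumed here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative DMI-like series: inter-annual noise plus a warming trend
# that grows over the record (the Indian Ocean warming discussed in the
# text); all numbers are invented.
years = np.arange(1889, 2021)
dmi = 0.004 * (years - years[0]) + rng.normal(0, 0.3, years.size)

# Remove the least-squares linear trend, keeping only the variability
coeffs = np.polyfit(years, dmi, deg=1)
dmi_detrended = dmi - np.polyval(coeffs, years)

print(f"trend removed: {coeffs[0]:.4f} per year; "
      f"detrended mean = {dmi_detrended.mean():.3f}")
```

By construction the residual series has (numerically) zero mean and zero linear trend, so any remaining RF importance attributed to it reflects variability rather than the multi-decadal warming signal.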
This suggests that the increasing influence of IOD is partly related to the multi-decadal warming trend in the Indian Ocean. This is supported by an analysis of the importance values of the three climate drivers before and after detrending DMI for the fourth subperiod (Fig. 5 ). The relative contribution of IOD to the yield variability decreased by 18.1% on average after detrending. The importance values of ENSO and SAM increased by 6.2% and 6.9%, respectively. Fig. 5: Relative importance of large-scale climate drivers on wheat productivity during 1988–2020 before and after detrending the DMI series, derived from the RF model. The black line, two box boundaries and whiskers below and above a box represent median, 25th and 75th percentiles, and 10th and 90th percentiles, respectively. Source data Full size image Discussion Reliable forecasts of climate drivers are available seasons ahead. Recent scientific advances have demonstrated that decision-makers can make use of this high predictability to improve food monitoring or famine early warning systems, based on the historical relationships between climate drivers and crop yields 6 . Here we show that the relationships are not stationary and can change notably over time. In particular, wheat yield in Australia was historically subject mainly to ENSO from the Pacific. However, since the 1990s, the impacts from ENSO have been decreasing, but those from the IOD have been increasing. The shifting impacts of climate drivers on wheat productivity operate through their modulation of Australia’s rainfall (Supplementary Fig. 7 ). The impacts of the climate drivers on Australia are often compounded and can interact with one another. For example, positive IOD events often occur during El Niño years (Fig. 2 ), promoting hotter and drier conditions in southeast Australia. In our study, growing season mean SOI and DMI indeed show a significant correlation (Supplementary Table 1 ).
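The linear removal of the SOI–DMI dependency used in this analysis is given in Methods (eqs (10) and (11)); a minimal sketch with synthetic, invented indices shows that the regression residual is uncorrelated with SOI by construction:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative correlated driver indices: a DMI-like series that partly
# depends on an SOI-like series (negative dependence, as observed); the
# coefficients and noise level are invented.
n = 132                                    # one value per season, 1889-2020
soi = rng.normal(size=n)
dmi = -0.5 * soi + rng.normal(0, 0.8, n)

# Eqs (10)-(11): regress DMI on SOI and keep the residual as DMI_new
a, b = np.polyfit(soi, dmi, deg=1)
dmi_new = dmi - (a * soi + b)

# The residual is linearly independent of SOI by construction
print(f"corr(SOI, DMI)     = {np.corrcoef(soi, dmi)[0, 1]:.2f}")
print(f"corr(SOI, DMI_new) = {np.corrcoef(soi, dmi_new)[0, 1]:.2e}")
```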
However, after removing the potential dependency of DMI on SOI, the IOD’s impacts still become stronger across most of the wheatbelt (Fig. 3 ). How do the shifting impacts occur? It is well known that the relationship between ENSO and Australian rainfall is modulated on multi-decadal timescales by the Interdecadal Pacific Oscillation (IPO) 10 , a low-frequency pattern of SST variability in the tropical and extra-tropical Pacific. A stronger rainfall response to ENSO events is observed during the negative phase of the IPO. This may be one of the reasons for the increasing dominance of SAM during 1922–1954 (Fig. 3b ), when a positive IPO phase probably reduced the influence of ENSO on Australian rainfall. However, since 1999, the IPO has been in a negative phase. Thus, the increasing dominance of IOD may not be related to the weakening of the ENSO–Australian rainfall relationship. However, we find that the non-uniform warming of the Indian Ocean contributes 18.1% to the importance of the IOD in recent decades. Multiple lines of evidence imply that the recent trend towards more frequent positive IOD events is related to global warming, and positive IOD events may occur more often over the twenty-first century if greenhouse gas concentrations continue to increase 13 , 28 . Our study is an important step towards understanding the shifting impacts of climate drivers on crop productivity, which can inform improved climate resilience of regional crop production in the face of climate hazards. For example, drought is a major hazard to Australian wheat production, and prolonged drought in recent years has broken the long-term increasing trend in Australian wheat yields. Many farmers in Australia still rely on forecasts of ENSO to prepare for potential drought risk months in advance. An SOI-based seasonal rainfall forecasting programme is officially operated by the Queensland government 8 . However, the impacts from the Indian and Southern Oceans receive less attention.
Here we show that, across most of the Australian wheatbelt, the impacts of the IOD have been increasing in recent decades. More occurrences of positive IOD events in the future are also likely to induce more drought events. Thus, we appeal to farmers to consider dynamical model-based seasonal prediction systems when planning their crop management strategies for the upcoming season, such as the Australian Community Climate Earth System Simulator-Seasonal version 1 (ACCESS-S1) developed by the Bureau of Meteorology 29 . Compared with traditional statistically based systems, ACCESS-S1 implicitly accounts for all the modes of climate variability. Nonetheless, there still remain issues with lack of skill beyond a few weeks lead time and occasional forecast busts. This highlights the value of research to improve the simulation of Indian Ocean conditions, and the complex atmospheric links between oceans and the Australian climate, by seasonal forecasting models. These models should ideally produce forecasts whose reliability is robust to changes in the relative importance of different climate drivers due to global warming and low-frequency modes of variability, such as the IPO. The results of our study can also improve the ability of crop simulation models to generate optimized farming practices for the upcoming season. For example, the SOI phases for the season ahead have been incorporated into the Agricultural Production Systems sIMulator (APSIM) crop model (we used in our study), to explore broad and specific options to adapt wheat cropping systems to ENSO in Australia 30 . Genotypes, sowing dates and nitrogen applications can be tested through scenario simulations at the site level before sowing, when local wheat producers make their most crucial management decisions. 
Given that our results show that the role of IOD in determining Australian wheat yields becomes more prominent, we also recommend incorporating the IOD into the APSIM model, to improve the agronomic decision-making capabilities. Finally, our study proposes a method to identify the shift of dominant climate drivers in a major world wheat production area, Australia. However, we believe it can be easily extended to other areas more reliant on subsistence farming of crops, for example, in East Africa. With further understanding of the relationships between climate drivers and crop yields in more areas, we can improve our capacity to recognize and manage structured climate risks, thereby enhancing global food security. Methods Wheat yield simulations In this study, we used the well-calibrated biophysical crop model APSIM, a comprehensive model developed in Australia to simulate biological processes in agricultural systems 31 . APSIM has been used in numerous studies of the responses of Australian wheat cropping to climate variations 32 , 33 . It is able to simulate crop yields accounting for the interactive effects of climate, crop genotype, soil and crop management. Here APSIM simulations were carried out only with changing climate and atmospheric CO 2 concentrations through time; thus, we can infer that these were the only factors that contributed to the simulated variations of wheat yields. We ran the APSIM model at 0.5° grid cells across the Australian wheatbelt from 1889 to 2020. Daily climate data for each grid, including rainfall, solar radiation and maximum and minimum air temperature data, were obtained from the Scientific Information for Land Owners (SILO) gridded climate dataset 34 . SILO is a database of Australian climate data from 1889 (current to yesterday). SILO’s gridded datasets are constructed by spatially interpolating all available observational data.
We acknowledge that the changing observational network has implications for the quality of SILO’s datasets. Nonetheless, the nature of our analysis and the previous evaluation of gridded datasets alongside their widespread use in previous climate analyses gives us confidence that they may be used for our purposes. First, there were already more than 2,000 rainfall gauge stations in the 1890s ( ), and the number has been constantly increasing. Second, spatial and temporal accuracies for gridded datasets are high as described in Jeffrey et al. 34 . For example, the average coefficient of determination ( R 2 ) between observed daily variables and interpolated estimates is normally larger than 0.6, especially across the wheatbelt. In addition, SILO datasets are readily available for climate applications and have been well tested in many climate-related studies 35 , 36 . Soil information for 264 sites (Supplementary Fig. 8 ) was derived from the APSoil database 31 . Soil attributes of layer depth, bulk density, saturated water content, drained upper limit and crop-specified lower limit were available for each site. For APSIM simulations in a given grid, soil input information was acquired from the soil site that was geographically closest to the grid. Sowing windows and cultivars were set up according to Wang et al. 37 (Supplementary Table 2 ) to reflect common farming practices in the different States of Australia. The selected cultivar was sown during the sowing window as soon as the accumulated rainfall exceeded 25 mm in 7 consecutive days, or at the end of the sowing window if this condition was not met. The fertilizer at sowing was 130 kg ha −1 of urea (equivalent to 60 kg ha −1 of N). In addition, the effects of elevated atmospheric CO 2 concentration were also incorporated in accordance with the practice by Wang et al. 37 . We compared APSIM-simulated yields with observed yields to assess the suitability of the simulations for this study.
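The sowing-trigger rule described above (sow once accumulated rainfall exceeds 25 mm over 7 consecutive days within the window, otherwise sow on the last day of the window) can be sketched as follows; the rainfall series and window length are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative daily rainfall (mm) for one season; day 0 = window opening.
# The gamma parameters and window length are invented, not from the paper.
rain = rng.gamma(shape=0.3, scale=8.0, size=120)
window_len = 60   # hypothetical sowing-window length in days

def sowing_day(rain, window_len, threshold=25.0, span=7):
    """Return the sowing day following the rule in the text: sow as soon
    as accumulated rainfall over `span` consecutive days exceeds
    `threshold` within the window, otherwise sow on the window's last day."""
    for day in range(span - 1, window_len):
        if rain[day - span + 1 : day + 1].sum() > threshold:
            return day
    return window_len - 1

print("sown on day", sowing_day(rain, window_len))
```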
Observed region-level wheat yield records for 2000–2014 in 124 regions were obtained from the yield gap map ( ) hosted by the Commonwealth Scientific and Industrial Research Organisation and Grains Research & Development Corporation. As the annual yield series of this dataset was relatively recent, we assumed that variations in yield are caused only by climate variability, and that the contribution of farming technology and practices is negligible. APSIM-simulated yields at grid level were firstly aggregated to region level and then compared with observed yields. The normalized root mean square error (NRMSE) between the simulated and observed yields (15 years × 124 regions) was 14.9% (Supplementary Fig. 9 ), suggesting that APSIM simulations could potentially capture climate-driven wheat yield variations across the Australian wheatbelt. $${{{\mathrm{NRMSE}}}} = \frac{{\sqrt {\frac{1}{n}\mathop {\sum }\nolimits_{i = 1}^n \left( {S_i - O_i} \right)^2} }}{{O_{{\mathrm{max}}} - O_{{\mathrm{min}}}}}$$ (1) where n is the number of samples, S i and O i are simulated and observed yields, respectively, and O max and O min are maximum and minimum observed yields, respectively. NRMSE represents the relative standard deviation of the residuals. For crop model simulations, performance of a model is considered good if NRMSE is lower than 20%. Large-scale climate drivers We used three indices, namely SOI, DMI and SAM index, to characterize the variability of ENSO, IOD and SAM, respectively. There are multiple indices to measure the strength of ENSO events (El Niño, neutral or La Niña). The results of our study varied slightly under different ENSO indices, such as SOI or Niño 3.4 (the average of SST anomalies over the region 5° N–5° S and 170 °W–120° W). We selected SOI because it is based on SLP, a variable more directly linked with rainfall variations than SST. 
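Equation (1) above translates directly into code; the observed and simulated values below are invented for illustration only:

```python
import numpy as np

def nrmse(simulated, observed):
    """Normalized RMSE as in equation (1): the RMSE of the residuals
    divided by the range of the observations, reported as a percentage."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / (observed.max() - observed.min())

# Invented yields (t/ha) for a handful of region-years
obs = np.array([1.2, 2.5, 0.8, 3.1, 1.9])
sim = np.array([1.0, 2.7, 1.1, 2.8, 2.0])
print(f"NRMSE = {nrmse(sim, obs):.1f}%")  # below the 20% 'good' threshold
```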
An SST-based ENSO index like Niño 3.4 also incorporates a background warming signal in addition to the natural variability of SSTs, so it has a global warming component as well as an ENSO component, which adds another degree of complexity when examining ENSO. In addition, the SOI is well recognized in Australia for the estimation of ENSO’s impacts. An SOI-based seasonal rainfall forecasting programme is officially operated by the Queensland government 8 . The SOI is calculated as the standardized SLP difference between Tahiti and Darwin, Australia 17 : $${{{\mathrm{SOI}}}} = \frac{{({{{\mathrm{standardized}}}}\,{{{\mathrm{Tahiti}}}} - {{{\mathrm{standardized}}}}\,{{{\mathrm{Darwin}}}})}}{{{{{\mathrm{MSD}}}}}}$$ (2) where $${{{\mathrm{standardized}}}}\,{{{\mathrm{Tahiti}}}} = \frac{{{{{\mathrm{actual}}}}\,{{{\mathrm{Tahiti}}}}\,{{{\mathrm{SLP}}}} - {{{\mathrm{mean}}}}\,{{{\mathrm{Tahiti}}}}\,{{{\mathrm{SLP}}}}}}{{{{{\mathrm{standard}}}}\,{{{\mathrm{deviation}}}}\,{{{\mathrm{Tahiti}}}}}}$$ (3) $${{{\mathrm{standard}}}}\,{{{\mathrm{deviation}}}}\,{{{\mathrm{Tahiti}}}} = \sqrt {\frac{{{\sum} {({{{\mathrm{actual}}}}\,{{{\mathrm{Tahiti}}}}\,{{{\mathrm{SLP}}}} - {{{\mathrm{mean}}}}\,{{{\mathrm{Tahiti}}}}\,{{{\mathrm{SLP}}}})^2} }}{{N}}}$$ (4) $${{{\mathrm{standardized}}}}\,{{{\mathrm{Darwin}}}} = \frac{{{{{\mathrm{actual}}}}\,{{{\mathrm{Darwin}}}}\,{{{\mathrm{SLP}}}} - {{{\mathrm{mean}}}}\,{{{\mathrm{Darwin}}}}\,{{{\mathrm{SLP}}}}}}{{{{{\mathrm{standard}}}}\,{{{\mathrm{deviation}}}}\,{{{\mathrm{Darwin}}}}}}$$ (5) $${{{\mathrm{standard}}}}\,{{{\mathrm{deviation}}}}\,{{{\mathrm{Darwin}}}} = \sqrt {\frac{{{\sum} {({{{\mathrm{actual}}}}\,{{{\mathrm{Darwin}}}}\,{{{\mathrm{SLP}}}} - {{{\mathrm{mean}}}}\,{{{\mathrm{Darwin}}}}\,{{{\mathrm{SLP}}}})^2} }}{{N}}}$$ (6) $$\begin{array}{l}{{{\mathrm{MSD}}}}\,({{{\mathrm{monthly}}}}\,{{{\mathrm{standard}}}}\,{{{\mathrm{deviation}}}}) =\\ \sqrt {\frac{{{\sum} {({{{\mathrm{standardized}}}}\,{{{\mathrm{Tahiti}}}} -
{{{\mathrm{standardized}}}}\,{{{\mathrm{Darwin}}}})^2} }}{{N}}}\end{array}$$ (7) and N is the number of months. The anomalies are departures from the 1981–2010 base period. The DMI is expressed as anomalous SST gradient between the western tropical Indian Ocean (WTIO, 50 °E–70° E and 10 °S–10° N) and the south-eastern tropical Indian Ocean (SETIO, 90° E–110° E and 10° S–0° N) (ref. 18 ), $${{{\mathrm{DMI}}}} = {{{\mathrm{WTIO}}}}\,{{{\mathrm{SST}}}}\,{{{\mathrm{anomaly}}}} - {{{\mathrm{SETIO}}}}\,{{{\mathrm{SST}}}}\,{{{\mathrm{anomaly}}}}$$ (8) The anomalies are departures from the 1981–2010 base period. The SAM refers to the (non-seasonal) north–south movement of the strong westerly winds that blow almost continuously in the mid- to high latitudes of the Southern Hemisphere 19 . It is expressed as the difference of zonal mean SLP between 40° S and 65° S, $${{{\mathrm{SAM}}}} = {{{\mathrm{standardized}}}}\,40^\circ {{{\mathrm{S}}}}\,{{{\mathrm{SLP}}}} - {{{\mathrm{standardized}}}}\,65^\circ {{{\mathrm{S}}}}\,{{{\mathrm{SLP}}}}$$ (9) where each month’s zonal mean SLP is standardized by the mean/standard deviation determined for the climatological period (1981–2010). Monthly series of three indices (1889–2020) were obtained from the National Oceanic and Atmospheric Administration Earth System Research Laboratories Physical Sciences Laboratory ( ). The SOI is calculated on the basis of the observed SLP data from gauge stations. However, no observed SST or SLP from 1889 is available from stations to calculate the DMI or SAM; thus, they are derived from gridded datasets, namely the HadISST1.1 SST dataset and the ICOADS SLP dataset. Although both datasets have been well tested in many climate-related studies 38 , 39 , 40 , we acknowledge that their reliability can be slightly different compared with observations, especially in early periods. There were fewer observations available to develop the re-analysis datasets in pre-1900 periods 41 . 
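The three indices share one recipe: standardize each series against a climatological base period, then difference (and, for the SOI, rescale by the monthly standard deviation of the difference). A minimal numpy sketch of an SOI-style index (equations (2)–(7)), using synthetic SLP series rather than the actual Tahiti/Darwin records, and taking the whole series as the base period for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)

def standardize(series, base):
    """Standardize a monthly series against a climatological base period
    (population standard deviation, as in equations (4) and (6))."""
    return (series - base.mean()) / base.std()

# Synthetic monthly SLP series (hPa), standing in for the Tahiti and Darwin records
tahiti = 1013 + rng.normal(0, 2, 120)
darwin = 1010 + rng.normal(0, 2, 120)

# For simplicity the base period here is the whole series (the paper uses 1981-2010)
std_tahiti = standardize(tahiti, tahiti)
std_darwin = standardize(darwin, darwin)

msd = np.sqrt(np.mean((std_tahiti - std_darwin) ** 2))  # equation (7)
soi = (std_tahiti - std_darwin) / msd                   # equation (2)
print(f"SOI: mean = {soi.mean():.3f}, std = {soi.std():.3f}")
```

The DMI and SAM index follow the same standardize-and-difference pattern, applied to area-averaged SST anomalies and zonal-mean SLP, respectively.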
Growing season (May to November) mean climate drivers were then derived and used as predictor variables in subsequent analysis. Removal of potential dependency between climate drivers We noticed a significant negative relationship (Supplementary Table 1 ) between growing season mean SOI and DMI. To isolate the impact of each climate driver, we removed the potential dependency of one index on another using a simple linear regression 21 . The details are as follows: $$\widehat {{\mathrm{DMI}}} = {a} \times {{{\mathrm{SOI}}}} + {b}$$ (10) $${\mathrm{DMI}}\_{\mathrm{new}} = {{{\mathrm{DMI}}}} - \widehat {{\mathrm{DMI}}}$$ (11) where \(\widehat {{\mathrm{DMI}}}\) represents predicted DMI values from a linear regression based on SOI. a and b are regression coefficients. DMI_new denotes the regression residual, which is linearly independent of SOI and is used in the machine learning analysis. The determination of dominant climate drivers We implemented a machine learning decision tree model, RF, to study the contributions of growing season climate drivers to wheat yield. RF is a popular tree-based ensemble machine learning algorithm and can be used to investigate the complicated relationships between variables. In contrast to traditional linear regression or correlation analyses, RF accounts for non-linear and hierarchical relationships between the response and predictors. RF builds statistical models using predictor variables and evaluates the relative importance of each predictor variable. In this study, we adopted the accuracy-based importance metric generated from an out-of-bag (OOB) validation procedure. In the model building phase, approximately one-third of the total observations were randomly selected and set aside for subsequent OOB model validation. Then, the prediction accuracy on the OOB sample was measured. 
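The decorrelation step of equations (10) and (11), fitting DMI on SOI by least squares and keeping the residual, can be sketched as follows (synthetic index values, not the actual SOI/DMI series):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic growing-season index values for illustration only (132 seasons, 1889-2020)
soi = rng.normal(0, 1, 132)
dmi = -0.4 * soi + rng.normal(0, 0.5, 132)  # DMI negatively related to SOI, as observed

# Fit DMI_hat = a * SOI + b by least squares (equation (10))
a, b = np.polyfit(soi, dmi, deg=1)

# Residual DMI_new = DMI - DMI_hat (equation (11))
dmi_new = dmi - (a * soi + b)

print(f"corr(SOI, DMI)     = {np.corrcoef(soi, dmi)[0, 1]:+.3f}")
print(f"corr(SOI, DMI_new) = {np.corrcoef(soi, dmi_new)[0, 1]:+.3f}")  # ~0 by construction
```

Least-squares residuals are exactly orthogonal to the predictor, so the new series carries only the DMI variability that SOI cannot explain.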
The mean decrease in prediction accuracy when the values of a variable in the OOB sample were randomly shuffled was defined as the importance value of the variable 42 , expressed as the mean square error: $${\mathrm{MSE}}_{{\mathrm{OOB}}} = \frac{1}{n}\mathop {\sum }\limits_{i = 1}^n \left( {O_i - \bar P_{{\mathrm{iOOB}}}} \right)^2$$ (12) where n denotes the number of observations, O i indicates the observed value and \(\bar P_{{\mathrm{iOOB}}}\) represents the average of all OOB predictions for observation i across all trees. We applied the RF model and derived the importance rankings using the ‘caret’ package 43 in the R software. We built an RF model for each grid and each subperiod and derived the importance values of predictor variables. The importance values were then normalized to sum to 100%. The predictor variable with an importance value larger than 50% (meaning larger than the sum of the other two drivers) was identified as the dominant climate driver. According to our results (Fig. 3 ), the influence of the IOD on wheat yields showed a sudden increase in the final period (1988–2020). To test whether this was random sampling error, we calculated the probability that the IOD was the dominant climate driver in any 33-year period before the final period. Specifically, we drew a 33-year period from the record (1889–1987) and identified the dominant climate driver with the RF model. We performed this procedure 67 times (year series including 1889–1921, 1890–1922, 1891–1923, …, 1955–1987) at each grid and then calculated the probability that the IOD was the dominant climate driver in all 33-year periods. The results (Supplementary Fig. 2 ) illustrated that only a small number of grids showed a probability of >0.1, mainly in the west. Most were under 0.1, meaning that a strong influence of the IOD on wheat yields was not usual in 1889–1987. 
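The paper derives accuracy-based importance from an RF fitted with the R 'caret' package; the sketch below reproduces the idea in Python with an ordinary-least-squares model standing in for the random forest, shuffling one predictor at a time and measuring the increase in MSE (cf. equation (12)), then normalizing importances to 100% and applying the >50% dominant-driver rule. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Synthetic predictors standing in for growing-season SOI, DMI and SAM
names = ["SOI", "DMI", "SAM"]
X = rng.normal(size=(n, 3))
# Synthetic "yield", driven mostly by the first predictor
y = 1.5 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.5, n)

# Stand-in model: ordinary least squares (the study fitted a random forest)
beta, *_ = np.linalg.lstsq(np.c_[np.ones(n), X], y, rcond=None)

def predict(M):
    return np.c_[np.ones(len(M)), M] @ beta

base_mse = np.mean((y - predict(X)) ** 2)

# Permutation importance: mean increase in MSE when one predictor is shuffled
imp = np.zeros(3)
for j in range(3):
    increases = []
    for _ in range(50):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        increases.append(np.mean((y - predict(Xp)) ** 2) - base_mse)
    imp[j] = np.mean(increases)

imp_pct = 100 * imp / imp.sum()  # normalize importances to sum to 100%
dominant = names[int(np.argmax(imp_pct))]
for name, v in zip(names, imp_pct):
    print(f"{name}: {v:5.1f}%")
if imp_pct.max() > 50:
    print(f"dominant driver: {dominant}")
```

Unlike the study's OOB procedure, this sketch scores in-sample for brevity; the shuffle-and-rescore logic is the same.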
Partial dependence We used partial dependence plots (PDPs) to evaluate the marginal effects of predictors (for example SOI, DMI and SAM) on the response variable (wheat yield). A PDP can show whether the relationship between the response and a predictor is linear, monotonic or more complex, marginalizing over the values of all other input predictor variables (the ‘complement’ features) 44 . Here we used the ‘pdp’ R package 45 to evaluate the marginal effects of three climate drivers on wheat yield. The detrending method There is an increasing trend observed in DMI (Supplementary Fig. 5 ). We compared the results from the RF models with original DMI and with detrended DMI, to study the potential effects of the trend on the shifting influence of IOD. The method used to detrend the DMI series was the first-difference method introduced by Nicholls 46 . This method was implemented according to the following equation: $${\Delta}{{{\mathrm{DMI}}}}_t = {{{\mathrm{DMI}}}}_t - {{{\mathrm{DMI}}}}_{t - 1},\,t = 1890,\,1891,\, \ldots ,\,2020$$ (13) where ΔDMI t represents the first difference of DMI at year t , and DMI t and DMI t- 1 represent the values of DMI at year t and year t −1, respectively. Comparisons of original and detrended values of DMI can be found in Supplementary Fig. 5 . We also compared the performance of the RF models with the original DMI series and with detrended DMI series. We derived NRMSEs from the models with a five-fold cross-validation procedure. This procedure split an input dataset into five non-overlapping groups. In turn, each group was used as a held-back test set, while all other groups collectively were used as a training dataset. A total of five models were fitted and evaluated on the five hold-out test sets and five NRMSEs were reported. Then, the performance of two kinds of models was compared on the basis of their NRMSEs with Fisher’s least significant difference method at 95% confidence level. The results (Supplementary Fig. 
10 ) showed that the two kinds of models performed similarly in most grids, with only a small number of grids presenting a significant difference. Thus, in general, detrending the DMI did not affect the performance of the model. Nonetheless, the relative contributions of climate drivers to model performance changed before and after detrending. This is common in an RF model, as the contribution of an input predictor is not fixed, but can change if other input predictors are changed 47 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The climate, soil and climate drivers indices data are publicly available from the following sources: the SILO climate data are at , the soil data are at and the climate drivers indices data are at . The detailed wheat yield data simulated by the APSIM crop model and the raw data of the figures are available at Puyu Feng’s Github homepage . Source data are provided with this paper. Code availability The detailed R code for data processing and illustration is available at Puyu Feng’s Github homepage . 
In this new effort, the researchers used machine learning to study these events as the region grows warmer due to climate change. The work involved studying rainfall patterns over Australia going back to the 1800s. They trained a machine-learning application to recognize impacts of changes to the IOD and/or La Niña and El Niño events using the data they compiled. They used data from the application to create a model depicting weather conditions for the region over the past several years and predicting what might occur in the future. They also added data regarding wheat yields, crop management changes, sowing and reaping times and the kinds of wheat that have been planted. The model showed that as the climate has grown warmer, there have been more frequent positive IOD events, leading to less rainfall, which has led to slowly decreasing crop yields, a finding that matches reports from wheat farmers. They also found that during times when positive IOD events coincided with El Niño events, Australia experienced extremely dry conditions. | 10.1038/s43016-022-00613-9 
Medicine | New assessment could identify risks of frailty | Nature Communications (2019). DOI: 10.1038/s41467-019-12716-2 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-019-12716-2 | https://medicalxpress.com/news/2019-11-frailty.html | Abstract Global ageing poses a substantial economic burden on health and social care costs. Enabling a greater proportion of older people to stay healthy for longer is key to the future sustainability of health, social and economic policy. Frailty and associated decrease in resilience plays a central role in poor health in later life. In this study, we present a population level assessment of the metabolic phenotype associated with frailty. Analysis of serum from 1191 older individuals (aged between 56 and 84 years old) and subsequent longitudinal validation (on 786 subjects) was carried out using liquid and gas chromatography-mass spectrometry metabolomics and stratified across a frailty index designed to quantitatively summarize vulnerability. Through multivariate regression and network modelling and mROC modeling we identified 12 significant metabolites (including three tocotrienols and six carnitines) that differentiate frail and non-frail phenotypes. Our study provides evidence that the dysregulation of carnitine shuttle and vitamin E pathways play a role in the risk of frailty. Introduction A consequence of ageing is the decline in biological function that will eventually lead to a progressive deterioration of physiological performance, a decline in the ability to respond to stress and an associated increase in vulnerability. Since 1950 life expectancy has been rising at a rate of more than three years per decade, and since the onset of the millennium this has risen to five years per decade 1 . 
As a consequence, it is estimated that between 2015 and 2050 the global proportion of over-60-year-olds will increase from 12 to 22% 2 , while the number of over-65s is forecast to triple by 2050 3 , even though recent evidence suggests that maximum lifespan could be fixed 4 , 5 . While increasing life expectancies are a positive development, all individuals are still currently subject to natural constraints and face decline in health status with age (albeit at rates that vary across populations and sub-populations 6 ), increasing the risk of exposure to chronic age-related disorders 7 . So, in 2015 the global healthy life expectancy at birth (HALE) was calculated to be 63.1 years 8 , which indicates a substantial burden of later life morbidity, even though rises in HALE are a testament to continuing global healthcare advancements including medical diagnosis and treatment of cardiovascular disease 9 , immunization 10 , smoking cessation 11 , healthy diet 12 and an increased understanding of social determinants of health 13 . Such demographic shifts and resulting population ageing are leading public health practitioners and policy makers to actively push for global innovations that can positively shape the health of future elderly populations 14 , 15 , 16 . Indeed, healthy ageing in later life has enormous potential to affect society as a whole. Consequently, mitigating the potential economic and social strains that flow from population ageing calls for direct government initiatives to develop appropriate public health interventions to reduce the impact of functional decline associated with frailty. Clinically, the effects of frailty are defined as a multimodal syndrome characterized by a loss of internal reserves (energy, physical ability, balance, cognition, and health) that gives rise to biological vulnerability within an individual. This inherent complexity means that there is no single diagnostic tool available to identify the presence and extent of frailty. 
However, a myriad of scoring instruments are reported within the literature. These typically assess a number of negative outcomes, or distal phenotypes, and are validated by stratifying the scoring aspects of large patient cohorts such as surgical prognosis 15 , primary care interventions 16 , 17 , 18 , 19 , 20 and associated key detrimental events and are subsequently used in the development of instruments that can be used to evaluate the presence of frailty, such as the frailty phenotype 21 and frailty index (FI) 22 . Although these methods have acceptable performance in identification of frailty status, they have some limitations, including subjectivity of individual answers, resource utilisation costs and lack of linkage to underlying biological mechanisms 23 . Orthogonal to such procedures, biochemical assessments of frailty in the context of biomarker detection are sought after as they hold the potential to identify biological pathways that contribute to a frailty phenotype, offering opportunities in developing strategies for the identification and management of frailty. A focus for the bio-gerontology community within this area has been the chemistry of life contained within the central dogma of molecular biology. Several large cohort studies have demonstrated correlation between mitochondrial DNA and energy co-factors alongside mortality 24 , 25 , dysregulation of transcriptional networks and age-dependent decline 26 , alongside metabolic signatures of biological ageing in young 27 and older people 28 . By contrast, no such large scale, population level comparison of frailty identified using a validated measure versus metabolism has been undertaken. Yet the ability to uncover downstream biochemical relationships by applying metabolomics based approaches hold great potential for the development of public health strategies to reduce the risk of frailty and its adverse consequences. 
As the complex links between age-related frailty and the underlying life-course, social, psychological, genetic and metabolic processes remain unclear, the fRaill project ( ) takes an interdisciplinary approach to examine the causal processes relating to frailty and wellbeing at older ages. Within this work, we performed high-throughput untargeted mass spectrometry-based metabolic profiling coupled with pathway and network analysis on longitudinal serum samples, taken four years apart, from a cohort of well-phenotyped ageing (≥58 years old) subjects from the English Longitudinal Study of Ageing ( ). The aim of this approach is to stratify metabolic phenotypes (metabotypes 29 ) over a FI derived from over 60 measured indicators and assess whether associated biochemical networks relate to biological degeneration associated with frailty. From this work we have identified a panel of 12 metabolites associated with the FI and from these, two chemical classes (tocotrienols and carnitines) exhibit significantly modulated under-expression when overlaid across the FI. Subsequent network enrichment analysis and statistical modelling has identified carnitine shuttle and vitamin E metabolism as two modulated pathways that are related to a higher energy metabolic phenotype. The ability of the combined metabolic model to predict frailty has also been confirmed by a cross validated ROC model. In conjunction with these results, Mendelian Randomization analysis has been performed on previously collected GWAS data to determine the causal relationship of frailty to carnitine levels. This indicates a level of significant association between decreased levels of carnitine and frailty. Results Frailty as a function of ageing In the first stage of our analysis we performed cross-sectional stratification of a continuous FI over mass spectrometry-derived metabolite data within the fRaill project. 
The aim was to identify underlying chemical determinants of frailty (or resilience) and build an associated metabolic pathway model. Sample members participated in a face-to-face interview, a nurse assessment of physical function, anthropometric measurements and collection of blood samples. A full list of questionnaire characteristics and summary metadata can be found here— . An FI model was developed by calculating a cumulative score for each individual using the presence/absence of 60 deficit variables contained within Wave 4 ( n = 1846) and Wave 6 ( n = 1753) of the ELSA (English Longitudinal Study of Ageing; following standard practice, sample members were included if they had responses to at least 30 of these items) 6 . These items covered a broad range of attributes such as cognitive function, falls and fractures, vision, hearing, chronic diseases and depression. Due to the complexity in index-coding and cut-off point determination, the presence of each item attributed varying predetermined amounts to an individual’s FI score 30 , which had an overall range of 0.04–0.698 over the full sample cohort. Subsequent plotting of the FI distribution produced a unimodal right-skewed distribution of data from both waves (Supplementary Fig. 1 ). This distribution of frailty scores is consistent with findings from other population studies 22 . To deal with this skewed distribution we subsequently stratified the FI scores into four categories, <0.1 (26.2%), 0.1–0.2 (48.4%), 0.2–0.3 (17.5%), >0.3 (7.9%). Previous studies have indicated that frailty increases as subjects age, but this is more closely linked to biological age than to chronological age 31 . To investigate this, a robust linear regression (RLR) model was developed to determine the level of correlation between FI and age (Fig. 1a , Supplementary Figs. 2 – 7 ). The regression equation for the full FI (in the form y = mx + c ) is FI = 0.119 × Age + 0.004092. 
Derived from this, the SSE (sum of squared errors) = 5.7017 and the R-squared (coefficient of determination) = 0.3357, where 1 = total correlation and 0 = no correlation. As a measure of the discrepancy between the observed data and the estimate made by the linear regression model, the R-squared value of 0.34 indicates that age explains a reasonable proportion of the variance in frailty, which is to be expected given the measure spans a large age range. However, the SSE indicates that a large amount of unexplained variance is present across all ages. Similar models for each FI category were also calculated (Supplementary Fig. 2 ), indicating a drop in the correlation between age and FI as the FI value increases. These results indicate that although age is linked to the frailty phenotype, as a subject expresses a stronger frailty phenotype, age plays a lesser role in the classification scoring. Fig. 1 Sample attributes at wave 4 of ELSA. a Linear regression of Age vs. Frailty Index score indicating a moderate correlation and implying that the concept of frailty, when measured under the Rockwood FI scoring system, is an independent variable with respect to age (blue dots = male subjects, red dots = female subjects). b Mean sample characteristics from 1191 subjects and associated blood analysis. c Mean cholesterol levels as observed across the frailty distribution using the standard scoring method. It can be seen that LDL, HDL and cholesterol all decrease when entering the non-frail cut-off 50 . Triglycerides are seen to increase. d Most pronounced biochemistry levels as observed across the frailty distribution using the standard scoring method. Fibrinogen and white blood cells indicate a markedly increased Z-score over the frailty distribution, whereas ferritin and dehydroepiandrosterone indicate a decrease. 
Source data are provided as a Source Data file Cross sectional modelling of biochemistry and metabolism over the frailty index Untargeted metabolic phenotyping was performed on serum samples collected at Wave 4 of the ELSA alongside a range of standard biochemistry assays (see Supplementary Methods). A total of 1846 subjects were used to generate the FI and a combined total of 1191 serum samples were selected for untargeted multiplatform metabolomics analysis based on availability, quality of sample and metadata inclusiveness. Within the complete sample set 57.9% ( n = 690) were women and other major characteristics are represented in Fig. 1b . Age- and sex-dependent changes in biochemical measurements of lipids (LDL, HDL, cholesterol and triglycerides) alongside other blood constituents (white blood cell count, dehydroepiandrosterone, fibrinogen and ferritin) were calculated using a standard scoring method (z-score, giving a standardised score with a mean of zero and a standard deviation of one (also referred to as auto-scaling)). This methodology required a complete set of input data items for all subjects to perform the analysis, but 428 subjects had at least one data point missing (the percentage of missing values per biomarker varied between 0 and 2.5%, except for fasting glucose, which was missing in 32% of 1196 observations). To account for this, we adopted a missing value imputation approach (multivariate multiple imputation method with known seed for replication 32 ) to enable the assessment of all data points for all subjects. This completed dataset was subsequently tested by further sensitivity analysis and all biochemistry measures were then re-stratified over the FI as a whole and stratified into male and female subgroups. Pearson’s correlation scores were calculated for non-standardized biomarker values and only significantly correlated biomarkers were used for standardisation purposes. 
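The z-scoring (auto-scaling) step, with a simplified stand-in for the missing-value handling, can be sketched as follows. Note the study used a multivariate multiple-imputation method with a known seed; plain column-mean imputation is used here only to keep the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(123)

# Hypothetical biomarker matrix (rows = subjects, columns = biomarkers) with missing values
X = rng.normal(loc=[5.0, 1.3, 90.0], scale=[1.0, 0.3, 15.0], size=(200, 3))
X[rng.random(X.shape) < 0.05] = np.nan  # ~5% missing at random

# Simplified stand-in for the multivariate multiple imputation used in the study:
# replace each missing entry with its column mean
col_means = np.nanmean(X, axis=0)
filled = np.where(np.isnan(X), col_means, X)

# z-score ("auto-scale") each biomarker to mean 0 and standard deviation 1
z = (filled - filled.mean(axis=0)) / filled.std(axis=0)
print("means:", z.mean(axis=0).round(6), "stds:", z.std(axis=0).round(6))
```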
Individuals were then grouped into four classes based on FI and, at the same time, male vs. female stratification was also performed (data in Supplementary Figs. 8 – 11 ). Subsequent plots indicate ±correlations over the FI (Fig. 1c, d ). Using the FI score of each subject as a supervisory variable ( y -output), principal component-discriminant function analysis (PC-DFA) modelling allowed for the stratification of the mass spectrometry metabolite data ( x -input) over the FI. This PC-DFA approach was used to cluster the UHPLC-MS and GC-MS datasets, and the scores plot (Fig. 2 ) reveals the relationship between the four FI classes, indicating two clear channels of separation along the 0.1–0.2 and 0.2–0.3 FI axes. Inspection of the loadings vectors also allows for the investigation of the presence of metabolic differentiation between these four levels. Fig. 2 PC-DFA of serum metabolite data stratified over the frailty index distribution. Principal component discriminant function analysis (PC-DFA) carried out on serum metabolite profiling data from 1191 subjects within wave 4 of ELSA. Data were log 2 transformed and stratified into groups determined by Rockwood Frailty Index value. The results were cross-validated by bootstrapping (10,000 iterations) and indicate two clear planes of separation along the 0.1–0.2 axis and the 0.2–0.3 axis. These data correlate with observations that directly stratify clinical assessment over the frailty index, indicating three distinct clinical phenotypes 55 . Green circle = Frailty Index 0–0.1, Blue circle = Frailty Index 0.1–0.2, Orange circle = Frailty Index 0.2–0.3, Red circle = Frailty Index above 0.3). 
Source data are provided as a Source Data file The combined PC-DFA and Random Under Sampling boosting Classification and Regression Tree (RUSBoost-CART) models were developed to reduce the dimensionality of the multivariate LCMS and GCMS datasets while simultaneously investigating the presence of metabolic differentiation between the four levels of scoring within the frailty index (Fig. 3 ). The RUSBoost-CART model was validated by double cross-validation (2CV) using bootstrap resampling ( n = 10,000) on the training set only, and permutation testing was used to generate null distributions 33 . This method indicated a strong separation between >0.2 and <0.2 on the FI and a moderate separation between <0.1 and 0.1–0.2, thus supporting similar clustering within the PC-DFA. This was also supported by the subsequent null-distribution classification rate. Subsequent univariate analysis was carried out to determine the statistical significance of individual metabolites modulated by the frailty metabotype. Non-parametric Mann–Whitney tests with false discovery rate correction were used to assess metabolite significance between samples that lay at <0.2 and >0.2 on the FI. Spearman-based correlation was also performed to determine between-metabolite associations and develop associated clusters, as indicated by the heatmap in Supplementary Fig. 12 . Fig. 3 RUSBoost-CART analysis of samples binned over the frailty index. a Machine-learning-based Random Under Sampling boosting Classification and Regression Tree analysis on (+)-mode UHPLC-MS data supporting correct sample stratification over the frailty index distribution. The confusion matrix indicates a clear separation between >0.2 and <0.2 on the frailty index and thus good model prediction. 
b Null-distribution classification rate (red frequency histogram) supporting machine learning results (blue frequency histogram) and indicating that the groupings in the confusion matrix are correctly classified. Source data are provided as a Source Data file Development of a metabolic network of frailty Having established the presence of separation in FI level using the PC-DFA approach, we subsequently used mummichog-based pathway enrichment 34 to predict network activity and identify biochemical pathways modulated by the frailty metabotype. This method has successfully been applied to a diverse range of clinical areas such as liver damage 35 , 36 and T cell activation 34 , 37 . The general analysis pipeline of using XCMS deconvolution in conjunction with the mummichog pathway method to develop an integrated systems approach has already been documented 38 . By applying this methodology and mapping m/z clustering differences on to an integrated metabolic network containing data from the UCSD BiGG 39 , KEGG and Edinburgh Human Metabolic Network 40 resources, metabolic differences between frail versus non-frail sample classes were identified. The subsequent metabolic activity networks highlighted in Fig. 4 (and Supplementary Fig. 13 ) identified 25 metabolites present within four metabolic pathways (the carnitine shuttle, peroxisomal degradation, the kynurenine pathway and vitamin E metabolism) that were statistically significant in the transition from non-frail to frail metabotypes. From these 25 metabolites, 12 were calculated as individually statistically significant (using an FDR-corrected Mann–Whitney test) in distinguishing frailty class and used to generate a multivariate ROC prediction model (Fig. 5 ). Fig. 4 Enriched pathway model from hybrid network analysis. 
Frailty metabolite subnetwork generated from the human metabolite network from within the mummichog-Cytoscape pipeline using 554 metabolite features with unique m/z values from the LCMS (+) analysis alongside the addition of 86 metabolites from GCMS analysis. This combined approach highlights four main metabolic areas that are altered within the frailty metabotype, all of them identifying cyclic AMP as a potential hub metabolite. Source data are provided as a Source Data file Fig. 5 Significance and predictive ability within the metabolic model of frailty. a Table indicating 12 metabolites of statistical significance ( p < 0.05) in differentiating non-frail and frail metabolic phenotypes using Kruskal-Wallis analysis of variance with subsequent false discovery rate testing for multiple comparisons. Data also contain the area-under-curve values used to create the multivariate receiver operating characteristic curve (mROC). b mROC curve from Wave 4 generated by combining 12 metabolites to generate a predictive model of frailty status. The shaded areas indicate 95% confidence intervals calculated by Monte Carlo cross-validation using balanced subsampling and 1000 iterations of bootstrapped cross-validation. c Univariate ROC curves and non-frail (orange) to frail (blue) boxplots of each metabolite used to generate the multivariate ROC analysis. Each boxplot displays the median value (centre line), upper and lower quartiles (box limits), 1.5× interquartile range (black bar), and points outside the interquartile range are outliers. Source data are provided as a Source Data file Biological validation of frailty metabolic phenotype via longitudinal analysis To confirm the metabolic dysregulation observed within the cross-sectional model, longitudinal analysis of samples from wave 6 of ELSA (samples from the same subjects taken four years later) was undertaken to provide biological validation of the significant metabolites identified. 
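A minimal sketch of the bootstrap confidence interval behind the mROC analysis, using a rank-based (Mann–Whitney) AUC on synthetic frail/non-frail scores (illustrative only; the study's model combined the 12 identified metabolites and used balanced subsampling):

```python
import numpy as np

rng = np.random.default_rng(2019)

def auc(scores, labels):
    """Rank-based (Mann-Whitney) AUC: probability that a randomly chosen frail
    subject scores higher than a randomly chosen non-frail subject."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Synthetic combined metabolite scores: frail subjects (label 1) shifted upwards
labels = np.r_[np.zeros(150, dtype=int), np.ones(50, dtype=int)]
scores = np.r_[rng.normal(0.0, 1.0, 150), rng.normal(1.2, 1.0, 50)]

# Bootstrap a 95% confidence interval (the study used 1000 iterations)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(labels), len(labels))
    if labels[idx].min() == labels[idx].max():
        continue  # resample must contain both classes
    boot.append(auc(scores[idx], labels[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc(scores, labels):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```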
An FI was calculated from metadata belonging to the 1753 subjects (238 fewer than Wave 4 due to subject attrition) and, as before, a unimodal right-skewed distribution of data was observed (Supplementary Fig. 1 ). Subsequently, 786 serum samples were retrieved and untargeted UHPLC-MS metabolic phenotyping was performed to determine if the same metabolites were present with corresponding non-frail to frail directionality (430 fewer samples than Wave 4 due to subject attrition and sample availability). Eleven metabolites from wave 4 with similar non-frail to frail trajectories were identified at Metabolomics Standards Initiative level 1 or 2 within the analysis 41 (a commonly used metabolite identification protocol devised and used by the global metabolomics community), with major components from the carnitine shuttle and vitamin E pathways still present (Supplementary Table 1 ). The detected metabolites were used to replicate the multivariate ROC model within the cross-sectional analysis. Mendelian randomization analysis of carnitine levels and their influence on frailty To investigate the causal effect of carnitine levels on frailty we conducted Mendelian Randomization analysis 42 . We analysed three exposures related to increased/decreased carnitine levels in blood and used four SNPs as genetic instruments (rs12356193, rs419291, rs1466788, rs1171606), derived from two studies 43 , 44 . We found that the Odds Ratio of frailty per 10% decrease in carnitine was 1.53 (95% CI = 1.01–2.29, p = 0.042, genetic instruments: rs12356193 and rs419291, inverse variance weighted method), providing significant evidence for a causal relationship. Investigation of correlation between model confounders and co-morbidities vs. frailty To investigate the effects of model adjustment fully, we performed a global analysis of a range of co-morbidities and biochemistry factors using a multidimensional scaling analysis approach. 
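The inverse variance weighted (IVW) estimator pools per-SNP Wald ratios weighted by their precisions. The sketch below uses hypothetical summary statistics (not the actual effect sizes for rs12356193 and rs419291), chosen so the pooled estimate lands near the reported odds ratio of 1.53:

```python
import numpy as np

# Hypothetical per-SNP summary statistics, for illustration only:
# instrument effects on carnitine (exposure) and on frailty (outcome, log-odds scale)
beta_exposure = np.array([0.20, 0.10])
beta_outcome = np.array([-0.085, -0.043])
se_outcome = np.array([0.030, 0.020])

# Wald ratio per SNP and its first-order standard error
ratio = beta_outcome / beta_exposure
ratio_se = se_outcome / np.abs(beta_exposure)

# Inverse variance weighted pooled estimate
w = 1.0 / ratio_se**2
beta_ivw = np.sum(w * ratio) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))

# Odds ratio of frailty per unit *decrease* in the exposure
# (one unit here is taken to correspond to the 10% decrease reported in the study)
odds_ratio = np.exp(-beta_ivw)
print(f"IVW beta = {beta_ivw:.3f} +/- {se_ivw:.3f}, OR = {odds_ratio:.2f}")
```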
This tool was used to measure the level of similarity between factors that are closely correlated (or anti-correlated) to the frailty indices. Through this method, five different factors were identified as being highly correlated to frailty (Supplementary Fig. 14 ): • Age—Already discussed in the article as being correlated to frailty (using a linear regression approach) but does not account for all variation. • hsCRP—The pathogenic mechanisms of C-reactive protein (CRP) in ageing involve binding to FcγRII and activation of the TGF-β/Smad3 and non-TGF-β/Smad3 signalling pathways. These are directly and indirectly used to induce inflammation and fibrosis, thus impairing the ability of a cell to proliferate and ultimately contributing to the process of ageing. • Cfib—Plasma fibrinogen levels were noted to increase over the FI, and this relationship has been noted in ageing studies looking at both male and female subjects. • HbA1c/Fasting-glucose—Within the literature, there is a wealth of evidence that demonstrates HbA1c levels increase as non-diabetic subjects age. Our similar observation of the association of frailty to HbA1c fits well with this theory due to the high level of correlation of frailty to age. Discussion At the biochemical level, ageing is a continuous and dynamic remodelling process of metabolism and cell function. This chemical reconditioning is heavily influenced by the unrepaired accumulation of DNA mutational damage occurring within nuclear DNA 45 and mitochondrial DNA 46 , 47 brought about by environmental stressors. Ensuing dysfunction can be translated back to physiological status and contribute to an ageing phenotype. Indeed, population studies have already examined metabolic baseline levels in human health 48 and longevity 28 , showing that metabolomics combined with symptom, biochemical or demographic data can successfully identify distinct biochemical models that had not previously been associated with lifespan in humans.
These studies not only indicate modulation of various metabolic pathways such as those within the TCA cycle 28 and lipid biosynthesis 49 , 50 , but also suggest that large sample sizes ( n > 600) 48 and precise analytical methodologies, such as those performed within the HUSERMET study 48 , 51 , are essential for robust analysis of the data generated. Yet, metabolic studies directly investigating frailty have previously focused primarily on the influence of specific disease states, such as breast cancer 52 ( n = 79) and sarcopenia 53 ( n = 139), and have not specifically analysed the broad underlying causal processes relating to the frailty-ageing condition. With the aim of expanding knowledge in this area, our goal was to identify the presence of a potential frailty metabolic phenotype and link it to associated physiological and pathophysiological processes. Using a validated assessment of frailty status in conjunction with standard biochemistry analysis and high-throughput metabolic profiling, we generated a metabolic network that highlights significant areas of metabolism that are associated with the clinically assessed FI. This multi-level approach produced an mROC model that identified 12 metabolites as being highly significant in the differentiation of subjects exhibiting frail and non-frail phenotypes as indicated by their position on the FI. Ultimately, our studies show that global lipid metabolism is changed under the frailty phenotype, and that downregulation of the carnitine shuttle and vitamin E metabolism may play a role in modulating cellular energy production. These biochemical observations in turn mirror the reduced state of physical activity observed clinically in frail subjects. Initial calculations of the frailty indices used within the study generated unimodal right-skewed distributions 6 (Supplementary Fig. 1 ), comparable to those developed in other population scale assessments of frailty 30 .
The operationalization of the Wave 4 index into a range of four discrete classifiers applicable to stratification over mass spectrometry based metabolomics data was achieved by binning FI scores into four supervisory classes and applying PC-DFA (Fig. 2 ). Upon analysis, this approach identified only three distinct sub-planes of separation, along the 0.1–0.2 and the 0.2–0.3 axes of FI scoring. These metabolic level observations correlate with independent FI assessments made in other large scale studies of frailty across the globe, such as in Canada 18 , 54 , 55 and Taiwan 51 , in which non-frail, pre-frail and frail discrete classifiers were considered to have equivalent FI scores. Subsequent whole index validation by PLS methodologies (Fig. 3 ) and individual bin assessment using linear regression (Supplementary Fig. 2 ) also indicated that the correlation between age and frailty actually decreases across the index, thus distinguishing it from normal age-related degeneration and further validating the concept that frailty is in fact a geriatric syndrome in its own right and, although influenced by age, distinct from normal temporal changes. Prior to metabolomics and pathway analysis, a panel of standard clinical biochemical tests was performed on matched blood samples to investigate how conventional assays, already routinely used within clinical practice, could be used to assess and develop the frailty phenotype. Stable cholesterol, LDL and HDL levels were noted within the non- and pre-frail phenotypes, but sharp decreases were associated with the frail phenotype (Fig. 1c ). These results are confirmed by previous experimental data in which serum cholesterol levels have been indicated as a hematologic marker of frailty in older hospitalized patients 56 , 57 . LDL/HDL levels have also been demonstrated to decrease with age 58 , and conversely, high levels of HDL have also been directly associated with better survival rates in very old subjects 59 .
However, a fluctuation in triglyceride levels was observed across the FI range. Within these experimental studies weight loss is identified as the key explanatory variable, which parallels the importance of involuntary weight loss displayed within the frailty phenotype. Steadily increasing fibrinogen and white blood cell levels were also noted across the FI (Fig. 1d ). Fibrinogen, an essential component of the coagulation cascade and a key regulator of inflammation, has been implicated as a risk factor for several diseases 60 , and its elevation has previously been associated with increasing frailty level 61 . In the present study, observed white blood cell levels were directly correlated with frailty in older adults, an observation that further supplies evidence for the role of immuno-endocrine cross-talk within 62 functional decline. Serum ferritin levels were also noted to decrease over the FI, which would initially imply an increase in anaemia. However, previous studies designed to investigate the utility of ferritin as a single indicator of frailty determined it to be of exceedingly low potential 63 owing to the complex interactions between serum iron, total iron binding capacity and transferrin saturation ratio severely hampering levels of assay sensitivity. To compound the use of serum ferritin as a biomarker of frailty further, increased levels are associated with an increase in oxidative stress and cellular damage 64 , which goes against observed values obtained within this study. Dehydroepiandrosterone (DHEAS) levels were also evaluated over the index range, the decline in which was correlated with a higher FI. This correlation is in agreement with previous studies reporting a widely-recognised association between decreasing androgen levels and ageing 65 , 66 . All clinical biochemistry data were also analysed controlling for sex. As a result, an interesting observation was noted within the measured triglyceride levels.
Upon stratification, male vs. female triglyceride levels act in a divergent manner (see Supplementary Fig. 10 ) as FI score increases, with females showing a sharp increase and males a decrease. The identification of this important role of triglyceride levels has already been documented in the Leiden Longevity Study ( n = 1664) 63 , in which multiple regression models indicated that decreased triglyceride levels were predicted to serve as an indicator of longevity in females. Biochemical network activity assessment, in which all m/z features were used as input, detected 25 identified metabolites (Supplementary Table 1 ) that contributed to dysregulation of four metabolic pathways (Fig. 4 )—monosaccharide, kynurenine, vitamin E and carnitine metabolism. All individual pathways contain a link to energy production within eukaryotic cells. In this process, pathways identified as significant can contain individual metabolites that may not be significant on their own, owing to their presence within a pathway that contains other significant features. To investigate the role of the individual 25 metabolites in differentiating non-frail and frail metabolic phenotypes, Kruskal-Wallis analysis of variance with subsequent false discovery rate (FDR) testing for multiple comparisons was used to test for significance. In total, 12 metabolites (Fig. 5a ) were deemed individually statistically significant ( p < 0.05) in differentiating non-frail and frail metabotypes. These features were then used to develop a Multivariate Receiver Operating Characteristic (mROC) curve (Fig. 5b ), to act as a predictive model of frail status. In this process, the final mROC model used 12 metabolites (individual distributions and univariate contributing ROCs shown in Fig. 5c ) to generate an AUC of 0.755 (95% CI = 0.708–0.815)—indicating a moderately strong level of performance.
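The FDR correction applied after the Kruskal-Wallis tests is commonly the Benjamini-Hochberg step-up procedure, which converts raw p-values into adjusted q-values. A stdlib-only Python sketch, using hypothetical p-values rather than the study's own:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values): each sorted
    p-value is scaled by m/rank, with monotonicity enforced from the
    largest p-value downward."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    for k, i in enumerate(reversed(order)):
        rank = m - k                          # 1-based rank of this p-value
        q = min(prev, pvals[i] * m / rank)    # cap at the next-larger q
        adj[i] = q
        prev = q
    return adj

# hypothetical raw p-values for five metabolite features (not the paper's)
raw = [0.001, 0.004, 0.02, 0.20, 0.045]
q = benjamini_hochberg(raw)
significant = [i for i, v in enumerate(q) if v < 0.05]
```

Controlling the FDR rather than the family-wise error rate is the usual choice in metabolomics, where hundreds of correlated features are tested at once.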
Overall results from combining PC-DFA separation, RUSBoost-CART sampling validation, pathway enrichment, univariate descriptive comparison of metabolite means and concluding mROC predictive modelling provide a diverse range of evidence that all support the theory of metabolic dysregulation within the frail metabotype. As further evidence, predictive modelling was also replicated in a validation subset of 786 samples from the same subjects collected four years later. Nine of the 12 metabolites used to generate the Wave 4 mROC model were detected within the deconvolved Wave 6 mass spectrometry dataset. The data from these features were then used to generate an mROC model from the validation subset with an AUC of 0.702 (95% CI = 0.63–0.748) (Supplementary Fig. 15 ), indicating a reproducible result even with slightly reduced data input. Using the two main pathways identified within pathway analysis, two of the four genetic instruments used in the Mendelian Randomization analysis showed evidence for the causal effect of carnitine levels on frailty. Our instrument SNPs represent the SLC16A9 (solute carrier family 16 member 9) (rs12356193) and SLC22A4 (solute carrier family 22 member 4) (rs419291) genes. These SNPs were strongly associated with carnitine levels in a study of human metabolites ( p = 3.69 × 10 −63 and p = 3.1 × 10 −18 , respectively) 43 . Although our results do not survive strict correction for multiple testing, they are firmly supported by the literature. A recent study measuring common variants (minor allele frequency >5%) using healthy ageing as outcome reported the possible involvement of the SLC22A4 gene, represented by multiple variants, including rs419291 67 . Vitamin E analogues, in this case detected tocotrienols, are well-documented due to their lipoperoxyl radical-scavenging abilities in the termination of lipid peroxidation via proton transfer on to lipid free radicals 68 .
However, they are also noted for their ability to scavenge reactive nitrogen species, inhibit cyclooxygenase- and 5-lipoxygenase-catalyzed eicosanoids, and suppress pro-inflammatory signalling, such as NF-κB 69 . This reduction of free radical-mediated oxidative damage alongside general inflammatory suppression is vital to the maintenance of a healthy lifestyle over time. Breakdown of the endogenous antioxidant system can lead to the accumulation of oxidative damage from lipids that has been linked to ageing, cancer and many other co-morbidities 70 . The ability of the carnitine shuttle to generate acetyl-CoA is vital for the successful generation of FADH 2 and the regeneration of ATP at the end of the electron transport chain. Breakdown of this mechanism is terminal to the cell. We found that the decrease in the levels of several carnitines at higher levels of frailty could indicate that general cell-based lipid metabolism is deteriorating, but further experimentation needs to be performed to confirm and validate this hypothesis. The kynurenine pathway is important because it provides a conduit for the consumption of over 99% of ingested tryptophan that is unused in protein synthesis 71 , 72 . With an upregulation of tryptophan noted within the frail metabotype, and with age-related sarcopenia known to be an underlying phenotype within frailty, this observation suggests that muscle protein breakdown is a potential contributor to frailty metabolic output. Further along the kynurenine pathway, a bottleneck in the biogenesis of the vital energy co-factor NAD and its associated dysregulation has also been linked to mitochondrial disturbances 73 , activation in times of stress and immune activation 74 , alongside links to neurodegenerative diseases 75 . Conversely, the tryptophan kynurenine pathway is also the starting point for the biosynthesis of two related neurotransmitters: serotonin and melatonin.
Previous work has indicated that an over-activation of this pathway can lead to activation of the immune system and downstream accumulation of potentially neurotoxic intermediates such as quinolinic acid 72 and kynurenic acid 76 . These metabolites are currently considered to be involved in some way in Alzheimer’s disease, Parkinson’s disease, Huntington’s disease and amyotrophic lateral sclerosis, and future work on frailty metabolism should consider them as interesting mechanistic targets 76 . This work exemplifies the high suitability of combined metabolic and pathway analysis for exploring and uncovering significantly modulated biological pathways within biogerontology. The longitudinal nature of the study, alongside the unselected aspect of the sample cohort, are strengths that increase the external validity of the findings. However, several limitations also exist, and these should be considered. While providing results that are consistent with data from previous experimental literature, our findings should be considered hypothesis generating in nature. This fact, tied to the restricted geography of the cohort (all subjects residing in England), means that further validation from a range of independent cohorts is essential to test the conserved nature of the results. Also, to understand fully the complex biological processes that are dysregulated as a component of frailty, a comprehensive systems-based approach is needed to model all dimensions of the process. Further work is needed to link metabolic profiles to genotypic expression. Several genetic mutations and markers have already been identified in model organisms 77 , 78 and humans living extremely long lives 79 , 80 , and these observations need to be related to RNA, protein and metabolite expression. Candidate gene-association studies on data from the same wave of ELSA have indicated genetic changes affecting the lipoprotein receptor-related protein 1 ( LRP1 ) gene on chromosome 12 81 .
This multi-ligand receptor has previously been reported to be involved in lipid homeostasis including cholesterol transport, thus supporting our theory of global lipid imbalance in frailty. There are also several limitations that need to be considered within the Mendelian Randomization work. Firstly, frailty is a complex condition and as such is likely to involve multiple genetic variants. The genetic variants typically explain only a small fraction of the total variance in traits; therefore, MR studies require very large sample sizes for sufficient statistical power. Although we chose the two-sample approach to achieve greater power, our sample size of 1500 cases and 3500 controls may not be powerful enough. Secondly, the instruments we employed may be considered weak, as indicated by the large confidence intervals of the causal estimates. Finally, for individual polymorphisms the variance explained is usually <1%; therefore, it is advisable to combine multiple polymorphisms into a single allele score to maximize the explanatory power of the instrument. However, due to a lack of reported significant SNP-metabolite associations, the available number of genetic instruments was one or at most two SNPs per exposure. The lack of multiple genetic instruments also prevented us from carrying out a pleiotropy test. One of the assumptions of MR analysis is that there is no horizontal pleiotropy, i.e., that no genetic variant affects multiple traits via separate pathways 82 . The MR-Egger regression method provides valid causal estimates in the presence of some violations of the MR assumptions. However, as this method requires more than two genetic variants assigned to the same exposure, we could not test for this and assumed no pleiotropic effects. Also, in order to obtain more conclusive evidence on the effect of carnitine levels on frailty, studies with a sufficiently large sample size are required.
While our results should be interpreted with caution, this is an important exercise towards identifying causal relationships. In summary, our work reveals that the presence of frailty, with an associated increased risk of negative health outcomes in later life, is not only identifiable through symptomatic presentation but, as predicted by Fried and colleagues 21 , is multifactorial and subsequently recognisable by a distinct biochemical phenotype. Our results primarily imply that a deterioration in lipid metabolism is present within those who clinically present as frail. The downstream set of metabolic observations detected within this study (primarily linked to energy dysregulation) is directly linked with the primary clinical description for frailty: a reduction in physiological reserve. Metabolic frailty measurement has the potential to contribute greatly to the standardisation of frailty assessment. In addition, the application of metabolomics in combination with other -omics based technologies (as we have done with Mendelian Randomization) offers the potential for a greater understanding of the biological basis and complexity of frailty. Knowledge of frailty risk factors and biomarkers offers the scope to yield effective early stage interventions that can be incorporated into standard of care practices and ultimately contribute to healthy ageing. Methods Sample collection procedures The English Longitudinal Study of Ageing (ELSA) is a continuing cohort study that contains a nationally representative sample of men and women born on or before February 1952 living within England. Data collected at Wave 4 (2008–09) were used as the data source and serum sample source for this study. Data collected at Wave 6 (2012–13) were used for the longitudinal validation of the model (786 samples). This study was performed in compliance with all relevant ethical regulations and guidelines for work with human participants.
Participants gave informed written consent to participate in the study and ethical approval was obtained from the London Multi-Centre Research Ethics Committee. Clinical measurements: Nurses collected anthropometric data (weight, height, waist circumference), blood pressure (BP), and non-fasting blood samples using standard protocols developed within the Health Survey for England 83 . Body weight was measured using Tanita electronic scales without shoes and in light clothing, and height was measured using a stadiometer with the Frankfort plane in the horizontal position. Body mass index (BMI) was calculated by the standard equation—weight (kilograms)/height (metres) squared. Detailed information on biochemical blood analysis, the internal quality control, and the external quality assessment for the laboratory are summarised in the Supplementary Information. Metabolite profiling Untargeted metabolite profiling was performed on serum samples that were collected from participants using standard serum collection techniques 83 and stored at −80 °C prior to analysis. Ultra-High Performance Liquid Chromatography Mass Spectrometry (UHPLC-MS) and gas chromatography mass spectrometry (GC-MS) were performed in tandem on each sample using the Dunn 51 and Begley 84 protocols with some minor alterations, briefly summarised as follows: metabolites were extracted from the 1191 serum samples from Wave 4 and 100 samples from Wave 6 by individually adding 900 µL of an organic solvent mixture of 80% methanol/15% water/5% acetonitrile to 330 µL of serum. Subsequent vortexing and centrifugation (17,500 × g ) yielded a metabolite-rich supernatant that was split into two aliquots and lyophilised for 12 h to yield a metabolite pellet that was stored at −80 °C prior to analysis. A pooled QC standard was also generated by combining 30 µL aliquots of each sample into a pooled vial, with subsequent 330 µL portions being extracted identically to each sample.
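The BMI equation quoted under the clinical measurements above is simple enough to check directly; the subject values below are illustrative only:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kilograms) divided by height (metres) squared."""
    return weight_kg / height_m ** 2

value = bmi(70.0, 1.75)   # illustrative subject: 70 kg, 1.75 m
```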
Processed metabolite pellets were defrosted at 4 °C and subsequently reconstituted in 100 µL of mobile phase A. UHPLC-MS analysis was performed using an Accela UHPLC autosampler system coupled to an electrospray LTQ-Orbitrap XL hybrid mass spectrometer (ThermoFisher, Bremen, Germany). Analysis was carried out in positive ESI mode while samples in each run were completely randomised to negate any bias. A gradient type UHPLC method was used during each run with 95% water/5% methanol/0.1% formic acid as mobile phase A and 95% water/5% methanol/0.1% formic acid as mobile phase B. 5 µL of the extract was injected onto a Hypersil GOLD UHPLC C 18 column (length 100 mm, diameter 2.1 mm, particle size 1.9 µm, Thermo-Fisher Ltd. Hemel Hempstead, UK) held at a constant temperature of 50 °C while a solvent flow rate of 400 µL min −1 was used to drive the chromatographic separation. Mass calibration was carried out in accordance with the manufacturer’s guidelines using caffeine (20 µg mL −1 ), the tetrapeptide MRFA (1 µg ml −1 ) and Ultramark 1621 (0.001%) in an aqueous solution of acetonitrile (50%), methanol (25%) and acetic acid (1%). Acquisition settings for initial profiling were carried out at 30,000 resolution in centroid mode and run at 1 µ-scan per 400 ms in the 100–1000 m/z range with source gases set at sheath gas = 40 arbitrary units, aux gas = 0 arbitrary units, sweep gas = 5 arbitrary units. The ESI source voltage was set to 3.5 V, and the capillary ion transfer tube temperature set at 275 °C. Xcalibur software Version 3.0 (Thermo-Fisher Ltd. Hemel Hempstead, UK) was used as the operating system for the Thermo LTQ-Orbitrap XL MS system. Data processing was initiated by the conversion of the standard UHPLC raw files into the NetCDF format via the software conversion tool within Xcalibur. Peak picking was carried out in R-Studio using the XCMS algorithm.
The output yielded a data matrix of mass spectral features with related accurate m/z and retention time pairs. Data from the internally pooled QC samples were then used to correct for instrument drift and for quality control (via application of an in-house robust spline alignment Matlab script). The data matrix was also signal corrected to remove peaks that crossed the 20% RSD threshold within QC samples across the analytical run. GC-MS analysis was carried out on a Leco Pegasus 3 Time-of-Flight mass spectrometer coupled to an Agilent 6890 GC oven and Gerstel-MPS autosampler. Derivatization and instrument conditions were identical to those used by the Begley protocol 84 to yield raw data files. These were subsequently converted into NetCDF files within the ChromaTOF acquisition software. Peak picking was carried out in R-Studio using the XCMS algorithm. The output yielded a data matrix that was matched against the retention time and quant mass values contained within an internal GC standard library containing over 1600 pure reference compounds run under identical conditions. All metabolites identified as significant within the analysis were assessed and scored according to rules set out by the Chemical Analysis Working Group of the Metabolite Standards Initiative 41 . Where available, pure reference standards were purchased (Sigma-Aldrich, St Louis, USA) and used to confirm the highest level of metabolite identification—Level 1. Where no standard was available, matching of measured MS/MS spectra against those from within the METLIN metabolite database was performed to give a Level 2 annotation confirmed by appropriate secondary ion m/z values. All scoring is available in Supplementary Table 1 . UHPLC-MS data dependent MS n analysis was performed on chemical standards using a LTQ-Orbitrap XL hybrid mass spectrometer (ThermoFisher, Bremen, Germany).
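The 20% RSD filter described above keeps only features that are stable across the pooled-QC injections (relative standard deviation = standard deviation / mean). A stdlib-only sketch of this filtering step, with a toy intensity matrix rather than real QC data; in practice only the QC injections, not all rows, would be passed in:

```python
import statistics

def rsd_filter(feature_table, qc_rows, threshold=0.20):
    """Return indices of feature columns whose relative standard
    deviation across the pooled-QC injections is below the threshold."""
    kept = []
    for j in range(len(feature_table[0])):
        qc_vals = [feature_table[i][j] for i in qc_rows]
        mean = statistics.mean(qc_vals)
        rsd = statistics.stdev(qc_vals) / mean if mean else float("inf")
        if rsd < threshold:
            kept.append(j)
    return kept

# toy intensity matrix: rows = QC injections, columns = features (made-up values)
table = [
    [100.0, 50.0],
    [102.0, 90.0],
    [ 98.0, 20.0],
]
stable = rsd_filter(table, qc_rows=[0, 1, 2])   # feature 0 is stable, feature 1 is not
```

Since the pooled QC is chemically identical in every injection, any variation it shows is analytical rather than biological, which is why it anchors this filter.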
Precursor ion full scan was performed, followed by an additional scan where the ion of interest was trapped within the linear ion trap for 1000 ms and subsequently subjected to CID of 50 au, following which the fragment ions were detected. A minimum of three scans were recorded for both precursor and product ions. A combined spectrum of both FTMS and ion-trap data was used to generate product ion lists and intensities. The same tuning method, injection volume, CID and activation energy were applied to QC and standard samples to standardise the comparison. For all metabolites analysed, retention time was matched within 20 s or below (extracted data are highlighted in Supplementary Figs. 16 – 30 ). All metabolites identified via GC-MS were retention time and fragmentation matched to an internal standard library that was analysed under conditions identical to the main analysis. Chemometrics PC-DFA, RUSBoost-CART and robust spline alignment analysis were carried out using MATLAB 2012a (MathWorks, Natick, MA, USA). Prior to chemometric analysis, data matrices were log 2 -transformed to account for skewed distribution. All tests were supervised and bootstrapped (×10,000) using groups determined by FI value. In this process, each data set was split into training/test sets and resampled 85 . Sex stratified PC-DFA plots are also documented in Supplementary Fig. 31 (Wave 4) and Supplementary Fig. 32 (Wave 6). Linear regression models and prediction plots were performed using the lm-function in R-Studio (Version 1.0.44). Univariate t -tests, cross validation, heat-map correlation curves and ROC curves were performed using MetaboAnalyst 3.0 86 . Due to the unbalanced nature of the sample classes (non-frail vs. frail), Monte-Carlo cross validation (MCCV) was used to balance the groups. In each MCCV, two thirds (2/3) of the samples were used to evaluate the feature importance.
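The Monte-Carlo cross-validation scheme described here repeatedly draws a fresh random 2/3 train / 1/3 test partition rather than rotating through fixed folds. A minimal stdlib sketch of the split generation (without the balancing or model-fitting steps, and with made-up sizes):

```python
import random

def mccv_splits(n_samples, n_iter=500, train_frac=2 / 3, seed=0):
    """Yield Monte-Carlo cross-validation (train, test) index splits:
    each iteration holds out a fresh random third of the samples."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    idx = list(range(n_samples))
    n_train = round(n_samples * train_frac)
    for _ in range(n_iter):
        rng.shuffle(idx)
        yield idx[:n_train], idx[n_train:]

splits = list(mccv_splits(12, n_iter=500))   # toy cohort of 12 samples
```

Unlike k-fold CV, the number of repetitions (500 here, matching the paper) is independent of the held-out fraction, which makes it easy to attach confidence intervals to the resulting performance estimates.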
The top 12 (Wave 4) and top 11 (Wave 6) important features were then used to build classification models which were validated on the 1/3 of samples that were left out. The procedure was repeated 500 times to calculate the performance and confidence interval of each model. Classification and feature ranking were performed using PLS-DA algorithms with seven latent variables as input to determine the final ROC curves. Network analysis Mummichog (Version 1.0.5) pathway analysis 34 was used offline in Python (Version 3.5.2) to predict network activity from pre-processed UHPLC-MS metabolomics data. [M + H] + was selected as the force primary ion ( z ) alongside an evidence cut-off score of 3 to include a metabolite within an activity network (e). The full metabolite data set was used as an input and 554 extracted features were determined as significant ( p < 0.0001) from the associated t-test, yielding 263 potential metabolites. From this, 22 network modules were generated using an activity network of 23 annotated and statistically significant metabolites. Output files were visualised in Cytoscape (Version 3.4.0) (Supplementary Fig. 13 ), where manual addition of GC-MS data was performed to generate the enriched hybrid frailty model (Fig. 4 ). Metabolite non-frail to frail distributions are available in Supplementary Figs. 33 and 34 . Mendelian randomization A text-mining approach using the keyword ‘carnitine’ yielded four possible exposures and five genetic instruments (single nucleotide polymorphisms—SNPs). Instruments were assigned to the same exposure if the reported direction of effect and the study 43 , 44 , 87 were the same. Exposure 1: Blood metabolite levels (unit increase) (carnitine), SNPs: rs12356193, rs419291 43 . Exposure 2: Blood metabolite levels (unit decrease) (carnitine), SNP: rs1466788 43 . Exposure 3: Acylcarnitine levels (unit increase) (carnitine), SNP: rs1171606 44 . Exposure 4: Metabolic traits (unit decrease) (carnitine), SNP: rs7094971 87 .
rs7094971 (Exposure 4) was excluded from further analyses, as it was in high linkage disequilibrium with rs12356193 ( r 2 = 0.87). For Exposure 2 (rs1466788) and Exposure 3 (rs1171606), the direction of association for the outcome and the exposure was the same, contrary to expectations. MR analysis results of causal estimates are summarised in Supplementary Table 2 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All metadata, mass spectrum files and statistical packages used in this paper are freely available and deposited in accessible public repositories. All English Longitudinal Study of Ageing (ELSA) data files are available from the United Kingdom Data Service repository—Study Number 5050. Mass spectrum and metabolomics data are accessible through the EMBL-EBI MetaboLights repository—Study Identifier MTBLS598. Statistical scripts used to perform PC-DFA, PLS-R and PLS-DA were developed within the cluster-toolbox and are freely available on the open source GitHub repository hosted at github.com/Biospec/cluster-toolbox-v2.0. The source data underlying Figs. 1 – 5 and Supplementary Figs. 1 – 15 and 31 – 34 are provided within the supplied Source Data file alongside data used to generate the Z-scores. Supplementary Figs. 16 – 30 were generated in the Thermo Fisher Xcalibur software using the raw LCMS data available within the upload supplied to the MetaboLights repository. All data are available from the corresponding author upon reasonable request.
Furthermore, little is known about the underlying biological mechanisms of frailty but the new research has revealed a set of blood biomarkers that can predict its extent. Further down the line, it could potentially achieve this with people as young as in their 20s or 30s. The assessment could also determine whether a patient would be able to withstand intensive courses of treatment, such as chemotherapy, as well as helping to understand, prevent, cure or minimise age-related impairments. The research also involved partners from the Universities of Manchester, Liverpool and Edinburgh and Yale University. It has been published in the journal Nature Communications. Dr. Nicholas Rattray, a Chancellor's Fellow with Strathclyde Institute of Pharmacy and Biomedical Sciences, led the study. He said: "Being able to measure the frailty status of a person is currently a very subjective clinical assessment and is ultimately causing a delay in the personalisation of therapeutic options for frail patients. "By using cutting-edge analytical techniques such as mass spectrometry based metabolomics, this research has opened the door to developing ways to rapidly and accurately quantify frailty and apply this knowledge directly within the clinical environment. "We believe this assessment is the first of its kind. It could lead to a far deeper understanding of the ageing process and how to potentially develop intervention strategies for ageing poorly." Professor Roy Goodacre from the University of Liverpool said: "I am excited by this work as this shows that large-scale metabolomics has for the first time shown clear biochemical changes in people with frailty which may with future work lead to therapy to help revert these individuals back to a resilient phenotype which will improve quality of life." 
Professor James Nazroo, from the University of Manchester, said: "These crucially important findings pave the way for providing accurate diagnostic procedures for identifying risk of frailty, but also for understanding the mechanisms that lie behind that risk and the possibility of intervening to reduce risk and the negative consequences that follow from frailty." Professor Neil Pendleton, from the University of Manchester, said: "Identification of frailty is complex, requiring expertise not widely available. Using these findings could offer potential to expand opportunities for diagnosis when interventions are possible." Although there is currently no widely accepted clinical or biomedical definition of frailty, there exists a series of clinical scoring metrics, or frailty indices, which can help in the assessment of a patient's resilience. They include measures of physical fitness, falls and fractures, vision, hearing, chronic diseases and depression. The researchers analysed blood serum samples from 1191 people aged between 56 and 84 and followed up on 786 of the participants four years later. They weighed and identified molecules in the blood samples from these roughly 1200 older people, using machine learning to categorise bio-signatures of frailty. A set of 12 metabolites—substances produced in metabolism—was identified and found to differentiate between frail and non-frail people. | 10.1038/s41467-019-12716-2
Computer | Threads that sense how and when you move? New technology makes it possible | Scientific Reports (2021). DOI: 10.1038/s41598-021-81284-7 Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-021-81284-7 | https://techxplore.com/news/2021-01-threads-technology.html | Abstract Human machine interfaces that can track head motion will result in advances in physical rehabilitation, improved augmented reality/virtual reality systems, and aid in the study of human behavior. This paper presents a head position monitoring and classification system using thin flexible strain sensing threads placed on the neck of an individual. A wireless circuit module consisting of impedance readout circuitry and a Bluetooth module records and transmits strain information to a computer. A data processing algorithm for motion recognition provides near real-time quantification of head position. Incoming data is filtered, normalized and divided into data segments. A set of features is extracted from each data segment and employed as input to nine classifiers including Support Vector Machine, Naive Bayes and KNN for position prediction. A testing accuracy of around 92% was achieved for a set of nine head orientations. Results indicate that this human machine interface platform is accurate, flexible, easy to use, and cost effective. Introduction The monitoring of head motions can be very useful in many applications including the study of human behavior. For example, it can improve our understanding of one’s attention, focus and emotional state. In a classroom setting, head movements can be translated into students’ gaze direction and serve to detect students’ attentiveness during class 1 . Similarly, this technology can be applied to a driving situation, where a distracted or tired driver can be alerted through detecting abnormalities in their head motion. 
Head motions such as tilting and nodding when speaking or singing encode a person’s emotional intent and serve as an indication of the speakers’ interpersonal relationship in dialogs 2 , 3 . Many of the current head and neck motion monitoring systems have been developed for measuring the Cervical Range of Motion (CROM) in the diagnosis of neck pain. Most of these designs employ an Inertial Measurement Unit (IMU) to obtain the highly precise measurements required by the underlying application. For example, a wearable wireless inertial sensor has been proposed by Raya et al. 4 . Their design employed a MEMS-based IMU sensor consisting of 3-axis gyroscopes, accelerometers, and magnetometers to achieve nine degrees of freedom. The measurements require the use of two such sensors, one placed on the forehead and one on the vertebrae. Alternative methods have also been presented: Al-Nasri et al. utilized a commercially available C-stretch band to measure neck movement by placing the band around the top of the head 5 , and Sarig-Bahat et al. made use of current virtual reality technology to measure the range of motion of the neck 6 . However, in certain non-medical situations and for our purposes, general motion monitoring rather than precise measurement of motion angles is needed. Radar technologies, including millimeter-wave Doppler radar and frequency-modulated continuous-wave (FMCW) radar, have been utilized for this purpose by Raja et al. 7 and Chae et al. 8 . Others, such as Inokuchi et al. and Chui et al., have proposed camera-based methods that provide video streaming and analyze motion using image processing 9 , 10 . While these methods all allow for accurate monitoring of head motion, they also present a high level of complexity in usability and design as well as low portability. Furthermore, current methods limit usability by requiring the person being monitored to stay at a fixed location.
In situations such as a classroom or group activity, where monitoring head motion can contribute to our understanding of a person’s attention, focus and emotional state, the cost of current designs poses an obstacle, since low-cost head motion sensing systems are needed in bulk. Smart thread-based sensors provide an alternative to current head motion monitoring methods with high flexibility and efficiency. Current thread-based sensors utilize manufacturing procedures that include use of materials that directly measure strain as well as others for which strain is inferred based on changes in the electrical conductivity of an extrinsically applied coating 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 . With their increasing popularity, thread-based sensors have been used as a means of non-obtrusive data collection for sweat and respiration monitoring as well as for joint and finger motion detection 23 , 24 , 25 . Extrinsically coated sensor thread has been used for gas detection 26 and for electromagnetic devices such as antennas 26 , 27 , 28 . Our group has developed a carbon-coated strain sensor and demonstrated its use in gait monitoring 20 , 21 , 22 . In this paper, we incorporate thread-based sensors into the monitoring and collection of head motion. This enables a cost-effective, flexible, and portable method with high usability for everyday and large-scale monitoring. We propose a method for head motion monitoring and classification using thread-based sensors. The design consists of two carbon-coated thread-based strain sensors arranged in a cross-over “X” configuration, the electronic interface for data collection, and data processing methods that allow for close to real-time determination of head orientation. The sensors are placed on the back of the neck.
The sensor position is concealed and thus better suited for daily use, improving on the existing C-stretch band method 5 that places the sensor on the top of the head. Data is collected through impedance readout boards and transferred to nearby Bluetooth-enabled devices. The data are divided into segments of fixed length from which a time series of features is extracted for classification of head orientation into one of nine positions. Specifically, with rotating left and right being the horizontal axis and flexing down and up being the vertical axis, the motion captured can be classified according to the location along either the horizontal or vertical axis as shown in Fig. 1 . Given the straightforward concepts behind the thread-based sensor, this design can be easily reproduced for other motion detection and monitoring. Moreover, the platform exhibits a high tolerance of placement inaccuracy without sacrificing accuracy in classifying motions. Figure 1 Motion classification. Results Fabrication of thread sensor The thread (Gütermann elastic thread: 64% Polyester, 36% Polyurethane; Made in Germany) is first manually coated with Carbon Resistive Ink (C-200 Carbon Resistive Ink; Applied Ink Solutions; Westborough, MA, USA). A more systematic procedure for reel-to-reel coating could also have been employed 21 . In order to ensure the thread is completely covered, the thread is first stretched to its maximum length and then coated. Carbon resistive ink is manually rubbed onto the stretched thread until every portion of the thread has been covered. The thread is then transferred to an oven and baked at 80 °C for 30 min. Then, two male metal crimps are attached to the two ends of the thread with a crimping tool to enable wire connection and stable signal acquisition. As the threads are intended to be placed on human skin, an insulating layer is added to prevent possible signal disturbance from sensor friction on skin.
A platinum-catalyzed silicone named EcoFlex, produced by Smooth-On, Inc. (Macungie, PA, USA), is used for this purpose. Threads are held at one end, dipped into EcoFlex and hung on a rack for several seconds so that the EcoFlex coats them evenly. The EcoFlex-coated threads are cured at room temperature. The fabrication process is shown in Fig. 2 a. The EcoFlex coating process is then repeated from the other end. This provides two layers of EcoFlex, which prevents the EcoFlex layer from being torn by excessive friction with the skin during neck movement. Figure 2 ( a ) Fabrication process of the strain-sensitive thread including manual coating of carbon, baking at 80 °C and EcoFlex coating respectively ( b ) SEM image of the Gütermann elastic thread cross section with scale bar of 200 μm and ( c ) scale bar of 60 μm ( d ) SEM image of the carbon-coated Gütermann elastic thread under no strain with scale bar of 500 μm ( e ) SEM image of the carbon-coated Gütermann elastic thread under 10% strain with scale bar of 200 μm ( f ) dynamic cyclic test of carbon-coated thread with length of 12 cm ( g ) stability test of carbon-coated thread through cyclic stretching with over 2000 cycles. Characteristics Using SEM images (Fig. 2 b–e), the strain sensing capability of the manufactured thread sensor has been validated. The cross section of the Gütermann elastic thread shown in Fig. 2 b,c reveals the structure inside the thread: an inflexible central core surrounded by fiber in a wound helix. When the thread is stretched, the fiber reorganizes and the helix structure increases in length, prompting the central core to compress. The stretch-and-apply method employed in the manufacturing process ensures the carbon coating adheres to the surface of the thread as well as to the individual fibers and the central core, see Fig. 2 d,e. This enables the resistance to increase when the thread experiences stretching; that is, an increase in length.
This method prevents the formation of irreversible cracks on the surface of the thread. The calibration process, which involves repeated motions before the beginning of data collection, serves to further minimize the potential deviation caused by the development of surface cracks. The EcoFlex coating material is a hydrophobic and very durable polymer; this ensures no change in sensor performance in a humid environment caused by events like sweating. The result of a dynamic cyclic test is shown in Fig. 2 f; the stretching of the thread corresponds to the increase in resistance values, as expected. A stability test of over 2000 cycles has also been performed, with the result shown in Fig. 2 g. The thread demonstrated a confirmed stability over numerous cycles. The thread presents a natural drift in resistance over time and over multiple uses due to displacement of conductive carbon particulate on the thread, and as a result of plastic deformation of the EcoFlex coating. This issue can be easily accounted for in the algorithm with periodic calibration and has been investigated in our past publication 21 . Circuit schematics The data collection circuitry is shown in Fig. 3 a,c. A microcontroller unit and two impedance read-out boards (Electrodermal Activity Sensor) designed by BITalino 29 (Lisbon, Portugal), a toolkit developed specifically in the field of body signal collection, are used for data collection and transmission. The microcontroller board allows for real-time data streaming with a frequency of 1 kHz. The two impedance read-out units provide the data that will be used for later processing. The two thread sensors are represented in Fig. 3 c as variable resistors. The two ends of the thread sensors are soldered to flexible thin wires that are then soldered to the two data-in ports on the read-out boards. The overall circuitry is powered by a 3.7 V Li-Po battery.
A Bluetooth component is connected to the microcontroller unit, enabling transfer of data to nearby devices in real time. An armband containing the entire circuitry, shown in Fig. 3 b, has been designed and implemented to ensure the portability of the device and is used during experiments. Figure 3 Circuit schematics and overall system. ( a ) Circuit ( b ) Thread placement on the back of the neck ( c ) Circuit diagram. Overall system The overall design collects data from two thread sensors, fabricated as described above, placed on the neck. The two thread sensors are placed at the back of the neck; the position is estimated to be the middle of the cervical spine region, though a precise location is not required. The two thread sensors are arranged in an ‘X’ shape, one as the ‘/’ and the other as the ‘\’ shown in Fig. 3 b. The thread sensors are fixed on the neck by taping the two ends of each thread onto the skin using 3M Nexcare flexible clear skin adhesive tape (Saint Paul, MN, USA). The impedance values of the two thread sensors are measured and transmitted to the microcontroller unit through the BITalino impedance read-out units. The data collected are transmitted to a nearby computer through the Bluetooth unit. The entire system is both compact and portable. Data processing overview Data is collected through the thread sensors and transmitted to a nearby computer in the form of a time series of impedance values. As illustrated in Fig. S1 , a processing procedure is required to convert the raw impedance values into meaningful quantities such that head orientation can be represented and identified. While we provide details of each step shortly, an overview of the approach is as follows. Before the data collection begins, each thread sensor has to be calibrated due to the differing characteristics each thread exhibits caused by nuances in the manufacturing process.
After characteristic values of each sensor have been obtained, the data collection starts. The received real-time raw data are filtered to remove noise and normalized. The filtered and normalized data are then divided into overlapping data segments containing 550 data points (0.55 s). Next, a collection of features to be used as the basis for classifying head orientation is extracted from the segments. At this point, one of two paths is taken: Training and Testing. As we are using a supervised method 30 for classifying head orientation (see Fig. 1 , Introduction section), a set of training data is required. Such a library is constructed by manually assigning a label encoding the known orientation to the features for a given segment. This set of training data, composed of feature vectors and their labels, is used to construct an algorithm for use in the Testing Phase which, when presented with an unlabeled set of features, produces an estimate of the associated orientation. Note that for the examples in this paper, all experimentally collected data are labeled. In the training phase, a random portion of the obtained data segments is used to generate a classifier. Using the trained classifier model, the rest of the data segments serve as test data; the prediction made by the classifier model is compared with the actual label as an evaluation of the success of the model. Calibration Because the thread sensors used are manufactured manually, variations exist in the measured impedance for a given elongation. Moreover, as the threads are taped onto the skin by the two ends, the middle part of each thread is not secured onto the neck and thus exhibits spontaneous motions. Due to these conditions and characteristics, the impedance values delivered by the threads tend to fluctuate when first placed onto the neck. Thus, it is important to calibrate the sensors.
Based on the objectives, three quantities are determined by our calibration process: the maximum and minimum impedance values over the full range of motions of the head, as well as the settled value associated with the head in the front-facing position. The value at which the sensor settles in the front-facing position is calibrated first. After the sensors are taped to the neck, the person is asked to move their head in different directions, each time returning to the front-facing position. This process is repeated several times. Figure 4 a demonstrates the data collected from a sample calibration process of the “/” sensor by instructing the person to repeat the rotating-left motion. As shown on the plot, the leftmost values highlighted by the green box represent the initial value as the sensor is first placed on the neck while facing the front. In the calibration process, the person rotates his/her head from the front to the left and then rotates back. The rotating motion is represented in the plot by the downward and upward curves, highlighted in orange. The pause when facing left is represented by the purple portion. As the person returns to the front position, the value of the sensor settles to a new value, represented in the plot as the blue portion. After performing the same motion several times, the settled value associated with the front position becomes clear. Figure 4 Calibration process ( a ) Calibrate for baseline value ( b ) Calibrate for maximum and minimum value. In addition, the maximum and minimum impedance values associated with the full range of motion are also obtained. The sensor value exhibits the maximum change when the motion performed deviates the most from the front position. Thus, the maximum and minimum values of the sensor are obtained by instructing the person to turn their head in each direction to the greatest extent possible.
The maximum and minimum values demonstrated in the collected data from each sensor are used as the calibrated maximum and minimum values, as shown in Fig. 4 b. Filtering Noise in the collected data is reduced by applying a moving average filter of length 120 samples. With the sampling rate of 1000 Hz, the moving average filter acts as a low-pass filter with an approximately 3.7 Hz (0.023 rad/sample) −3 dB cutoff frequency. We determined empirically that this choice of window preserves the large-scale variations of the collected data, including jumps representing the spontaneous motion of the threads, while removing the stochastic variations. Figure 5 a shows the collected raw data and Fig. 5 b demonstrates the filtering result from the raw data. Figure 5 Collecting raw data, filtering, normalizing. ( a ) Raw data ( b ) Filtered data ( c ) Data shifted down by baseline value ( d ) Separating points above and below baseline value ( e ) Normalizing separately points above and below baseline value ( f ) Normalized value ( g ) Obtained data segment 1 ( h ) Obtained data segment 2 ( i ) Obtained data segment 3 ( j ) Zoomed in normalized data. Normalization When head motion is performed, the thread sensors deviate from their original states by becoming either more stretched or more relaxed. However, from an initial relaxed position, the thread sensors tend to exhibit a greater change in impedance value when they are stretched and their length increases than when they relax and their length decreases. Due to this difference, we have found it useful to normalize the data such that the changes in impedance values associated with relaxation and elongation are better matched. This normalization also reduces the impact of the characteristic differences between individual sensors.
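As an illustration only (not the authors' code), the moving-average filtering step could be sketched in Python/NumPy as follows; the window length and sampling rate are taken from the text, while the synthetic trace is invented for the example.

```python
import numpy as np

def moving_average(x, window=120):
    """Smooth a raw impedance trace with a length-120 moving average.

    At the 1 kHz sampling rate used here, a 120-sample window acts as a
    low-pass filter with a -3 dB cutoff near 0.443 * fs / N, about 3.7 Hz.
    """
    kernel = np.ones(window) / window
    # mode="same" keeps the output aligned with the input; the edges are
    # averaged over a partially zero-padded window.
    return np.convolve(x, kernel, mode="same")

# Synthetic example: a slow head-motion ramp buried in sensor noise.
rng = np.random.default_rng(0)
t = np.arange(2000) / 1000.0                     # 2 s at 1 kHz
raw = np.interp(t, [0, 1, 2], [0.0, 1.0, 0.2])
raw = raw + 0.05 * rng.standard_normal(t.size)
smooth = moving_average(raw)
print(smooth.shape)                              # (2000,)
```

Because the window slides forward one sample at a time, the filtered trace stays time-aligned with the raw data before segmentation.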
To achieve this goal, the data for each thread are first shifted down by the baseline value, which is the value obtained through calibration as the value associated with the front position, shown in Fig. 5 c. This sets the value associated with the front position to zero. It also provides a more intuitive representation: positive values indicate that the thread is being stretched, and negative values that the thread is becoming more relaxed. Next, as demonstrated in the pink portion of Fig. 5 d,e, all positive data are scaled between 0 and 1 by treating the calibrated maximum value as 1 and the baseline value as 0; mathematically, all positive data are divided by the calibrated maximum value. Similarly, all data below the baseline value are scaled between 0 and −1 by treating the calibrated minimum value as −1 and the baseline value as 0; that is, all negative values are divided by −1 multiplied by the calibrated minimum value. This process is demonstrated as the green portion in Fig. 5 d,e. The final normalized data is shown in Fig. 5 f. Overall, this approach to signal normalization allows stretching and contracting to be accounted for equally in the data and alleviates the effects of differences in thread characteristics. Data segment Filtered and normalized incoming data are divided into overlapping segments, both as the basis for feature extraction and, ultimately, to allow for close to real-time processing. Each data segment serves as the basic unit on which feature calculation is based, and orientation classification is also assigned per segment. The segment and overlap lengths are chosen based upon the following three criteria to ensure each window provides the desired information.
1. Each window needs to be long enough to capture the key changes in the signal, such as transitions from low to high values associated with intermediate positions as the head is moving, such as front left, front right, etc.
2. Each window needs to encompass those relatively flat regions that are representative of extreme head positions; i.e., left, right, up and down.
3. Each window needs to have sufficient length so that it is not confused with the smaller variations in the signal that we hypothesize are associated with internal dynamics of the thread as it is stretched.
For the examples in this paper, each segment contains 550 data points corresponding to a time interval of 0.55 s. Adjacent segments overlap by 250 data points, or 0.25 s. Figure 5 g–i demonstrates three data segments obtained using this overlapping window approach for the enlarged portion of the incoming data displayed in Fig. 5 j. Feature definitions and extraction From each data segment, we extract a small number of features for use in classifying head position. In this paper, we use hand-crafted features determined by the first author from visual inspection of the data. Consider the time series in Fig. 6 a,b for motions in the horizontal and vertical ranges. Motions near the extreme positions (left, right, up, and down) tend to present the most negative or positive values in both sensors, while having a flatter slope. On the other hand, motions between these extreme positions, such as front left, front right, front up and front down, tend to provide intermediate values while having a much sharper slope in the signals measured by both sensors. Given the above observation, the data segment mean and the difference between minimum and maximum were used to capture these characteristics. Moreover, as shown in Fig. 6 , motions in the up and down range provide highly correlated data from the two sensors, while motions in the left and right range provide inversely correlated data. Thus the correlation coefficient was chosen to differentiate motions along the two axes. Several related features were also extracted based on the above observations.
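A minimal sketch of the normalization and overlapping segmentation just described, assuming per-sensor calibration values (`baseline`, `cal_max`, `cal_min`) have already been obtained; the helper names and the synthetic trace are ours, not the authors'.

```python
import numpy as np

def normalize(x, baseline, cal_max, cal_min):
    """Map a filtered trace so the front-facing baseline is 0, stretching
    scales into (0, 1] and relaxing into [-1, 0), per the calibration."""
    shifted = x - baseline
    pos_span = cal_max - baseline      # shifted calibrated maximum -> 1
    neg_span = baseline - cal_min      # |shifted calibrated minimum| -> -1
    return np.where(shifted >= 0, shifted / pos_span, shifted / neg_span)

def segment(x, length=550, overlap=250):
    """Divide a trace into 550-sample windows overlapping by 250 samples."""
    hop = length - overlap             # each window advances 300 samples
    starts = range(0, len(x) - length + 1, hop)
    return np.stack([x[s:s + length] for s in starts])

# Example with made-up calibration values for one sensor.
x = np.linspace(3.0, 7.0, 2000)        # synthetic filtered impedance trace
norm = normalize(x, baseline=5.0, cal_max=7.0, cal_min=3.0)
windows = segment(norm)
print(windows.shape)                   # (5, 550)
```

Each row of `windows` is one data segment; features are computed per row and a head-orientation label is assigned per segment.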
After extensive empirical testing, the addition of the difference in means was seen to provide a significant increase in prediction accuracy and thus was included. The final set of features we use is as follows: Figure 6 ( a ) Sample data collected from horizontal range motion ( b ) Sample data collected from vertical range motion.
Data segment mean—The mean of the data points within a segment. There are two such values produced for one segment, one from each sensor.
Difference in mean—The difference between the two mean values, calculated by subtracting the segment mean of the “\” sensor from the segment mean of the “/” sensor.
Correlation Coefficient—The correlation coefficient between the two sets of data points for a segment produced by the two sensors. This feature measures the level of resemblance between the data produced by the two different sensors for a certain segment.
Difference between Min and Max point within a segment—The absolute value of the maximum point minus the minimum point within a segment. Compared to the correlation coefficient, which measures the strength of the relationship between the data produced by the two sensors, this feature displays the maximum level of change a data segment exhibits.
Classifier With the features computed and plotted, nine classifiers are trained and tested using the available data. The specifications of the classifiers used are as follows:
1. Linear Support Vector Machine 31
2. Quadratic Support Vector Machine 31
3. Cubic Support Vector Machine 31
4. A Gaussian kernel support vector machine 31 with a kernel scale equal to the square root of the number of predictors (2.4 in this case)
5. Gaussian Naive Bayes 32
6. Kernel Naive Bayes 32
7. Cosine KNN 33
8. Cubic KNN 33
9. Weighted KNN with weight being the inverse of distance squared 33
The details of the training and testing process, referring to the flowchart in Fig. S2 , are as follows.
In each of 100 rounds, 75% of the generated data segments are randomly selected as training data. For each of the nine types of classifier, a classifier trained with the training data is generated. The nine generated classifiers are then tested with the remaining 25% of the segments, and the testing accuracies for the models are recorded. The cumulative testing results and averaged testing accuracy over the 100 rounds are then analyzed. Experimental result As proof of concept, a person was instructed to perform a set of motions, and the predictions from the trained classifier were compared with the actual labels to demonstrate the accuracy of the system. Setup The sensors are configured according to the above description. During the process of placing the thread sensors, the person is asked to hold an upright neck position. Before the start of the data collection process, calibration is performed, recording the values essential to later quantification. During the data collection process, the person is asked to perform a set of motions continuously. First, the person is asked to start facing front, rotate their head to the left, return to the starting position, rotate to the right, and again return to front facing. This series of motions is repeated five times. Next, the person is asked to start facing front, tilt their head down, return to the starting position, then tilt up and return to facing front. The tilting series is also performed five times. As the person is performing the instructed motions, a camera records the sequence for later labeling purposes. The collected data is filtered, normalized and divided into data segments. For each data segment, the features are calculated and stored. With reference to the video recording, each data segment is manually labeled based on the orientation classification definition. In total, 400 data segments were generated.
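For concreteness, the per-segment feature computation described above might look like the following sketch (our illustration, not the authors' implementation; the function and variable names are assumptions, and the synthetic ramps stand in for real sensor windows):

```python
import numpy as np

def segment_features(seg_a, seg_b):
    """Compute the hand-crafted features for one segment pair.

    seg_a and seg_b are the normalized windows from the "/" and "\\"
    sensors. Returns six values: the two segment means, their difference
    ("/" minus "\\"), the inter-sensor correlation coefficient, and the
    per-sensor |max - min| ranges.
    """
    mean_a, mean_b = seg_a.mean(), seg_b.mean()
    diff_mean = mean_a - mean_b
    rho = np.corrcoef(seg_a, seg_b)[0, 1]
    range_a = abs(seg_a.max() - seg_a.min())
    range_b = abs(seg_b.max() - seg_b.min())
    return np.array([mean_a, mean_b, diff_mean, rho, range_a, range_b])

# Example: inversely correlated ramps, as for a left/right rotation.
t = np.linspace(0.0, 1.0, 550)
feats = segment_features(0.5 * t, -0.5 * t)
print(feats.round(2))   # correlation is -1 for perfectly inverse signals
```

Stacking one such feature vector per segment, together with its manually assigned orientation label, yields the training matrix fed to the nine classifiers; six predictors is also consistent with the Gaussian-kernel scale of sqrt(6) ≈ 2.4 quoted earlier.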
For each of the 100 repetitions, 300 segments were randomly selected as the training data set, and the remaining 100 as testing data. Following the training and testing procedure specified in the classifier section, the testing results were recorded. Data analysis Data segment mean (µ)—In the Supplementary Information, Fig. S3 (a–d) demonstrates the data segment mean calculated for both sensors. Comparing the results for the left and right orientations, the two sensors display inverse trends, as shown in Fig. S3 (a) and Fig. S3 (c). The “/” sensor produces the lowest mean value when the individual is facing right, with values around −0.8 to −1. The mean value increases as the head turns towards the left. As seen in the plot, the front right position produces a mean between −0.8 and −0.2, the front position between −0.2 and 0.2, the front left between 0.2 and 0.6, and the left position mainly in the range of 0.4 to 1. In contrast, the “\” sensor produces the opposite trend: the right position displays the highest mean values, between 0.3 and 1, and the mean value decreases as the head turns towards the left. Despite the larger range exhibited by the front right position, of between −0.2 and 0.2, the front position produces a mean value that is primarily between −0.2 and 0.2, consistent with the “/” sensor. The front left position produces mean values between −0.8 and −0.4, where most of the data points are distinctly below the data point cluster representing the front and front right orientations. The left position has the lowest mean values, between −1 and −0.8. For the up and down motion range shown in Fig. S3 (b) and Fig. S3 (d), the two sensors demonstrate the same trend in segment mean. Both sensors associate the down position with the most negative values, which is reasonable as the threads experience the most stretching when the head is flexing down. As the head moves up, the mean values increase correspondingly.
The front position again produces values around 0; this is expected, as the normalization associates the front position with zero. Difference in means (Δµ)—The trends displayed in the segment mean results are reflected and summarized in the difference-of-means plots. As discussed above, the two sensors produce data that exhibit an inverse trend for left–right motion and a direct trend for up–down motions. As shown in Fig. 7 a, in the left and right motion range the difference of means increases as the motion moves from right to left. For the vertical motions, as shown in Fig. 7 b, the differences in mean are very close to 0, as a result of the high resemblance of the mean values in the data presented by both sensors. Figure 7 Extracted features ( a ) Difference in mean horizontal motion range ( b ) Difference in mean vertical motion range ( c ) Correlation coefficient horizontal motion range ( d ) Correlation coefficient vertical motion range. Correlation Coefficient (ρ)—Examining Fig. 7 c for the horizontal motion range, the sensors tend to generate negatively correlated data for the intermediate orientations, and positively correlated data for left, right, and front. This is the result of the inverse trend presented in the data segment mean plots, in addition to the larger change within a segment that is associated with the intermediate positions. Figure 7 d demonstrates that for the vertical range, all orientations are likely to generate positively correlated data from the two sensors. This is expected, as the data segment mean plots demonstrated highly correlated data produced by the two sensors. Distance between Min and Max point within a segment (δ)—Examining the plots for motions in both the horizontal and vertical ranges produced through the two sensors, shown in Fig. 8 a,b and Fig. S3 (e,f), we see similar trends.
Extreme and center positions, including front, right, left, up and down, tend to present a smaller level of change within a segment, reflected here in max–min distances closer to zero. In contrast, intermediate positions such as front left, front right, front down and front up tend to present a greater level of change within a segment, as data segments classified as these motions lie on the upper parts of the plot. This can be explained by the intermediate positions being associated with the rotating or flexing motion of the head, which produces a dramatic change in the data. Figure 8 Extracted features and classifier result. (a) Difference between maximum and minimum value for horizontal motion range, “/” sensor (b) Difference between maximum and minimum value for vertical motion range, “/” sensor (c) Confusion matrix of Linear SVM testing result. (d) Confusion matrix of Cubic KNN testing result. Overall results—Across the multiple trials of the testing process, all nine classifier models demonstrated high averaged testing accuracies of 89–92% over the 100 repetitions. Models such as Linear SVM and Gaussian kernel SVM demonstrated single-round testing accuracies as high as 96%. The overall averaged testing accuracy is listed in Table 1. Table 1 Averaged testing accuracy for different classifiers. The performance results from the best and worst models are as follows: Linear SVM—Linear SVM demonstrated the highest testing accuracy. Figure 8c shows the combined confusion matrix of the testing result. Most of the errors are the orientation ‘front’ confused with ‘front down’, ‘front right’ confused with ‘right’, and ‘front left’ confused with ‘left’. There are also noticeable errors where ‘front up’ is misclassified as ‘front down’. Cubic KNN—Cubic KNN demonstrated the lowest testing accuracy among the nine classifier models. Figure 8d shows the combined confusion matrix of the testing result.
In addition to clustered errors resulting from misclassifying neighboring orientations, such as ‘right’ with ‘front right’ and ‘left’ with ‘front left’, this model displays a high number of misclassifications between ‘front down’ and ‘front up’, as well as between ‘front left’ and ‘front right’. Overall, all nine models demonstrated high accuracy in predicting the testing data. Most of the errors are between neighboring orientations, such as front left misclassified as left, or right misclassified as front right. The high testing accuracy indicates that the selected features are highly distinctive and provide strong performance over a range of classification methods. Discussion A head position monitoring and classification method that demonstrates a high level of accuracy has been presented. The use of a sliding window to obtain data segments will ultimately allow for real-time data collection and processing, suggesting the method’s capability to process data and generate results concurrently as the motions are being performed. Moreover, the design is flexible in terms of the placement of the thread sensors. While the sensors are intended to be placed at the middle of the cervical spine region, slight displacement has no impact on the accuracy of the result. This was evident from the robustness of the classification in spite of manual errors in placement during test and validation. The classification errors, where they exist, are largely between neighboring orientations. In a real-life situation, when continuous segments are being collected and processed, the misclassification between neighboring orientations could be ameliorated via the development of more sophisticated, recursive tracking/filtering methods aimed at ensuring some degree of temporal “smoothness” in the estimated head orientation. The use of hand-crafted features was dictated in this study by the relatively small size of the data set at our disposal.
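The evaluation protocol described above — 100 repetitions, each with 300 randomly selected training segments and 100 test segments — can be sketched as follows. As an assumption for illustration, a simple k-nearest-neighbour classifier built on NumPy stands in for the nine MATLAB classifier models, and the feature matrix is synthetic rather than real sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(train_X, train_y, test_X, k=3):
    """Minimal k-NN classifier: Euclidean distance, majority vote."""
    preds = []
    for x in test_X:
        dist = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dist)[:k]]
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Synthetic stand-in: 400 segments x 4 features (e.g. µ_a, µ_b, Δµ, ρ), two toy classes.
X = np.vstack([rng.normal(0.5, 0.1, (200, 4)), rng.normal(-0.5, 0.1, (200, 4))])
y = np.repeat([0, 1], 200)

# 100 repetitions: 300 random training segments, the remaining 100 for testing.
accuracies = []
for _ in range(100):
    idx = rng.permutation(len(X))
    train, test = idx[:300], idx[300:]
    acc = np.mean(knn_predict(X[train], y[train], X[test]) == y[test])
    accuracies.append(acc)
mean_accuracy = float(np.mean(accuracies))
```

Averaging accuracy over repeated random splits, as done here, reduces the variance of the estimate on a small data set; swapping in a linear SVM (for example, scikit-learn's `LinearSVC`) would follow the same loop.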
Future work would involve the collection of substantially more data, which would then allow us to explore alternative feature generation methods such as deep nets. Different experimental conditions may well necessitate a different window length or perhaps a more complex, adaptive approach to windowing in the event that motions occur over multiple time scales. The thread-based sensor method for head motion classification can easily be applied to motion monitoring of other parts of the body, such as the elbow, knee or wrist. Simple to use and highly portable, the thread-based motion monitoring method offers a new possibility for a flexible, cost-effective, and accurate design. Methods The study was reviewed and approved by the Tufts University Institutional Review Board. All experiments were performed in accordance with relevant guidelines and regulations. Informed consent was obtained from all participants. | Engineers at Tufts University have created and demonstrated flexible thread-based sensors that can measure movement of the neck, providing data on the direction, angle of rotation and degree of displacement of the head. The discovery raises the potential for thin, inconspicuous tattoo-like patches that could, according to the Tufts team, measure athletic performance, monitor worker or driver fatigue, assist with physical therapy, enhance virtual reality games and systems, and improve computer-generated imagery in cinematography. The technology, described today in Scientific Reports, adds to a growing number of thread-based sensors developed by Tufts engineers that can be woven into textiles, measuring gases and chemicals in the environment or metabolites in sweat. In their experiments, the researchers placed two threads in an "X" pattern on the back of a subject's neck. Coated with an electrically conducting carbon-based ink, the sensors detect motion when the threads bend, creating strain that changes the way they conduct electricity.
When the subject performed a series of head movements, the wires sent signals to a small Bluetooth module, which then transmitted data wirelessly to a computer or smartphone for analysis. The data analysis involved sophisticated machine learning approaches to interpret the signals and translate them to quantitate head movements in real time, with 93% accuracy. In this way, the sensors and processor track motion without interference from wires, bulky devices, or limiting conditions such as the use of cameras, or confinement to a room or lab space. While algorithms will need to be specialized for each location on the body, the proof of principle demonstrates that thread sensors could be used to measure movement in other limbs, according to the researchers. The skin patches or even form-fitting clothing containing the threads could be used to track movement in settings where the measurements are most relevant, such as in the field, the workplace, or a classroom. The fact that a camera is not needed provides for additional privacy. "This is a promising demonstration of how we could make sensors that monitor our health, performance, and environment in a non-intrusive way," said Yiwen Jiang, an undergraduate student at Tufts University School of Engineering and first author of the study. "More work needs to be done to improve the sensors' scope and precision, which in this case could mean gathering data from a larger array of threads regularly spaced or arranged in a pattern, and developing algorithms that improve the quantification of articulated movement." Other types of wearable motion sensor designs have included 3-axis gyroscopes, accelerometers and magnetometers to detect movement of the subject in relation to their surroundings. Those sensors are based on inertial measurements—quantifying how the body accelerates, rotates or moves up and down—and tend to be bulkier and more inconvenient.
For example, with other systems, in order to measure head movement, it is necessary to place one sensor on the forehead and another on the neck above the vertebrae. The obtrusive placement of equipment can interfere with the subjects' free movement or simply the convenience of not being conscious of being measured. For situations such as on the athletic field, the novel thread-based sensor paradigm could be a game changer. By placing thin tattoo-like patches on different joints, an athlete could carry motion sensors to detect their physical movement and form, while thread-based sweat sensors, described in earlier work by the Tufts team, could also potentially track their electrolytes, lactate and other biological markers of performance in sweat. On the road, a thread sensor patch could alert to truck driver fatigue or other situations where tracking operator alertness is critical, monitoring the head movements of someone about to nod off. "If we can take this technology further, there could be a wide range of applications in healthcare as well," said Jiang. "For example, those researching Parkinson's disease and other neuromuscular diseases could also track movements of subjects in their normal settings and daily lives to gather data on their condition and the effectiveness of treatments." "The objective in creating thread-based sensors is to make them 'disappear' as far as the person wearing them is concerned," said Sameer Sonkusale, professor of electrical and computer engineering at Tufts' School of Engineering, director of the Tufts Nanolab, and corresponding author of the study. "Creating a coated thread capable of measuring movement is a remarkable achievement, made even more notable by the fact that Yiwen developed this invention as an undergraduate. We look forward to refining the technology and exploring its many possibilities." | 10.1038/s41598-021-81284-7
Biology | Genetic study investigates ways to increase productivity and tenderness of meat | Bárbara Silva-Vignato et al. Comparative muscle transcriptome associated with carcass traits of Nellore cattle, BMC Genomics (2017). DOI: 10.1186/s12864-017-3897-x Vinicius Henrique da Silva et al. Genome-Wide Detection of CNVs and Their Association with Meat Tenderness in Nelore Cattle, PLOS ONE (2016). DOI: 10.1371/journal.pone.0157711 Journal information: BMC Genomics , PLoS ONE | http://dx.doi.org/10.1186/s12864-017-3897-x | https://phys.org/news/2017-10-genetic-ways-productivity-tenderness-meat.html | Abstract Background Commercial cuts yield is an important trait for beef production, which affects the final value of the products, but its direct determination is a challenging procedure to be implemented in practice. The measurement of ribeye area (REA) and backfat thickness (BFT) can be used as indirect measures of meat yield. REA and BFT are important traits studied in beef cattle due to their strong implication in technological (carcass yield) and nutritional characteristics of meat products, like the degree of muscularity and total body fat. Thus, the aim of this work was to study the Longissimus dorsi muscle transcriptome of Nellore cattle, associated with REA and BFT, to find differentially expressed (DE) genes, metabolic pathways, and biological processes that may regulate these traits. Results By comparing the gene expression level between groups with extreme genomic estimated breeding values (GEBV), 101 DE genes for REA and 18 for BFT (false discovery rate, FDR 10%) were identified. Functional enrichment analysis for REA identified two KEGG pathways, MAPK (Mitogen-Activated Protein Kinase) signaling pathway and endocytosis pathway, and three biological processes, response to endoplasmic reticulum stress, cellular protein modification process, and macromolecule modification. 
The MAPK pathway is responsible for fundamental cellular processes, such as growth, differentiation, and hypertrophy. For BFT, 18 biological processes were found to be altered and were grouped into 8 clusters of semantically similar terms. The DE genes identified in the biological processes for BFT were ACHE, SRD5A1, RSAD2 and RSPO3 . RSAD2 has previously been shown to be associated with lipid droplet content and lipid biosynthesis. Conclusion In this study, we identified genes, metabolic pathways, and biological processes involved in differentiation, proliferation, protein turnover, and hypertrophy, as well as adipogenesis and lipid biosynthesis, related to REA and BFT. These results shed light on some of the molecular processes involved in muscle and fat deposition, which underlie economically important carcass traits for beef production. Background Meat is the most important source of animal protein for the human diet; it consists mainly of skeletal muscle, with varying amounts of connective tissue implicated in its qualitative and quantitative characteristics, as well as small amounts of epithelial and nervous tissue. Meat represents the edible portion of the carcass, in other words, the part destined for the final consumers, and can be represented by the yield of commercial cuts [ 1 , 2 ]. Commercial cuts yield is economically important since it affects the final value of the products due to the proportion of fat, muscle, and bone in the carcasses. The direct determination of meat yield is difficult in practice; therefore, the measures of ribeye area (REA) and backfat thickness (BFT), taken at sections of the Longissimus dorsi muscle, are often used as indirect measures of this trait [ 3 , 4 , 5 ]. REA and BFT are well-studied traits in beef cattle due to their implication in the technological and nutritional characteristics of meat products. The ribeye area is used as an indicator of the degree of muscularity, edible mass of the carcass, and yield of cuts with high commercial value.
This measure can also be associated with the length and weight of the carcass (hot carcass weight) [ 3 , 6 , 7 ]. The amount of BFT deposited on the carcass is related to the total body fat and plays a major role in beef’s flavor and juiciness, which is directly associated with production costs. In the meat industry, an adequate layer of fat acts as a thermal insulator during the carcass cooling process, avoiding problems such as cold shortening [ 8 , 9 ]. Also, the layer of fat is an important source of essential fatty acids and acts in the transport of fat-soluble vitamins, constituting a source of energy and insulation for the body of the animal [ 10 ]. Selection based on body composition, particularly on the relative proportion of muscle and fat in the carcass, is critical in meat-producing animals [ 5 , 11 ]. Most carcass traits have moderate to high heritability, indicating that selection may result in significant genetic progress [ 5 ]. According to Costa et al. [ 12 ] and Clímaco et al. [ 10 ], feedlot-finished zebu breeds may present the same proportion of edible carcass as other genotypes (crosses with taurine breeds), and even greater muscularity and higher carcass yield. Several tools have been developed to improve the accuracy of animal selection and thus improve economically important traits in beef cattle, such as large-scale genotyping platforms, high-density panels of single nucleotide polymorphisms (SNP), and genome-wide association studies (GWAS). Besides these, many studies have used RNA Sequencing (RNA-Seq) to unravel complex traits in production animals. This high-throughput technology has been successfully employed in beef cattle for traits such as muscle development, intramuscular fat, and fatty acid profile, with interesting results on the phenotypic differences within and between populations [ 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 ].
Meat quality and carcass traits are influenced by a complex network of gene interactions in the muscle [ 21 ]. Therefore, elucidating the relationships between genes, and how these genes in turn influence carcass traits, is critical for understanding the development of the animals, as well as the biological processes (BP) and metabolic pathways that may influence the final amount of fat and muscle in the carcasses. Tizioto et al. [ 22 ], working with this same population of Nellore steers, identified six QTL (quantitative trait loci) that individually explained 0.8% of the additive genetic variance of REA, and a QTL that explained 0.36% of the variation in BFT. Gomes et al. [ 23 ] reported that SNPs in genes related to protein turnover, like genes regulating the ubiquitin-proteasome system, may be associated with growth and carcass traits in cattle. Junior et al. [ 24 ] found SNP windows located on chromosomes 5, 6, 7, 8, 10, 12, 13, 16, 17, 20, 24 and 29 that together explained 8.72% and 11.38% of the genetic variance for REA and BFT in a population of Nellore cattle. Despite those studies, there are still gaps to be filled regarding the molecular mechanisms that regulate carcass traits in cattle. Thus, the aim of this work was to study the Longissimus dorsi muscle transcriptome of Nellore cattle, associated with ribeye area and backfat thickness, to find differentially expressed genes, metabolic pathways, and biological processes that may regulate these traits. The results will improve our understanding of the molecular processes involved in muscle development and fat deposition of ruminants. Results Phenotypes and sequencing data The phenotypic values of REA (cm²) and BFT (mm), animal identification, GEBVs (genomic estimated breeding values), the number of raw reads, and number and percentage of reads mapped against the Bos taurus UMD3.1 reference genome are shown in Tables 1 and 2 .
The heritability values for REA and BFT were 0.22 and 0.20, respectively [ 22 ]. There was no difference between the REA and BFT groups regarding intramuscular fat content; likewise, the animals selected for REA were not significantly different for BFT, and vice versa (Additional file 1 : Table S1). The correlation between REA and BFT in the sample of animals with contrasting GEBV ( n = 22) tended to be low ( r = −0.14). Table 1 Phenotypic data, GEBV, the number of raw reads, number of reads after cleaning, and number and percentage of mapped reads for High and Low groups of ribeye area (REA) Table 2 Phenotypic data, GEBV, the number of raw reads, number of reads after cleaning, and number and percentage of mapped reads for High and Low groups of backfat thickness (BFT) The choice of GEBV to select animals within extreme groups was made following Meuwissen et al. [ 25 ] and Sosnicki and Newman [ 26 ], who emphasized the importance of choosing genomic values as a vehicle to incorporate molecular information into selection programs. Also, the correlation between the GEBV and REA phenotypic values was high, r = 0.93. The same occurred for BFT, with a GEBV and BFT correlation of r = 0.90. On average, 76.34% of total paired reads aligned against the reference genome. After filtering, 18,468 and 18,411 genes were used for differential expression analysis, for REA and BFT, respectively. Differential expression analysis Differential gene expression analysis between High and Low groups was conducted with the DESeq2 package in R. DESeq2 uses statistical models based on a negative binomial distribution and is widely used to analyze RNA-Seq data since it allows more flexibility in modeling variation between samples [ 27 ]. One hundred and one differentially expressed (DE) genes were identified (false discovery rate, FDR 10%) between the HighREA and LowREA groups, 72 down-regulated and 29 up-regulated in the LowREA group.
For BFT, 18 DE genes (FDR 10%) were identified, of which 13 were up-regulated and 5 were down-regulated in the LowBFT group. Figures 1 and 2 show volcano plots of log2FoldChange (x-axis) versus −log10 p value (FDR-corrected, y-axis) for REA and BFT, respectively. The gene annotation, log2FoldChange, adjusted p value and p value of the down- and up-regulated genes for REA and BFT can be found in Additional file 2 : Table S2 and Additional file 3 : Table S3, respectively. Fig. 1 Volcano plot of log2FoldChange (x-axis) versus −log10 p value (FDR-corrected, y-axis) of high and low genomic breeding value groups for ribeye area in Nellore steers with FDR 10% Fig. 2 Volcano plot of log2FoldChange (x-axis) versus −log10 p value (FDR-corrected, y-axis) of high and low genomic breeding value groups for backfat thickness in Nellore steers with FDR 10% Functional enrichment analysis The functional enrichment analysis performed by DAVID (Database for Annotation, Visualization and Integrated Discovery) software identified two KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways ( p value <0.1) for REA: the MAPK (Mitogen-Activated Protein Kinase) signaling pathway (bta04010) and the endocytosis pathway (bta04144). The DE genes enriched for the MAPK pathway were MAX and PPM1B, down-regulated, and ARRB2, PTPRR and STMN1, up-regulated in the LowREA group. For the endocytosis pathway, the enriched genes were NEDD4 and NEDD4L, down-regulated, and CHMP4A and ARRB2, up-regulated in the LowREA group. In the enrichment analysis performed by BINGO (Biological Networks Gene Ontology) software, three significant biological processes (FDR 5%) were identified for REA: response to endoplasmic reticulum stress (GO: 0034976), cellular protein modification process (GO: 0006464) and macromolecule modification (GO: 0043412). These BP can be seen in Table 3 . Redundant terms were not found by REVIGO (Reduce + Visualize Gene Ontology).
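The FDR thresholds quoted above (10% for DE genes, 5% for GO terms) correspond to Benjamini–Hochberg adjusted p values, the quantity DESeq2 reports as padj. A minimal NumPy sketch of the adjustment, using synthetic p values for illustration:

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini–Hochberg adjusted p values for a vector of raw p values."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)   # p_(i) * n / i
    # Enforce monotonicity from the largest rank downwards, then cap at 1.
    adj = np.minimum(np.minimum.accumulate(scaled[::-1])[::-1], 1.0)
    out = np.empty(n)
    out[order] = adj
    return out

# Synthetic raw p values for ten hypothetical genes.
pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.07, 0.074, 0.205, 0.212, 0.216])
padj = bh_adjust(pvals)
de_at_10pct = padj < 0.10          # genes called DE at FDR 10%
```

With these numbers, five of the ten genes survive the 10% threshold; a volcano plot such as Figs. 1 and 2 then simply plots log2FoldChange against −log10 of these FDR-corrected p values.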
Table 3 Significant biological processes (FDR 5%) identified by BINGO comparing high and low genomic breeding value groups for ribeye area For BFT, the functional enrichment analysis performed by DAVID identified five biological processes ( p value <0.1) (Additional file 4 : Table S4). The DE genes identified in these BP were IDO1 and ACHE, down- and up-regulated in the LowBFT group, respectively. The second enrichment analysis, performed by BINGO software, identified 18 significant biological processes (FDR 5%), grouped into eight clusters of semantically similar Gene Ontology (GO) terms by REVIGO (Table 4 ). The DE genes identified in these BP were ACHE and SRD5A1, down-regulated, and RSAD2 and RSPO3, up-regulated in the LowBFT group. Table 4 Significant biological processes (FDR 5%) identified by BINGO comparing high and low genomic breeding value groups for backfat thickness Discussion Understanding how growth and development work may provide elements for increasing the profit and quality of meat production [ 28 , 29 ]. Growth in livestock occurs mainly as a function of the deposition of muscle and adipose tissue in the animal’s body [ 23 ]. As mentioned before, the ribeye area is a direct indicator of the animal’s muscular development and has been used to predict the amount of lean meat in the carcass. On the other hand, the backfat thickness is used as an indirect indicator of meat in the carcass and is very important for predicting the animal’s total body fat [ 3 , 4 , 5 , 30 , 31 ]. There are several biological processes involved in animal and muscle growth, such as the coordinated expression of many transcription factors (myogenic regulatory factors), genes and metabolic pathways, from embryonic and fetal development until the animals approach maturity. Most of the change in muscle weight during embryonic and fetal development is due to hyperplasia, the increase in the number of muscle fibers.
The postnatal stage of muscle growth (hypertrophy) consists of an increase in the size of existing fibers. Both processes can be regulated by genetic factors, growth factors (insulin-like growth factors), hormones, and even environmental factors (mainly nutrition) acting as positive or negative regulators of the animal’s growth [ 29 , 32 , 33 ]. Furthermore, the processes of protein synthesis and degradation, also called protein turnover, affect muscle growth rates and can consequently alter carcass traits in beef cattle [ 23 , 29 , 34 , 35 ]. During muscle hypertrophy, there is a balance between protein synthesis and degradation that may result in protein deposition, and therefore muscle growth [ 32 ]. Altogether, these processes will lead to differences in muscle and fat deposition, and hence to animals with different proportions of REA and BFT. Ribeye area The enrichment analysis performed by DAVID identified two KEGG pathways. The first one was the MAPK pathway, which is responsible for the transduction of extracellular signals to their intracellular targets in various cell types, including skeletal muscle cells. This pathway acts in the control of fundamental cellular processes, such as proliferation, growth, migration, differentiation, apoptosis and, specifically in muscle cells, hypertrophy [ 36 , 37 , 38 ]. According to Noordman, Jansen and Hendriks [ 39 ], the MAPK pathway is the main mechanism used by growth factors in processes such as cell proliferation and differentiation. When activated, MAPKs phosphorylate several intracellular targets, which include numerous transcription factors, resulting in the reprogramming of gene expression and cell division [ 40 ]. Their activity is regulated by autophosphorylation or by phosphorylation by other kinases. On the other hand, inactivation occurs by dephosphorylation, which can be initiated by protein tyrosine phosphatases (PTPs) and metal-dependent protein phosphatases (PPMs) [ 39 , 41 , 42 ].
The PTPRR gene (protein tyrosine phosphatase, receptor type R), up-regulated in the LowREA group, can act by regulating the dephosphorylation of MAPKs and thereby inhibiting cellular processes of proliferation and differentiation [ 43 , 44 ]. Li et al. [ 42 ], working with PTPRR expression in the mouse hippocampus, verified that greater expression of this gene led to an increase in MAPK dephosphorylation and consequently to neuronal apoptosis and a decrease in cellular proliferation, showing that this gene may act to inhibit the MAPK pathway in the LowREA group. The PPM1B (protein tyrosine phosphatase, Mg²⁺/Mn²⁺ dependent 1B) gene, down-regulated in the LowREA group, encodes a protein of the PPM family and acts in MAPK pathway dephosphorylation. Wei and Liang [ 41 ] identified a negative correlation between PPM1B and muscle atrophy; that is, PPM1B expression gradually decreased as muscle atrophy increased. The second pathway found in the present study was endocytosis, which is fundamental for eukaryotic cells and is highly conserved between species and cell types. Endocytosis acts in the regulation of several processes, like cell adhesion and migration, extracellular signal transduction, and cell growth and differentiation [ 45 ]. Junior et al. [ 24 ] also found genes involved in cell cycle regulation and the transportation of cellular substances associated with REA in Nellore cattle. Four genes were enriched in the endocytosis pathway: CHMP4A, ARRB2, NEDD4 and NEDD4L. Among them, NEDD4 (neural precursor cell expressed, developmentally down-regulated 4, E3 ubiquitin protein ligase) and NEDD4L (neural precursor cell expressed, developmentally down-regulated 4-like, E3 ubiquitin protein ligase), also known as NEDD4-2 , encode ubiquitin protein ligases belonging to the Nedd4 family. Among their functions, they may aid protein internalization into cells [ 46 , 47 , 48 ].
In addition to the protein internalization function, NEDD4 is required for cell surface expression of the IGF-1R (insulin-like growth factor, type 1 receptor) and the insulin receptor, and is a positive regulator of IGF-1 (insulin-like growth factor, type 1) and insulin signaling [ 46 , 47 ]. In mammals, the insulin-like growth factor (IGF) axis is the principal fetal and postnatal growth regulator and is strongly related to muscle differentiation [ 32 , 49 , 50 ]. Studies with knockout mice for the NEDD4 gene showed that loss of NEDD4 reduced IGF-1 and insulin signaling, delayed embryonic development, and reduced growth and body weight [ 46 ]. Junior et al. [ 24 ], in a GWAS of REA, BFT and hot carcass weight, found this gene enriched for the GO terms “cellular protein metabolic process” and “protein metabolic process”, related to protein turnover and, consequently, animal growth and development. In the present study, NEDD4 and NEDD4L were down-regulated in the LowREA group, emphasizing their importance in regulating muscle growth. ARRB2 (arrestin β-2), identified in both pathways – endocytosis and MAPK – was up-regulated in the LowREA group. β-arrestins are ubiquitously expressed multifunctional signaling molecules that act as endocytosis regulators for different types of cell surface receptors [ 51 , 52 ]. According to Luttrell and Lefkowitz [ 51 ], β-arrestins can serve as scaffold proteins for MAPK pathway proteins. Additionally, Yan et al. [ 53 ] showed the involvement of β-arrestin 2 in the activation of the MAPK pathway. Analysis with BINGO software identified three biological processes: response to endoplasmic reticulum stress (GO: 0034976), cellular protein modification process (GO: 0006464) and macromolecule modification (GO: 0043412) (Table 3 ). Among the genes identified in these BP, AMFR (autocrine motility factor receptor), a gene down-regulated in the LowREA group, appears in all of them.
AMFR – also known as gp78 – encodes a RING (Really Interesting New Gene) class E3 ubiquitin protein ligase that is involved in the mechanism of protein quality control, eliminating misfolded proteins from the endoplasmic reticulum of eukaryotic cells [ 54 , 55 , 56 ]. The endoplasmic reticulum (ER) is a ubiquitous multifunctional organelle that ensures correct protein folding and plays a key role in lipid and sterol synthesis and in intracellular calcium maintenance [ 57 ]. ER stress can occur due to perturbations in its homeostasis, such as chemical damage, gene mutations, nutrient insufficiency, cell differentiation, oxidative stress, and fluctuations in calcium concentrations, leading to changes in protein structure and resulting in the accumulation of misfolded proteins in the ER lumen [ 58 , 59 ]. ER stress can alter gene expression and cause post-transcriptional modifications, change cell physiology, and even induce cell apoptosis [ 57 , 60 , 61 ]. According to Nakanishi, Sudo and Morishima [ 61 ] and Nakanishi, Dohmae and Morishima [ 62 ], the ER response to stress related to the induction of apoptosis may be favorable for myogenesis. Nakanishi, Dohmae and Morishima [ 62 ], working with mouse myoblast cells (C2C12), demonstrated that apoptosis induced by ER stress controls the differentiation of myoblasts, so that only cells resistant to apoptosis undergo terminal differentiation to form muscle tissue, improving myoblast quality. The other two biological processes identified in the present study – the cellular protein and macromolecule modification processes – are intrinsically related, since proteins can be classified as macromolecules. According to Cantin and Yates III [ 63 ], most proteins need to undergo modifications to carry out their activities or become biologically active, and these changes are called post-translational modifications (PTM).
PTMs are chemical changes that modify the protein structure reversibly or irreversibly, through proteolytic cleavage or covalent modifications of specific amino acid residues [ 64 , 65 , 66 ]. According to Blom et al. [ 65 ] and Zou and Blank [ 67 ], phosphorylation is the primary protein modifier and is considered a key event in several signal transduction cascades, such as the MAPK pathway. As discussed previously, PTPRR and PPM1B , identified in the MAPK pathway, were also identified in the macromolecule modification process. These two genes encode protein phosphatases that can act by dephosphorylating, and thus decreasing the activity of, MAPK proteins [ 39 , 41 , 42 ]. Another gene found in the macromolecule modification process that encodes a protein phosphatase is EYA2 (EYA transcriptional coactivator and phosphatase 2). Unlike PTPRR and PPM1B , this gene acts as a transcription factor inducing myogenic regulatory factors, such as MEF3 and MYOG, and also has important roles in differentiated muscle cells [ 68 , 69 ]. In our findings, EYA2 was down-regulated in the LowREA group, showing its importance as a positive regulator of muscle growth. Another common PTM is ubiquitination, which acts by directing short-lived proteins to the proteasome degradation pathway [ 34 , 54 , 55 , 56 ]. This ubiquitin-dependent proteolysis ensures the protein turnover that is essential to cell survival [ 70 ]. In addition, the ubiquitination process also functions in cellular processes like signal transduction, enzymatic activation, endocytosis, molecular trafficking, chromatin rearrangement and DNA repair [ 71 ]. Among the genes identified in the cellular protein modification and macromolecule modification processes that encode ubiquitin proteins, NEDD4 and NEDD4L have already had their functions in muscle discussed here.
The UBE4A (ubiquitin conjugation factor E4 A), down-regulated in the LowREA group, is an important and ubiquitously expressed gene for the ubiquitination process that may also participate in growth and differentiation processes [ 72 ]. Gomes et al. [ 23 ] have already reported that genes related to protein turnover were associated with growth and carcass traits in Nellore cattle. Later, Junior et al. [ 24 ] also found GO terms related to protein turnover in an association study with REA, BFT and hot carcass weight in Nellore. These findings show the important role of ubiquitination and ubiquitin proteins in muscle growth and, consequently, in improving REA in the animals. Backfat thickness The enrichment analysis performed by the BINGO software identified 18 biological processes (FDR 5%), which were grouped into eight clusters of semantically similar GO terms by REVIGO (Table 4 ). The acetylcholinesterase gene ( ACHE ), present in four similarity clusters and up-regulated in the LowBFT group, encodes an essential component of the neuromuscular junction (NMJ). This enzyme is highly conserved in mammals and appears in multiple molecular forms, which originate from the alternative splicing of the ACHE gene [ 73 , 74 ]. ACHE expression in muscle is regulated by transcriptional and post-transcriptional events, with an increased expression in the early stages of myogenic differentiation, reaching a plateau when the myotubes are mature [ 74 , 75 ]. Mouisel et al. [ 76 ] reported a loss of muscle weight in the hind limb muscles of ACHE knockout mice. Soysal et al. [ 77 ] concluded that acetylcholinesterase inhibitors could cause weight loss and change muscle mass index in elderly people. Despite not showing a direct relation with BFT, it is clear that this gene has a role during the animals' muscle growth. The SRD5A1 gene (steroid-5-alpha-reductase, alpha polypeptide 1), also known as 5α-reductase type 1 , was identified in the androgen biosynthetic process (GO:0006702). 
Androgens, such as testosterone, play a critical role in muscle, increasing protein synthesis and energy metabolism, and promoting growth and muscle strength increase [ 78 , 79 ]. Ferrando et al. [ 79 ] hypothesized that testosterone might stimulate IGF-1 release in muscle tissue. Although skeletal muscle can synthesize and metabolize testosterone, this action on target organs often requires its metabolic conversion to one or more active products [ 80 ], such as DHT (dihydrotestosterone) metabolized by the 5α-reductase enzyme. DHT is one of the most potent natural androgens because of its high affinity for androgen receptors; it has several physiological effects on skeletal muscle, like activation of signaling pathways and anabolic action in protein synthesis, as well as the maintenance of muscle homeostasis [ 81 , 82 , 83 , 84 ]. Several studies found an association of SRD5A1 and its corresponding protein (5α-reductase) with muscle weight and strength [ 83 , 84 , 85 , 86 ]. Sato et al. [ 86 ] identified a positive correlation of 5α-reductase protein with the cross-sectional area of quadriceps femoralis muscle in humans. Although not DE for REA, SRD5A1 was up-regulated in the LowBFT group, that is, the group that presented a higher proportion of muscle mass represented by higher values of REA (Table 2 ). In contrast, Sun et al. [ 87 ] studying putative target genes for miRn25 and n26, highly expressed miRNAs in bovine backfat thickness, identified SRD5A1 related to lipid synthesis in adult animals. The R-spondin 3 gene ( RSPO3 ), down-regulated in the LowBFT group, encodes a member of a protein family widely recognized as an agonist of the canonical Wnt signaling pathway (or Wnt/β-catenin pathway) [ 88 , 89 , 90 , 91 ], one of the biological processes found here enriched for this gene (GO:0060070). This pathway plays an essential role during embryonic muscle development and in skeletal muscle homeostasis during adulthood [ 89 , 90 , 92 ]. 
This pathway is also an important regulator of adipocyte differentiation [ 93 , 94 ]. Han et al. [ 88 ] studied the role of RSPOs (r-spondins) during myogenic differentiation using primary satellite cells and mouse C2C12 myoblast cells. The authors observed that silencing RSPO2 and RSPO3 significantly affected Myf5 expression, the rate of myogenic differentiation, and myotube formation in mouse muscle cells. The authors also found that RSPOs can act via the canonical Wnt signaling pathway in the positive regulation of myogenesis in skeletal muscle. Li et al. [ 93 ], studying porcine adipose-derived mesenchymal stem cells, found that the activation of the Wnt signaling pathway suppressed mRNA and protein expression of the adipocyte-specific genes C/EBPα (CCAAT/enhancer-binding protein-α) and PPARγ (peroxisome proliferator-activated receptor-γ), inhibiting adipogenesis in these cells. Chen et al. [ 94 ] also found that the Wnt signaling pathway may inhibit adipogenic differentiation in porcine intramuscular preadipocytes. Thus, even though this gene was down-regulated in the LowBFT group, it likely acted as a negative regulator of BFT in the animals. The RSAD2 gene (radical S-adenosyl methionine domain containing 2), found in the defense response to virus process (GO:0051607), is a type I interferon response gene that has been used in the clinical prediction of some diseases and is also related to skeletal muscle myopathies caused by inflammatory cytokines [ 95 , 96 , 97 ]. Wei et al. [ 98 ], working with growing pigs fed a linseed-enriched diet, found RSAD2 up-regulated in the Longissimus dorsi muscle of treated animals. Dogan et al. [ 99 ], studying the structure and composition of mouse fat, muscle, and liver, reported that RSAD2 may act as a modulator of lipid droplet content and lipid biosynthesis in adipose tissue. These findings are consistent with the present work, in which the RSAD2 gene was down-regulated in the LowBFT group. 
Conclusions Our results emphasize the complexity of gene regulation in the Longissimus dorsi muscle of Nellore cattle associated with REA and BFT. We identified 101 DE genes in the extreme GEBV groups for REA. These genes were enriched for metabolic pathways and biological processes mostly involved in differentiation, the proliferation of muscle cells, protein turnover and hypertrophy, such as the MAPK pathway, cellular protein, and macromolecule modification processes. For BFT, we identified 18 DE genes involved in biological processes that may regulate positively or negatively adipogenesis, lipid biosynthesis and muscle growth. These results might help us to enlighten the molecular processes involved in muscle and fat deposition, which are economically important carcass traits for beef production. Methods Animals, samples, and phenotypes Three hundred eighty-five (385) Nellore steers from Embrapa (Brazilian Agricultural Research Corporation) breeding herd, raised between 2009 and 2011, were included in this study. To breed this herd, 34 unrelated bulls were selected representing the main breeding lineages used in Brazil based on the information of the National Summary of Nellore produced by the Brazilian Association of Zebu Breeders (ABCZ) and the National Research Center for Beef Cattle. The animals were raised in grazing systems, under the same conditions of handling and nutrition until 21 months of age when they were taken to feedlots. All animals were slaughtered at an average age of 25 months. The slaughter was carried out in a commercial slaughterhouse located in the city of Bariri (São Paulo), under the supervision of the Federal Inspection Service (SIF) and within the standards established by the Brazilian Ministry of Agriculture, Livestock and Food Supply (MAPA), for more details see [ 100 ]. 
At the time of slaughter, approximately 5 g of the Longissimus dorsi muscle was collected between the 12th and 13th ribs (right half carcass) and stored in liquid nitrogen. Twenty-four hours after slaughter, steaks corresponding to a cross-section of the Longissimus dorsi muscle between the 12th and 13th ribs (left half carcass) were sampled with bone and transported to the laboratory of Embrapa Pecuária Sudeste (São Carlos, SP), where REA and BFT were measured. REA was measured with a grid and the BFT with a graduated ruler. The genomic estimated breeding values (GEBV) were obtained with the GenSel program [ 101 ], which uses a Bayesian methodology. The a priori values of genetic and residual variance were obtained from a Bayes C analysis in which the a priori genetic and residual variances were set equal to 1 [ 102 ]. Using the estimated a priori values, a new Bayes C analysis was performed to obtain GEBVs for each animal. The SNP marker information was obtained as described by Cesar et al. [ 103 ] using the BovineHD 770 k BeadChip (Infinium BeadChip, Illumina, San Diego, CA, USA). For BFT, 384 animals were used in the GEBV estimation. The animals were separated into two groups of six animals each (High and Low), based on the extreme values of GEBVs for each of the two traits. Of the 12 animals selected for each trait, two were common to both traits (Additional file 5 : Table S5). A Student's t-test was performed to verify the difference in REA and BFT levels between the High and Low groups. The phenotypic values of intramuscular fat content were also included in this test to ascertain that these animals were not significantly different for this trait. The phenotypic correlation between REA and BFT using the selected animals ( n = 22) was estimated with the R software. 
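The High/Low comparison above can be sketched in a few lines. This is an illustrative pooled-variance Student's t statistic computed on hypothetical REA values (not data from the study), not the actual analysis script:

```python
import math
import statistics as st

# Sketch of the High- vs Low-GEBV group comparison with a pooled-variance
# Student's t statistic; the REA values below are hypothetical examples,
# not measurements from the study (which used 6 animals per group).
def two_sample_t(a, b):
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

high_rea = [82.1, 79.4, 80.8, 83.0, 78.9, 81.5]  # hypothetical REA, cm^2
low_rea = [61.2, 63.8, 60.4, 64.1, 62.7, 59.9]

t = two_sample_t(high_rea, low_rea)
# With 6 animals per group (df = 10), |t| > 2.228 rejects equality of means
# at the 5% level; the same pattern applies to BFT, and to checking that
# intramuscular fat does NOT differ between the groups.
print(round(t, 2))
```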
RNA extraction, quality analysis, library preparation and sequencing The total RNA was extracted from 100 mg of frozen Longissimus dorsi muscle collected at slaughter using the TRIzol reagent (Life Technologies, Carlsbad, CA, USA), following the manufacturer's instructions. At the end of the extraction process, RNA integrity was verified by the Bioanalyzer 2100 (Agilent, Santa Clara, CA, USA). The mean RIN (RNA integrity number) of all samples was 7.75. For library preparation, 2 μg of RNA from each sample was used, according to the protocol described in the TruSeq RNA Sample Preparation kit v2 guide (Illumina, San Diego, CA, USA). The libraries were quantified by quantitative PCR using the KAPA Library Quantification kit (KAPA Biosystems, Foster City, CA, USA) and the average library size was estimated using the Bioanalyzer 2100 (Agilent, Santa Clara, CA, USA). After quantification, the samples were diluted and pooled into three pools of six samples each. Three lanes of a sequencing flowcell were clustered using the TruSeq PE Cluster kit v3-cBot-HS (Illumina, San Diego, CA, USA). They were sequenced on the HiSeq2500 ultra-high throughput sequencing system (Illumina, San Diego, CA, USA) using the TruSeq SBS kit v3-HS (200 cycles), according to the manufacturer's instructions. All sequencing analyses were performed at the Genomics Center at the College of Agriculture "Luiz de Queiroz" of the University of São Paulo. Quality control and read alignment The adapter sequences and low complexity reads were removed in an initial data-filtering step using the SeqClean software. The FastQC software was used to analyze the quality of raw reads. The Tophat version 2.1.0 software [ 104 ] was used to map the reads against the UMD3.1 Bos taurus reference genome. Read counts (mRNA abundance) for all mapped genes were calculated using the HTSeq version 0.6.1 software [ 105 ]. 
Only read sequences that uniquely mapped to known chromosomes were used in this study. Identification and annotation of differentially expressed genes Differentially expressed genes were identified using the DESeq2 package of R [ 106 ]. Before the statistical analysis was performed, the read count data were filtered based on previous studies [ 16 , 17 ], as follows: i) genes with zero counts were removed (not expressed); ii) genes with less than one read per sample on average were removed (very low expression level); iii) genes that did not appear in at least three samples were removed (rarely expressed). After filtering, a total of 18,468 genes for REA and 18,411 for BFT were analyzed for differential expression employing the "nbinomWaldTest" function of DESeq2, which models the gene expression level with a negative binomial distribution. Exploratory plots were made to check the dispersion estimates (Additional file 6 : Figures S1 and S2, and Additional file 7 : Figures S3 and S4). The Benjamini and Hochberg [ 107 ] methodology was used to control the false discovery rate (FDR) at 10%. The DE genes were annotated with the online tool BioMart from Ensembl. Genes that lacked annotation information were annotated using the NCBI (National Center for Biotechnology Information) and Panther databases. Functional enrichment analysis The functional enrichment analysis of DE genes (FDR 10%) for KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways was carried out with the online tool DAVID (Database for Annotation, Visualization and Integrated Discovery) version 6.7 [ 108 ]. Also, another enrichment analysis was performed to identify biological processes related to the DE genes, using BINGO (Biological Networks Gene Ontology) version 3.0.3 [ 109 ], a Cytoscape [ 110 ] version 3.4.0 app. BINGO is a free-use tool that determines GO terms that are over-represented in a set of genes using the hypergeometric test as a statistical test. 
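The three count-filtering rules and the FDR control described above can be sketched as follows (an illustrative Python analogue with a simulated count matrix; the study itself performed these steps with DESeq2 in R):

```python
import numpy as np

# Hypothetical count matrix: 1,000 genes x 12 samples (6 High + 6 Low).
rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(1000, 12))
counts[:50] = 0  # force some never-expressed genes for illustration

keep = (
    (counts.sum(axis=1) > 0)            # (i)   drop zero-count genes
    & (counts.mean(axis=1) >= 1)        # (ii)  >= 1 read per sample on average
    & ((counts > 0).sum(axis=1) >= 3)   # (iii) expressed in >= 3 samples
)
filtered = counts[keep]

def benjamini_hochberg(pvals, alpha):
    """Boolean mask of hypotheses rejected at the given FDR level
    (Benjamini-Hochberg step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / np.arange(1, len(p) + 1)
    passed = scaled <= alpha
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(len(p), dtype=bool)
    mask[order[:k]] = True
    return mask

# E.g., at FDR 10% the three smallest of these p-values are called DE:
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5], alpha=0.10))
```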
BPs presenting FDR ≤ 5% [ 107 ] were considered significant. Lastly, REVIGO (Reduce + Visualize Gene Ontology), an algorithm that summarizes long lists of GO terms, was used to remove redundant GO terms. REVIGO performs a simple clustering procedure, based on semantic similarity measures, that finds a representative subset of GO terms [ 111 ]. Abbreviations BFT: Backfat thickness; BINGO: Biological Networks Gene Ontology; BP: Biological process; DAVID: Database for Annotation, Visualization and Integrated Discovery; DE: Differentially expressed; DHT: Dihydrotestosterone; ER: Endoplasmic reticulum; FDR: False discovery rate; GEBV: Genomic estimated breeding value; GO: Gene ontology; GWAS: Genome-wide association study; IGF: Insulin-like growth factor; KEGG: Kyoto Encyclopedia of Genes and Genomes; MAPK: Mitogen-activated protein kinase; NMJ: Neuromuscular junction; PPM: Metal-dependent protein phosphatase; PTM: Post-translational modification; PTP: Protein tyrosine phosphatase; QTL: Quantitative trait loci; REA: Ribeye area; REVIGO: Reduce + Visualize Gene Ontology; RSPOs: R-spondins; SNP: Single nucleotide polymorphism. | Brazil has the world's largest commercial beef herd, numbering over 225 million, yet only 20 percent of Brazil's production is intended for export. Because of this, beef ranks 10th on the list of products exported by Brazil, after soybeans, iron ore, oil, sugarcane, automobiles, chicken, cellulose, soybean meal and coffee. Experts suggest that exports could be much higher if Brazilian beef were of a quality similar to that produced in Australia, Argentina or Uruguay. In addition to obtaining more tender meat, another goal of research is to improve productivity. To achieve this, genetic improvement techniques are used to breed animals that gain weight more quickly or that better resist disease. "Brazil is the world's second largest producer of beef, in an industry whose 2016 revenue exceeded US$ 5 billion. 
The cost of producing Brazilian beef is one of the lowest in the world, but in order to expand market importance, beef production needs to adapt to the standards established by importers. Among quality concerns, tenderness, the amount and the type of marbling can influence the sensory characteristics and nutritional value of the beef," said Luciana Correia de Almeida Regitano, a researcher at the Brazilian Agricultural Research Corporation - EMBRAPA and faculty member of the Federal University of São Carlos (in São Paulo, Brazil). Regitano explained that in Brazil, over 80 percent of beef cattle is Nelore. The breed, of the subspecies Bos taurus indicus, originated in India, but it does not produce meat that is as tender as Angus (Bos taurus taurus), which originated in Europe and presents more marbling. "Nelore cattle weigh less and have lower productivity and less tender meat. As a result, their price is lower," she said. Regitano leads a follow-up to a previous study that enabled the identification of genomic areas associated with the beef's production and quality characteristics. "We initially bred approximately 800 animals for three years on five farms, from birth to slaughter, measuring the phenotypic characteristics such as growth, production, meat quality and feed efficiency. We also collected samples of DNA and subjected them to high-density genotyping to determine specific mutations that cover the whole genome. We analyzed more than 700,000 specific mutations dispersed throughout the genome, which provided us with data on the segregation of chromosomal segments," she said. "In 2012, we had another project approved by FAPESP that allowed us to expand the initial study and analyze tissue samples to determine the complete sequence of messenger RNAs and all microRNAs found in the tissue. MicroRNAs are small RNA molecules that modulate how messenger RNAs will be translated into proteins," Regitano explained. 
The proteins from the tissue samples collected were also analyzed. "We were able to obtain a data set on 45 different phenotypes and sequenced the messenger RNA and microRNA of 200 muscle samples and 30 liver samples, in addition to analyzing proteins from 65 muscle samples. With support from the CNPq, we were also able to sequence the complete genome, each nucleotide of the genome, of 20 animals. This provided a very important set of data from which we have obtained multiple layers of genomic information on a single animal," she said. Genomic tools "Among the objectives of our current study are complementing the analyses of genomic association to include new phenotypes, integrating the analyses of the functional genome (RNA, microRNA and proteins), assessing the significance of the copy number variation (CNV) and training human resources in the fields of bioinformatics and genomics," Regitano explained. Copy number variation is defined as a type of genomic structural variation that includes amplifications and losses of a specific region, which could involve and possibly be one or more complete genes. "We are studying how these alterations could influence the expression of all the genes in muscle as well as the phenotypes assessed," the researcher asserted. The study led to the publication of a series of articles that are expanding knowledge of the possibilities for using genomic tools that can lead to gains in quality and productivity in beef production. In an article published in the journal BMC Genomics, Regitano and her group identified genes, metabolic pathways and biological processes involved in the differentiation, proliferation, protein conversion, hypertrophy and synthesis of lipids related to the area of the loin eye, a measurement that correlates with the muscularity of the animal and the thickness of its subcutaneous fat, both characteristics that have a direct impact on carcass quality and productivity. 
The results of the research point to molecular processes related to the deposition of muscle and fat, which are economically important characteristics for beef production. In another article, published in the journal PLOS ONE in 2016, Regitano and her colleagues described a broad genomic analysis of CNVs obtained from 723 bulls. The researchers identified 2,600 regions that represent nearly 6.5 percent of the complete genome of the animals. The results represent the first comprehensive study of the copy number variations of Nelore cattle, with the identification of regions in which genetic alterations could have important implications in breeding animals whose meat is more tender, thus improving its potential for export and higher prices. | 10.1186/s12864-017-3897-x |
Computer | Optical concentrator could help solar arrays capture more light, even on a cloudy day, without tracking the sun | Nina Vaidya et al, Immersion graded index optics: theory, design, and prototypes, Microsystems & Nanoengineering (2022). DOI: 10.1038/s41378-022-00377-z | https://dx.doi.org/10.1038/s41378-022-00377-z | https://techxplore.com/news/2022-06-optical-solar-arrays-capture-cloudy.html | Abstract Immersion optics enable creation of systems with improved optical concentration and coupling by taking advantage of the fact that the luminance of light is proportional to the square of the refractive index in a lossless optical system. Immersion graded index optical concentrators, that do not need to track the source, are described in terms of theory, simulations, and experiments. We introduce a generalized design guide equation which follows the Pareto function and can be used to create various immersion graded index optics depending on the application requirements of concentration, refractive index, height, and efficiency. We present glass and polymer fabrication techniques for creating broadband transparent graded index materials with large refractive index ranges, (refractive index ratio)² of ~2, going many fold beyond what is seen in nature or the optics industry. The prototypes demonstrate 3x optical concentration with over 90% efficiency. We report via functional prototypes that graded-index-lens concentrators perform close to the theoretical maximum limit and we introduce simple, inexpensive, design-flexible, and scalable fabrication techniques for their implementation. Introduction Harnessing the plentiful solar energy reaching the earth via photovoltaics will play a major role in satisfying our future energy needs in a sustainable way. 
One promising approach is concentrated photovoltaics 1 , and several ways to achieve concentration are being used in the field 2 , 3 : Fresnel lenses 4 , 5 , mirrors 6 , 7 , parabolic concentrators 8 , 9 , secondary high-index optics 10 , waveguides 11 , 12 , 13 , immersion lenses 14 , surface nanotexturing 15 . The majority of these require active tracking of the Sun as they have to face the light source within a few degrees. Some of the above are passive concentrators, i.e., they do not need to track the Sun; however, they still offer modest acceptance angles that do not span the available 2π steradians. We present a passive concentrator device, AGILE (Axially Graded Index LEns) 16 . Henceforth, this immersion graded index optical concentrator is referred to as AGILE in the manuscript. AGILE does not need solar tracking and follows the cosine projection maximum limit, concentrating light incident on it from all angles. The AGILE allows for non-pointing (i.e., removing the need to track the Sun) concentration systems that not only reduce the amount of photovoltaic (PV) material required but also efficiently absorb diffuse light. Light scattering is present due to cloud cover and the atmosphere, and diffuse light can be as high as 20% even on a sunny day 17 . Figure 1a depicts how light is concentrated in the AGILE concentrator. Light rays incident from the entire 2π steradians enter the larger aperture with a refractive index (RI) of 1, curve towards the normal via refraction along the height of the cone in the axial gradient RI, reflect from the sidewalls, and reach the smaller output aperture with high RI, e.g., silicon with RI ~ 3.5, without the need to track the light source. Figure 1c portrays the AGILE concentrator array system, made up of the repeated unit shown in Fig. 1 b, that absorbs all the incident light and hence appears dark. A video clip of the AGILE array system is included in the supplementary material. 
In this video, the AGILE does not have metallic reflective sidewalls so that the graded index material can be visualised. AGILE allows near-perfect antireflection and coupling, encapsulation, space for circuitry and cooling, and conformal design. These immersion graded index optics can also realize applications in areas such as light management in solid-state lighting, laser couplers, and display technology. Fig. 1: AGILE (Axially Graded Index LEns) concept and concentrator array system vision. a Depiction of the optical concentration action, b repeating unit of AGILE, c concentrator arrays with built-in anti-reflection and encapsulation, no need for tracking the source, and spatially separated PV cells which have advantages of reduced PV material use, hence lower cost with space for cooling and circuitry. Comparison of different concentrator designs Ray tracing simulations were performed with the software FRED. We performed simulations to compare optical concentration efficiencies of cone geometries with different RI profile fills. All the concentrators simulated in Fig. 2 have the same geometry: the input diameter is 3.5, the output diameter is 1, the height is 5, and the interior sidewalls of the cone are optically reflective. These are dimension-independent simulations in the ray domain, i.e., in the regime where the device dimensions are several times larger than the wavelength of the incident light. Details of scale invariance are presented in appendix A in the supplementary file. The angle of the incident ray array, theta, was swept from 0 to 90° in one plane as the structure has rotational symmetry. Figure 2 shows the cone geometries' optical concentration efficiency, i.e., light transmission to the smaller output aperture versus incidence angle compared to the cosine theta theoretical maximum, i.e., projection limit when not tracking the light source. 
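The cosine projection benchmark used in these sweeps is simple to tabulate. The sketch below (illustrative only, not the FRED simulation) evaluates the cos θ bound across incidence angles and numerically checks the standard hemisphere integral of cos θ sin θ, which equals 1/2:

```python
import math

# Cosine projection limit for a non-tracking (passive) concentrator:
# a flat input aperture presents a projected area proportional to
# cos(theta) at incidence angle theta, so transmission cannot exceed it.
def cosine_limit(theta_deg):
    return math.cos(math.radians(theta_deg))

for theta in (0, 30, 60, 90):
    print(theta, round(cosine_limit(theta), 3))

# Sanity check of the solid-angle-weighted bound over the hemisphere:
# integral_0^{pi/2} cos(t) sin(t) dt = 1/2 (midpoint rule, 10,000 steps).
steps = 10_000
dt = (math.pi / 2) / steps
integral = sum(math.cos((i + 0.5) * dt) * math.sin((i + 0.5) * dt)
               for i in range(steps)) * dt
assert abs(integral - 0.5) < 1e-6
```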
The results show that concentrators that are air filled, homogeneously filled with RI = 3.5, or homogeneously filled with RI = 3.5 along with a lens-top reject a substantial amount of the incident light. In contrast, an AGILE with a linear gradient index from ambient to the detector material (i.e., RI from 1 to 3.5) concentrates light close to the theoretical limit. In these simulations, Fresnel reflections at the top surface are not included in most curves unless denoted in the chart legend, as an antireflection thin film can be added at the top surface. However, as seen in the curves where Fresnel reflections are included, the AGILE is unchanged, while the lens-top and high-index filled cones have reduced transmissions. Fig. 2: Ray tracing simulations for different concentrator designs with different refractive index profiles for comparison. AGILE with a theoretical refractive index gradient from 1 to 3.5 tracks the upper bound limit for passive concentrators (cosine projection maximum) well and concentrates light across the incidence angles. The AGILE theoretical concept While designing optical concentrators, the constant brightness theorem 18 , 19 , 20 imposes strict limits. The theorem says that the optical power flow per unit area and solid angle cannot be increased through a passive optical system: luminance is invariant. Stated this way, the theorem is incomplete: higher luminance (formerly described as brightness) can be achieved inside a high-refractive-index (RI) material, as seen in optical immersion techniques in microscopy and lithography 21 , 22 . 
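As a quick numeric illustration of this immersion advantage (a sketch assuming both acceptance half-angles approach 90°, so the passive limit is set by the index ratio alone):

```python
def immersion_concentration_limit(n_in=1.0, n_out=3.5):
    """Upper bound on passive concentration from etendue conservation,
    C = (n_out / n_in)**2, when both acceptance half-angles are ~90 deg.
    The default indices (air in, silicon-like 3.5 out) are examples."""
    return (n_out / n_in) ** 2

print(immersion_concentration_limit())            # 12.25x with n_out = 3.5
print(immersion_concentration_limit(n_out=4.5))   # 20.25x with n_out = 4.5
```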
Mathematically, due to conservation of etendue in a perfect optical system, the concentration ( C ) in a 3-D optical concentrator can be expressed 23 , 24 : $$C = \left(\frac{a_{in}}{a_{out}}\right)^2 = \left(\frac{n_{out}\sin\phi_{out}}{n_{in}\sin\phi_{in}}\right)^2$$ (1) where a in and a out are the radii of the input and output apertures, n in and n out are the input and output refractive indices, and \(\phi_{in}\) and \(\phi_{out}\) are the input and output acceptance half angles. RI is denoted as n(z) in the equations. In solar concentrators, if we consider the input RI to be unity and the output spread half angle to be 90°, the equation reduces to \(C = \left\{n_{out}/\sin\phi_{in}\right\}^2\) . Here, \(\phi_{in}\) is close to 90°, so the concentration of the device: $$C_{AGILE} = n_{out}^2$$ (2) Equation 1 states that optical concentrators accepting all incidence angles are not in violation of the constant brightness theorem, provided that the area reduction ratio is less than or equal to the square of the ratio of RI from input to output. In this paper, we show that concentration at this limit ( \(C = \left\{n_{out}/n_{in}\right\}^2\) ) can be achieved by gradually changing the RI from near unity in air to a high RI at the output. Extending Eq. 2 , the RI continuously increases from input to output, while the area (A) of the concentrator decreases such that: $$\int_{A(x,y,z_0)} n^2(x,y,z)\,dx\,dy = \mathrm{constant}$$ (3) where x and y are the transversal coordinates, z is the axial coordinate, A is the cross-sectional area, and n(x,y,z) is the RI. For a circularly symmetric structure of radius r(z) and a RI that is only a function of z (i.e., RI constant at each z plane), Eq. 3 can be written as $$n(z) \cdot r(z) = \mathrm{constant}$$ (4) With Eq. 
4 fulfilled, the number of electromagnetic modes is constant along the height of the concentrator at each z plane; however, that condition in itself is not sufficient to ensure concentration. It is easy to find examples of structures that fulfill Eq. 4 but fail to concentrate light due to reflections from the sidewalls, e.g., jagged or discontinuous sidewalls or abrupt index variations. Therefore, the design challenge is to find concentrator shapes and corresponding index profiles that allow the electromagnetic modes of one layer to effectively couple to the next layer without unacceptable reflections. Consider r(z) and n(z) as the sidewall and index profiles, respectively, along z (the height of the AGILE). K is a non-zero constant and h is a dimensionless parameter that gives the height relative to the input radius, height = hK. \(n_1\) and \(n_2\) are the input and output indices, with boundary conditions: \(r(0) = K/n_1\) , \(n(0) = n_1\) , \(r(hK) = K/n_2\) , and \(n(hK) = n_2\) . Using n(z)·r(z) = K (i.e., Eq. 4 ) and setting the curvature, i.e., the second derivative of r(z), equal to a constant ε gives the analytical solutions. By defining the second derivative as a constant, the solutions have parametric continuity 'smoothness' of order ε². Hence, this approach eliminates n(z) and r(z) profiles that are unphysical or discontinuous. Different families of curves (i.e., complementary sidewall and index profiles) that create the AGILE design can be found. 
The analytical profiles for n(z) and r(z): $$n(z) = \frac{2 n_2 h K}{z^2 n_2 \epsilon h + z\left[2 - n_2 \epsilon K h^2 - 2(n_2/n_1)\right] + 2 h K (n_2/n_1)},\quad r(z) = \frac{z^2 n_2 \epsilon h + z\left[2 - n_2 \epsilon K h^2 - 2(n_2/n_1)\right] + 2 h K (n_2/n_1)}{2 n_2 h}$$ (5) We have mainly simulated and fabricated concentrators with linear sidewalls. For ε = 0, r(z) is linear and n(z) is hyperbolic: $$n(z) = \frac{n_2 h K}{z\left[1 - (n_2/n_1)\right] + h K (n_2/n_1)},\quad r(z) = \frac{z\left[1 - (n_2/n_1)\right] + h K (n_2/n_1)}{n_2 h}$$ (6) Height optimization The aim of the height optimization is to design a short device to save material and weight, while maintaining efficiency. The AGILE device is scalable in the ray domain (i.e., as long as the concentrator geometry is several times larger than the wavelength of light). Details are in appendix A in the supplementary file. Scalable here means the concentrating phenomenon of the AGILE works in the same way even if the geometry is increased or decreased by a scale factor. In other words, if we scale the Cartesian coordinates by a common factor, the ray paths are unaffected. The consequence of scale invariance is that the characteristics, and therefore the performance, of a particular AGILE are purely determined by the height to input diameter ratio. In appendix A , we show that: (1) reflections in the AGILE are scale invariant and (2) the ray tracing equation in the gradient index of the AGILE is scale invariant. Therefore, the height optimization study is also scalable. 
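The ε = 0 design of Eq. (6) is easy to check numerically. The sketch below uses example values (K = h = 1, n1 = 1 for air, n2 = 3.5 for a silicon-like output; these are illustrative choices, not fitted parameters) to verify the boundary conditions and the invariant n(z)·r(z) = K, and applies the stratified-medium form of Snell's law, n(z)·sin θ(z) = constant, to show how an oblique ray steepens toward the axis as the index rises. It is an illustration only, not the FRED ray trace used in the paper:

```python
import math

# Illustrative check of the eps = 0 AGILE design (Eq. 6); K, h, n1, n2 are
# example values (air input, silicon-like output), not fitted parameters.
n1, n2 = 1.0, 3.5
K, h = 1.0, 1.0

def r(z):  # linear sidewall profile
    return (z * (1 - n2 / n1) + h * K * (n2 / n1)) / (n2 * h)

def n(z):  # hyperbolic axial index profile
    return n2 * h * K / (z * (1 - n2 / n1) + h * K * (n2 / n1))

# Boundary conditions of the derivation: r(0) = K/n1, n(0) = n1,
# r(hK) = K/n2, n(hK) = n2, with n(z) * r(z) = K at every height.
assert abs(r(0) - K / n1) < 1e-12 and abs(n(0) - n1) < 1e-12
assert abs(r(h * K) - K / n2) < 1e-12 and abs(n(h * K) - n2) < 1e-12
for z in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(n(z) * r(z) - K) < 1e-12

# In a medium stratified along z, Snell's law conserves n(z) * sin(theta):
# a ray entering from air at 80 deg off-axis exits the gradient at ~16 deg.
def angle_deg(theta_in_deg, z):
    return math.degrees(math.asin(math.sin(math.radians(theta_in_deg)) / n(z)))

print(round(angle_deg(80.0, h * K), 1))
```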
Shorter AGILEs are lossy, and efficiency increases as the AGILE gets longer. One of the main ray-rejection mechanisms is reflection from the sidewalls in the top corner area of the AGILE. As the height of the concentrator gets shorter, the corner angle gets smaller. The refractive index gradient inside the AGILE in the corner area is not large enough to bend the rays before they are reflected out by the sidewall. The combined effect of the small corner angle and insufficient index variation in this space is that rays escape out of a short AGILE. A tall AGILE gives a large corner angle and hence near-perfect light capture. In reality, the longer the AGILE gets, the larger the material transmission losses and the greater the number of reflections/bounces; hence, there is a limit to how long the AGILE should be made. Another source of loss, apart from a non-ideal height of the device, could be abrupt index variations. Reference 25 reports that in inhomogeneous media, even if the ∆RI (i.e., the difference between the largest and smallest RI) is large, the reflectivity is negligible as long as the RI slope is equal to ∆RI/(2λ) or smaller. However, there is an abrupt limit around ∆RI/[(1/20)λ] where reflectivity increases sharply. In the AGILE, the RI slope is five orders of magnitude lower than the abrupt limit. The index variation happens gradually over a distance that is large compared to the wavelength, so this abrupt limit, where reflections from index variations become significant, is not reached. We use ray tracing simulations performed with the software FRED to find the optimum height, as shown in Fig. 3 . Both in simulations and in experiments, we focused on structures with linear sidewall geometry. Every curve in Fig. 3 is created from several simulations.
Each point represents a particular AGILE device of a specific height, concentration, and index variation, and the point value gives the integral of transmission across 0° to 90° incidence angles compared to the input light, i.e., the overall performance across all angles for a particular geometry. Repeating this process for different heights produced a curve for one concentration and index variation combination. Each curve is based on about 400 simulations. In Fig. 3 , the upper bound is the maximum efficiency of 1 (all light incident on the input aperture reaching the output). The height to input diameter ratio dictates the efficiency, as this ratio controls the sidewall angle and hence the retention of rays inside the structure. According to Eq. 3 , the number-of-Suns concentration factors (i.e., the area reduction ratios) of 6.25, 12.25, and 20.25 are chosen to be less than, equal to, and greater than the square of the index variation (3.5 2 and 4.5 2 ) in the simulated AGILEs, to evaluate how these curves differ from each other. For all the simulations the input diameter is the square root of the concentration and the output diameter is 1: the input diameter is 2.5 for the 6.25 Suns AGILEs, 3.5 for the 12.25 Suns AGILEs, and 4.5 for the 20.25 Suns AGILEs. Height is normalized to the output diameter of 1. For the various curves in Fig. 3 , the efficiency almost reaches the maximum of 1 when the height of the simulated device is approximately equal to the input diameter. For example, for the curve ‘12.25 Suns, n 1 to 3.5’, a height of 3.15 with an input diameter of 3.5, i.e., a height : input diameter ratio of 0.9, gives an efficiency of over 98%. A ~ 1:1 aspect ratio of height : input diameter of the AGILE structure, where the efficiency curve plateaus to almost 1, ensures a good compromise between maximizing efficiency and minimizing height. Fig.
3: Efficiency ( E ) vs height ( H ) graph is created using ray tracing simulations. Curves represent the concentration capacity of AGILE, i.e., efficiency, with respect to height for different geometries and refractive index ratios. A ~ 1:1 aspect ratio of height to input diameter of the AGILE structure, where the efficiency curve plateaus to almost 1, ensures a good compromise between maximizing concentration efficiency and minimizing device height. A generalized design guide is found in Eq. 8 Full size image Figure 3 curves represent the concentration capacity of AGILE with respect to height for different geometries and refractive index ratios. All the simulation data in Fig. 3 closely follow the Pareto function of the form: $$E = {{{\mathrm{E}}}}_{{{{\mathrm{max}}}}}\left( {1 - \frac{1}{{\left( {1 + {{{\mathrm{a}}}}H} \right)^{{{\mathrm{b}}}}}}} \right)$$ (7) with a and b as constants, H as height, and E as efficiency that approaches its maximum value E max , which is the smaller of 1 and \({{{\mathrm{M}}}} = \frac{{n_{out}}}{{n_{in}}}/\frac{{r_{in}}}{{r_{out}}}\) , as height gets large, where n in and n out are the input and output indices, and r in and r out are the input and output radii. Here height is denoted as ‘ H ’ and is different to the small ‘h’ parameter used before in Eq. 6 . As described in Eq. 3 , when the area reduction ratio, as in the concentration factor, is equal to the square of the index ratio, the AGILE is at its optimum performance. This optimum performance is shown as the ‘design guide’ fitted to the height optimization data for ‘12.25 Suns with n varying from 1 to 3.5’ and ‘20.25 Suns with n varying from 1 to 4.5’, where M = 1. 
The ‘12.25 Suns with n varying from 1 to 4.5’, ‘6.25 Suns with n varying from 1 to 4.5’, and ‘6.25 Suns with n varying from 1 to 3.5’ curves lie above the design-guide curve and the ‘20.25 Suns with n varying from 1 to 3.5’ curve falls below the design guide, as they respectively have more and less index gradation compared to the area reduction factor. The generalized design-guide curve is given as: $$E = {{{\mathrm{E}}}}_{{{{\mathrm{max}}}}}\left( {1 - \frac{1}{{\left( {1 + 0.02\left( {\frac{{{{{\mathrm{RM}}}}^2}}{{{{\mathrm{N}}}}}} \right)H} \right)^{40{{{\mathrm{M}}}}}}}} \right)$$ (8) where M = \(\frac{{n_{out}}}{{n_{in}}}/\frac{{r_{in}}}{{r_{out}}}\) , N = \(\frac{1}{{n_{in}}} - \frac{1}{{n_{out}}}\) , and R = \(\frac{1}{{r_{out}}} - \frac{1}{{r_{in}}}\) . The above design guide characterizes how the efficiency, as in the concentrating action of the AGILE, changes with the height and can be used to create various immersion graded concentrators depending on the application requirements. The AGILE is scale invariant as described above. Therefore, the height optimization study is also scalable, and this generalized result in Eq. 8 can be applied to various geometrical concentrations and refractive index ratios, as these Pareto function curves fall above, below, or track the optimum performance. Practical AGILE design For photovoltaic (PV) systems where the light enters the input aperture in air (RI ≈ 1) and is absorbed in a high-index PV material (e.g. silicon with a RI ≈ 3.5), theoretical passive concentration (i.e., capturing light without tracking the movement of the source) given by Eq. 1 is (3.5 2 /1 2 ) =12.25. Achieving this level of passive concentration requires development of materials with broadband transparency and having low to high indices creating a large range of RI spread. 
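The design-guide curve of Eq. 8 is straightforward to evaluate numerically. A minimal sketch, using the '12.25 Suns, n graded from 1 to 3.5' geometry (input radius 3.5, output radius 1, so M = 1) purely as an illustrative parameter set:

```python
def design_guide_efficiency(H, n_in, n_out, r_in, r_out):
    """Eq. 8: efficiency vs normalized height H for an AGILE geometry."""
    M = (n_out / n_in) / (r_in / r_out)
    N = 1.0 / n_in - 1.0 / n_out
    R = 1.0 / r_out - 1.0 / r_in
    E_max = min(1.0, M)
    return E_max * (1.0 - 1.0 / (1.0 + 0.02 * (R * M**2 / N) * H) ** (40.0 * M))

# 12.25 Suns, index graded 1 -> 3.5: input radius 3.5, output radius 1, M = 1
E_short = design_guide_efficiency(0.5, 1.0, 3.5, 3.5, 1.0)
E_tall = design_guide_efficiency(3.5, 1.0, 3.5, 3.5, 1.0)
# Efficiency rises monotonically with height and plateaus toward E_max = 1
```

As expected from the Pareto form, a device with height comparable to the input diameter sits near the plateau, while a much shorter device loses a large fraction of the incident light.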
In this section we discuss realistic AGILE designs, as compared to the theoretical simulations presented so far: the available broadband transparent optical materials with a wide range of indices; the trade-off between concentration and the available graded index range in the AGILE design; other practical features, including tile-able/tessellated input apertures to maximize space and light capture; and an optimum height design drawing on the results of the height optimization. Fabrication Robust, broadband transparent, and inexpensive graded RI materials were critical to the success of the AGILE. We have designed and fabricated two variations of the AGILE prototype: a pyramid of stacked glass flats, and a cluster of cones filled with polymer layers of different RI 26 . The fabricated pyramid with a square cross section is a tile-able design, and the rotationally symmetric polymer array made by overlapping cone shapes has a tile-able/tessellated input aperture as well, going from hexagons at the input to circles at the output. Prototype designs are seen in Fig. 4a, b . After an extensive material search, selection, and characterization, broadband transparent optical glasses were found to be available having indices of 1.5 to 2, and broadband transparent UV curable polymers having indices of 1.46 to 1.625. Broadband transparent in this application means high optical transmission across the solar spectrum from about ~300 nm to beyond ~1200 nm. Graded index materials were fabricated as layers of different indices using the different glasses and polymers selected; this was an approximation (verified via simulations) to the theoretical continuous gradient refractive index according to Eq. 3 . For practicality of fabrication, we have used graded index layers. A hyperbolic refractive index profile matches the as-fabricated linear sidewalls of the devices, as informed by Eq. 6 .
The refractive indices of different glasses and polymers in the various layers across the height of the devices fabricated are shown in Fig. 4c (glass pyramid) and Fig. 4d (polymer cluster) along with the matching theoretical continuous hyperbolic gradient index according to Eq. 6 for comparison. Fig. 4: AGILE tile-able geometric designs and material selection of various different glasses and polymers having a large range of refractive indices along with high optical broadband transmission to create the graded index layers used in the prototypes. AGILE structures with tile-able input surfaces: a pyramid with square cross section and b cluster of 7 overlapping cones. Height axis going from input aperture to output aperture is shown, which is true for both a and b . Refractive index gradation from low to high in the two structures along height of the device: c in the glass pyramid and d in the polymer cluster. The larger apertures at the top have a lower refractive index and the bulk material is graded to increase to a higher refractive index at the smaller apertures at the bottom. In graphs c and d , we compare the refractive indices of the different glasses and polymers used in the layers across the height of the fabricated AGILE devices (the steps) to the theoretical continuous hyperbolic index as per Eq. 6 (the dotted curves). Full size image Materials and methods Glass pyramid fabrication The pyramid was made out of different optical glass flats, varying in index from 1.5 to 2 (list of glasses from Ohara Corp. in appendix B in the supplementary file). After an extensive search, these glasses were chosen from available optical-quality glasses, such that they have similar thermal expansion and glass transition temperatures, high broadband transmission in the solar spectrum, and RI evenly distributed in a wide range. To simplify fabrication, we create structures with linear sidewall geometry. 
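The layered approximation to the continuous profile can be illustrated by sampling the Eq. 6 hyperbolic index at the midpoints of equal-thickness layers. A minimal sketch, assuming a grade from n = 1.5 to n = 2 over 8 layers of 1 mm (the geometry of the glass pyramid); the sampled values are illustrative and are not the catalogue indices of the Ohara glasses actually used:

```python
def hyperbolic_index(z, n1, n2, height):
    # Eq. 6 index profile rewritten in terms of the total height Z = h*K
    return n2 * height / (z * (1.0 - n2 / n1) + height * (n2 / n1))

# Illustrative: 8 equal 1 mm layers grading from n = 1.5 to n = 2 over 8 mm
n1, n2, height, layers = 1.5, 2.0, 8.0, 8
midpoints = [(i + 0.5) * height / layers for i in range(layers)]
layer_indices = [hyperbolic_index(z, n1, n2, height) for z in midpoints]
# layer_indices increases monotonically from just above 1.5 toward 2
```

Each fabricated layer then approximates the continuous curve by its midpoint value, which is the stepped-versus-dotted comparison shown in Fig. 4c, d.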
The concentration efficiency difference between an AGILE with the ideal hyperbolic index profile that matches a linear sidewall, given in Eq. 6 , and the layered index profile we fabricated is very small. In Fig. 4 c and d, we can compare the selected material layers to the nominal hyperbolic index profile. The geometry of the pyramid was a square of side 14.5 mm down to a square of 8.5 mm, giving a concentration of 3, along a total height of 8 mm with 8 glass layers, each flat 1 mm thick. Glasses were cut into squares, polished on both sides, and rigorously cleaned. Bonding these different glass flats was a challenging task: melting glasses in a kiln, silicate bonding 27 , diffusion bonding in a hot press 28 , anodic bonding 29 , and bonding using SoG (Spin-on-Glass) 30 were attempted. We built a pressure vise for the glass flats using Teflon pins and a high-pressure mechanical vise. Direct bonding of the polished flats after plasma treatment and anodic bonding using a voltage (up to 30 kV) and heat were attempted. However, these attempts did not create the mechanically robust bonded glass stack that is necessary for machining. For the interfaces that were not anodically bonded, optical quality glue was used in between the glass flats to fill in any non-flat imperfections while being held in place under the pressure vise to remove excess glue; this successfully created a robust graded index glass material layered from an RI of 1.5 to 2. The fringe pattern before adding the glue was noted and was used to calculate the spacing between the non-ideal polished flats. There were wedges and curves in the fringe pattern, which indicated non-uniform spacing, and the maximum spacing was calculated to be 0.8 microns. The pyramid shape was micro-machined in this glass stack. To make the sidewalls reflective and not just dependent on total internal reflection, the pyramid’s sidewalls were coated with aluminium. The pyramid’s fabrication flow can be seen in Fig.
5 . Fig. 5: Fabrication steps of the graded index glass pyramid concentrator. Glass pyramid fabrication steps: a thin glass slabs of different refractive indices bonded together and the pyramid shape machined in the stack (seen in the corners is the ‘checkerboard’ pattern, an optical illusion due to the graded index layers), b aluminium deposited on the sidewalls, c pyramid in optical contact with a solar cell absorbs and concentrates most of the incident light and appears dark. Full size image Polymer cluster fabrication For the experimental demonstration of the AGILE using polymers, broadband transparent UV curable optical polymers were chosen. The single AGILE structure had the issue that, in order to collect all the light at the output, it had to be directly fabricated on or optically bonded to the photodetector with an appropriate anti-reflection coating. In a back-to-back AGILE, measurements of transmission were easier because the transmitted power was back in a low-index medium, i.e., air, so a photodetector with a standard AR (Anti-Reflection) coating was sufficient. In a back-to-back AGILE, light went from a larger aperture to a smaller one and then back to a larger one, thereby proving the concentration effect. The single AGILE was the basic structure to be demonstrated; the back-to-back AGILE was made solely for testing purposes and represents the ideal AGILE concentrator fabricated directly on a detector. An array structure was made: an AGILE back-to-back array of 7 overlapping cones at the input that decrease from a diameter of 7 mm to 4 mm and increase back to 7 mm over a height of 10 mm, having a concentration of 3 Suns (calculation of the overlapping input aperture area in appendix C in the supplementary file). Three structures are presented in Fig.
6 c: 2 Suns AGILE cone (diameter of 7 mm to diameter of 5 mm, and index variation from 1.46 to 1.56), 3 Suns (7 overlapping cones of diameter of 7 mm to diameter of 4 mm) back-to-back AGILE with 10 layers (index variation from 1.46 to 1.56), and 3 Suns back-to-back AGILE with 12 layers variation (index variation from 1.46 to 1.625). The AGILE array is an extension of demonstration of the 2 Suns AGILE to 3 Suns (diameter of 7 mm to diameter of 4 mm). UV curable polymers having different indices were cured volumetrically layer by layer in polished aluminium cone shapes to create the AGILE prototypes 31 . Appendix D in supplementary file gives the fabrication steps used for creating these graded index polymer stacks. Discussion and results Characterization and results of the glass and polymer AGILEs In characterizing the glass pyramid performance, care was taken to establish optical contact between the pyramid and the photodetector using an index matching liquid with RI of 1.7 (Cargille Labs). This RI value is not ideal but chosen because index matching liquids with higher values are corrosive to the high-index glass of the pyramid at the output and these liquids need to be handled as hazardous materials, or the liquids have colour, i.e., not broadband transparent. The fabricated AGILEs were tested by comparing the amount of laser light reaching the photodetector via a fixed aperture area (input aperture of AGILE) with and without the AGILE to evaluate the concentrator performance. The measurement set-up is depicted in Fig. 6a , which includes a red HeNe laser, beam expander, a rotational stage, a holder for AGILE, and a photodetector. Details of the measurement set-up and the test procedure are given in appendix E in the supplementary file. Fig. 6: Measurement set-up and experimental performance of the glass and polymer concentrators. 
a Schematic of the measurement set-up, b glass pyramid experimental performance, and c polymer AGILE experimental performance, both for 3 Sun concentration, i.e., light incident on the glass and polymer AGILE prototypes was successfully concentrated into 3x smaller areas at the output. Full size image It can be seen from Fig. 6b that the two sets of glass pyramid simulation results match in performance with the experimental results. There are at least two distinct ways of measuring light concentration through a pyramid shape across different incidence angles, because, unlike the cone cluster, the pyramid does not have rotational symmetry. The two sets of results for the pyramid shape are (1) the angular measurements done with rotation along the side of the square input aperture of the pyramid (annotated as ‘0° rotation’ in Fig. 6b ), and (2) the rotation along the diagonal of the square input aperture of the pyramid (annotated as ‘45° rotation’ in Fig. 6b ). The pyramid’s 45° rotation optical transmission results are slightly lower than the 0° rotation results. This lower transmission can be accounted for by the corner reflection action of the pyramid, which may increase the number of bounces of the rays, adding losses, and which is not present when only rotating/pivoting along the sides of the pyramid. Measurements were re-done and verified using green and blue LED sources and under a solar simulator, and they track the results measured using the HeNe laser, proving broadband transmission and function as a solar concentrator. Ray tracing simulations of the pyramid concentrator performance are shown in Fig. 6b , together with the experimental results. The losses at each intersection/interface in the layered structure were taken into account in the simulation by including the indices and thicknesses of the glass flats and the index matching layer.
Light left the pyramid from the last glass layer of RI 2 then entered the 0.2 micron index matching liquid of RI 1.7, and then the silicon material of the detector with RI of about 3.5, in that sequence. The lower transmission performance of the pyramid as compared to the simulations was due to the fact that there are thin adhesive layers in between the glasses, the material loss of the structure, and the coatings on the solar cell detector, which were not accounted for in the simulations. At normal incidence, the simulation predicts a transmission of about 0.83. In comparison, the highest transmission experimentally measured at normal incidence through the pyramid is about 0.72. As seen in Fig. 6c , the back-to-back polymer AGILE structures gave very good performance and were able to concentrate most of the light that was incident on the 7 mm diameter circular aperture through the smaller 4 mm diameter aperture half way along the axis in each cone of the cluster. The experimental concentration efficiencies should have been similar for the single AGILE and for the glass pyramid when compared to the back-to-back results, but reflections at the AGILE-photodetector interface led to reduced transmission, as seen in Fig. 6c . This lower transmission highlights the importance of optical and mechanical bonding and immersion between the AGILE and the detector. An ideal AGILE system includes fabrication of the AGILE directly on top of a solar cell detector, for perfect light capture, propagation, and conversion. The curves in Fig. 6c show a roughly sinusoidal modulation on the transmission through the AGILE. This harmonic modulation was more pronounced with a single wavelength than with a broadband illumination, typical of interference effects. 
To estimate the layer thickness d in this Fabry-Perot type resonance effect, \(2\pi m = \frac{{4\pi n_fd\,{{{\mathrm{cos}}}}\theta }}{\lambda }\) with the RI of the layer, i.e., n f = 2, m = integer = 1, λ = 632.8 nm, and θ = 0°; we find d = 157 nm, which is about the approximate antireflection coating/passivation layer thickness on a standard solar cell/detector. Constructive interference at the transmission side is due to the different resonant wave-fronts that arrive in phase, enhancing the signal at specific incidence angles; the opposite holds for the destructive interference dips. This also means that the harmonic variation is not caused by the graded index layers in the back-to-back AGILE, where layers of different types of polymers over a height of 20 mm make each layer 2 mm thick in the 10 layer cluster and 1.67 mm thick in the 12 layer one. As expected, the transmission curves through all AGILE devices fall between those of the conical clusters’ 4 mm diameter (output aperture) and the 7 mm diameter (input aperture) theoretical maximum curves (red and green dotted lines). It was noteworthy that the back-to-back AGILE results (blue line with square symbols) follow the theoretical maximum cosine projection curve quite well over the full angular range (for example, at normal incidence, experimentally measured transmission through the polymer cluster is ~ 0.93, i.e., over 90% efficiency). The results demonstrate that light incident from all angles on the AGILE cluster was successfully concentrated into a 3x smaller area. Even though the inspiration for the AGILE design did not come from nature, there are features of AGILE that can be found in the retina of fish (e.g. Gnathonemus) and compound eyes in insects (e.g. Lepidoptera), where a gradient index is present as anti-reflection to maximize transmission as well as to enable camouflage 32 , 33 .
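Rearranging the resonance condition above for the layer thickness d gives a quick numerical check; a minimal sketch (the ~157 nm value quoted in the text reflects the authors' rounding):

```python
import math

def fabry_perot_thickness_nm(m, n_f, wavelength_nm, theta_rad=0.0):
    # Rearranging 2*pi*m = 4*pi*n_f*d*cos(theta)/lambda for d
    return m * wavelength_nm / (2.0 * n_f * math.cos(theta_rad))

d = fabry_perot_thickness_nm(m=1, n_f=2.0, wavelength_nm=632.8)
# d is roughly 158 nm, consistent with the ~157 nm quoted for a standard
# solar-cell antireflection/passivation layer thickness
```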
The human eye lens is also a layered structure of gradient RI, ranging from about 1.406 to 1.386 34 , i.e., it has an (RI ratio) 2 of 1.03. We have taken the gradient immersion index idea further and designed and fabricated devices with an (RI ratio) 2 of up to 2, pushing the limit seen in nature, the fiber optics industry, or research 35 , 36 , 37 . Conclusions Immersion-graded-index optics as an effective non-tracking optical concentrator was conceptualized, simulated, and fabricated. The design choice of a 1:1 aspect ratio of height to input diameter of the AGILE structure was found to ensure a good compromise between maximizing light capture and minimizing the device height. We present a generalized design-guide equation relating the refractive indices and the geometry, which can be used to create various immersion-graded optical concentrators. A search for appropriate broadband transparent materials and innovation of several fabrication techniques with multiple iterations were needed to create defect-free materials with a large graded RI range. Approximating the ideal gradient index with a discrete stepped index yields results that are close to the theoretical maximum. The AGILE prototypes, the glass pyramid made by stacking various different glass flats and the polymer array of overlapping cones made in an aluminium stencil, experimentally demonstrated a passive concentration of 3 Suns. The simple-to-test-and-verify back-to-back AGILE array tracked the cosine theta theoretical maximum across all incidence angles. The difference between the results of the single and back-to-back devices brought home the importance of optical contact between the concentrator and the detector/solar cell, i.e., immersion.
More sophisticated AGILE designs involve the incorporation of a lens top surface to increase light collection; sub-wavelength nano-structuring, porosity, and aerogels to create the low-index side 38 , 39 , 40 , 41 , 42 , 43 ; top surface strengthening for exposure to the environment 44 ; functionalized nano-particle filler layers 45 and nano-structuring 46 , 47 with passivation in the PV cell to create the high values of RI, i.e., to increase the range of the graded index; an optimized sidewall profile matching the index profile used according to Eq. 4 ; and a 3D gradient index in both axial and radial directions to decrease the height of the concentrator and to eliminate the need for a reflective sidewall. Successful optical-quality fabrication and demonstration of concentrators using polymers: (1) enables lightweight, design-flexible structures and the ability to fabricate directly on textured solar cells/detectors, (2) provides effective encapsulation and inexpensive panel packaging, with the PV costs offset by the optical concentration, and (3) allows the possibility of large-scale manufacturing using spray coating, auto pipetting, multiscale imprint, casting, molding, and 3D printing 48 . Results of the functional prototypes demonstrate that immersion graded index technology can improve the way we concentrate and couple light manyfold. The AGILE has the potential to greatly improve opto-electronic systems by reducing cost, increasing efficiency, and providing a scalable concentration system with built-in anti-reflection and encapsulation without the need for tracking.
Even with the impressive and continuous advances in solar technologies, the question remains: How can we efficiently collect energy from sunlight coming from varying angles from sunrise to sunset? Solar panels work best when sunlight hits them directly. To capture as much energy as possible, many solar arrays actively rotate towards the sun as it moves across the sky. This makes them more efficient, but also more expensive and complicated to build and maintain than a stationary system. These active systems may not be necessary in the future. At Stanford University, engineering researcher Nina Vaidya designed an elegant device that can efficiently gather and concentrate light that falls on it, regardless of the angle and frequency of that light. A paper describing the system's performance, and the theory behind it, is the cover story in the July issue of Microsystems & Nanoengineering, authored by Vaidya and her doctoral advisor Olav Solgaard, professor of electrical engineering at Stanford. "It's a completely passive system—it doesn't need energy to track the source or have any moving parts," said Vaidya, who is now an assistant professor at the University of Southampton, UK. "Without optical focus that moves positions or need for tracking systems, concentrating light becomes much simpler." The device, which the researchers are calling AGILE—an acronym for Axially Graded Index Lens—is deceptively straightforward. It looks like an upside-down pyramid with the point lopped off. Light enters the square, tile-able top from any number of angles and is funneled down to create a brighter spot at the output. In their prototypes, the researchers were able to capture over 90% of the light that hit the surface and create spots at the output that were three times brighter than the incoming light. 
Installed in a layer on top of solar cells, they could make solar arrays more efficient and capture not only direct sunlight, but also diffuse light that has been scattered by the Earth's atmosphere, weather, and seasons. A top layer of AGILE could replace the existing encapsulation that protects solar arrays, remove the need to track the sun, create space for cooling and circuitry to run between the narrowing pyramids of the individual devices, and most importantly, reduce the amount of solar cell area needed to produce energy—and hence reduce the costs. And the uses aren't limited to terrestrial solar installations: if applied to solar arrays being sent into space, an AGILE layer could both concentrate light without solar tracking and provide necessary protection from radiation. AGILE (Axially Graded Index LEns) concept and concentrator array system vision. a Depiction of the optical concentration action, b repeating unit of AGILE, c concentrator arrays with built-in anti-reflection and encapsulation, no need for tracking the source, and spatially separated PV cells which have advantages of reduced PV material use, hence lower cost with space for cooling and circuitry. Credit: Microsystems & Nanoengineering (2022). DOI: 10.1038/s41378-022-00377-z Envisioning the perfect AGILE The basic premise behind AGILE is similar to using a magnifying glass to burn spots on leaves on a sunny day. The lens of the magnifying glass focuses the sun's rays into a smaller, brighter point. But with a magnifying glass, the focal point moves as the sun does. Vaidya and Solgaard found a way to create a lens that takes rays from all angles but always concentrates light at the same output position. "We wanted to create something that takes in light and concentrates it at the same position, even as the source changes direction," said Vaidya. "We don't want to have to keep moving our detector or solar cell or moving the system to face the source." 
Vaidya and Solgaard determined that theoretically, it would be possible to collect and concentrate scattered light using an engineered material that smoothly increased in refractive index—a property that describes how quickly light travels through a material—causing the light to bend and curve towards a focal point. At the surface of the material, the light would hardly bend at all. By the time it reached the other side, it would be almost vertical and focused. "The best solutions are often the simplest of ideas. An ideal AGILE has, at the very front of it, the same refractive index as the air and it gradually gets higher—the light bends in a perfectly smooth curve," said Solgaard. "But in a practical situation, you're not going to have that ideal AGILE." From theory to reality For the prototypes, the researchers layered together different glasses and polymers that bend light to different degrees, creating what's known as a graded index material. The layers change the light's direction in steps instead of a smooth curve, which the researchers found to be a good approximation of the ideal AGILE. The sides of the prototypes are mirrored, so any light going in the wrong direction is bounced back towards the output. One of the biggest challenges was finding and creating the right materials, Vaidya says. The material layers in the AGILE prototype let a broad spectrum of light, from near-ultraviolet to infrared, pass through it and bend that light increasingly towards the output with a wide range of refractive indices, which is not seen in nature or the present optics industry. These materials used also had to be compatible with each other—if one glass expanded in response to heat at a different rate than another, the whole device could crack—and robust enough to be machined into shape and remain durable. "It's one of these 'moonshot' engineering adventures, going right from theory to real prototypes," said Vaidya. 
"There are a lot of theory papers and great ideas out there, but it's hard to turn them into reality with real designs and real materials pushing the boundaries of what was deemed impossible before." After exploring many materials, creating new fabrication techniques, and testing multiple prototypes, the researchers landed on AGILE designs that performed well using commercially available polymers and glasses. AGILE has also been fabricated using 3D printing in the authors' prior work that created lightweight and design-flexible polymeric lenses with nanometer-scale surface roughness. Vaidya hopes the AGILE designs will be able to be put to use in the solar industry and other areas as well. AGILE has several potential applications in areas like laser coupling, display technologies, and illumination—such as solid-state lighting, which is more energy efficient than older methods of lighting. "Using our efforts and knowledge to make meaningful engineering systems has been my driving force, even when some trials were not working out," said Vaidya. "To be able to use these new materials, these new fabrication techniques, and this new AGILE concept to create better solar concentrators has been very rewarding. Abundant and affordable clean energy is a vital part of addressing the urgent climate and sustainability challenges, and we need to catalyze engineering solutions to make that a reality." | 10.1038/s41378-022-00377-z |
Other | Bursty behaviour found to have similar features across complex systems | The original article by Márton Karsai, Kimmo Kaski, Albert-László Barabási & János Kertész (2012) 'Universal features of correlated bursty behaviour', Scientific Reports 2, 397. www.nature.com/srep/2012/12050 … /full/srep00397.html | http://www.nature.com/srep/2012/120504/srep00397/full/srep00397.html | https://phys.org/news/2012-06-bursty-behaviour-similar-features-complex.html | Abstract Inhomogeneous temporal processes, like those appearing in human communications, neuron spike trains and seismic signals, consist of high-activity bursty intervals alternating with long low-activity periods. In recent studies such bursty behavior has been characterized by a fat-tailed inter-event time distribution, while temporal correlations were measured by the autocorrelation function. However, these characteristic functions are not capable of fully characterizing temporally correlated heterogeneous behavior. Here we show that the distribution of the number of events in a bursty period serves as a good indicator of the dependencies, leading to the universal observation of a power-law distribution for a broad class of phenomena. We find that the correlations in these quite different systems can be commonly interpreted by memory effects and described by a simple phenomenological model, which displays temporal behavior qualitatively similar to that in real systems. Introduction In nature there are various phenomena, from earthquakes 1 to sunspots 2 and neuronal activity 3, that show temporally inhomogeneous sequences of events, in which the overall dynamics is determined by the aggregate effects of competing processes. This also happens in human dynamics as a result of individual decision making and of various kinds of correlations with one's social environment.
These systems can be characterized by intermittent switching between periods of low activity and high-activity bursts 4, 5, 6, which can appear as a collective phenomenon similar to processes seen in self-organized criticality 7, 8, 9, 10, 11, 12, 13, 14, 15, 16. In contrast with such self-organized patterns, intermittent switching can be detected at the individual level as well (see Fig. 1), as seen for single-neuron firings or for earthquakes at a single location 17, 18, 19, 20, 21, where the famous Omori's law 22, 23 describes the temporal decay of aftershock rates at a given spot. Figure 1 Activity of single entities with color-coded inter-event times. (a): Sequence of earthquakes with magnitude larger than two at a single location (South of Chishima Island, 8th–9th October 1994) (b): Firing sequence of a single neuron (from a rat's hippocampal slice) (c): Outgoing mobile phone call sequence of an individual. The shorter the time between consecutive events, the darker the color. Full size image Further examples of bursty behavior at the individual level have been observed in the digital records of human communication activities through different channels 4, 24, 25, 26, 27, 28. Over the last few years different explanations have been proposed for the origin of inhomogeneous human dynamics 4, 24, 29, including mechanisms at the single-event level 30 and the impact of circadian and weekly fluctuations 31. Moreover, by using the novel technology of radio-frequency IDs, heterogeneous temporal behavior was observed in the dynamics of face-to-face interactions 32, 33. This was explained by a reinforcement dynamics 34, 35 driving the decision-making process at the single-entity level. For systems with discrete event dynamics it is usual to characterize the observed temporal inhomogeneities by the inter-event time distribution, P(t_ie), where t_ie = t_i+1 − t_i denotes the time between consecutive events.
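As a concrete illustration, P(t_ie) can be estimated from a list of raw event time stamps roughly as follows (our own sketch, not the authors' code; the log-binning choice mirrors how fat-tailed distributions are usually plotted, and assumes the gaps are not all identical):

```python
import numpy as np

def inter_event_distribution(times, n_bins=30):
    """Inter-event times t_ie = t_{i+1} - t_i of an event sequence and
    their normalized distribution P(t_ie) on logarithmic bins."""
    t_ie = np.diff(np.sort(times))
    t_ie = t_ie[t_ie > 0]  # drop simultaneous events
    bins = np.logspace(np.log10(t_ie.min()), np.log10(t_ie.max()), n_bins)
    hist, edges = np.histogram(t_ie, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    return centers, hist
```

A fat-tailed P(t_ie) then shows up as an approximately straight line on a log-log plot of `centers` against `hist`.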
A broad P(t_ie) 3, 25, 36 reflects large variability in the inter-event times and denotes heterogeneous temporal behavior. Note that P(t_ie) alone tells nothing about the presence of correlations, which are usually characterized by the autocorrelation function, A(τ), or by the power spectral density. However, for temporally heterogeneous signals of independent events with a fat-tailed P(t_ie), the Hurst exponent, together with the autocorrelation function, can assign false-positive correlations 37 (see Supplementary Information). To understand the mechanisms behind these phenomena, it is important to know whether there are true correlations in these systems. Hence for systems showing fat-tailed inter-event time distributions, there is a need to develop new measures that are sensitive to correlations but insensitive to fat tails. In this paper we define a new measure that is capable of detecting whether temporal correlations are present, even in the case of heterogeneous signals. By analyzing the empirical datasets of human communication, earthquake activity and neuron spike trains, we observe universal features induced by temporal correlations. In the analysis we establish a close relationship between the observed correlations and memory effects, and propose a phenomenological model that implements memory-driven correlated behavior. Results Correlated events A sequence of discrete temporal events can be interpreted as a time-dependent point process, X(t), where X(t_i) = 1 at each time step t_i when an event takes place, otherwise X(t_i) = 0. To detect bursty clusters in this binary event sequence we have to identify those events we consider correlated. The smallest temporal scale at which correlations can emerge in the dynamics is between consecutive events. If only X(t) is known, we can assume two consecutive actions at t_i and t_i+1 to be related if they follow each other within a short time interval, t_i+1 − t_i ≤ Δt 30, 38.
For events with duration d_i this condition is slightly modified: t_i+1 − (t_i + d_i) ≤ Δt. This definition allows us to detect bursty periods, defined as a sequence of events where each event follows the previous one within a time interval Δt. By counting the number of events, E, that belong to the same bursty period, we can calculate their distribution P(E) in a signal. For a sequence of independent events, P(E) is uniquely determined by the inter-event time distribution P(t_ie) as P(E = n) = [∫₀^Δt P(t_ie) dt_ie]^(n−1) [1 − ∫₀^Δt P(t_ie) dt_ie] (1) for n > 0. Here the integral defines the probability to randomly draw an inter-event time t_ie ≤ Δt from an arbitrary distribution P(t_ie). The first term of (1) gives the probability that we do so independently n−1 consecutive times, while the second term is the probability that the nth drawing gives t_ie > Δt, so that the evolving train size becomes exactly E = n. If the measured time window is finite (which is always the case here), the integral takes a value a = ∫₀^Δt P(t_ie) dt_ie < 1 and the asymptotic behaviour appears as P(E = n) ~ a^(n−1), a general exponential form (for related numerical results see SI). Consequently, for any finite independent event sequence the P(E) distribution decays exponentially even if the inter-event time distribution is fat-tailed. Deviations from this exponential behavior indicate correlations in the timing of consecutive events. Bursty sequences in human communication To check the scaling behavior of P(E) in real systems we focused on the outgoing events of individuals in three selected datasets: (a) A mobile-call dataset from a European operator; (b) Text message records from the same dataset; (c) Email communication sequences 26 (for detailed data description see Methods). For each of these event sequences the distribution of inter-event times measured between outgoing events is shown in Fig. 2 (left bottom panels) and the estimated power-law exponent values are summarized in Table 1.
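The train-detection rule and the independent-event baseline of equation (1) can be sketched in a few lines of Python (an illustration in our own naming, not the authors' code; event durations are ignored for simplicity):

```python
import numpy as np

def train_sizes(times, dt):
    """Split a sorted sequence of event times into bursty trains:
    consecutive events separated by at most dt belong to the same train.
    Returns the size E of each train."""
    times = np.asarray(times)
    gaps = np.diff(times)
    # indices where a gap exceeds dt, i.e. where a new train starts
    breaks = np.flatnonzero(gaps > dt)
    boundaries = np.concatenate(([0], breaks + 1, [len(times)]))
    return np.diff(boundaries)

def p_of_e(sizes):
    """Empirical distribution P(E) of the train sizes."""
    counts = np.bincount(sizes)
    return counts / counts.sum()

# Independent events with a fat-tailed inter-event time distribution:
# P(E = n) should still follow the exponential form a^(n-1) (1 - a) of (1).
rng = np.random.default_rng(0)
t = np.cumsum(rng.pareto(1.0, 100_000))  # fat-tailed inter-event times
dt = 1.0
sizes = train_sizes(t, dt)
pe = p_of_e(sizes)
```

Plotting `pe` on a semi-log scale for such an independent sequence shows the straight-line (exponential) decay predicted by equation (1), whereas the empirical datasets discussed in the text produce a power law instead.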
To explore the scaling behavior of the autocorrelation function, we took averages over 1,000 randomly selected users with a maximum time lag of τ = 10⁶. In Fig. 2.a and b (right bottom panels), strong temporal correlations can be observed for the mobile communication sequences (for exponents see Table 1). The power-law behavior in A(τ) appears after a short period denoting the reaction time through the corresponding channel and lasts up to 12 hours, capturing the natural rhythm of human activities. For emails in Fig. 2.c (right bottom panels), long-term correlations are detected up to 8 hours, which reflects a typical office-hour rhythm (note that the dataset includes the internal email communication of a university staff). Table 1 Characteristic exponents of the (α) autocorrelation function, (β) bursty number and (γ) inter-event time distribution functions, and the ν memory functions, calculated for the different datasets (see SI) and for the model study Full size table Figure 2 The characteristic functions of human communication event sequences. The P(E) distributions with various Δt time-window sizes (main panels), P(t_ie) distributions (left bottom panels) and average autocorrelation functions (right bottom panels) calculated for the different communication datasets. (a) Mobile-call dataset: the scale-invariant behavior was characterized by power-law functions with the exponent values α, β and γ given in Table 1. (b) Almost the same exponents were estimated for the short-message sequences. (c) Email event sequence with the estimated exponents given in Table 1. A gap in the tail of A(τ) in figure (c) appears due to logarithmic binning and slightly negative correlation values. Empty symbols show the corresponding results for independent sequences. Lanes labeled with s, m, h and d denote seconds, minutes, hours and days, respectively.
Full size image The broad shape of the P(t_ie) and A(τ) functions confirms that human communication dynamics is inhomogeneous and displays non-trivial correlations up to finite time scales. However, after destroying event-event correlations by shuffling the inter-event times in the sequences (see Methods), the autocorrelation functions still show slow, power-law-like decay (empty symbols on the bottom right panels), indicating spurious dependencies. This clearly demonstrates the inability of A(τ) to characterize correlations in heterogeneous signals (for further results see SI). However, a more effective measure of such correlations is provided by P(E). Calculating this distribution for various Δt windows, we find that P(E) shows the scale-invariant behavior P(E) ~ E^(−β) (2) for each of the event sequences, as depicted in the main panels of Fig. 2. Consequently, P(E) captures strong temporal correlations in the empirical sequences and is remarkably different from the P(E) calculated for independent events, which, as predicted by (1), shows exponential decay (empty symbols on the main panels). Exponential behavior of P(E) was also expected from results published in the literature assuming human communication behavior to be uncorrelated 29, 30, 39. However, the observed scaling behavior of P(E) offers direct evidence of correlations in human dynamics, which can be responsible for the heterogeneous temporal behavior. These correlations induce long bursty trains in the event sequence rather than short bursts of independent events. We have found that the scaling of the P(E) distribution is quite robust against changes in Δt for an extended regime of time-window sizes (Fig. 2). In addition, the measurements performed on the mobile-call sequences indicate that the P(E) distribution remains fat-tailed also when it is calculated for users grouped by their activity.
Moreover, the observed scaling behavior of the characteristic functions remains similar if we remove daily fluctuations (for results see SI). These analyses together show that the detected correlated behavior is not an artifact of the averaging method, nor can it be attributed to variations in activity levels or circadian fluctuations. Bursty periods in natural phenomena As discussed above, temporal inhomogeneities are present in the dynamics of several natural phenomena, e.g. in recurrent seismic activities at the same location 19, 20, 21 (for details see Methods and SI). The broad distribution of inter-earthquake times in Fig. 3.a (right top panel) demonstrates the temporal inhomogeneities. The characterizing exponent value γ = 0.7 is in qualitative agreement with results in the literature 23, as γ = 2 − 1/p where p is the Omori decay exponent 22, 23. At the same time, the long tail of the autocorrelation function (right bottom panel) indicates long-range temporal correlations. Counting the number of earthquakes belonging to the same bursty period with Δt = 2…32 hour window sizes, we obtain a broad P(E) distribution (see Fig. 3.a main panel), as observed earlier in the communication sequences, but with a different exponent value β = 2.5 (see Table 1). This exponent value agrees with known seismic properties, as it can be derived as β = b/a + 1, where a denotes the productivity-law exponent 40, while b comes from the well-known Gutenberg-Richter law 41. Note that the presence of long bursty trains in earthquake sequences was already attributed to long temporal correlations by measurements using conditional probabilities 42, 43. Figure 3 The characteristic functions of event sequences of natural phenomena. The P(E) distributions of correlated event numbers with various Δt time-window sizes (main panels), P(t_ie) distributions (right top panels) and average autocorrelation functions (right bottom panels).
(a) Single-station records of Japanese earthquake sequences from 1985 to 1998. The functional behavior is characterized by fitted power-law functions with the corresponding exponents α, β and γ given in Table 1. Inter-event times for P(t_ie) were counted with 10-second resolution. (b) Firing sequences of single neurons with 2-millisecond resolution. The corresponding exponent values are given in Table 1. Empty symbols show the results for independent sequences. Lanes labeled with ms, s, m, h, d and w denote milliseconds, seconds, minutes, hours, days and weeks, respectively. Full size image Another example of naturally occurring bursty behavior is provided by the firing patterns of single neurons (see Methods). The recorded neural spike sequences display correlated and strongly inhomogeneous temporal bursty behavior, as shown in Fig. 3.b. The distributions of the length of neural spike trains are found to be fat-tailed and indicate the presence of correlations between consecutive bursty spikes of the same neuron. Memory process In each studied system (communication of individuals, earthquakes at a given location, or neurons) qualitatively similar behaviour was detected: the single entities either performed low-frequency random events or passed through longer correlated bursty cascades. While these phenomena are very different in nature, there could be some elements of similarity in their mechanisms. We think that this common feature is a threshold mechanism. From this point of view the case of human communication data seems problematic. In fact, generally no accumulation of stress is needed for an individual to make a phone call. However, according to the Decision Field Theory of psychology 44, each decision (including the initiation of communication) is a threshold phenomenon, as the stimulus of an action has to reach a given level for it to be chosen from the enormously large number of possible actions.
As for earthquakes and neuron firings, it is well known that they are threshold phenomena. For earthquakes, the bursty periods at a given location are related to the relaxation of accumulated stress after reaching a threshold 7, 8, 9. In the case of neurons, the firings take place in bursty spike trains when the neuron receives excitatory input and its membrane potential exceeds a given potential threshold 45. The spikes fired in a single train are correlated since they are the result of the same excitation, and their firing frequency encodes the amplitude of the incoming stimuli 46. The correlations between consecutive bursty events can be interpreted as a memory process, allowing us to calculate the probability that the entity will perform one more event within a Δt time frame after it has executed n events in the actual cascade. This probability can be written as p(n) = Σ_{E=n+1}^∞ P(E) / Σ_{E=n}^∞ P(E). (3) Therefore the memory function, p(n), gives a different representation of the distribution P(E). The p(n) calculated for the mobile-call sequence is shown in Fig. 4.a for trains detected with different window sizes. Note that in empirical sequences, for trains with a size smaller than the longest train it is possible to have p(n) = 1, since the corresponding probability would be P(E = n) = 0. At the same time, due to the finite size of the data sequence, the length of the longest bursty train is limited, such that p(n) shows a finite cutoff. Figure 4 Empirical and fitted memory functions of the mobile-call sequence (a) Memory function calculated from the mobile-call sequence using different Δt time windows. (b) The complement 1 − p(n) of the memory function measured from the mobile-call sequence with Δt = 600 seconds and fitted with the analytical curve defined in equation (4) with ν = 2.971. Grey symbols are the original points, while black symbols denote the same function after logarithmic binning.
(c) P(E) distributions measured in real and in modeled event sequences. Full size image We can use the memory function to simulate a sequence of correlated events. If the simulated sequence satisfies the scaling condition in (2), we can derive the corresponding memory function by substituting (2) into (3), leading to p(n) = [n/(n + 1)]^ν (4) with the scaling relation (see SI) β = ν + 1. (5) In order to check whether (5) holds for real systems and whether the memory function in (4) correctly describes the memory in real processes, we compare it to a memory function extracted from an empirical P(E) distribution. We selected the P(E) distribution of the mobile-call dataset with Δt = 600 seconds and derived the corresponding p(n) function. The complement of the memory function, 1 − p(n), is presented in Fig. 4.b, where we show the original function with strong finite-size effects (grey dots) and the same function after logarithmic binning (black dots). Taking equation (4), we fit the theoretical memory function to the log-binned empirical results using the least-squares method with only one free parameter, ν. We find that the best fit offers excellent agreement with the empirical data (see Fig. 4.b and also Fig. 4.a) with ν = 2.971 ± 0.072. Through (5) this would indicate β = ν + 1 ≈ 3.97, close to the value obtained from directly fitting the empirical P(E) distributions in the main panel of Fig. 2.a (for fits of other datasets see SI). In order to validate whether our approximation is correct, we take the theoretical memory function p(n) of the form (4) with parameter ν = 2.971 and generate bursty trains of 10⁸ events. As shown in Fig. 4.c, the scaling of the P(E) distribution obtained for the simulated event trains is similar to the empirical function, demonstrating the validity of the chosen analytical form for the memory function. Figure 5 Schematic definition and numerical results of the model study.
(a) P(E) distributions of the synthetic sequence after logarithmic binning with window sizes Δt = 1…1024. The fitted power-law function has an exponent β = 3.0. (b) Transition probabilities of the reinforcement model with memory. (c) Logarithmically binned inter-event time distribution of the simulated process with a maximum inter-event time. The corresponding exponent value is γ = 1.3. (d) The average logarithmically binned autocorrelation function with a maximum lag τ_max = 10⁴. The function can be characterized by an exponent α = 0.7. Simulation results are averaged over 1,000 independent realizations with parameters µ_A = 0.3, µ_B = 5.0, ν = 2.0, π = 0.1 and T = 10⁹. For the calculation we chose a maximum inter-event time that is large enough not to influence the short-time temporal behavior, while improving the program performance considerably. Full size image Model study As the systems we analysed are of quite different nature, from physical (earthquakes) to social (human communication) and biological (neuron spikes) systems, finding a single mechanistic model to describe them all is impossible. Therefore, our goal is not to reproduce our observations for the different systems in detail, but to identify minimal conditions or common characteristics that may play a role in all of their dynamics and are sufficient to reproduce the observed overall temporal behaviour. Here we define a phenomenological model which integrates these features and study how they are related to each other. Reinforcement dynamics with memory We assume that the investigated systems can be described with a two-state model, where an entity can be in a normal state A, executing independent events with longer inter-event times, or in an excited state B, performing correlated events with higher frequency, corresponding to the observed bursts.
To induce the inter-event times between consecutive events we apply a reinforcement process, based on the assumption that the longer the system waits after an event, the larger the probability that it will keep waiting. Such dynamics shows strongly heterogeneous temporal features, as discussed in 34, 35. For our two-state model system we define a process where the generation of the actual inter-event time depends on the current state of the system. The inter-event times are induced by reinforcement functions that give the probability to wait one time unit longer after the system has already waited a time t_ie since the last event. These functions are defined as f_A,B(t_ie) = [t_ie/(t_ie + 1)]^(µ_A,B), where µ_A and µ_B control the reinforcement dynamics in states A and B, respectively. These functions follow the same form as the previously defined memory function in (4) and satisfy the corresponding scaling relation in (5). If µ_A and µ_B are fairly different, the characteristic inter-event times in states A and B also become fairly different, which induces further temporal inhomogeneities in the dynamics. The actual state of the system is determined by the transition probabilities shown in Fig. 5.b, where, to introduce correlations between consecutive excited events performed in state B, we utilize the memory function defined in equation (4). To be specific, the model is defined as follows: first the system performs an event in a randomly chosen initial state. If the last event was in the normal state A, it waits for a time induced by f_A(t_ie), after which it switches to the excited state B with probability π and performs an event in the excited state, or with probability 1 − π stays in the normal state A and executes a new normal event. In the excited state the inter-event time of the actual event comes from f_B(t_ie), after which the system decides to execute one more excited event in state B with a probability p(n) that depends on the number n of excited events since the last event in the normal state.
Otherwise it switches back to the normal state with probability 1 − p(n). Note that a similar model without memory was already defined in the literature 47. The numerical results predicted by the model are summarized in Fig. 5 and Table 1. We find that the inter-event time distribution in Fig. 5.c reflects strong inhomogeneities, as it takes the form of a scale-free function with an exponent value γ = 1.3, satisfying the relation γ = µ_A + 1. As a result of the heterogeneous temporal behavior with memory involved, we detected spontaneously evolving long temporal correlations, as the autocorrelation function shows a power-law decay. Its exponent α = 0.7 (see Fig. 5.d) also satisfies the relation α + γ = 2 (see SI). The P(E) distribution also shows fat-tailed behavior for each investigated window size, ranging from Δt = 1 to 2¹⁰ (see Fig. 5.a). The overall signal here is an aggregation of long correlated bursty trains and uncorrelated single events. This explains the weak Δt dependence of P(E) for larger window sizes, where more independent events are merged with events of correlated bursty cascades, which induces a deviation of P(E) from the expected scale-free behavior. The P(E) distributions can be characterized by an exponent β = 3.0, in agreement with the analytical result in (5), confirming the presence of correlated bursty cascades. In addition, even with the values of β and γ fixed, the α exponent satisfies the condition α < γ < β, an inequality also observed in the empirical data (see Table 1). Discussion In the present study we introduced a new measure, the number of correlated events in bursty cascades, which detects correlations and heterogeneity in temporal sequences. It offers a better characterization of correlated heterogeneous signals, capturing behavior that cannot be observed from the inter-event time distribution or the autocorrelation function.
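The two-state reinforcement model described above can be sketched in a few lines of Python (one literal reading of the rules, not the authors' implementation; the unit-step waiting loop, the capped waiting time and the fixed initial state A are our simplifications):

```python
import random

def simulate(n_events=10_000, mu_a=0.3, mu_b=5.0, nu=2.0, pi=0.1, seed=1):
    """Generate event times of the two-state model with memory.
    State A = normal (long waits), state B = excited (bursty trains)."""
    rng = random.Random(seed)

    def wait(mu, t_max=10_000):
        # Reinforcement rule: having already waited t units, wait one
        # more unit with probability f(t) = (t / (t + 1))**mu.
        t = 1
        while t < t_max and rng.random() < (t / (t + 1)) ** mu:
            t += 1
        return t

    times, t, state, n_exc = [0], 0, 'A', 0
    for _ in range(n_events):
        if state == 'A':
            t += wait(mu_a)             # long normal inter-event time from f_A
            if rng.random() < pi:       # switch to the excited state
                state, n_exc = 'B', 1   # first excited event of a new train
        else:
            t += wait(mu_b)             # short excited inter-event time from f_B
            n_exc += 1
        times.append(t)
        # after an excited event, continue the train with probability
        # p(n) = (n / (n + 1))**nu, otherwise fall back to the normal state
        if state == 'B' and rng.random() >= (n_exc / (n_exc + 1)) ** nu:
            state = 'A'
    return times
```

Feeding the resulting `times` into a train-size count with various Δt windows should reproduce the qualitative picture of Fig. 5: a fat-tailed P(t_ie) from the reinforcement waiting rule and a fat-tailed P(E) from the memory function.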
The discussed strongly heterogeneous dynamics was documented in a wide range of systems, from human dynamics to natural phenomena. The time evolution of these systems was found to be driven by temporal correlations that induce scale-invariant distributions of the burst lengths. This scale-free feature holds for each studied system and can be characterized by different system-dependent exponents, indicating a new universal property of correlated temporal patterns emerging in complex systems. We found that the bursty trains can be explained in terms of memory effects, which can account for the heterogeneous temporal behavior. In order to better understand the dynamics of temporally correlated bursty processes at the single-entity level, we introduced a phenomenological model that captures the common features of the investigated empirical systems and helps us understand the role they play during the temporal evolution of heterogeneous processes. Methods Data processing To study correlated human behavior we selected three datasets containing time-stamped records of communication through different channels for a large number of individuals. For each user we extracted the sequence of outgoing events, as we are interested in the correlated behavior of single entities. The datasets we have used are as follows: (a) A mobile-call dataset from a European operator covering ~325×10⁶ voice-call records of ~6.5×10⁶ users during 120 days 48. (b) Text-message records from the same dataset consisting of 125.5×10⁶ events between the same number of users. Note that, to consider only trusted social relations, these events were executed between users who mutually called each other at least once during the examined period. Consecutive text messages of the same user with waiting times smaller than 10 seconds were considered a single multipart message 49; hence the P(t_ie) and A(τ) functions do not take values smaller than 10 seconds in Fig. 2.b.
(c) Email communication sequences of 2,997 individuals including 20.2×10⁴ events during 83 days 26. From the email sequence the multicast emails (consecutive emails sent by the same user to many others with inter-event time 0) were removed in order to study the temporally separated communication events of individuals. To study earthquake sequences we used a catalog that includes all earthquake events in Japan with magnitude larger than two between 1st July 1985 and 31st December 1998 50. We considered each recorded earthquake as a unique event, regardless of whether it was a main shock or an aftershock. For the single-station measurement we collected a time-ordered list of earthquakes with epicenters detected in the same region 7, 21 (for other event-collection methods see SI). The resulting data consist of 198,914 events at 238 different regions. The utilized neuron-firing sequences consist of 31,934 outgoing firing events of 1,052 single neurons, which were collected with 2-millisecond resolution from rat hippocampal slices using fMCI techniques 51, 52. Random shuffling of inter-event times of real sequences To generate independent event sequences from real data in Figs. 2 and 3 (empty symbols), we randomly shuffled the inter-event times of entities (persons, locations and neurons), allowing t_ie values to be exchanged between any entities but keeping the original event number of each entity unchanged. This way the original inter-event time and node-strength distributions remain unchanged, but we receive null-model sequences in which all temporal correlations are destroyed. The aim of this calculation was twofold: to show on real data that the autocorrelation function still scales as a power law after temporal correlations are removed, and that the P(E) distribution decays exponentially for uncorrelated signals. The presented P(E) distributions were calculated with one Δt window size to demonstrate this behaviour.
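The shuffling null model can be sketched as follows (our own illustration of the procedure; the function name and list-of-lists representation of per-entity event times are invented for the example):

```python
import random

def shuffle_inter_event_times(sequences, seed=0):
    """Pool the inter-event times of all entities, shuffle them globally,
    then rebuild each entity's event sequence with its original event count.
    Per-entity event numbers (node strengths) and the global inter-event
    time distribution are preserved; temporal correlations are destroyed."""
    rng = random.Random(seed)
    pool = [t1 - t0 for seq in sequences for t0, t1 in zip(seq, seq[1:])]
    rng.shuffle(pool)
    shuffled, i = [], 0
    for seq in sequences:
        new_seq = [seq[0]]           # keep each entity's first event time
        for _ in range(len(seq) - 1):
            new_seq.append(new_seq[-1] + pool[i])
            i += 1
        shuffled.append(new_seq)
    return shuffled
```

Running the P(E) and A(τ) measurements on the output of such a shuffle gives the "empty symbol" baselines: an exponentially decaying P(E), and an autocorrelation function that can still show spurious slow decay.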
| Several complex systems live through periods of short bursts of high activity followed by long uneventful intermissions. This phenomenon, called burstiness, can be modelled and predicted with mathematical algorithms. Research by Dr Márton Karsai of the Aalto University Department of Biomedical Engineering and Computational Science now shows that burstiness has universal features in very different systems. Karsai and his collaborators – Dean Kimmo Kaski of the Aalto University School of Science, FidiPro Professor at Aalto University János Kertész, and the world-renowned physicist and network theorist Professor Albert-László Barabási – studied burstiness in human mobile and email communication, in neuron spike trains, and in seismic activity in earthquakes. The results have recently been published in Nature Scientific Reports. "The method we developed helped us to highlight a novel universal feature of bursty behaviour. This is one step beyond the state-of-the-art assumptions regarding the phenomena of burstiness," says Karsai, assessing the significance of the group's work. The research focuses on the dynamic phenomenon of burstiness, that is, the mechanisms behind temporal fluctuations in levels of activity. We make, for instance, several phone calls and send many emails in short spurts of time, and otherwise not so much. Neurons fire in spike trains, and earthquakes occur in similar temporal patterns. There are connections not only between consecutive events but also among events within bursty periods. The common feature shared by the studied systems is that, beyond being bursty, the bursty events evolve in long trains of events rather than in pairs – contrary to what existing modelling methods have led us to assume. "We observed that bursts are not independent but rather clustered, and they evolve in long bursty trains, which contain several correlated events.
The universality of the analysed systems comes from the fact that the size distribution of these trains scales very similarly in human communication, neuron firing and earthquakes," says Karsai, describing the group's results. All the systems share both a threshold mechanism of a sort and memory effects within their processes. The earth begins to shake when accumulated stress relaxes, and one quake can trigger several aftershocks. Neurons fire in consecutive spike trains when they receive enough excitatory stimuli. Humans make choices between countless virtual options; one phone call or email often turns into many. "These correlations can be interpreted as a very simple memory process where the actual state of the system depends not only on the previous bursty event but also on all the other events that have evolved in the actual burst train," points out Karsai. "We hope that our approach will help to disclose other unknown features of correlated heterogeneous temporal behaviour. The methodology can be applied in many different fields of science, engineering and business. For instance, by predicting human communication behaviour, one can better design the usage of resources in telecommunication or help service providers make better business plans." | www.nature.com/srep/2012/12050 … /full/srep00397.html
Physics | Study reveals principles behind electron heating in weakly ionized collisional plasmas | Sanghoo Park et al. Electron heating in rf capacitive discharges at atmospheric-to-subatmospheric pressures, Scientific Reports (2018). DOI: 10.1038/s41598-018-27945-6 Sanghoo Park et al. Electron Information in Single- and Dual-Frequency Capacitive Discharges at Atmospheric Pressure, Scientific Reports (2018). DOI: 10.1038/s41598-018-25892-w Journal information: Scientific Reports | http://dx.doi.org/10.1038/s41598-018-27945-6 | https://phys.org/news/2018-09-reveals-principles-electron-weakly-ionized.html | Abstract Electron heating is a fundamental and multidisciplinary phenomenon in partially ionized gases, from the planet’s ionosphere to laboratory-scale plasmas. Plasmas produced at ambient or reduced pressures have recently shown potential for scientific and industrial applications. However, electron heating, which is strongly coupled to the physicochemical properties of these plasmas, has been poorly understood. We experimentally found the rapid structural transition from non-local to local electron heating in collisional radio-frequency discharges at atmospheric-to-subatmospheric pressures. As the gas pressure decreased from 760 to 200 Torr, the time-averaged electron density increased from 1.3 × 10 12 to 1.3 × 10 13 cm −3 , and the electron temperature decreased from 2.5 to 1.1 eV at the maximum allowable discharge current in the abnormal α-mode in the plasma bulk. The spatiotemporal evolution of the electron temperature clearly shows that the electron temperature increases uniformly throughout the bulk plasma region during sheath expansion and collapse at 760 Torr, but the electron heating weakens with sheath collapse as the gas pressure decreases. Introduction Most natural phenomena, to our knowledge, are associated with gas pressure and arise from pressure changes. 
For ionized gases, including those in the earth’s atmosphere and laboratory-scale plasmas, it is also one of the most crucial parameters that influence the ionization process because it is related to electron-neutral collisional coupling and electron mobility. In the earth’s ionosphere, where the gas pressure is under 10 −6 bar, free electrons and ions readily respond to the electromagnetic field and affect geophysical phenomena, which do not appear in the lower atmosphere 1 , 2 , 3 . Advances in knowledge through scientific studies on electron kinetics and heating at a given pressure have unveiled anomalous phenomena and their mechanisms. Alongside ionized gases in the atmosphere, many studies have investigated the pressure dependence of the electron characteristics of laboratory plasmas, and previous works have shown the great influence of gas pressure on plasma properties 4 , 5 . However, in contrast to low-pressure plasmas (e.g., under 1 mTorr), plasma properties in the high-pressure regime, including atmospheric pressure, are not well understood; indeed, electron diagnostics for such plasmas still remain a significant challenge. Due to the different plasma characteristics at different pressures in the subatmospheric-to-atmospheric pressure range, characterizing the plasma over such a pressure range is a prerequisite for understanding the underlying principles of plasmas and for industrial plasma applications 6 . One example is the dielectric barrier discharge (DBD). Because of the potential implementation of DBD actuators as aerodynamic devices in gas turbines and airplanes, the effects of gas pressure on the characteristics of DBD actuators have been actively investigated over the expected range of pressures 7 , 8 . Over the past several decades, plasmas generated at ambient pressure have received great attention as a multidisciplinary topic due to their complexity, and there has been a dramatic increase in plasma applications.
Recent publications clearly demonstrate possible utilizations of atmospheric-pressure plasmas and their outstanding results 9 , 10 , 11 , 12 . In most applications, various plasma sources are used as generators of reactive chemical species because the contribution of plasmas to processing targets is dominated by chemical reactions. Moreover, short-lived reactive species, which are key players in most plasma applications, are remotely produced and controlled via the photolysis or post-discharge reactions of long-lived species 13 , 14 , 15 , 16 . Thus, direct contact of the target with the plasma or releasing the plasma into ambient air is not a key requirement in certain applications, and attempts to utilize plasmas in the subatmospheric pressure range have recently increased. However, as previously mentioned, despite the remarkable demand and interest in plasmas at atmospheric-to-subatmospheric pressures, not many studies have been performed to determine the effect of gas pressure on the electron kinetics, which are directly coupled to the chemical reactions in plasmas near atmospheric pressure. This is mainly because suitable and available electron diagnostics for these plasmas are lacking. Here, we report the electron properties, i.e., electron density ( n e ) and temperature ( T e ), and electron kinetics of radio-frequency (rf) argon capacitive discharges in a pressure range from 200 to 760 Torr. To understand the electron properties under different pressure condition, we have experimentally investigated the electron heating structures during an rf cycle based on the measured spatiotemporal distribution of the neutral bremsstrahlung and electron temperature. The results clearly demonstrate the rapid structural transition from non-local to local electron heating; the electron heating during the sheath collapse weakens as the pressure decreases. 
Results The root mean square (rms) current versus the rms voltage ( I rms - V rms ) and the rms current versus the power dissipated in the plasma ( I rms - P dis ) curves are shown in Fig. 1(a,b) , respectively. As all the discharges were operated in the abnormal α-mode, V rms and P dis are almost linearly proportional to I rms . As presented in the figure, the accessible ranges of V rms and P dis decrease with decreasing pressure. The slopes of the I rms - V rms curves slightly increase with decreasing pressure. According to the one-dimensional (1-D), simple resistor-capacitor series circuit model, which is an acceptable model for atmospheric-pressure capacitive discharges 6 , the slope of the I rms - V rms curve is roughly \(1/\{2{\rm{\pi }}(13.56[{\rm{MHz}}])(1.52{\varepsilon }_{0}S/d)\}\) , where ε 0 is the vacuum permittivity, S is the cross-sectional area of the plasma, and d is the sheath thickness. Therefore, an increase in the slope as the pressure decreases indicates an increase in the sheath thickness. Consequently, because the voltage drop across the sheath increases as the sheath thickness increases at the same discharge current, the rf power dissipated in the plasma decreases as the gas pressure decreases, as seen in Fig. 1(b) . An analytical solution for a collisional sheath 17 gives the relation between the gas pressure, p , and the sheath thickness, s m . The thickness of an ion-dominated collisional sheath is expressed as $${s}_{{\rm{m}}}=1.95{(\frac{2{\delta }_{{\rm{i}}}}{{{\rm{\pi }}}^{2}{\delta }_{{\rm{D}}}})}^{1/2}{\{\frac{J}{e(2{\rm{\pi }}f){n}_{0}}\}}^{3/2},$$ (1) where \({\delta }_{{\rm{i}}}[{\rm{cm}}]={(300\times p[{\rm{Torr}}])}^{-1}\) is the ion mean free path for an argon discharge, δ D is the electron Debye length, J is the rf current density passing through the sheath, e is the elementary charge, f is the driving frequency, and n 0 is the electron density at the sheath edge.
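Since δ_i ∝ 1/p and the other factors in Eq. (1) are pressure-independent at fixed current density, the sheath thickness scales as p^(−1/2). A minimal numerical sketch of this scaling (the plasma parameters below are illustrative placeholders, not the measured values):

```python
import math

# Sketch of the collisional-sheath scaling in Eq. (1); parameter values
# here are assumed round numbers for illustration only.
e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
kB_Te = 2.5 * e      # electron temperature, J (assume 2.5 eV)
n0 = 1e18            # electron density at the sheath edge, m^-3 (assumed)
J = 1e3              # rf current density, A/m^2 (assumed)
f = 13.56e6          # driving frequency, Hz

def sheath_thickness(p_torr):
    delta_i = 1.0 / (300.0 * p_torr) * 1e-2          # ion mean free path, m
    delta_D = math.sqrt(eps0 * kB_Te / (n0 * e**2))  # electron Debye length, m
    return 1.95 * math.sqrt(2 * delta_i / (math.pi**2 * delta_D)) \
           * (J / (e * 2 * math.pi * f * n0))**1.5

# Eq. (1) gives s_m proportional to p^(-1/2), so lowering the pressure
# from 760 to 200 Torr widens the sheath by sqrt(760/200) ~ 1.95 --
# consistent with the roughly twofold increase reported in Fig. 1(c).
ratio = sheath_thickness(200) / sheath_thickness(760)
print(round(ratio, 2))  # -> 1.95
```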
This solution indicates that the sheath thickness is proportional to p −1/2 . The total sheath thickness, which was evaluated by the circuit model using the phase difference between the voltage and current and other electrical factors, is plotted as a function of I rms in Fig. 1(c) . The maximum sheath thickness of 564 μm at 200 Torr is approximately two times larger than the minimum sheath thickness of 290 μm at 760 Torr. As intuitively and analytically expected, the sheath thickness of an argon capacitive discharge increases with the decreasing pressure at a constant discharge current. Figure 1 Electrical characteristics of argon rf discharge at 200–760 Torr. ( a ) I rms - V rms and ( b ) I rms - P dis characteristics; ( c ) the sheath thickness of argon capacitive discharges at various pressures. The leftmost and rightmost data points of the 200, 300, 400 Torr discharges indicate the minimum and maximum attainable conditions, respectively, in the abnormal α-mode, whereas the maximum operation condition of the α-mode for the 760 Torr discharge are not presented due to the limited power supply capacity. Full size image In the following, the relationship between the gas pressure and electron properties at atmospheric-to-subatmospheric pressures is discussed. First, the cut-away views of the time-averaged T e and n e profiles relative to the electrode are presented in Fig. 2(a,b) , respectively (two-dimensional distribution of neutral bremsstrahlung and T e can be found as Supplementary Fig. S1 ). The measured T e profiles near each electrode are partially inconsistent with the numerical modelling results 18 , 19 . The profiles presented in Fig. 2(a) show that the maximum T e exists slightly away from the electrodes, and T e decreases towards both the electrodes and bulk. 
In comparison, the simulation results show that the T e in the plasma sheath (from the electrode surface to the plasma-sheath boundary; the location of the highest T e is considered to be the plasma-sheath boundary in this paper) is the highest and generally higher than 4 eV. This discrepancy in the sheath region may be attributed to the non-Maxwellian nature of the electron energy distribution, particularly near the electrodes. The uncertainty in the electron diagnostics used in this work increases when the electron energy distribution function distorts from a Maxwellian distribution because the neutral bremsstrahlung emissivity was calculated based on a Maxwellian electron energy distribution. In the plasma sheath, a non-Maxwellian distribution can be caused by cold electrons; these are produced within the sheath region through the ionization processes initiated by the secondary electrons emitted from the metallic electrode or dielectric surfaces. Thus, the estimated n e and T e in the vicinity of the electrodes [see Fig. 2(a,b) ] may possibly deviate from the real values. Figure 2 Dependence of the electron characteristics on the gas pressure. Profiles of time-averaged ( a ) T e and ( b ) n e along the axis perpendicular to the electrodes; the ( c ) electron density and temperature at the center of the gas gap in the profiles as a function of gas pressure. All cases at each pressure correspond to the rightmost data of the characteristic curves in Fig. 1 . The colored arrows in ( a , b ) indicate the sheath thickness given in Fig. 1(c) . Full size image At atmospheric pressure, the n e and T e are more or less constant throughout the bulk plasma, and the values are approximately 10 12 cm −3 and 2.5 eV, respectively, except near the electrodes. The rapid change in the n e and T e near the electrode is due to the frequent collisions between the electrons and neutrals that hinder the energetic electrons from moving the electrode gap distance. 
As the pressure decreases, the mean free path of the electron increases, resulting in a smooth profile over the entire gas gap. As discussed in the foregoing paragraph, the distance of the highest T e from the electrode, which corresponds to the sheath width, increases with decreasing pressure [see Fig. 1(c) ]. A noticeable difference between the profiles of n e and T e is observed in Fig. 2(a,b) as the gas pressure changes, and the difference is presented in Fig. 2(c) . The n e and T e plotted in the figure represent the values at the center of the gas gap for each pressure. As depicted in the figure, T e decreases (from 2.5 to 1.1 eV) while n e increases (from 1.3 × 10 12 to 1.3 × 10 13 cm −3 ) with decreasing pressure. This relation of n e and T e with the pressure can be intuitively interpreted as follows. As the gas pressure decreases, the electrons and metastable Ar atoms that are produced at the plasma-sheath boundary can diffuse quickly enough to overcome the rf oscillating field due to the increased mean free path, thereby they move further towards the electrode and plasma bulk region. Additionally, near atmospheric pressure, the dominant electron heating mechanism is ohmic (collisional) heating due to frequent particle collisions. Thus, the electron heating is strongly governed by the electric field; i.e., the n e and T e distributions are affected by the electric field distribution and vice versa. In other words, the electron production in the field-enhanced region, which is built up by space charges in the plasma-sheath boundary, weakens with decreasing pressure during the sheath collapse. As a result, the n e profile becomes a convex profile with a maximum in the plasma bulk as the pressure decreases instead of a weakly concave profile with two maxima near the electrodes. By considering the power balance, the T e profiles can be estimated from the n e profiles. 
The number density of the electrons increases with decreasing pressure in the plasma bulk, and consequently, the power absorbed per electron decreases in the plasma bulk. For further insight into the electron kinetics and heating structures in the rf oscillating field, the space- and phase-resolved profiles of the T e and Ar I atomic line (2p → 1s, 696.5 nm and 706.7 nm) emission were obtained using an intensified charge-coupled device (iCCD) camera with ultra-fast gating (see Materials and Methods Section and Supplementary Materials for details on the imaging technique for electron heating structure). The spatiotemporal evolution of the continuum radiation, T e , the Ar I line emission in argon capacitive discharges operated at 200, 300, 400, 760 Torr are shown in each column of Fig. 3 . Figure 3 Nanosecond-resolved visualization of the electron heating structure. Spatiotemporal evolution of 514.5-nm continuum radiation (1st column), T e (2nd column), Ar I emission (3rd column) at ( a ) 760 Torr, ( b ) 400 Torr, ( c ) 300 Torr, ( d ) 200 Torr. The intensities of the continuum radiation and Ar I emission are normalized by the maximum intensity, and the unit of T e is eV. Color bars are located on the right side of each image. Full size image The neutral bremsstrahlung images demonstrate the periodic behaviour of the plasma-sheath boundary and the electron heating structure for all the gas pressures, and the T e and Ar I emission profiles explicitly show the pressure dependence of the spatiotemporal behaviour of the electrons. As shown in Fig. 3(a) , the 514.5-nm continuum emission and T e increase during the sheath expansion and retreat phases at 760 Torr, and their profiles have symmetric and non-local structures with respect to the center of the gas gap, which indicates simultaneous electron heating near both electrodes. A similar distribution is found in the spatiotemporal evolution of the Ar I emission, which is shown in the rightmost image in Fig. 3(a) . 
Under high-pressure conditions, electron heating is due to the formation of a field-enhanced region caused by space electrons and ions at the retreating sheath edge. Due to the high collision rate, the electron motion is limited, and as a result, the localized electric field induced by the space electrons accelerates the electrons towards the electrode during the sheath collapse. Although heating is negligible at the edge of the retreating sheath in low-pressure discharges 20 , a similar heating mechanism was observed when electrons were subjected to increased collisions in the presence of molecular gases 21 . However, electron heating is accompanied by a field reversal during the retreating sheath in low-pressure discharges 21 , whereas there is no field reversal in the retreating sheath of rf capacitive discharge at atmospheric pressure 22 . As shown in Fig. 3 and detailed in Supplementary Fig. S2 , the emission intensity and T e during the sheath collapse become lower than those during the sheath expansion as the pressure decreased because charged species, including electrons, can diffuse sufficiently fast during the half rf period. A numerical simulation solving the 1-D fluid equations 22 showed that the heating in the neighbourhood of the retreating sheath decreases rapidly with decreasing pressure, which is quite consistent with our observation. One noticeable feature is the pressure dependence of the electron heating profile in the direction perpendicular to the electrodes. As noticed in the time-averaged T e distribution [see Supplementary Fig. S1(e–h) and Fig. 2(a) ], the electron temperature in the plasma bulk region decreases with decreasing pressure, resulting in a crater-like profile shape. As discussed above, the ohmic heating caused by the field-enhanced region built up by the space charge is depressed with decreasing pressure, resulting in the T e decreasing. As shown in the leftmost column of Fig. 
3 , the width of the weak-intensity area corresponding to the electron-depleted regions near the electrodes (sheath edges) increases with decreasing pressure. This result is consistent with the relation between the sheath thickness and pressure obtained from the time-averaged T e profiles and I rms - V rms characteristics, as seen in Supplementary Fig. S1 and Fig. 1(c) . Discussion Our findings show that the pressure change from atmospheric to subatmospheric pressures results in a rapid transition of electron heating in partially ionized gases. The model experiment was based on a capacitively coupled argon plasma at 200–760 Torr. The sheath thickness, which was estimated by both the electric circuit model and experimental T e images, shows an increasing trend with decreasing pressure. As the gas pressure decreased, the time-averaged n e increased from 1.3 × 10 12 to 1.3 × 10 13 cm −3 at the maximum allowable discharge current in the abnormal α-mode, while T e decreased from 2.5 to 1.1 eV. We have clearly demonstrated that the electron heating structures of the discharges are significantly different in the pressure range from 200 to 760 Torr. The electron temperature increases uniformly throughout the plasma bulk region during the sheath expansion and collapse at 760 Torr. However, the spatiotemporal evolution of the continuum radiation (neutral bremsstrahlung) and T e indicates that the local electron heating during the sheath collapse, which is ohmic heating caused by space charges, weakens as the gas pressure decreases. Even at this very moment, electron heating occurs continuously in ionized gases and shapes natural phenomena. These results can serve as a basic and informative reference for future scientific research. Moreover, this report provides fundamental knowledge of electron heating in partially ionized gases across applications ranging from plasma processing and astrophysics to space propulsion.
Materials and Methods Information about the plasma apparatus The present study was performed using a plasma chamber designed to facilitate optical diagnostics and plasma generation in the pressure range from 200 to 760 Torr. A schematic illustration of the capacitive plasma source and relevant system is shown in Fig. 4(a) . Two rectangular electrodes with a plasma-facing area of 138 × 60 mm 2 were installed in parallel inside the chamber and cooled by city water to maintain the electrode temperature at 25°C. A sinusoidal 13.56-MHz rf power supply (RFPP RF10S) was connected to the bottom electrode through an impedance matching circuit, and the upper electrode was grounded. A 1.5-mm thick alumina plate was used to cover the entire surface area of the powered electrode as a dielectric barrier. The gap distance between the bare upper electrode and the alumina plate was fixed at 4 mm for all experiments. To obtain the electrical characteristics of the discharges, a wide-band voltage probe (Tektronix P6015A) and a current probe (Tektronix TCP202) were used with an oscilloscope (Tektronix TDS3012B). The following procedure was used to produce plasma. After pumping to near 20 mTorr using a rotary pump, 99.999% purity argon gas was supplied at 1.2 slpm (standard liters per minute) into the chamber through a mass flow controller (MKS 1179A). This argon purge was performed several times before every experiment; it was repeated until the electrical characteristics of the plasma reached steady state. Subsequently, the argon gas was continuously supplied into the chamber throughout the experiment at 1.2 slpm, and the gas pressure was monitored by a vacuum gauge (MKS 626A) and controlled by adjusting a needle valve installed at the pump inlet. Throughout the experiment, the gas supply and pumping systems were continuously operated to maintain the gas pressure at a specific level.
Figure 4 Plasma apparatus and allowable plasma characteristics for neutral bremsstrahlung-based electron diagnostics. ( a ) Schematic of a plasma chamber and relevant experimental setup for producing a parallel-plate capacitive discharge in the pressure range 200–760 Torr. ( b ) Fraction of the neutral bremsstrahlung emissivity, \({\kappa }_{{\rm{ea}}}/({\kappa }_{{\rm{ea}}}+{\kappa }_{{\rm{ei}}}^{{\rm{ff}}}+{\kappa }_{{\rm{ei}}}^{{\rm{fb}}})\) , with 3.0-eV T e at 300 nm as a function of the gas pressure and electron density. Neutral bremsstrahlung-based electron diagnostics is valid under the conditions in the white region. Full size image Electron diagnostics based on electron-neutral bremsstrahlung Electron diagnostics, which is based on neutral bremsstrahlung, was used in this work. Continuum radiation emitted from weakly ionized gases mainly originates from electron-neutral interactions, i.e., neutral bremsstrahlung, and its emissivity ( κ ea ) contains electron information 23 , 24 , 25 , 26 . Because the contributions of other continuum radiation sources, electron-ion (free-free) bremsstrahlung \(({\kappa }_{{\rm{ei}}}^{{\rm{ff}}})\) and (free-bound) recombination \(({\kappa }_{{\rm{ei}}}^{{\rm{fb}}})\) , to the emissivity in the UV and visible range vary with the plasma driving conditions, particularly the gas pressure, κ ea dominant conditions should be assured. A simple calculation using equations (1–3), which were described in the previous paper 23 , with T e = 3 eV, n e = n i , and wavelength dependent Biberman factors indicates that \({\kappa }_{{\rm{ea}}}\gg {\kappa }_{{\rm{ei}}}^{{\rm{ff}}}+{\kappa }_{{\rm{ei}}}^{{\rm{fb}}}\) at 300 nm when n e / n a < 10 −3 [ n i and n a are the ion and neutral (gas) density], which is the case for most low-temperature plasmas at subatmospheric-to-atmospheric pressure. 
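The κ_ea-dominant criterion n_e/n_a < 10^(−3) can be checked directly for the discharges reported here, using the ideal-gas law for the neutral density (the 300 K gas temperature is an assumed round number, not a measured value):

```python
# Quick check that neutral bremsstrahlung dominates (n_e/n_a < 1e-3)
# for the reported operating points; gas temperature is assumed.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def neutral_density(p_torr, T_gas=300.0):
    """Ideal-gas neutral density in cm^-3 (1 Torr = 133.322 Pa)."""
    return (p_torr * 133.322) / (K_B * T_gas) * 1e-6

# Reported time-averaged n_e (cm^-3) at 760 and 200 Torr.
for p, ne in [(760, 1.3e12), (200, 1.3e13)]:
    ratio = ne / neutral_density(p)
    print(p, f"{ratio:.1e}", ratio < 1e-3)  # orders of magnitude inside the valid range
```

Both operating points give n_e/n_a well below 10^(−3), so the diagnostics remain valid across the whole pressure range studied.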
A κ ea dominant range, i.e., \({\kappa }_{{\rm{ea}}}/({\kappa }_{{\rm{ea}}}+{\kappa }_{{\rm{ei}}}^{{\rm{ff}}}+{\kappa }_{{\rm{ei}}}^{{\rm{fb}}}) \sim \,1\) , corresponding to valid plasma conditions for neutral bremsstrahlung-based diagnostics, can be seen in the color scale \({\kappa }_{{\rm{ea}}}/({\kappa }_{{\rm{ea}}}+{\kappa }_{{\rm{ei}}}^{{\rm{ff}}}+{\kappa }_{{\rm{ei}}}^{{\rm{fb}}})\) in Fig. 4(b) . The n e profile in Fig. 2(b) was estimated based on the 514.5-nm neutral bremsstrahlung and T e profiles; the neutral bremsstrahlung emissivity can be expressed as \({\kappa }_{{\rm{ea}}}(\lambda )={n}_{{\rm{e}}}{n}_{{\rm{a}}}C\times f({T}_{{\rm{e}}})\) , and the electron density is \({n}_{{\rm{e}}}={\kappa }_{{\rm{ea}}}(\lambda )/{n}_{{\rm{a}}}C/f({T}_{{\rm{e}}})\) , where C is a constant; all the parameters in the right-hand side of the equation are known. Time-averaged and Time-resolved 2-D electron temperature measurement The technique for imaging 2-D distribution of T e is found in our previous papers 27 , 28 , and practical considerations of this technique are provided in Supplementary Materials. For measuring the time-averaged T e profiles, the continuum radiation at two different wavelengths were acquired using a combination of optical interference filters having ultra-narrow transmittances with center wavelengths of 514.5 nm and 632.8 nm and an intensified charge-coupled device (iCCD) camera (Andor DH312T) with a 0.5-s exposure. Two hundred shots were averaged for a single image to reduce the instrumental noise. Using the same materials, spatiotemporally resolved T e profiles were obtained as follows. Phase-resolved sequential images of the continuum radiation at 514.5 nm and 632.8 nm were obtained with a gate width of 6 ns on the iCCD camera, and the interval between two time-adjacent shots was 1 ns (see Fig. S4 ). All shots were acquired with a 2-s exposure time, and five shots were averaged to produce a single image. 
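The two-wavelength scheme rests on the fact that the 514.5/632.8 nm emissivity ratio depends only on T_e, so T_e can be recovered by inverting the measured ratio, after which n_e follows from the absolute emissivity as in the text. A sketch of that inversion, with a toy spectral model g() standing in for the actual tabulated electron-argon emissivity:

```python
import math

def g(lam_nm, Te_eV):
    # Placeholder spectral shape -- a real analysis uses tabulated
    # electron-argon bremsstrahlung emissivities, not this toy model.
    return math.exp(-1240.0 / lam_nm / Te_eV) / lam_nm**2

def invert_Te(ratio, lam1=514.5, lam2=632.8, lo=0.5, hi=10.0):
    """Bisection for Te (eV) such that g(lam1,Te)/g(lam2,Te) == ratio."""
    f = lambda Te: g(lam1, Te) / g(lam2, Te) - ratio
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Round-trip check: synthesize a ratio at Te = 2.5 eV, then recover it.
Te_true = 2.5
ratio = g(514.5, Te_true) / g(632.8, Te_true)
Te = invert_Te(ratio)
print(round(Te, 2))  # -> 2.5

# With Te known, n_e = kappa_ea / (n_a * C * f(Te)), as in the text.
```

Applied pixel by pixel to the two filtered iCCD images, this yields the 2-D T_e maps, and the 514.5-nm image then gives the n_e profile.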
The 452-kHz trigger signal for the iCCD camera, which is 1/30 of 13.56 MHz, was provided using a signal generator (Agilent 33512B), and the signal was synchronized with the rf power supply. A single shot was captured using the Integrate-on-chip mode of the iCCD camera, in which charges were accumulated 9.04 × 10 5 (452-kHz gate signal × 2-s exposure time) times on the CCD. By integrating the emission profiles along the direction parallel to the electrodes, the spatiotemporal evolutions of the continuum radiation were obtained, which are presented in the leftmost column in Fig. 3 . Through the same procedure, the Ar atomic emissions, which are shown in the rightmost column in Fig. 3 , were obtained using an optical interference filter with a center wavelength transmittance at 700 nm and a full width at half maximum of 25 nm. Data availability The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. | A KAIST research team successfully identified the underlying principles behind electron heating, which is one of the most important phenomena in plasmas. As electron heating determines a wide range of physical and chemical properties of plasmas, this outcome will allow relevant industries to extend and effectively customize a range of plasma characteristics for their specific needs. Plasma, frequently called the fourth state of matter, can be formed by artificially energizing gases in the standard temperature (25°C) and pressure (1 atm) range. Among the many types of plasma, atmospheric-pressure plasmas have been gaining a great deal of attention due to their unique features and applicability in various scientific and industrial fields.
Because plasma characteristics strongly depend on gas pressure in the sub-atmospheric to atmospheric pressure range, characterizing the plasma at different pressures is a prerequisite for understanding the fundamental principles of plasmas and for their industrial applications. In that sense, information on the spatio-temporal evolution of the electron density and temperature is very important because various physical and chemical reactions within a plasma arise from electrons. Hence, electron heating has been an interesting topic in the field of plasma physics. Because collisions between free electrons and neutral gases are frequent under atmospheric-pressure conditions, there are physical limits to measuring the electron density and temperature in plasmas using conventional diagnostic tools, and thus the principles behind free electron heating could not be experimentally revealed. Moreover, the lack of information on this key parameter of electron heating, and on methods to control it, limits efforts to improve the reactivity and applicability of such plasmas. Figure 2. Nanosecond-resolved visualization of electron heating. Spatiotemporal evolution of neutral bremsstrahlung at 514.5 nm. Credit: The Korea Advanced Institute of Science and Technology (KAIST) To address these issues, Professor Wonho Choe and his team from the Department of Nuclear and Quantum Engineering employed neutral bremsstrahlung-based electron diagnostics in order to accurately examine the electron density and temperature in target plasmas. In addition, a novel imaging diagnostic for the two-dimensional distribution of electron information was developed. Using the diagnostic technique they developed, the team measured the nanosecond-resolved electron temperature in weakly ionized collisional plasmas, and they succeeded in revealing the spatiotemporal distribution and the fundamental principle involved in the electron heating process.
The team successfully revealed the fundamental principle of the electron heating process under atmospheric to sub-atmospheric pressure (0.25–1 atm) conditions by experimentally measuring the spatiotemporal evolution of the electron temperature. Their findings on the underlying behaviour of free electrons in weakly ionized collisional plasmas will contribute to advancing plasma science and its commercial applications. Professor Choe said, "The results of this study provide a clear picture of electron heating in weakly ionized plasmas under conditions where collisions between free electrons and neutral particles are frequent. We hope this study will be informative and helpful in utilizing and commercializing atmospheric-pressure plasma sources in the near future." Articles related to this research, led by Research Professor Sanghoo Park, were published in Scientific Reports on May 14 and July 5. | 10.1038/s41598-018-27945-6 |
Earth | Earliest human remains in eastern Africa dated to more than 230,000 years ago | Céline Vidal, Age of the oldest known Homo sapiens from eastern Africa, Nature (2022). DOI: 10.1038/s41586-021-04275-8. www.nature.com/articles/s41586-021-04275-8 Journal information: Nature | http://dx.doi.org/10.1038/s41586-021-04275-8 | https://phys.org/news/2022-01-earliest-human-eastern-africa-dated.html | Abstract Efforts to date the oldest modern human fossils in eastern Africa, from Omo-Kibish 1 , 2 , 3 and Herto 4 , 5 in Ethiopia, have drawn on a variety of chronometric evidence, including 40 Ar/ 39 Ar ages of stratigraphically associated tuffs. The ages that are generally reported for these fossils are around 197 thousand years (kyr) for the Kibish Omo I 3 , 6 , 7 , and around 160–155 kyr for the Herto hominins 5 , 8 . However, the stratigraphic relationships and tephra correlations that underpin these estimates have been challenged 6 , 8 . Here we report geochemical analyses that link the Kamoya’s Hominid Site (KHS) Tuff 9 , which conclusively overlies the member of the Omo-Kibish Formation that contains Omo I, with a major explosive eruption of Shala volcano in the Main Ethiopian Rift. By dating the proximal deposits of this eruption, we obtain a new minimum age for the Omo fossils of 233 ± 22 kyr. Contrary to previous arguments 6 , 8 , we also show that the KHS Tuff does not correlate with another widespread tephra layer, the Waidedo Vitric Tuff, and therefore cannot anchor a minimum age for the Herto fossils. Shifting the age of the oldest known Homo sapiens fossils in eastern Africa to before around 200 thousand years ago is consistent with independent evidence for greater antiquity of the modern human lineage 10 . Main Only eight sites in Africa have yielded possible early anatomically modern Homo sapiens fossils from the late Middle Pleistocene (approximately 350–130 thousand years ago (ka)) 11 . Most of these have considerable age uncertainty or debatable H. 
sapiens apomorphy 11 . A principal method for constraining the fossil ages is the use of single-crystal 40 Ar/ 39 Ar isotope dating applied to stratigraphically associated volcanic ash (tephra) beds 12 , 13 , 14 . However, many distal tephra deposits consist largely of glass and lack suitable crystals for dating. In this case, geochemical fingerprinting can be used to match a tephra layer to more readily dated proximal deposits with larger, more abundant phenocrysts. The most widely accepted fossils that are interpreted as possessing unequivocal modern cranial apomorphies (that is, a tall cranial vault and a chin) and classified as H. sapiens are two Ethiopian finds 11 , 15 , 16 , namely the Omo I 1 and Herto specimens 4 . Accordingly, the evidence that constrains their ages assumes particular importance but is a topic of considerable geochronological controversy 3 , 6 , 8 . The Omo I remains were discovered in the late 1960s in the lower Omo valley of southern Ethiopia 1 , 14 , at the surface of a siltstone near the top of Member I of the Omo-Kibish Formation (Fig. 1a, b ). The maximum age of Omo I was derived from the 40 Ar/ 39 Ar age of 196 ± 4 kyr (2 σ ) 3 , 6 , 17 obtained for alkali feldspar phenocrysts from the three youngest pumice clasts that were sampled from a heterogeneous tuffaceous deposit correlated with the Nakaa’kire Tuff 3 , which is reported to lie “near, but probably slightly below” the fossils 3 (Fig. 1b ). Recalculated using a more widely adopted age of 28.201 million years (Myr) for the irradiation monitor (sanidine from the Fish Canyon Tuff of Colorado) 18 , the Nakaa’kire Tuff age shifts marginally to 197 ± 4 kyr. Owing to the uncertain stratigraphic relationship between this tuff and the hominin fossils 19 , much attention has been focused on dating the KHS Tuff—a widespread, more-than-2-m-thick deposit of fine ash fallout at the base of Member II of the Omo-Kibish Formation (Fig. 1b ). 
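The quoted shift from 196 ± 4 to 197 ± 4 kyr follows from the standard 40Ar/39Ar monitor-age recalculation: the measured ratio R = (e^(λt) − 1)/(e^(λT) − 1) against the monitor is fixed by the experiment, so substituting a new monitor age T gives a new sample age. A sketch (the 28.02 Myr old Fish Canyon monitor age and the decay constant used below are assumptions for illustration, not values stated in the paper):

```python
import math

# Standard 40Ar/39Ar age recalculation for a revised monitor age.
LAMBDA = 5.543e-10  # assumed total 40K decay constant, 1/yr

def recalc_age(t_old, T_old, T_new, lam=LAMBDA):
    """The measured ratio R = expm1(lam*t)/expm1(lam*T_monitor) is fixed
    by the experiment, so a new monitor age T_new gives
    t_new = (1/lam) * log1p(R * expm1(lam * T_new))."""
    R = math.expm1(lam * t_old) / math.expm1(lam * T_old)
    return math.log1p(R * math.expm1(lam * T_new)) / lam

# Assumed old monitor age 28.02 Myr, revised to 28.201 Myr.
t_new = recalc_age(t_old=196e3, T_old=28.02e6, T_new=28.201e6)
print(round(t_new / 1e3, 1))  # -> about 197 kyr
```

The marginal size of the shift reflects how weakly a ~0.6% monitor-age revision propagates to ages far younger than the monitor.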
The KHS Tuff overlies Member I, where Omo I was retrieved around 1.4 m lower down section, and is demonstrably younger than the fossils 3 , 9 . Although the Nakaa’kire Tuff was identified in several sections below the KHS Tuff, the latter was not found in the same section from which the dated pumice clasts correlated with the Nakaa’kire Tuff (on the basis of major element composition) were sampled 3 . The fine grain size of the KHS Tuff has precluded direct 40 Ar/ 39 Ar dating, and no correlation to a source volcano or proximal pyroclastic unit has to our knowledge been made previously. However, drawing on published major element glass compositions, it has been correlated with both tephra TA-55 20 , 21 from the Konso Formation and the directly 40 Ar/ 39 Ar-dated 184 ± 10 kyr unit D 22 (recalculated age) of the Gademotta Formation 6 (Fig. 1b ). Relating the sediment flux in the Omo-Kibish basin with high lake levels that correspond to Mediterranean sapropel deposition 9 , 23 , a slightly younger age for the KHS Tuff of around 172 kyr has also been proposed 6 . Either of these ages (184 or 172 kyr) would be consistent with the proposed age of 197 ± 4 kyr for Omo I. Fig. 1: Late Middle Pleistocene tephrostratigraphy of the Main Ethiopian Rift. a , Map of the MER showing silicic volcanoes and the late Middle Pleistocene sedimentary formations and relevant tephra units. White boxes with blue edges depict former correlatives of the KHS Tuff 6 , 8 . b , Synthetic stratigraphic logs of the late Middle Pleistocene formations showing former correlations for the Alyio Tuff 6 (green), Konso SVT (pink, also identified in the Chew Bahir sediment core 33 ), new correlations for Konso unit TA-56 (yellow), and source eruptions (stars). LHM, lower Herto Member; UHM, upper Herto Member. c , Tephra ETH18-8 above the KHS Tuff at the KS locality in the Omo-Kibish Formation 9 . Full size image The Herto H.
sapiens fossils were recovered in the late 1990s in the Middle Awash 4 , 5 (Afar rift; Fig. 1a ). They were preserved in a sandstone within the upper Herto Member of the Bouri Formation (Fig. 1b ). This sandstone is capped by the Waidedo Vitric Tuff (WAVT) (Fig. 1b ), which is widespread across western Afar and is also present at Gona 24 , 50 km north of Herto. Direct dating of the WAVT has remained inconclusive owing to crystal contamination, but dating of pumice and obsidian clasts in the fossiliferous sandstone yielded a maximum age of around 160 kyr (ref. 5 ). The WAVT was identified as a distal correlative of tephra TA-55 (Fig. 1b ), on the basis of major element analysis of individual grains and major and trace element analysis of purified bulk separates 5 , 25 . In Konso, unit TA-55 lies below the 155 ± 14 kyr Silver Tuff 5 (SVT) (recalculated age) (Fig. 1b ), suggesting an age for the Herto fossils of around 160–155 kyr (ref. 4 ). This finding was challenged, however, in a study 6 that correlated the Kibish KHS with Konso TA-55, and therefore with the Herto WAVT (Fig. 1b ). This argument suggested an age of around 172 kyr for the WAVT, contradicting the established Herto stratigraphy. The Herto research group 8 responded by corroborating their original stratigraphy, with the WAVT above the Herto fossils, thus challenging an age of about 172 kyr for the KHS. They concluded that the KHS, Konso unit TA-55 5 , Gademotta unit D (around 184 kyr) 22 and WAVT 5 could all represent a single tephrostratigraphic marker lying above the Omo-Kibish and Herto H. sapiens fossils, but that multiple eruptive sources would also be plausible 8 (Fig. 1b ). Given the lingering uncertainties of the stratigraphic relationship of the Nakaa’kire Tuff to Omo I, the age of the KHS Tuff becomes critical to the chronostratigraphy of these sites. 
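Tephra correlations like those debated above ultimately rest on comparing glass compositions between deposits. A minimal sketch of one common screening step — normalizing major element analyses to a 100% anhydrous basis, then scoring the offset between two glass populations in units of the analytical standard deviation — is given below; the KHS means echo values quoted later in the text, while the comparison values and standard deviations are illustrative placeholders, not data from the study:

```python
def normalize_anhydrous(oxides):
    """Rescale oxide wt% so the analysis sums to 100 (volatile-free basis)."""
    total = sum(oxides.values())
    return {k: 100.0 * v / total for k, v in oxides.items()}

def mean_offset_sd(a, b, sd):
    """Mean absolute offset between two mean analyses, expressed in
    multiples of the per-oxide analytical standard deviation."""
    return sum(abs(a[k] - b[k]) / sd[k] for k in sd) / len(sd)

# KHS glass means as quoted in the text; the 'qi2' values and the
# analytical standard deviations are illustrative, not measured data.
khs = {"SiO2": 77.0, "Al2O3": 9.7, "FeO*": 5.0, "Na2O+K2O": 7.1, "CaO": 0.25, "TiO2": 0.25}
qi2 = {"SiO2": 76.9, "Al2O3": 9.8, "FeO*": 5.1, "Na2O+K2O": 7.2, "CaO": 0.26, "TiO2": 0.24}
sd = {"SiO2": 0.4, "Al2O3": 0.15, "FeO*": 0.15, "Na2O+K2O": 0.4, "CaO": 0.03, "TiO2": 0.03}

d = mean_offset_sd(normalize_anhydrous(khs), normalize_anhydrous(qi2), sd)
print(f"mean offset: {d:.2f} sd")  # well under 1 sd: compositions indistinguishable
```

In practice such screening is only a first pass; the study supports its correlations with biplots, immobile trace element ratios and principal component analysis.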
We have re-sampled the KHS Tuff and other pertinent ash deposits at Omo-Kibish, Konso and Gademotta to assess the geochemical correlations from which the ages of the oldest modern human fossils are inferred. While revisiting the sampling locality of the KHS Tuff (KS type section) 9 at Omo-Kibish, we sampled another tephra layer in Member II (Fig. 1c ) in an outcrop about 100 m from the KS type section. Unit ETH18-8 is an approximately 15-cm-thick, very well-sorted crystal-rich fine sand grey tephra layer situated 40 cm above the KHS Tuff (Fig. 1c ). It is ubiquitous between the KHS section (KS) and the Chibele section (CB), and might stratigraphically correspond to unit CRF-23 previously identified above the KHS Tuff at the CB section 9 , although this cannot be confirmed through geochemical analysis because of the different microprobe conditions used. In an attempt to identify and date the eruption that generated the KHS tuff, we included samples of ignimbrites from the caldera-forming eruptions of Shala and Corbetti volcanoes. Shala and Corbetti are the only Main Ethiopian Rift (MER) systems known to have produced major eruptions between around 170 ka and 250 ka 26 . At Shala, the largest caldera in the central MER (Fig. 2a ), we sampled at a more-than-20-m-thick exposure of the unwelded Qi2 ignimbrite 27 (Fig. 2b, c ), southwest of Lake Shala and 350 km northeast of Omo-Kibish (Fig. 2a ). We also analysed glass from a welded ignimbrite (COI2E) attributed to the formation of Corbetti caldera, dated at 177 ± 8 kyr (ref. 26 ). A challenge of geochemical correlations between proximal and distal tephra deposits in the region is similarity in major and trace element compositions between pyroclastic products, not only of the same volcano but of different volcanoes in the MER 28 . Accordingly, correlations are ideally based on a detailed suite of major, minor and trace element single-grain glass shard or pumice glass analyses. Fig. 
2: Stratigraphy and age of the Shala Qi2 ignimbrite. a , Location of site ETH17-14 near Lake Shala in the MER. b , Synthetic stratigraphy of the Qi2 ignimbrite of Shala at location ETH17-14. c , Photographs of units 14A, 14B and 14C of the Qi2 ignimbrite at site ETH17-14. Field observations indicate that deposits 14A and 14B are subunits of the same phase of the Qi2 eruption. d , 40 Ar/ 39 Ar age pooled data plotted on ideograms for samples 14A and 14C of the Qi2 ignimbrite (bottom) yielding a preferred composite eruption age of 233 ± 22 kyr (top). Data are weighted means. Error bars show data and results at 2 σ . 40 Ar*, radiogenic 40 Ar; MSWD, mean square of weighted deviates; P , probability that residuals are explained by measurement errors exclusively; n , number of accepted grains. Full size image The KHS glass shards are homogeneous pantelleritic rhyolite in composition (77.0 ± 0.3 wt% SiO 2 , 9.7 ± 0.1 wt% Al 2 O 3 , 5.0 ± 0.1 wt% FeO* (FeO* refers to the total Fe as FeO) and 7.1 ± 0.4 wt% Na 2 O+K 2 O; Supplementary Table 1 ). Immobile oxide abundances, including FeO*, CaO, Al 2 O 3 and TiO 2 (Fig. 3 , Supplementary Table 1 ), correspond with those of glasses from the proximal products of the Qi2 eruption of Shala volcano (samples ETH17-14A1, B1, B5 and C) (Figs. 2b, c , 3 , Supplementary Fig. 4 , Supplementary Table 1 , Supplementary Information ). These correlations are corroborated by comparing immobile trace element ratios for Qi2 and KHS glasses and principal component analysis (Fig. 3 , Supplementary Figs. 4 , 5 , Supplementary Table 2 , Supplementary Information ). Fig. 3: Geochemical fingerprints of MER tephra and their sources. Major element abundances and trace element ratios of glasses from the Shala Qi2 ignimbrite (around 233 kyr), the Corbetti ignimbrite (around 177 kyr), the Gademotta unit D (around 184 kyr), the Kibish KHS and ETH18-8 tuffs, and the Konso TA-56 tuffs (all data from this study). 
Major element data are normalized to 100% anhydrous. Error bars shown are relative standard deviations derived from repeat measurements of matrix match glass secondary standards STH-S6 (for FeO*, n = 91; Supplementary Table 6 ) and ATHO-G (for Al 2 O 3 , CaO and TiO 2 , n = 70; Supplementary Table 6 ). They are plotted in the top right corner of each plot for clarity and rescaled to the value of the centre point. In the case of element ratios, error propagation has been applied using analyses of standard ATHO-G ( n = 15; Supplementary Table 7 ). Additional compositional observations and biplots are presented in Supplementary Fig. 5 . Full size image In addition, we find that the COI2E pantelleritic rhyolite glass from the 177 ± 8 kyr (ref. 26 ) Corbetti ignimbrite (74.3 ± 0.2 wt% SiO 2 , 9.1 ± 0.1 wt% Al 2 O 3 , 5.6 ± 0.2 wt% FeO* and 10.1 ± 0.2 wt% Na 2 O+K 2 O) (Fig. 3, Supplementary Fig. 4 , Supplementary Table 1, Supplementary Information ) has immobile oxides and trace element abundances that match those for Kibish unit ETH18-8 and Konso TA-56 (Fig. 3 , Supplementary Figs. 4 , 5 , Supplementary Table 2 , Supplementary Information ). We used the 40 Ar/ 39 Ar dating method to analyse 113 individual sanidine crystals extracted from pumice samples ETH17-14A1 (base, 68 crystals) and ETH17-14C (top, 45 crystals) collected from the Shala Qi2 deposits (Fig. 2 ). The resulting data were filtered to exclude grains with low gas yields, at or below blank level, and xenocrysts with ages significantly older than the mean of the dataset (six grains with ages exceeding 1 Myr). The distributions of ages from each sample were indistinguishable at 2 σ uncertainty (Fig. 2d ). Combining analyses from both pumice samples yielded a weighted mean of 233 ± 22 kyr at 2 σ (Fig. 2d , Supplementary Table 3 ), thereby dating the Qi2 eruption and the KHS tuff. An age of 233 ± 22 kyr for KHS is consistent with the 177 ± 8 kyr age that we associate with the overlying ETH18-8 tephra (Fig. 
1b ). However, it casts doubt on the suggested correlation between high deposition fluxes in the Omo basin with large in-flows of fresh water from the Nile River system into the Mediterranean sea 6 , 7 , 9 , at least during the formation of Member II. Our KHS age is incongruent with the formation of Mediterranean Sapropel S6 at 172 ka 6 , and instead overlaps the timing of the formation of sapropel S8 (217 ka) 9 , 29 . Although the 177 ± 8 kyr age of ETH18-8 is consistent with the formation of sapropel S6 (172 ka) 29 , only a mudstone unit of around 40 cm thickness separates KHS from ETH18-8, which cannot account for the suggested rapid deposition in the basin concomitant with sapropel S7 (192–199 ka) 3 . The revised Omo-Kibish stratigraphy is also incompatible with the 197 ± 4 kyr age reported for the Nakaa’kire Tuff 3 , 7 , 9 , which is found in Member I of the formation 3 , 7 , 9 and which must therefore be older than 233 ± 22 kyr. The age of 197 ± 4 kyr was inferred from three out of five dated pumice clasts from lenses found in ‘a sandy tuffaceous matrix’ 7 . Although these samples had similar major element compositions to the Nakaa’kire Tuff, they were collected from a lateral outcrop and not in section 3 , 7 , 9 . Given the uncertainty in the age and stratigraphic placement of the Nakaa’kire Tuff, as well as its heterogeneous lithology and geochemistry, the identification of the 233 ± 22 ka Qi2 eruption of Shala as the source of the KHS Tuff provides a more robust minimum age for Omo I H. sapiens . Furthermore, our glass compositional data, source correlation and age estimate for KHS allow us to re-assess its identification at other archaeological sites in Ethiopia. New lithological examination of the pedogenically altered unit TA-55 at Konso (Supplementary Fig. 
1 ) in grain size fractions of greater than 125 µm, greater than 80 µm and greater than 25 µm, after density separation, failed to identify glass shards in this deposit, which was previously correlated with the WAVT at Herto. This precluded evaluation of the reported correlation with the KHS Tuff 6 . However, with the underlying unit TA-56 now correlated with Kibish unit ETH18-8 and the 177 ± 8 kyr Corbetti ignimbrite (Fig. 3 , Supplementary Figs. 4 , 5 ), it is clear that TA-55 is younger than 177 ± 8 kyr and so cannot correlate with Qi2 or the KHS Tuff. Although the 184 ± 10 kyr unit D of Gademotta appears close to KHS in major element contents, neither major nor trace element abundances clearly overlap (Fig. 3 , Supplementary Figs. 4 , 5 , Supplementary Information ), precluding a match. Immobile trace element ratios and principal component analysis show that unit D also differs from TA-56 (Fig. 3 , Supplementary Figs. 4 , 5 , Supplementary Information ). The correlation of the Herto WAVT with Konso unit TA-55 5 , around 800 km south of Herto, led earlier investigators to accept the 155 ± 14 kyr age of the SVT at Konso as the terminus ante quem of the Herto fossils. This correlation has been debated 30 but reinforced by additional geochemical data 25 . We were unable to find preserved glass in our TA-55 sample but our results undermine the tephrostratigraphic correlations proposed between the Omo-Kibish, Gademotta and Konso formations 6 and bracket the age of the Konso TA-55 tuff between 177 ± 8 kyr (TA-56) and 155 ± 14 kyr (SVT). Although its correlation with the WAVT at Herto should be confirmed in the future using grain-discrete single-point glass analyses, this age bracket is consistent with the underlying Herto fossiliferous sandstone (approximately 160 kyr) 5 , and confirms that the Herto H. sapiens fossils are considerably younger than Omo I at Omo-Kibish. 
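The consistency arguments above — the KHS age against the Mediterranean sapropels, and the bracketing of Konso TA-55 — reduce to simple interval checks on the central ages (in kyr/ka) and 2σ half-widths quoted in the text:

```python
def within_2sigma(event, age, two_sigma):
    """True if `event` falls inside the 2-sigma interval of `age`."""
    return age - two_sigma <= event <= age + two_sigma

# KHS Tuff at 233 +/- 22 kyr (2 sigma) vs sapropel timings:
s8_ok = within_2sigma(217, 233, 22)  # sapropel S8 (217 ka): overlaps
s6_ok = within_2sigma(172, 233, 22)  # sapropel S6 (172 ka): does not

# Konso TA-55 bracketed between TA-56 (177 kyr) and the SVT (155 kyr);
# the ~160 kyr Herto fossiliferous sandstone age sits inside that bracket.
herto_ok = 155 <= 160 <= 177

print(s8_ok, s6_ok, herto_ok)  # True False True
```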
Our new age constraints are congruent with most models for the evolution of modern humans, which estimate the origin of H. sapiens and its divergence from archaic humans at around 350–200 ka (refs. 16 , 31 , 32 ). The challenge remains to obtain a robust maximum age for Omo I. Our revised tephrostratigraphy demonstrates that the Herto specimens postdate the Omo I remains from Omo-Kibish, and that they do not lie beneath the same tephra horizon as the Kibish fossils, as previously inferred 8 . Further geochemical data are needed to clarify the relationship between the WAVT and other MER tephra, and may ultimately identify the WAVT source, promising a more reliable minimum age for the Herto fossils. More generally, continued efforts to develop the tephrochronological framework for eastern Africa will help in addressing a range of interrelated volcanological, palaeoenvironmental and palaeoanthropological questions. Methods Sampling Stratigraphic descriptions and sampling were carried out during two field seasons in 2017 and 2018. We sampled the previously described 27 Qi2 eruption of Shala volcano, and we revisited the Konso 20 , 21 , Omo-Kibish 3 , 6 , 9 and Gademotta 22 , 34 formations (Fig. 1 ). At each site we described extensively the stratigraphy of the outcrops, measured the thickness of units and sampled deposits where best exposed and least altered. 40 Ar/ 39 Ar dating Feldspars were extracted from pumice samples at the Departments of Geography and Earth Sciences, University of Cambridge. Rocks were crushed in a jaw crusher and sieved to obtain a 250–500-μm size fraction, cleaned under water and passed through a Frantz magnetic barrier laboratory separator to isolate sanidine phenocrysts from the groundmass. Because separates would still contain other phases (primarily glass and quartz), 100–200 sanidine grains were further handpicked and then leached in 5% HF to remove any glass attached to the crystals. 
Samples and neutron flux monitors were packaged in copper foil and stacked in quartz tubes with the relative positions of packets precisely measured for later reconstruction of neutron flux gradients. The sample package was irradiated for 2 h in the Oregon State University reactor, Cd-shielded facility (CLICIT). Fish Canyon sanidine (28.294 ± 0.036 (1 σ ) million years ago; Ma) (ref. 35 ) was used to monitor 39 Ar production and establish neutron flux values ( J ) for the samples (Supplementary Table 4 ). Gas was extracted from samples via step-heating using a mid-infrared (10.6 µm) CO 2 laser with a non-gaussian, uniform energy profile and a 1.5-mm beam diameter. The samples were housed in a doubly pumped ZnS-window laser cell and loaded into a stainless steel planchette containing 208 2.0-mm-diameter round wells. Liberated argon was purified of active gases—for example, CO 2 , H 2 O, H 2 , N 2 and CH 4 —using three Zr-Al getters; one at 16 °C and two at 400 °C. Data were collected on a Mass Analyser Products MAP-215-50 single-collector mass spectrometer using an electron multiplier collector in dynamic collection (peak hopping) mode. Time-intensity data were regressed to inlet time with second-order polynomial or linear fits to the data. Sample runs were corrected using the standard deviation of blanks throughout the runs. Mass discrimination was monitored on a daily basis, between and within sample runs by analysis of an air standard aliquot delivered by an automated pipette system (see Supplementary Table 4 for D values). All blank, interference and mass discrimination calculations were performed with the MassSpec software package (MassSpec, v.8.058, A. Deino, Berkeley Geochronology Center). Decay constants and corrections (Supplementary Table 5 ) were made using the approach of Renne et al. 2010 36 with the parameters of Renne et al. 2011 35 . Following the approach of Kuiper et al. 
18 , samples with low radiogenic yields ( 40 Ar* < 10%, 23 grains), and obvious outliers (age > 1 Myr, 6 grains) were rejected. After this initial filtering, peak age distributions were defined by determining the youngest population of individual grain analyses ( n ≥ 10) that conforms to a Gaussian distribution with the expected scatter as indicated by the value of the mean square of weighted deviates (MSWD); this second stage of filtering resulted in the rejection of an additional ten older grains, leaving 71 accepted grains. Ages for unit samples ETH17-14A1 and ETH17-14C are reported with two sigma errors in Supplementary Table 3 with the raw data in Supplementary Table 4 . These two sub-samples from the top and bottom of the same stratigraphic unit are indistinguishable in age at 2 σ uncertainty, which permits them to be combined into a single composite sample. The accepted age for this population is 234 ± 22 kyr (relative to ref. 36 ) or 233 ± 22 kyr (relative to ref. 18 ). An inverse isochron plotted through the data (Supplementary Fig. 2 ) yields an age of 219 ± 27 kyr ( 40 Ar/ 36 Ar (i) = 314 ± 24, MSWD = 1.1, P = 0.19, n = 71), which is indistinguishable from the accepted age. The Renne et al. 2011 (ref. 36 ) calibration has quantifiable uncertainties and yields our preferred age for the sample; nevertheless, for consistency with previous work, the Kuiper et al. (ref. 18 ) age (233 ± 22 kyr) is used throughout the manuscript. Sample preparation for geochemical analyses Sample preparation was carried out in the Cambridge Tephra Laboratory in line with the protocols of the International Focus Group on Tephrochronology (INTAV) 12 , 37 for geochemical characterization of volcanic glass. Pumice samples of the Qi2 Shala eruption were crushed, sieved at 500, 250, and 125 μm, and washed in purified water and hydrochloric acid (1%) in an ultrasonic bath.
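The first filtering stage described above, together with the inverse-variance weighted mean and its MSWD, can be sketched as follows. The single-grain data here are synthetic stand-ins, not the measured Qi2 dataset, and the second (youngest-Gaussian-population) stage is not implemented:

```python
def weighted_mean(ages, errors):
    """Inverse-variance weighted mean, its 1-sigma error, and the MSWD."""
    w = [1.0 / e**2 for e in errors]
    mean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
    err = (1.0 / sum(w)) ** 0.5
    mswd = sum(((a - mean) / e) ** 2 for a, e in zip(ages, errors)) / (len(ages) - 1)
    return mean, err, mswd

def filter_grains(grains, min_yield=10.0, max_age=1e6):
    """Stage-one filtering: drop grains with radiogenic yield (40Ar*, %)
    below min_yield, and xenocrysts older than max_age (years)."""
    return [g for g in grains if g["yield"] >= min_yield and g["age"] <= max_age]

# Synthetic single-grain analyses (age and 1-sigma error in years,
# radiogenic yield in %); not the measured Qi2 data.
grains = [{"age": 233e3 + 5e3 * i, "err": 40e3, "yield": 60.0} for i in range(-3, 4)]
grains += [
    {"age": 1.5e6, "err": 50e3, "yield": 70.0},  # xenocryst: rejected on age
    {"age": 240e3, "err": 45e3, "yield": 4.0},   # rejected on low 40Ar* yield
]

kept = filter_grains(grains)
mean, err, mswd = weighted_mean([g["age"] for g in kept], [g["err"] for g in kept])
print(f"{mean / 1e3:.0f} +/- {2 * err / 1e3:.0f} kyr (2 sigma), MSWD = {mswd:.2f}")
```

An MSWD near 1 indicates scatter consistent with the analytical errors alone; in the study, the second stage iteratively trims older grains until the youngest population satisfies that criterion.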
Glass grains from the 125–250-μm fraction were handpicked under microscope, mounted in epoxy resin stubs, then sectioned and polished. Distal tephra samples from Gademotta (unit D), Konso (TA-55/ETH18-14B and TA-56/ETH18-14A) and Omo-Kibish formations (KHS, ETH18-08) were washed through a sieve in purified water at 80 or 25 μm, then dried, described under microscope and mounted in epoxy resin stubs, then sectioned and polished. Strongly altered samples of TA-56 (ETH18-14A) and TA-55 (ETH18-14B) units from the Konso formation were density extracted to facilitate the search for volcanic glass 38 , 39 . Sample ETH18-14B from TA-55 was sieved at 125, 80 and 25 μm and residues inspected under the microscope, yet no glass was found. Major element analysis Mounted samples were analysed for major element compositions with a SX100 CAMECA electron microprobe at the Department of Earth Sciences, University of Cambridge. Major elements were measured with an accelerating voltage of 10 kV and a 10-nA defocused beam. Elements were counted on-peak for 10 s (Na, Si), 20 s (Al, Fe and K), 60 s (Ti, Mg, Ca, and Cl), 90 s (P) and 120 s (Mn). Sodium was measured first to minimize alkali loss. The analytical accuracy was checked against international standards ATHO-G, STH-S6 and internal peralkaline obsidian from Lipari (74 wt% SiO 2 , 3.8 wt% Na 2 O and 5.3 wt% K 2 O). Replicate standard analyses and standard deviations are reported in Supplementary Table 6 . The latter are used for error bars on biplots instead of the standard deviation of each sample, which is affected by their natural variability. Where possible, we analysed 40–50 points per sample. All analyses are reported in Supplementary Table 1 . Trace element analysis Trace element compositions of individual tephra shards were analysed by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) at the iCRAG laboratory at Trinity College Dublin. 
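Uncertainties on the element ratios used for fingerprinting are propagated from replicate standard analyses; for an uncorrelated ratio, the first-order rule is that relative errors add in quadrature. A minimal sketch with illustrative (not measured) values, using Zr/Nb as a hypothetical example ratio:

```python
def ratio_with_error(a, sa, b, sb):
    """Propagate 1-sigma errors into r = a / b assuming uncorrelated errors:
    (sr / r)^2 = (sa / a)^2 + (sb / b)^2 (first-order Taylor expansion)."""
    r = a / b
    sr = r * ((sa / a) ** 2 + (sb / b) ** 2) ** 0.5
    return r, sr

# Illustrative concentrations (ppm) and 1-sigma errors for a hypothetical
# immobile trace element ratio; not data from the study.
r, sr = ratio_with_error(950.0, 40.0, 95.0, 5.0)
print(f"Zr/Nb = {r:.1f} +/- {sr:.2f}")
```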
The instrument used was a Thermo iCAPQ coupled to a Photon Machines 193-nm G2 laser and a Helex two-volume cell. We used a spot size of 40 µm, depending on the area available for analysis, a repetition rate of 6 Hz and a count time of 33 s (200 pulses) on the sample and 30 s on the gas blank (background). We targeted glass shards large enough to have also been analysed by electron microprobe analysis (EMPA) for major elements; however, individual spots are not tied through codes, as we used the average Ca concentration of each sample as the Ca correction factor. Concentrations were calibrated using NIST612 with 29 Si as the internal standard. Data reduction was undertaken in Iolite v.3.4 and a secondary Ca correction factor was applied 40 . Accuracies of ATHO-G and StHs6/80-G MPI-DING glass analyses are typically better than 6% for most elements. The precision is reflected by the standard deviations of replicate standard analyses (Supplementary Table 7 ), used for error bars on Fig. 3 , Supplementary Fig. 4 . Standard deviations of trace element ratios (Fig. 3 ) take into account error propagation. Detailed compositions of samples are reported in Supplementary Table 2 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All data supporting the findings of this study are available within the paper and its Supplementary Information files. Background maps for Fig. 1 are Shuttle Radar Topography Mission Digital Elevation Model data at one arcsecond resolution from the NASA Land Processes Distributed Active Archive Center ( ); settlements, lakes and other features are from ( ). Background image for the top left corner inset of Fig. 1 from Google Earth and plate boundaries data courtesy of the US Geological Survey. Change history 04 February 2022 In the version of this article initially published online, Supplementary Tables 3 and 4 were duplicates of Supplementary Tables 5 and 6.
The correct versions of Supplementary Tables 3 and 4 have now been posted. | The age of the oldest fossils in eastern Africa widely recognized as representing our species, Homo sapiens, has long been uncertain. Now, dating of a massive volcanic eruption in Ethiopia reveals they are much older than previously thought. The remains—known as Omo I—were found in Ethiopia in the late 1960s, and scientists have been attempting to date them precisely ever since, by using the chemical fingerprints of volcanic ash layers found above and below the sediments in which the fossils were found. An international team of scientists, led by the University of Cambridge, has reassessed the age of the Omo I remains—and Homo sapiens as a species. Earlier attempts to date the fossils suggested they were less than 200,000 years old, but the new research shows they must be older than a colossal volcanic eruption that took place 230,000 years ago. The results are reported in the journal Nature. The Omo I remains were found in the Omo Kibish Formation in southwestern Ethiopia, within the East African Rift valley. The region is an area of high volcanic activity, and a rich source of early human remains and artifacts such as stone tools. By dating the layers of volcanic ash above and below where archaeological and fossil materials are found, scientists identified Omo I as the earliest evidence of our species, Homo sapiens. "Using these methods, the generally accepted age of the Omo fossils is under 200,000 years, but there's been a lot of uncertainty around this date," said Dr. Céline Vidal from Cambridge's Department of Geography, the paper's lead author. "The fossils were found in a sequence, below a thick layer of volcanic ash that nobody had managed to date with radiometric techniques because the ash is too fine-grained." 
As part of a four-year project led by Professor Clive Oppenheimer, Vidal and her colleagues have been attempting to date all the major volcanic eruptions in the Ethiopian Rift around the time of the emergence of Homo sapiens, a period known as the late Middle Pleistocene. The researchers collected pumice rock samples from the volcanic deposits and ground them down to sub-millimeter size. "Each eruption has its own fingerprint—its own evolutionary story below the surface, which is determined by the pathway the magma followed," said Vidal. "Once you've crushed the rock, you free the minerals within, and then you can date them, and identify the chemical signature of the volcanic glass that holds the minerals together." The researchers carried out new geochemical analysis to link the fingerprint of the thick volcanic ash layer from the Kamoya Hominin Site (KHS ash) with an eruption of Shala volcano, more than 400 kilometers away. The team then dated pumice samples from the volcano to 230,000 years ago. Since the Omo I fossils were found deeper than this particular ash layer, they must be more than 230,000 years old. "First I found there was a geochemical match, but we didn't have the age of the Shala eruption," said Vidal. "I immediately sent the samples of Shala volcano to our colleagues in Glasgow so they could measure the age of the rocks. When I received the results and found out that the oldest Homo sapiens from the region was older than previously assumed, I was really excited." "The Omo Kibish Formation is an extensive sedimentary deposit which has been barely accessed and investigated in the past," said co-author and co-leader of the field investigation Professor Asfawossen Asrat from Addis Ababa University in Ethiopia, who is currently at BIUST in Botswana. "Our closer look into the stratigraphy of the Omo Kibish Formation, particularly the ash layers, allowed us to push the age of the oldest Homo sapiens in the region to at least 230,000 years." 
"Unlike other Middle Pleistocene fossils which are thought to belong to the early stages of the Homo sapiens lineage, Omo I possesses unequivocal modern human characteristics, such as a tall and globular cranial vault and a chin," said co-author Dr. Aurélien Mounier from the Musée de l'Homme in Paris. "The new date estimate, de facto, makes it the oldest unchallenged Homo sapiens in Africa." The researchers say that while this study shows a new minimum age for Homo sapiens in eastern Africa, it's possible that new finds and new studies may extend the age of our species even further back in time. "We can only date humanity based on the fossils that we have, so it's impossible to say that this is the definitive age of our species," said Vidal. "The study of human evolution is always in motion: boundaries and timelines change as our understanding improves. But these fossils show just how resilient humans are: that we survived, thrived and migrated in an area that was so prone to natural disasters." "It's probably no coincidence that our earliest ancestors lived in such a geologically active rift valley—it collected rainfall in lakes, providing fresh water and attracting animals, and served as a natural migration corridor stretching thousands of kilometers," said Oppenheimer. "The volcanoes provided fantastic materials to make stone tools and from time to time we had to develop our cognitive skills when large eruptions transformed the landscape." "Our forensic approach provides a new minimum age for Homo sapiens in eastern Africa, but the challenge still remains to provide a cap, a maximum age, for their emergence, which is widely believed to have taken place in this region," said co-author Professor Christine Lane, head of the Cambridge Tephra Laboratory where much of the work was carried out. "It's possible that new finds and new studies may extend the age of our species even further back in time." 
"There are many other ash layers we are trying to correlate with eruptions of the Ethiopian Rift and ash deposits from other sedimentary formations," said Vidal. "In time, we hope to better constrain the age of other fossils in the region." | 10.1038/s41586-021-04275-8 |
Biology | Oldest evidence of South American egg-laying mammals found in Patagonia | Nicolás R. Chimento et al, First monotreme from the Late Cretaceous of South America, Communications Biology (2023). DOI: 10.1038/s42003-023-04498-7 Journal information: Communications Biology | https://dx.doi.org/10.1038/s42003-023-04498-7 | https://phys.org/news/2023-02-oldest-evidence-south-american-egg-laying.html | Abstract Monotremata is a clade of egg-laying mammals, represented by the living platypus and echidnas, which is endemic to Australia and adjacent islands. Occurrence of basal monotremes in the Early Cretaceous of Australia has led to the consensus that this clade originated on that continent, arriving later in South America. Here we report on the discovery of a Late Cretaceous monotreme from southern Argentina, demonstrating that monotremes were present in circumpolar regions by the end of the Mesozoic, and that their distinctive anatomical features were probably present in these ancient forms as well. Introduction The fossil record and extant distribution of monotremes are almost restricted to Australasia, with the single exception of a fossil ornithorhynchid from the earliest Cenozoic in Patagonia 1 . In this context, occurrence of a monotreme in Patagonia was interpreted as the result of a single dispersal from Australia to South America, before or during the Late Cretaceous or early Paleocene 2 , 3 , 4 , 5 , 6 . The aim of the present contribution is to report on the discovery of a new monotreme, Patagorhynchus pascuali n.gen. et sp., which represents the first Cretaceous toothed monotreme from Gondwana. The material here reported consists of a fragmentary right jaw preserving the lower molar 2 showing the dilambdodont pattern characteristic of monotremes. The molar was collected from levels of the Chorrillo Formation (Upper Cretaceous, Early Maastrichtian 7 ), cropping out in SW Santa Cruz province, Patagonia, Argentina.
It was found in association with both terrestrial and aquatic mollusks, calyptocephalellid anurans, chelid turtles, snakes, ornithopod, sauropod, and non-avian and avian theropod remains 7 , 8 . As far as mammals are concerned, the same fossil spot yielded a molar of the gondwanatherian Magallanodon baikashkenke 9 and isolated caudal vertebrae regarded as Mammalia incertae sedis 8 . From almost equivalent levels belonging to the Dorotea Formation (Valle del Río de las Chinas, southern Chile), remains of Magallanodon and the meridiolestidan Orretherium have been reported 10 , 11 . Results Mammalia, Linnaeus 1758 Monotremata, Bonaparte, 1837 Patagorhynchus gen. nov. (monotypic genus) Etymology Patago , from Patagonia, and rhynchus , nose. Diagnosis Patagorhynchus differs from basal monotremaformes (including Steropodon ) in having a dilambdodont crown morphology and a labial cingulid; 12 dilambdodont disposition of cusps and crests on molar crown is shared with Teinolophos and ornithorhynchids; Patagorhynchus and ornithorhynchids differ from the basal monotremaform Teinolophos in having notably low and mesiodistally expanded teeth with the anterior lobe (equivalent to trigonid) positioned lower than the posterior one (equivalent to talonid), in having talonid composed of two (rather than one) transverse lophids, and lacking a labial cingulid. The anterior cingulid of Patagorhynchus is wider than that in Teinolophos but narrower than that in Obdurodon . Patagorhynchus shares with the toothed monotremes Obdurodon and Monotrematum both lingual and buccal extremes of the V-shaped lobe (equivalent to trigonid) with one buccal and two lingual cusps, with the first being more elevated than the latter two, and a complete mid-valley. Patagorhynchus bears two roots on m2 (as probably also in Monotrematum ) and differs from Obdurodon and Ornithorhynchus , in which more than 5 roots are present.
The lobes of Patagorhynchus and Obdurodon show hypsodonty, in contrast with the much more brachyodont molariforms of Monotrematum . Patagorhynchus exhibits the following features that are lacking in other monotremes, and may be considered autapomorphic among monotremes: the mid-valley diverges labially (i.e., the length of the labial edge of this valley is twice its lingual length) and the anterior cingulid is labially narrow and does not reach the labial margin of the protoconid. Type and the only species Patagorhynchus pascuali sp. nov. (Figs. 1 , 2a and Supplementary Fig. 3 ). Fig. 1: Images of Patagorhynchus pascuali , MPM-PV-23087. Lower molar 2 and part of the right jaw, in a , occlusal view; b , medial/lingual view; c , lateral/labial view; d , posterior view; e , anterior view. Scale bar: length 2 mm. Abbreviations: ac, anterior cingulid; alv, alveolus; ant, anterior; ar, anterior root; hy, hypoconid; hl, hypoconulid; lapcc, labial posterior cingular cusp; liacc, lingual anterior cingular cusp; me, metaconid; mv, mid-valley; NC1, neomorphic cusp 1; pa, paraconid; pc, posterior cingulid; pr, protoconid; prt, posterior root. Full size image Fig. 2: Comparisons of the second lower molar of selected monotremaformes in occlusal view. a , Patagorhynchus pascuali (based on MPM-PV-23087); b , Obdurodon insignis 2 , 19 ; c , Monotrematum sudamericanum 17 ; d , Teinolophos trusleri 20 . Not to scale. Abbreviations: NC1, neomorphic cusp 1. Full size image Patagorhynchus pascuali sp. nov. Etymology The species name honors the Argentine paleomammalogist Rosendo Pascual (1923–2012), who described the first Cenozoic monotreme remains from Patagonia, thus demonstrating the presence of this clade outside Australia. Holotype MPM-PV-23087, Museo Padre Molina (Rio Gallegos, Santa Cruz, Argentina), a right lower m2 attached to a fragment of dentary. Collected by N. R. Chimento during a joint Argentine-Japanese field trip in March 2022.
Diagnosis The same as for the genus by monotypy. Type locality and age La Anita farm, Santa Cruz Province, Patagonia, Argentina. The tooth was collected from the “Puma Cave” fossil site (S 50 30.639 W 72 33.617), Chorrillo Formation, early Maastrichtian 7 , 8 . This new discovery expands the list of Late Cretaceous mammaliaforms recorded in the Chorrillo Formation and the equivalent Dorotea Formation in southern Chile, previously known to include gondwanatherians ( Magallanodon ) and dryolestoids ( Orretherium ) 9 , 10 , 11 , 13 . Description Despite the occlusal surface being somewhat damaged, the morphology of the main cusps and anatomical details can be clearly discerned. The tooth is identified as a second lower molar based on its similarities with the m2 of Obdurodon , including a subrectangular outline in occlusal view, the presence of two lobes each bearing three cusps, a mid-valley lacking cusps, and prominent anterior and posterior cingulids (Fig. 1 ). Immediately anterior to the m2, the fragmentary mandible shows a partially preserved and relatively small alveolus on the labial margin, which presumably corresponds to one of the roots of the m1. The Patagorhynchus m2 exhibits a distinct morphology that readily identifies it as a monotreme. This includes a unique lophid and cusp structure resulting in the presence of two mesiodistally compressed lobes, sub-equal in shape and size, each consisting of three cusps, twinned paraconid and metaconid, a wrapping cingulid, hypsodont lobes, and an un-basined talonid 12 , 14 , 15 . The m2 is 5.8 mm in mesiodistal length (see Supplementary Results 3 ), indicating that this tooth of Patagorhynchus was possibly intermediate in size between Monotrematum and some species of Obdurodon . The m2 is mesiodistally longer than transversely wide, and narrows mesially. Six large cusps are present: protoconid, paraconid, metaconid, hypoconid, hypoconulid, and NC1 (neomorphic cuspid 1) 16 .
These cusps are relatively low and mound-like and connected by lophids, which form two main lobes or triakididrepanids (Fig. 1 , Supplementary Fig. 2 ). The anterior lobe (equivalent to trigonid) is labiolingually narrower and apicobasally taller than the posterior lobe (equivalent to a talonid), a condition shared with Obdurodon 12 . In Patagorhynchus , the anterior lobe is heart-shaped, with the anterior and posterior lophids being slightly convex posteriorly. This results in the paraconids being located anteriorly at the same level as the protoconid. In Obdurodon , by contrast, the anterior lophid is anteriorly convex and the posterior one is straight, resulting in metaconid and protoconid being located at the same level. In Patagorhynchus , the paraconid is larger than the metaconid, and its base is ventrally positioned relative to the base of both the metaconid and protoconid, suggesting that the paraconid was more ventrally located than the other cusps. The posterior (talonid) lobe is similar in shape to the anterior (trigonid) lobe, but much wider labiolingually. The lingual cusps are notably taller than the labial one (hypoconid). The preserved bases of the NC1 and hypoconulid are subequal in size and position. The hypoconid is mesiodistally narrower than the protoconid. Between the lingual cusps of paired lobes there is a narrow, eye-shaped enamel invagination, reminiscent of a flexid. Such a condition is also present in Monotrematum 17 and some specimens of Obdurodon 18 . Patagorhynchus resembles Monotrematum in that the invaginations are delimited by a narrow enamel layer (Fig. 2 , Supplementary Fig. 2 ), in contrast to Obdurodon in which the invaginations are labiolingually extended 19 . Both lobes are separated by a wide, deep mid-valley, which extends from the labial through the lingual edges of the tooth. The margins of the valley widen slightly towards the labial edge of the tooth. 
The valley lacks well-defined cusps or fossettes, and becomes deeper towards its labial margin. The posterior and anterior cingulids are prominent, being wider than those in Teinolophos but narrower than those in Obdurodon 16 , 20 . The posterior cingulid is eroded at its lingual end, but the preserved segment maintains a constant width along its length, being similar in this morphology to that in Obdurodon . In contrast, the cingulids become lingually wider in Teinolophos 20 . The anterior cingulid hosts a small cusp on its lingual end, whereas the posterior cingulid shows a cusp at its labial end (the lingual end of this cingulid is eroded, precluding the recognition of cusps), similar to the morphology in Monotrematum but differing from that in Obdurodon (Fig. 2 , Supplementary Fig. 2 ). The tooth bears two roots that are broad labiolingually and constricted at mid-height; they are obliquely oriented with respect to the main axis of the tooth. Regarding root number, Patagorhynchus retains the ancestral condition in m2 shared with Teinolophos (and probably Monotrematum ), differing from the multiple roots present in Obdurodon and Ornithorhynchus 16 , 20 . Discussion As indicated above, the crown shape of Patagorhynchus unambiguously indicates that this taxon belongs to the monotremes. To test the phylogenetic position of Patagorhynchus , we scored this tooth into a previously published data matrix composed of 558 characters and 128 taxa 21 (see Supplementary Methods 1 and 2 ). Concentrating on the characters observable in this new tooth, a total of 54 characters could be scored for Patagorhynchus (Supplementary Results 1 and 2 ). The results of the phylogenetic analysis consistently place Patagorhynchus as nested within monotremes, together with the genera Ornithorhynchus, Tachyglossus, Monotrematum and Obdurodon (Fig. 3 , Supplementary Fig. 1 ). Fig. 3: Simplified calibrated cladogram showing the phylogenetic affinities of Patagorhynchus pascuali .
Basal Monotremaformes 44 are indicated in red and Monotremata in green. The Late Cretaceous (Maastrichtian) palaeogeographical map (based on Scotese 35 ) indicates the fossiliferous sites that yielded fossil toothed monotremes and the distribution of the extant platypus Ornithorhynchus anatinus , shaded in light brown. [1], occurrence of Patagorhynchus pascuali , La Anita farm, Chorrillo Formation (Maastrichtian, Late Cretaceous); [2], occurrence of Monotrematum sudamericanum , Punta Peligro locality, Salamanca Formation (Danian, lower Paleocene); [3], occurrence of Obdurodon spp., different localities from South Australia, Queensland, and New South Wales (Oligocene-Pliocene); [4], Pleistocene occurrences and geographic distribution of the extant Ornithorhynchus anatinus . Full size image Australia has yielded the most complete fossil record of monotremes 2 , including an array of Barremian through Cenomanian taxa, as well as several species of the Oligocene-Pliocene monotreme Obdurodon . In this context, the presence of the toothed monotreme Monotrematum in the early Paleocene of Patagonia 1 , 22 was interpreted as the result of a single dispersal of monotremes from Australia to South America, before or during the Late Cretaceous or early Paleocene 2 , 3 , 4 , 5 , 6 . The discovery of Patagorhynchus clearly demonstrates that monotremes had already attained a wide paleogeographic distribution, stretching across southern South America, Australia, and Antarctica, with the latter serving as a connecting pathway (although fossil monotremes are still unknown from this landmass), constituting a clade characteristic of the Weddelian Paleobiogeographical Province 23 , 24 , 25 , 26 , 27 , 28 . The new discovery expands the list of mammals documented in the Chorrillo and equivalent Dorotea formations of southern South America, adding the Monotremata to the assemblage of non-therian mammals (i.e., gondwanatherians and meridiolestidan dryolestoids 9 , 10 , 11 , 13 ).
Remarkably, monotremes are absent from the extensively sampled Late Cretaceous localities of northern and central Patagonia 2 , 29 , 30 . Such a difference among the mammalian assemblages of Patagonia is consistent with the uneven distribution of non-avian dinosaurs in this region. For example, megaraptorid theropods, colossosaurian titanosaurs, and elasmarian iguanodontians are numerically dominant in the Chorrillo Formation 8 , 31 , whereas abelisaurid theropods and saltasaurine titanosaurs are prevalent in coeval units in northern Patagonia. Similar differences are documented in the terrestrial and marine biotas between southern and northern Patagonia 32 , 33 , 34 . Thus, the evidence at hand suggests that the Maastrichtian vertebrate fauna in southern Patagonia was different from that in northern Patagonia. It is noteworthy that the former had, instead, several taxa in common with Australia ( e.g ., Monotremata, Megaraptoridae). It is likely that a latitudinal zonation of environmental conditions (i.e., dry and warm in northern Patagonia versus humid and cold in southern Patagonia) controlled the distribution and relative abundance of the above-mentioned vertebrate clades. The presence of monotremes at the southern La Anita fossil site (which occupied a paleolatitude of approximately 60° S during the Maastrichtian, roughly the same as that of southern Australia 35 ) is congruent with the interpretation by Flannery et al. 2 that monotremes evolved under humid, cool and densely forested environments in circumpolar Gondwana. Some authors have already proposed that certain anatomical and physiological characteristics of living monotremes (e.g., low metabolism, a mechanoreceptive and electroreceptive beak for probe feeding, and relatively large body size) may have evolved in the context of polar environments 2 , 18 , 36 .
The crown morphology of the only available molar of Patagorhynchus is closely similar to that of the Paleogene Monotrematum and the Neogene Obdurodon , revealing a highly conservative dental morphology for toothed monotremes 15 . Remarkably, this molar pattern underwent only minor changes for approximately 60 million years from the Late Cretaceous through to Miocene times. This duration of stasis in dental morphology considerably exceeds that seen in other mammalian groups ( e.g ., therians and dryolestoids 37 , 38 , 39 , 40 ). The labiolingually broad segment of the molar of Patagorhynchus and the reduction in the number of teeth (eventually restricted to only two upper molars inferred for Monotrematum 2 ) may be congruent with the duck-billed morphology of the snout documented in more derived ornithorhynchids. In addition, the presence of a hypertrophied mandibular canal in Teinolophos suggests the development of electroreception occurred early in the evolutionary history of Monotremata and that the acquisition of a specialized duckbill for high-resolution aquatic electroreception is unique to the clade 39 . Based on such evidence, we hypothesize that a highly sensitive duck-billed snout is likely to have already been present in Late Cretaceous monotremes, such as Patagorhynchus . Apparently, a similar anatomical inference could be made for the rest of the body, as suggested by the morphology of the distal femur of Monotrematum 41 being almost identical to that of the living platypus. As in Ornithorhynchus , extinct monotremes may have had a sprawling posture of their hind limbs, and eventually adapted for swimming 42 . 
The possibility that Patagorhynchus had already acquired ecological and behavioral characteristics similar to those of the living platypus, which inhabits ponds and lakes, is congruent with sedimentological evidence suggesting that such environments were prevalent during deposition of the Chorrillo Formation 7 , as well as with occurrences of Nymphaeaceae aquatic plants, freshwater snails and abundant larvae of chironomid insects, with the latter two invertebrates constituting part of the diet of living platypuses 8 , 36 , 43 . The discovery of Patagorhynchus gives an insight into the degree of continuity between the terrestrial vertebrate faunas of western and eastern Gondwana during the Late Cretaceous, suggesting the lack of paleobiogeographic barriers to their dispersal prior to the deep-water opening of the Drake Passage and the Tasman Gateway. The diversification of monotremes towards the end of the Mesozoic suggested by the present discovery implies that an extensive and still unknown history of this clade of peculiar mammals awaits documentation in the Mesozoic beds of southern South America. Methods The material reported in this publication was collected from the Chorrillo Formation (Upper Cretaceous, lower Maastrichtian 7 ) cropping out at the La Anita fossil site, SW Santa Cruz Province, Patagonia, Argentina. The specimen was found in association with both terrestrial and aquatic mollusks, calyptocephalellid anurans, chelid turtles, snakes, ornithopods, sauropods, and non-avian and avian theropod remains 7 , 8 . With regard to mammals, the same outcrop yielded remains of the gondwanatherian Magallanodon baikashkenke and isolated caudal vertebrae of as yet unidentified mammals 8 , 9 , 13 . Cusp nomenclature of molariforms used in the description and in the codings of Monotrematum and Patagorhynchus follows the terminology applied by Kielan-Jaworowska et al. 45 , Rich et al. 46 , and Woodburne 16 .
Nomenclatural acts This published work and the nomenclatural acts it contains have been registered in ZooBank, the proposed online registration system for the International Code of Zoological Nomenclature (ICZN). The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix “ ”. The LSID for this publication is: 01EF7079-F4F8-4996-ABD3-D61BBD04A2BA. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The datasets analyzed during the current study are included in this published article (and its Supplementary Information file). | It was just a tooth and a fragment of jaw bone—discovered in an excavation layer of the Chorrillo Formation, a unique geological formation in Patagonia, Argentina. Field researchers found it amongst fossils of both terrestrial and aquatic mollusks, frogs, turtles, and snakes, as well as theropod and sauropod dinosaur fossils. Based on the sediment layer, the tooth dates from the Maastrichtian Stage, a timeframe at the end of the Late Cretaceous ranging from 72.1 million to 66 million years ago. Surprisingly, the unusual morphology of the tooth clearly showed that it belongs to a member of the monotreme clade, the small group of egg-laying mammals that today includes the platypus and echidna. Further analysis found that the tooth is most similar to Australian platypuses older than 20 million years in the fossil record. Together, it shows a long history of conserved monotreme tooth morphology and an ancient connection between Australia and South America. The research team from Argentina, Australia, and Japan have published their findings in a paper titled "First monotreme from the Late Cretaceous of South America" in Communications Biology. This find isn't the first ancient platypus-like tooth found in Patagonia. 
Thirty years ago, paleomammalogist Rosendo Pascual described a 62 million-year-old monotreme tooth, challenging what had been thought to be the clade's exclusively Australasian existence. In honor of that discovery, the new find has been named Patagorhynchus pascuali. Patagorhynchus pascuali is older than the previous find. How did it get there? Most likely, it swam. Look at any map today, and you will see a lot of ocean between Australia and Argentina, many thousands of miles of deep water. But long ago, the lands were much closer. Even if Patagorhynchus pascuali only arrived in Patagonia 70 million years ago, it would have had an easy time making the trip. Australia and Argentina were both much further south than they are today, connected to Antarctica or only separated by small islands and channels. The circumpolar environment would have been warmer than it is today, supporting more vibrant animal and plant life. If the tooth does belong to a creature that closely resembles a modern-day platypus, a good river and lake swimmer like that would have no problem traversing coastal waters or the short stretches that once separated South America from Australia. It may be interesting to note that a different clade of mammal, marsupials, often regarded as quintessentially Australian animals, didn't originate there. The Australian marsupials of today originated in South America and likely took a similar path (in reverse) as the Patagonian platypus over 50 million years ago. More about monotremes Monotremes are a group of mammals that lay eggs instead of giving birth to live young. They have other special features, such as a cloaca—a single opening for excretion and reproduction—a trait they have in common with all amphibians, reptiles, and birds. While the rest of us mammals stopped laying eggs over 160 million years ago, monotremes stuck with it. | 10.1038/s42003-023-04498-7
Medicine | Children of mothers with type 1 diabetes have a higher body mass index | Anitha Pitchika et al. Associations of maternal type 1 diabetes with childhood adiposity and metabolic health in the offspring: a prospective cohort study, Diabetologia (2018). DOI: 10.1007/s00125-018-4688-x Journal information: Diabetologia | http://dx.doi.org/10.1007/s00125-018-4688-x | https://medicalxpress.com/news/2018-07-children-mothers-diabetes-higher-body.html | Abstract Aims/hypothesis Exposure to an intrauterine hyperglycaemic environment has been suggested to increase the offspring’s later risk for being overweight or having metabolic abnormalities, but conclusive evidence for pregnancies affected by maternal type 1 diabetes is still lacking. This study aims to analyse the relationship between maternal type 1 diabetes and the offspring’s metabolic health and investigate whether birthweight and/or changes in the offspring’s metabolome are in the potential pathway. Methods We analysed data from 610 and 2169 offspring having a first-degree relative with type 1 diabetes from the TEENDIAB and BABYDIAB/BABYDIET cohorts, respectively. Anthropometric and metabolic outcomes, assessed longitudinally at 0.3–18 years of age, were compared between offspring of mothers with type 1 diabetes and offspring of non-diabetic mothers but with fathers or siblings with type 1 diabetes using mixed regression models. Non-targeted metabolomic measurements were carried out in 500 individuals from TEENDIAB and analysed with maternal type 1 diabetes and offspring overweight status. Results The offspring of mothers with type 1 diabetes had a higher BMI SD score (SDS) and an increased risk for being overweight than the offspring of non-diabetic mothers (e.g. OR for overweight status in TEENDIAB 2.40 [95% CI 1.41, 4.06]). 
Further, waist circumference SDS, fasting levels of glucose, insulin and C-peptide, and insulin resistance and abdominal obesity were significantly increased in the offspring of mothers with type 1 diabetes, even when adjusted for potential confounders and birthweight. Metabolite patterns related to androgenic steroids and branched-chain amino acids were found to be associated with offspring’s overweight status, but no significant associations were observed between maternal type 1 diabetes and metabolite concentrations in the offspring. Conclusions/interpretation Maternal type 1 diabetes is associated with offspring’s overweight status and metabolic health in later life, but this is unlikely to be caused by alterations in the offspring’s metabolome. Introduction Obesity and excess weight in children and adolescents remains a major public health problem because it induces other metabolic disorders, such as diabetes and cardiovascular disease [ 1 ]. A growing body of evidence supports the concept of fuel-mediated teratogenesis, in which intrauterine exposure to hyperglycaemia leads to excess fetal glucose and insulin, and thus overgrowth of the fetus [ 2 ]. These exposures during fetal life have been reported to extend beyond the neonatal period and influence metabolic complications in later life. Various studies have shown evidence associating gestational diabetes and type 2 diabetes with later adiposity, increased BMI, insulin resistance, impaired glucose tolerance, higher cholesterol, hypertension and type 2 diabetes in the offspring [ 3 , 4 , 5 , 6 ], but less evidence exists to support a similar effect of maternal type 1 diabetes on offspring health. However, it appears relevant to differentiate between type 1 diabetes, gestational diabetes and type 2 diabetes, because the last two are associated with maternal obesity, while type 1 diabetes is not.
Studies which reported a positive association of maternal type 1 diabetes with BMI or metabolic outcomes in the offspring [ 7 , 8 , 9 , 10 ] were cross-sectional in design and limited with respect to their sample size ( n < 600 in each). Furthermore, two of these studies were based on children born as early as 1978–1985 [ 7 ] and 1982–1991 [ 10 ], respectively, when diabetes care in pregnant women was probably poorer than it is today [ 11 ]. Previous analyses of our own data indicated that children with non-diabetic and type 1 diabetic mothers follow different growth patterns [ 12 , 13 ], and also that a potential association between maternal type 1 diabetes and risk of being overweight in the offspring is not independent of birthweight and breastfeeding duration [ 14 ]. Here, we analysed data from two prospective cohort studies containing over 2770 children of whom more than 1500 were exposed to maternal type 1 diabetes during pregnancy. A subset of 500 children was also characterised for non-targeted metabolomics; these are of particular interest as recent studies have shown significant associations between metabolite concentrations and childhood obesity [ 15 , 16 , 17 ], while the associations between maternal type 1 diabetes and metabolic profile in the offspring have not yet been investigated. The aims of this study were to investigate: (1) whether there are differences in anthropometric and metabolic outcomes between offspring of mothers with type 1 diabetes and non-diabetic mothers; and (2) whether birthweight and/or changes in the offspring’s metabolome may be in the potential pathway from maternal type 1 diabetes to later overweight status and poor metabolic health in the offspring. Methods Our analysis was based on the prospective German cohorts TEENDIAB and BABYDIAB/BABYDIET. These cohorts include children with a familial background of type 1 diabetes and have already been combined for other research questions [ 18 , 19 ].
All parents gave written informed consent for participation. The studies were approved by the ethical committees of the Technische Universität München (number 2149/08) and Hannover Medical School (number 5644); the Bavarian General Medical Council (number 95357) and Ludwig-Maximilians University (number 329/00), respectively. TEENDIAB study The TEENDIAB study is a prospective cohort study conducted in the cities of Munich and Hannover, Germany. During 2009–2015, this study recruited 610 children aged 6–16 years who were resident in Germany and had at least one parent or sibling with type 1 diabetes [ 20 ]. Children were followed, on average, every 6 months from 6 to 18 years of age until 2016. Maternal characteristics and offspring measurements At the first visit, information on type 1 diabetes, smoking status and education level of the parents as well as monthly family income was obtained via self-administered questionnaire. Birthweight information was taken from health records collected during the well-baby preventive health programme, which is routinely offered to all children in Germany. During each visit, weight was measured in light clothing, digitally or using a beam scale, with a precision of ±100 g. Height was measured using a stadiometer with a precision of ±1 mm. Waist circumference was measured using a measuring tape between the pelvic crest and the lower ribs while breathing, with a precision of ±1 mm. Subscapular and triceps skinfold thickness were measured three times using a caliper at the inferior angle of the right scapula and at the posterior right upper arm, respectively, and were calculated as the average of the three measurements. Systolic and diastolic blood pressure were calculated as the average of two measurements, made on the upper arm using the auscultatory or oscillometric method, with the individual in a sitting position after 3–5 min of rest.
Tanner’s staging was assessed by the study doctor or local paediatrician using validated questionnaires [ 21 ]. Venous blood samples were collected to assess fasting blood glucose, insulin and C-peptide, and lipids (cholesterol and triacylglycerols). All participants were asked to fast for at least 10 h before blood collection. Dietary intake was assessed in 330 children during their first study visit using two different methods. In 268 children, Diet Interview Software for Health Examination Studies Junior (DISHES Junior; Robert Koch Institute, Berlin, Germany), computer-assisted interview software, was used to assess retrospectively the frequency, type and quantity of foods and beverages consumed in the last 4 weeks. In the remaining 62 children, diet was assessed using a 3 day dietary record which was entered into PRODI (Nutri-Science, Stuttgart, Germany) nutrition software. Both software packages are linked to the German Nutrient Database (Bundeslebensmittelschluessel; Max Rubner Institut, Karlsruhe, Germany), which allows estimates to be made of the average daily intake of energy, macronutrients and micronutrients. Metabolomic profiling Non-targeted metabolomic profiling was performed on fasting serum samples taken from 500 children at the first visit using ultra high-performance liquid chromatography and mass spectrometry on the Metabolon platform (Metabolon, Durham, NC, USA). All samples were stored at −80°C prior to analysis. Metabolites were identified following the metabolomics standardisation initiative guidelines [ 22 ]. Metabolites were quantified as outlined previously [ 23 ]. A total of 575 metabolites were quantified, of which 239 were unknown. Metabolites and samples which had more than 30% missing values were excluded, leaving a total of 441 metabolites, including 294 known and 147 unknown ones, and 485 samples. 
Metabolite concentrations in terms of raw ion counts were normalised to account for run-day differences and log-transformed to bring them closer to a normal distribution. Missing data were imputed using random forest imputation. BABYDIAB/BABYDIET studies The BABYDIAB and BABYDIET studies are two ongoing prospective studies of German birth cohorts; they include 2441 children born between 1989 and 2006 with a first-degree relative with type 1 diabetes. During 1989–2000, a total of 1650 offspring of individuals with type 1 diabetes were recruited for the BABYDIAB study. During 2000–2006, 791 additional offspring or siblings of individuals with type 1 diabetes were screened in the context of the BABYDIET study. Of those, 150 participated in the BABYDIET dietary intervention study randomising the timing of first gluten exposure; the intervention had no effect on islet autoimmunity development or on growth [ 24 , 25 ]. Further details on the study design are described elsewhere [ 24 , 26 , 27 ]. Data from these two cohorts were combined for longitudinal analyses of maternal type 1 diabetes and anthropometric outcomes in the offspring. Maternal characteristics and offspring measurements Information on the presence of type 1 diabetes within the family (mother, father or sibling) and smoking status of the mother during pregnancy was obtained via self-administered questionnaire. Height and weight measurements of the offspring were obtained from health records from the well-baby preventive health programme visits, which were regularly conducted at birth and at the age of 3–10 days, 4–6 weeks and 3–4, 6–7, 10–12, 21–24, 46–48 and 60–64 months. Further height and weight measurements were assessed during study visits, which were scheduled at birth, age 9 months and at 2, 5, 8, 11, 14, 17 and 20 years of age in BABYDIAB, as well as 3-monthly from birth until the age of 3 years, and yearly until the age of 12 years in BABYDIET. 
These measurements were performed in the same way as described for the TEENDIAB study. From the age of 8 years, Tanner’s staging was assessed by a paediatrician or trained staff using validated questionnaires at every study visit. Exclusions We excluded from our analysis the data from BABYDIAB/BABYDIET participants who had no height and weight measurements ( n = 14), were lost to follow-up after 0.3 years of age ( n = 44), or who also participated in the TEENDIAB study ( n = 214), leaving a final sample size of n = 2169. We further excluded all visits performed before 0.3 years of age because these measurements were likely to be highly correlated with birthweight, which we wanted to investigate separately. Statistical analysis Height, weight, BMI, waist circumference, subscapular and triceps skinfold thickness and lipids were transformed into age- and sex-specific SD scores (SDSs), and blood pressure into age-, sex- and height-specific SDSs according to German reference values [ 28 , 29 , 30 ]. Overweight was defined as a BMI at or above an SDS of 1.31, corresponding to the 90th percentile. For waist circumference SDS, the respective reference percentiles were available only for participants aged between 11 and 18 years. Abdominal obesity was defined as a waist circumference at or above the 90th percentile or the adult threshold set by the International Diabetes Federation [ 31 ]. Birthweight was transformed into age- and sex-specific percentiles based on German reference values [ 32 ], and categorised as small for gestational age (birthweight <10th percentile), appropriate for gestational age (10th–90th percentile) or large for gestational age (>90th percentile). Participants were classified as having high overall metabolic risk at a certain visit when at least one SDS of BMI, waist, skinfold thickness, blood pressure or lipids was greater than 1.5. Insulin resistance was estimated by HOMA-IR [ 33 ].
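The HOMA-IR index [ 33 ] and the SDS-based overweight cut-off are used throughout the analysis but their formulas are only cited, not reproduced. A minimal sketch, assuming the standard Matthews et al. definition of HOMA-IR (the reference the index is conventionally attributed to, not spelled out in this text), could look like:

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uu_ml):
    """HOMA-IR per Matthews et al.: glucose [mmol/l] * insulin [microU/ml] / 22.5
    (assumed formula; the study only cites its reference [33])."""
    return fasting_glucose_mmol_l * fasting_insulin_uu_ml / 22.5

def is_overweight(bmi_sds, cutoff=1.31):
    """Overweight as defined in the study: BMI SDS at or above 1.31 (~90th percentile)."""
    return bmi_sds >= cutoff
```

For example, a child with fasting glucose 5.0 mmol/l and fasting insulin 9.0 microU/ml would have HOMA-IR 2.0.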
To adjust for potential confounders, categories of socioeconomic status (high, middle and low) were calculated based on parental education and family income as described previously [ 34 ]. Energy intake was adjusted for age and sex using the residual method [ 35 ]. Further, an energy-adjusted dietary inflammatory index (DII) score was calculated based on 27 out of a possible 45 food variables as described elsewhere [ 36 ]. A positive DII score indicates a proinflammatory diet, whereas a negative DII score indicates an anti-inflammatory diet. Maternal type 1 diabetes and metabolic outcomes in the offspring In all our analyses, we compared offspring of mothers with type 1 diabetes with offspring who had mothers without diabetes, but fathers or siblings with type 1 diabetes. We did this separately for TEENDIAB and BABYDIAB/BABYDIET because the studies differed in the number of outcomes assessed and the timing of the respective measurements. First, anthropometric and metabolic outcomes were visually compared at yearly time intervals between offspring of mothers with and without type 1 diabetes. Second, linear and logistic mixed-effect models accounting for repeated observations within individuals were performed. Fasting glucose, insulin and C-peptide as well as HOMA-IR were log-transformed because of non-normal residuals in the respective linear models. Associations were analysed based on stepwise adjustment. In the first model, we performed univariate analysis for all outcomes. Consistent with other studies [ 8 ], we adjusted for age and sex (except for the SDS-corrected outcomes) as well as for Tanner’s staging in the second model, and additionally for socioeconomic status and maternal smoking, which are known to be potential risk factors for excess weight gain in childhood [ 37 , 38 ]. 
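The residual-method energy adjustment described above amounts to regressing energy intake on age and sex and keeping the residual plus the overall mean, so the adjusted variable is uncorrelated with the covariates while retaining the original scale. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def residual_adjust(intake, age, male):
    """Residual-method adjustment: regress energy intake on age and sex,
    keep residual + overall mean.  The adjusted values are orthogonal to
    the covariates by construction."""
    X = np.column_stack([np.ones_like(intake), age, male])
    coef, *_ = np.linalg.lstsq(X, intake, rcond=None)
    resid = intake - X @ coef
    return resid + intake.mean()
```

Because the model includes an intercept, the residuals sum to zero, so the adjusted intakes preserve the cohort mean exactly.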
In order to investigate whether birthweight was in the causal pathway from maternal type 1 diabetes to overweight status and metabolic risk in the offspring, birthweight was added as a categorical variable in the third model. Sensitivity analyses As a first sensitivity analysis, we excluded all children who developed type 1 diabetes during follow-up (8/610 in TEENDIAB and 100/2169 in BABYDIAB/BABYDIET), and reassessed the associations between maternal type 1 diabetes and offspring metabolic outcomes. Second, we compared anthropometric outcomes from the offspring of mothers with type 1 diabetes and fathers with type 1 diabetes separately from those for offspring whose parents did not have type 1 diabetes to see whether parental genetic transmission may also be a relevant factor in addition to intrauterine hyperglycaemia. Children who had both parents with type 1 diabetes were not considered in this analysis. Third, we further investigated cross-sectional associations after adjustment for daily energy intake and DII separately in two different models in addition to Tanner’s staging, socioeconomic status and maternal smoking. Fourth, we analysed BMI, weight and height outcomes (not SDS transformed) by adding interaction terms between maternal type 1 diabetes status and child’s age in the combined TEENDIAB and BABYDIAB/BABYDIET cohort data to explore whether the association changed with increasing age. Analyses of metabolomic profiles We further explored the extent to which the offspring’s metabolomic profile may play a mediating role in the association between maternal type 1 diabetes and being overweight. First, we examined associations between every single metabolite concentration and being overweight in the offspring assessed at the same visit using logistic regression models. The Benjamini–Hochberg procedure was used to control the false-discovery rate based on 441 tests in order to account for multiple comparisons. 
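The Benjamini–Hochberg step used above to control the false-discovery rate across the 441 tests can be sketched directly: sort the p values, find the largest rank k with p(k) ≤ (k/m)·q, and reject the k smallest.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses controlling the FDR at
    level q: reject all hypotheses up to the largest rank k whose sorted
    p value satisfies p_(k) <= (k/m) * q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.flatnonzero(below))   # largest rank passing its threshold
        reject[order[: k + 1]] = True
    return reject
```

Note that ranks below k are rejected even if they individually exceed their thresholds, which is what distinguishes the step-up procedure from a simple per-test cut-off.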
Further, principal components analysis with varimax rotation was performed on the 441 log-transformed metabolites to consolidate them into 15 principal components with eigenvalues >5, which accounted for 43% of the variance in metabolites; the associations between these 15 principal components and being overweight in the offspring were analysed. Second, we investigated whether maternal type 1 diabetes was associated with principal components or metabolites that were significant for overweight status, adjusted for age and sex. Third, associations between maternal type 1 diabetes and overweight status in the offspring were assessed after adjusting for metabolites or principal components which were significantly associated with being overweight. In addition, metabolite concentrations were categorised into 68 sub- and eight superpathways [ 23 ]. For each super- and subpathway, the mean of the metabolites belonging to that particular pathway was calculated for all samples and associated with offspring overweight status and maternal type 1 diabetes. Results were reported as absolute change with 95% CI for SDS outcomes, per cent change with 95% CI for log-transformed outcomes and as OR with 95% CI for risk of being overweight and having metabolic abnormalities between offspring of type 1 diabetic and non-diabetic mothers. All analyses were carried out using SAS 9.4 (SAS Institute, Cary, NC, USA) and R 3.4.1. Results The study participants in TEENDIAB and BABYDIAB/BABYDIET had a median follow-up of 3.0 and 10.7 years, respectively, which corresponds to a median of six follow-up visits (TEENDIAB range 1–13; BABYDIAB/BABYDIET range 1–18) resulting in 3583 and 13,235 observations in the TEENDIAB and BABYDIAB/BABYDIET cohorts, including 257 (42%) and 1287 (59%) children of mothers with type 1 diabetes, respectively (Table 1 ).
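The principal components procedure described in the Methods above (PCA on standardised data, an eigenvalue cut-off, then varimax rotation) can be sketched generically. This is a plain implementation of Kaiser's varimax criterion, not the SAS/R code actually used; the toy eigenvalue cut-off in the usage below is scaled down from the study's >5 (which applies to 441 variables).

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Kaiser varimax: orthogonal rotation of the loading matrix that
    maximises the variance of the squared loadings within each component."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        grad = loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt                    # nearest orthogonal rotation
        d_new = s.sum()
        if d_new < d * (1 + tol):     # stop when the criterion plateaus
            break
        d = d_new
    return loadings @ R

def pca_varimax(X, min_eigenvalue=5.0):
    """PCA on standardised data, retaining components whose eigenvalue
    exceeds a cut-off, followed by varimax rotation of the loadings."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    keep = eigval > min_eigenvalue
    loadings = eigvec[:, keep] * np.sqrt(eigval[keep])
    return varimax(loadings)
```

Because the rotation is orthogonal, the communalities (row sums of squared loadings) are unchanged; varimax only redistributes variance among the retained components to make each one load strongly on few variables.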
The age of enrolment and follow-up duration were not significantly different between offspring of type 1 diabetic and non-diabetic mothers in either cohort ( p > 0.90 each; Mann–Whitney U test). Table 1 Characteristics of study participants stratified by maternal type 1 diabetes in the TEENDIAB and BABYDIAB/BABYDIET cohort Full size table Maternal type 1 diabetes and metabolic outcomes in the offspring In TEENDIAB, we observed a pattern of higher BMI SDS, weight SDS, fasting levels of glucose, insulin and C-peptide as well as insulin resistance, and of lower height SDS in offspring of mothers with type 1 diabetes in most age groups (Fig. 1 and electronic supplementary material [ESM] Fig. 1 ). In BABYDIAB/BABYDIET, the anthropometric associations were similar, but weaker and less consistent. However, in mixed models based on all longitudinal measurements, significant associations were observed in both cohorts: offspring of mothers with type 1 diabetes had a significantly higher BMI SDS (TEENDIAB 0.35 [95% CI 0.19, 0.52]; BABYDIAB/BABYDIET 0.13 [95% CI 0.06, 0.20], Tables 2 and 3 ) and increased risk for being overweight (TEENDIAB OR 2.40 [95% CI 1.41, 4.06]; BABYDIAB/BABYDIET OR 1.44 [95% CI 1.20, 1.73]) compared with offspring of non-diabetic mothers. These associations did not change considerably when adjusted for Tanner’s staging, socioeconomic status and maternal smoking. However, after further adjustment for birthweight, the observed associations were attenuated in TEENDIAB and were no longer significant in BABYDIAB/BABYDIET, while the negative associations for height SDS became stronger and significant in both cohorts.
In TEENDIAB, weight SDS, waist circumference SDS and subscapular and triceps skinfold thickness SDSs were also significantly higher in offspring of mothers with type 1 diabetes compared with those whose mothers did not have type 1 diabetes, but only the estimates for waist circumference SDS remained significant when adjusted for potential confounders and birthweight. The offspring of type 1 diabetic mothers showed significantly increased abdominal obesity risk and metabolic risk, as well as significantly increased levels of fasting insulin and HOMA-IR, independent of potential confounders. Significant associations with fasting glucose and C-peptide were observed only after adjustment. Systolic blood pressure SDS was slightly higher in children with type 1 diabetic mothers in unadjusted analyses (+0.16 [95% CI +0.01, +0.31]), but not after adjustment, while no significant differences in lipids were observed between offspring of mothers with or without type 1 diabetes in unadjusted or adjusted models. The observed associations did not change considerably after excluding children who developed type 1 diabetes (data not shown). Also, the offspring of mothers with type 1 diabetes showed stronger anthropometric associations than offspring of fathers with type 1 diabetes when compared with offspring without parents with type 1 diabetes (ESM Table 1 ). Our sensitivity analyses based on 330 children indicated that the associations were independent of total energy intake or DII (ESM Table 2 ). Further, we observed that as children got older, BMI and weight increased at a greater rate in offspring of mothers with type 1 diabetes compared with offspring of non-diabetic mothers, whereas height increased at a greater rate in offspring of non-diabetic mothers (ESM Fig. 2 and 3 ). Fig. 1 Mean and 95% CI for BMI ( a , d ), weight ( b , e ) and height ( c , f ) SDSs stratified by age and maternal type 1 diabetes in the TEENDIAB ( a – c ) and BABYDIAB/BABYDIET ( d – f ) cohorts. 
Black circles, offspring of mothers with type 1 diabetes; white circles, offspring of non-diabetic mothers Full size image Table 2 Effect estimates for anthropometric and metabolic outcomes in offspring born to a mother with vs without type 1 diabetes in the TEENDIAB cohort Full size table Table 3 Effect estimates for anthropometric outcomes in offspring born to a mother with vs without type 1 diabetes in the BABYDIAB/BABYDIET cohort Full size table Analyses of metabolomic profiles The metabolomics blood samples were taken at a median age of 10 years (range 6–16 years), and 48 individuals (10%) were overweight at that time. Of the children included in the metabolomics analyses ( n = 485), 247 (51%) were male and 197 (41%) had mothers with type 1 diabetes. Of the 441 metabolites analysed, 28 showed significant associations with being overweight after multiple testing correction, and 19 of these were of known identity (Table 4 ). All these metabolites were upregulated in overweight individuals, including four metabolites from the amino acid class (valine, kynurenate, tyrosine and alanine), 11 from the lipid class (androgenic steroids such as androsterone sulphate, epiandrosterone sulphate, carnitine and the short-chain acyl-carnitine [butyryl carnitine (C4)], glycerol, thromboxane B 2 , stearidonate and 2-aminoheptanoate), and four metabolites from other classes (N1-methyl-4-pyridone-3-carboxamide, urate, γ-glutamyltyrosine and piperine). At the pathway level, several subpathways such as androgenic steroids and branched-chain amino acid (BCAA) metabolism were upregulated in overweight individuals, as was the superpathway nucleotide (Fig. 2 ). Similarly, three principal components, characterised by androgenic steroids, BCAAs and related metabolites or composed of amino acid, lipid and acetylated peptides, were associated with being overweight (ESM Fig. 4 and ESM Table 3 ). 
The principal components related to androgenic steroids and BCAAs were also positively associated with HOMA-IR ( p < 0.0001 and p = 0.002 respectively), fasting insulin ( p < 0.0001 and p = 0.005) and fasting C-peptide ( p = 0.002 and p < 0.0001). Table 4 Cross-sectional associations between metabolite concentrations and overweight status in the offspring Full size table Fig. 2 Association between super- and subpathways of metabolites and overweight status in the offspring. Pathways located to the right of the zero line indicate upregulation, and left of the zero line indicate downregulation, in overweight individuals. Pathways lying beyond the dashed grey line on both sides indicate associations with p < 0.05 without adjustment for multiple testing. After multiple testing correction, the subpathways of androgenic steroids, fatty acid metabolism (also BCAA metabolism), glycerolipid metabolism, lysine metabolism, polypeptide and food component/plant were upregulated in overweight individuals. Similarly, the superpathway nucleotide was also found to be upregulated in overweight individuals. *Significant after correction for multiple testing. The numbers in brackets represent the number of metabolites in each super- or subpathway. Black squares, superpathway; grey squares, subpathway. SAM, S -adenosyl methionine; TCA, tricarboxylic acid Full size image In contrast, there was no significant association of any metabolite with maternal type 1 diabetes when corrected for multiple testing, and there was not even a significant association at the 5% level for any of the metabolites found to be associated with being overweight (ESM Table 4 ). No significant associations were observed between maternal type 1 diabetes and any of the principal components (ESM Fig. 5 ) or super- and subpathways (ESM Fig. 6 ) after correcting for multiple testing. 
Further, the associations between maternal type 1 diabetes and offspring overweight status remained significant and were not markedly attenuated after adjustment for any potentially relevant single metabolite concentration or principal components (Table 5 ), indicating that none is in the causal pathway. Table 5 Association between maternal type 1 diabetes and being overweight in the offspring adjusting for different covariates in the metabolomics subset ( n = 485) Full size table Discussion Our findings suggest that the offspring of mothers with type 1 diabetes have a higher BMI and increased risk for being overweight as well as increased insulin resistance compared with offspring of non-diabetic mothers. The association between maternal type 1 diabetes and excess weight later in life could be substantially explained by birthweight in our birth cohort data, but only partially in our TEENDIAB data, perhaps because these did not include measurements before school age. Metabolic alterations, however, do not seem to be involved in the pathway. Although some metabolic patterns were found to be associated with being overweight, no such associations were observed with respect to maternal type 1 diabetes. Previous studies that examined the offspring of mothers with type 1 diabetes reported similar findings with respect to excess weight gain, the metabolic syndrome and related outcomes at different ages [ 7 , 8 , 9 , 10 ]. However, one study [ 39 ] found that the prevalence of being overweight in 6–8-year-old offspring of mothers with type 1 diabetes under adequate glycaemic control was similar to that in a reference population, potentially pointing to a possible approach for the early prevention of excess weight gain in these children. 
Our analysis indeed suggests that offspring of mothers with type 1 diabetes are more prone to worsening of metabolic profile than offspring of fathers with type 1 diabetes when compared with offspring whose parents did not have type 1 diabetes, thus providing evidence to support a potential role for intrauterine hyperglycaemia rather than for parental genetic transmission. Previous analyses of the BABYDIAB data (without BABYDIET and with much shorter follow-up than here) suggested that maternal type 1 diabetes may not be an independent predictor of overweight status during childhood but associated factors such as birthweight may predispose individuals to risk of being overweight [ 14 ]. Indeed, the associations between maternal type 1 diabetes and offspring overweight status were attenuated by 62% after adjustment for birthweight in the BABYDIAB/BABYDIET study, but only by 10% in the TEENDIAB study. Moreover, the effect estimates were generally weaker in BABYDIAB/BABYDIET compared with TEENDIAB. We assume that these differences come from the different age structures in the studies. The BABYDIAB/BABYDIET cohort followed children from birth, with most anthropometric measurements taken during the preschool period, whereas recruitment started at a minimum age of 6 years in TEENDIAB. Although both studies followed children until 18 years, anthropometric data were not available after 6 years of age for 30% of the BABYDIAB/BABYDIET participants. Birthweight is more strongly associated with a child’s BMI in early childhood than later, which may explain the observed differences between the two studies. It has also been suggested that maternal diabetes may have a delayed influence on the offspring’s adiposity that increases with age [ 40 , 41 ]. 
We consider it less likely that the differences observed between our two cohorts are caused by different environmental conditions around the time of birth, as the median birth year in TEENDIAB was 2001 compared with 1997 for BABYDIAB/BABYDIET, and a significant association between maternal type 1 diabetes and offspring being overweight has been consistently observed in previous studies irrespective of when the children were born [ 7 , 8 , 9 , 10 ]. Our findings are similar to previous studies on metabolomics and overweight status in children and adolescents without a type 1 diabetes background. Of the 19 metabolite concentrations associated with being overweight in our data, 16 have previously been reported in the literature [ 15 , 16 ]. For example, our finding that elevated androgenic steroid and BCAA-related metabolite patterns are associated with being overweight and increased insulin resistance is consistent with other studies based on data from children without a family history of type 1 diabetes [ 15 , 16 ]. Studies on the association of exposure to maternal diabetes and changes in the offspring’s metabolome are rare. We are aware of only one study which found no significant associations of gestational diabetes and offspring metabolites [ 16 ]. Similarly, we found no associations of maternal type 1 diabetes with metabolite concentrations in the offspring. Nevertheless, we were able to identify differences between the metabolomes of overweight and normal-weight children. It may be possible that these differences were observed as an effect, rather than a cause, of being overweight, and hence are not in the causal pathway between maternal type 1 diabetes and excess weight gain in offspring. The main strength of our study is the prospective design with multiple follow-ups and the availability of a wide range of anthropometric and metabolic outcomes in addition to metabolomics data.
As we had data available from two large study populations, we could validate the results for overweight status and BMI. Both cohorts were based on children with a first-degree relative with type 1 diabetes, who were at increased risk of developing type 1 diabetes themselves, but otherwise healthy. Despite adjustment for some important covariates in our analyses, we cannot rule out the possibility of unmeasured confounding in our study. In particular, we had no data on maternal pre-pregnancy BMI, which is known to play a major confounding role with respect to childhood excess weight gain. However, it should not be as relevant when comparing mothers with and without type 1 diabetes as it would be in the context of other diabetes forms. While the mothers of all BABYDIAB/BABYDIET children had been diagnosed with type 1 diabetes before the index pregnancy, we did not have this information available for the TEENDIAB children. Although we therefore cannot rule out that a small number of the TEENDIAB children had not been exposed to type 1 diabetes in utero, we believe that this is not a major concern as the onset of type 1 diabetes occurs most frequently at a young age and hence before women get pregnant for the first time. To our knowledge, this is the first study examining the influence of the metabolomics profile on the association between maternal type 1 diabetes and offspring overweight status. With 441 metabolites analysed in 485 children, and a number of metabolites confirming previously reported associations with being overweight, we believe that the missing associations between maternal type 1 diabetes and metabolites in our data are not likely to be false-negative findings. In summary, offspring of mothers with type 1 diabetes showed increased adiposity, insulin resistance, fasting insulin and C-peptide compared with offspring of non-diabetic mothers. Certain metabolite concentrations were positively associated with being overweight in the offspring. 
However, metabolic changes seem unlikely to be in the causal pathway between maternal type 1 diabetes and excess weight in offspring, as this association could not be explained by any of the potentially relevant metabolites. Data availability The datasets analysed during the current study are available from the corresponding author on reasonable request. Abbreviations BCAA: Branched-chain amino acid DII: Dietary inflammatory index SDS: Standard deviation score | Children of mothers with type 1 diabetes are at significantly higher risk of being overweight and of exhibiting insulin resistance. These findings were published by scientists from Helmholtz Zentrum München and the Technical University of Munich in the journal Diabetologia. Type 1 diabetes is the most common metabolic disorder in childhood. But what effects does this condition have when sufferers themselves have children? It was already known that children of parents with type 1 diabetes are at much higher risk of developing the disease than the rest of the population. "Moreover, there were also sporadic indications from previous studies that children of mothers with type 1 diabetes are at increased risk of having metabolic syndrome, as the intermittent high blood glucose levels in the uterus appear to have long-term effects on the child's metabolism and body weight," explains Dr. Andreas Beyerlein. "We now had the opportunity to investigate this hypothesis with a large and appropriate dataset," adds the statistician and epidemiologist, who led the study together with Prof. Dr. Anette-Gabriele Ziegler from the Institute of Diabetes Research at Helmholtz Zentrum München. Nearly 2,800 children followed over an 18-year period The starting point for their work was three large studies aimed at understanding the mechanisms underlying type 1 diabetes (TEENDIAB, BABYDIAB and BABYDIET).
"In total, we studied data from nearly 2,800 children with a first-degree relative with type 1 diabetes," explains lead author Anitha Pitchika. "Their metabolic status and body weight were tracked up to the age of 18." "This analysis was possible only now with our dataset, which contains such a large number of mothers with type 1 diabetes," adds Anette-Gabriele Ziegler. "A few decades ago, mothers with this condition were often advised not to get pregnant due to the high risk of complications at birth." The researchers found that children of mothers with type 1 diabetes had a significantly higher body mass index than children from mothers without diabetes. "Children in the TEENDIAB study were, for instance, almost twice as likely to become overweight," explains Andreas Beyerlein. Other parameters, such as waist circumference, fasting blood glucose level and risk for insulin resistance, were also significantly higher if the mother had type 1 diabetes. The scientists corrected for a number of possible confounding factors, such as the mother's socioeconomic status and high birth weight. To find out to what extent the differences were due to fundamental changes in the child's metabolism, the researchers collected metabolomics data from 500 children in the TEENDIAB study. As it turned out, however, they did not find any significant changes in metabolic products and pathways caused by maternal type 1 diabetes. "Our study shows that children of mothers with type 1 diabetes are not only at significantly higher risk of having the condition itself, but are also at greater risk of being overweight and developing insulin resistance," says Anette-Gabriele Ziegler, summarizing the findings. "We would therefore advise that paediatricians should bear this correlation in mind, so that they can react to early warning signs in such children." | 10.1007/s00125-018-4688-x |
Nano | Nanoparticle chomps away plaques that cause heart attacks | Alyssa M. Flores et al. Pro-efferocytic nanoparticles are specifically taken up by lesional macrophages and prevent atherosclerosis, Nature Nanotechnology (2020). DOI: 10.1038/s41565-019-0619-3 Journal information: Nature Nanotechnology | http://dx.doi.org/10.1038/s41565-019-0619-3 | https://phys.org/news/2020-01-nanoparticle-chomps-plaques-heart.html | Abstract Atherosclerosis is the process that underlies heart attack and stroke. A characteristic feature of the atherosclerotic plaque is the accumulation of apoptotic cells in the necrotic core. Prophagocytic antibody-based therapies are currently being explored to stimulate the phagocytic clearance of apoptotic cells; however, these therapies can cause off-target clearance of healthy tissues, which leads to toxicities such as anaemia. Here we developed a macrophage-specific nanotherapy based on single-walled carbon nanotubes loaded with a chemical inhibitor of the antiphagocytic CD47-SIRPα signalling axis. We demonstrate that these single-walled carbon nanotubes accumulate within the atherosclerotic plaque, reactivate lesional phagocytosis and reduce the plaque burden in atheroprone apolipoprotein-E-deficient mice without compromising safety, and thereby overcome a key translational barrier for this class of drugs. Single-cell RNA sequencing analysis reveals that prophagocytic single-walled carbon nanotubes decrease the expression of inflammatory genes linked to cytokine and chemokine pathways in lesional macrophages, which demonstrates the potential of ‘Trojan horse’ nanoparticles to prevent atherosclerotic cardiovascular disease. Main The phagocytic clearance of apoptotic cells (ACs) is a routine homeostatic process that protects tissues from exposure to the inflammatory contents of dying cells 1 , 2 , 3 . To remove these cells, the body engages in a process known as efferocytosis (‘to take to the grave’). 
Efferocytosis is a highly conserved process triggered by ‘eat me’ ligands, which signal to phagocytes to induce engulfment 1 . Conversely, cells may overexpress ‘don’t eat me’ ligands to avoid removal 4 . By delivering an antiphagocytic signal that enables immune evasion, the upregulation of the ‘don’t eat me’ molecule, CD47, is a major mechanism by which cancers establish and propagate disease 4 , 5 . We recently discovered that CD47 signalling also has a critical role in atherosclerosis 6 . Atherosclerosis is the process that underlies heart attack and stroke and has remained the leading cause of death in the United States for nearly the past century 7 , 8 . While pursuing the mechanism by which apoptotic vascular cells escape clearance from the diseased artery, we found that CD47 is markedly upregulated in the atherosclerotic plaque 6 . CD47 functions as a ligand for the signal regulatory protein-α (SIRPα) on macrophages 9 . After this interaction, SIRPα activates the Src homology 2 domain-containing phosphatase-1 (SHP-1) to mediate the intracellular signalling that suppresses phagocytic function 10 . This signalling cascade renders diseased vascular cells resistant to removal and promotes plaque expansion. In hyperlipidaemic mice, CD47-blocking antibodies normalize the defect in efferocytosis, prevent the progression of established lesions, and protect against plaque rupture 6 . However, the antibody-mediated blockade of CD47 also accelerates the off-target removal of certain healthy tissue, which includes the Fc-mediated elimination of red blood cells (RBCs) in the spleen 6 , 11 , 12 . The resulting anaemia and reduced oxygen-carrying capacity may exacerbate ischaemia in individuals with atherosclerotic disease, and thus limit the translational potential of the systemic pro-efferocytic therapies currently in development. 
To develop a method that more specifically and safely restores impaired efferocytic activity, we precision-engineered nanoparticles (NPs) that interrupt CD47-SIRPα signalling in monocytes and macrophages. The system, termed SWNT-SHP1i, involves a backbone of polyethylene glycol (PEG)-functionalized single-walled carbon nanotubes (SWNTs) loaded with (1) a fluorescent probe Cy5.5 and (2) a small-molecule inhibitor of CD47’s downstream effector molecule, SHP-1 (Fig. 1a ). PEG-functionalized SWNTs were chosen because of their ultrahigh loading capacity 13 , favourable toxicology 14 , 15 and ability to accumulate within a specific leukocyte subset, Ly-6C hi monocytes (inflammatory monocytes) 16 . The selectivity for this cell type is important, as Ly-6C hi monocytes are the primary circulating cells recruited to the diseased artery, where they differentiate into lesional macrophages 17 , 18 , 19 . In addition to regulating the inflammatory response, macrophages have a homeostatic role as phagocytes that scavenge lipids and apoptotic debris 20 . As their phagocytic capacity becomes impaired in advanced atherosclerosis, strategies that restore the ‘appetite’ of macrophages have the potential to both combat plaque expansion and prevent the inflammation that results from postapoptotic necrosis. We hypothesized that leveraging SWNTs as a ‘Trojan horse’ would enable us to achieve a plaque-specific modulation of the CD47-SIRPα-SHP-1 axis, and thereby promote the clearance of diseased cells in the lesion and minimize toxicities elsewhere in the body. Fig. 1: SWNT-SHP1i promotes the phagocytosis of ACs by macrophages. a , Schematic of SWNT-SHP1i, which comprises a backbone of SWNTs functionalized with phospholipid PEG (DSPE-PEG, 1,2-distearoyl- sn -glycero-3-phosphoethanolamine- N -[amino(polyethylene glycol)]) to form biocompatible nanotubes, Cy5.5 fluorophore for tracking in vivo delivery and SHP1i via π – π stacking and hydrophobic interactions with the nanotube surface. 
b , Negative staining TEM images show the cylindrical morphology of SWNTs with their surrounding PEG phospholipid layer. Bare SWNTs have an apparent diameter of ~2–3 nm (inner white line). The adsorbed phospholipid PEG chains increase the SWNT diameter to ~5–6 nm (outer white line). Inset: magnified TEM image. c , Ultraviolet–visible spectra of SWNTs, SWNT-Cy5.5 and SWNT-SHP1i. d , Release curve of SHP1i from SWNT-Cy5.5 in serum, which demonstrates the controlled release over 7 days ( n = 3 biologically independent experiments). e , Flow cytometry histograms of uptake studies with murine macrophages (RAW264.7) (left), endothelial cells (centre), and VSMCs (right). f , Cellular uptake assays demonstrate the propensity of SWNTs to specifically accumulate in murine macrophages (RAW264.7) ( n = a minimum of three biologically independent experiments). *** P = 0.0001 and **** P < 0.0001 by one-way analysis of variance (ANOVA) with a Tukey post hoc test. g , In vitro phagocytosis assays confirm that SWNT-SHP1i augments the clearance of apoptotic vascular cells by macrophages at least as potently as the gold standard anti-CD47 antibodies, compared with SHP1i and SWNT-Cy5.5 controls ( n = 5 biologically independent experiments). On the left are representative flow cytometry plots. * P < 0.05 by an unpaired two-tailed t -test and ** P < 0.01 by one-way ANOVA with a Tukey post hoc test. For all the graphs, the data are expressed as the mean and s.e.m. IgG, immunoglobulin. Full size image Preparation and characterization of SWNT-SHP1i After fabricating SWNT-PEG-Cy5.5 (SWNT-Cy5.5) as previously described 16 , we loaded them with a SHP1 inhibitor (SHP1i) (Fig. 1a and Extended Data Fig. 1 ). PEG was used to disperse SWNTs in aqueous solutions, endow biocompatibility and prolong in vivo circulation times 16 .
As shown by transmission electron microscopy (TEM) with negative staining, PEG functionalization resulted in well-dispersed cylindrical NPs with a PEGylated diameter of 5–6 nm, which included a 2–3 nm core nanotube structure (Fig. 1b ). We employed fluorescent Cy5.5 dye for the flow cytometric characterization and loaded SHP1i onto SWNTs through π – π stacking and hydrophobic interactions 13 . Ultraviolet–visible spectroscopy validated the presence of Cy5.5 (sharp peak at 674 nm) and SHP1i loading (absorption peaks at 230, 320 and 490 nm over the characteristic SWNT absorption spectrum) (Fig. 1c ). The presence of Cy5.5 and SHP1i on the SWNTs was further confirmed by (1) the visible colour change in the SWNT-SHP1i solution on SHP1i adsorption, (2) attenuated total reflectance infrared spectroscopy (Extended Data Fig. 1 ) and (3) the shift in the ζ -potential of SWNT-Cy5.5 from −6.69 ± 2.11 to −7.19 ± 2.53 mV on SHP1i loading. No endotoxin was detectable in the synthesized SWNT-SHP1i (<0.01 ng ml−1). To mimic in vivo biological conditions and simulate in vivo release, we studied the release profile of SHP1i from SWNT-Cy5.5 in serum (Fig. 1d ). Similar to the release profile in PBS (Extended Data Fig. 1 ), SWNT-Cy5.5 demonstrated a sustained release of SHP1i for 7 days (nearly linear until day two; diminishing rates through day seven). The ability of this system to gradually offload substantial amounts of drug over a week suggests it may be suitable for delivering a sustained payload in vivo.
Therefore, the propensity of SWNTs to be taken up by phagocytic cells was first examined in murine (RAW264.7) and human (THP-1) macrophages and compared to other vascular cells, such as endothelial and vascular smooth muscle cells (VSMCs). Using Cy5.5 positivity as a surrogate for uptake, flow cytometry revealed that SWNTs were robustly and preferentially taken up by >95% of macrophages relative to non-phagocytic cells (Fig. 1e,f and Extended Data Fig. 2 ). To confirm their ability to inhibit CD47-induced signalling, we next studied the physiological properties of SWNT-SHP1i. In vitro phagocytosis assays confirmed that SHP1i-conjugated SWNTs potently stimulated the clearance of diseased vascular cells exposed to the pro-atherosclerotic tumour necrosis factor-α (Fig. 1g and Extended Data Fig. 2 ). Interestingly, when compared to anti-CD47 antibodies, SWNT-SHP1i yielded the highest degree of apoptotic-cell clearance. Further, SWNT-SHP1i did not alter the cell viability, proliferation rates or apoptosis of macrophages (Extended Data Fig. 2 ). Together, these data indicate that SWNTs stably facilitate the delivery of pro-efferocytic SHP1i specifically to macrophages, enhance their ability to clear ACs and do so without altering the cell physiology in other ways. SWNTs accumulate in atherosclerotic lesions in vivo As we desire an agent that is not only taken up by phagocytes, but also delivers the pro-efferocytic payload to the atherosclerotic lesion, the biodistribution properties of SWNTs were next assessed. These studies were performed using a combination of radiochemical, flow cytometric and histological approaches in apolipoprotein-E-deficient ( apoE − / − ) mice with established plaques after a single systemic infusion of SWNTs labelled with Cy5.5 and/or 89 Zr. Pharmacokinetic analysis of 89 Zr-radiolabelled SWNTs demonstrated excellent serum stability and a blood half-life ( t 1/2 ) of 1.64 h (Fig. 2a and Extended Data Fig. 3 ). 
Consistent with prior reports that demonstrated SWNT distribution to organs of the reticuloendothelial system 15 , 21 , 22 , we observed a high initial uptake of radiolabelled SWNTs in macrophage-rich clearance organs, such as the spleen and liver 7 days postinjection (Fig. 2b ). Flow cytometry analyses of the homogenized organs confirmed this distribution pattern, and demonstrated the specific accumulation of SWNTs locally within the microdissected plaque relative to the surrounding non-atherosclerotic aorta one week after treatment (Fig. 2c,d ). Confocal microscopy further revealed significant SWNT accumulation within the atherosclerotic aortic sinus, with minimal to no accumulation in other non-clearance organs or the healthy aorta (Extended Data Fig. 3 ). No significant uptake was observed in the bone marrow, heart, lung, gut, fat, muscle and kidney (Fig. 2b,c ). Fig. 2: SWNTs accumulate within phagocytes in the atherosclerotic plaque. a , Blood decay curve of 89 Zr-radiolabelled SWNTs. The mean t 1/2 was calculated as 1.64 h ( R 2 = 0.96, n = a minimum of four biologically independent animals per time point). %ID g – 1 , percent injected dose per gram. b , Quantitative biodistribution studies 7 days after the intravenous administration of 89 Zr-SWNTs reveal that SWNTs primarily accumulate in organs with a high macrophage content, such as the spleen and liver ( n = 8 biologically independent animals). c , d , Flow cytometry analyses of homogenized organs confirm the trend for an enhanced uptake by organs of the reticuloendothelial system ( c ), and reveal that SWNT accumulation is largely restricted to the macrophage-rich plaque, as compared to the less disease-prone descending aorta ( n = a minimum of three biologically independent animals) ( d ). * P < 0.05, ** P < 0.01, **** P < 0.0001 by one-way ANOVA with a Tukey post hoc test in c . * P < 0.05 by an unpaired two-tailed t -test in d . e – g , Results after 4 weeks of weekly SWNT administration.
e , SWNTs specifically accumulate within Ly-6C hi monocytes and macrophages in the atherosclerotic aorta, whereas SWNT detection is low in other vascular cells ( n = 4 biologically independent animals). *** P < 0.001, **** P < 0.0001 one-way ANOVA with a Tukey post hoc test. f , Lesional macrophage and SWNT co-localization was confirmed by confocal images of the aortic sinus (co-localized regions indicated by arrows). Scale bars, 50 μm; insets, 25 μm. g , Enhanced uptake is observed by Ly-6C hi monocytes in the aorta compared to the spleen, which suggests that SWNTs may be efficiently delivered to the diseased artery by inflammatory monocytes ( n = 4 biologically independent animals). * P < 0.05 by unpaired two-tailed t -test. The data in f are representative of four independent experiments. For all the graphs, the data are expressed as the mean and s.e.m. MFI, median fluorescence intensity. Full size image SWNTs are taken up by lesional macrophages Prior studies in non-vascular mouse models revealed that >99% of the inflammatory Ly-6C hi monocytes (but <3% of other circulating immune cells) internalize SWNTs within two hours of administration 16 , 23 . To identify the specific vascular cell type(s) in which SWNTs chronically accumulate in vivo, we next performed high-dimensional nine-colour flow cytometry of digested atherosclerotic aortae after a course of serial SWNT injections. After four weekly SWNT injections, ~70% of the lesional Ly-6C hi monocytes and ~60% of the macrophages had taken up SWNTs (versus only ~15% of the neutrophils, ~5% of the endothelial cells and ~5% of the fibroblasts). Negligible amounts of SWNTs were detected in the lymphocytes and VSMCs (Fig. 2e and Extended Data Fig. 4 ). Confocal microscopy confirmed that SWNTs co-localize with lesional macrophages (Fig. 2f and Extended Data Fig. 4 ). 
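Uptake percentages such as the ~70%/~60% figures above are derived from Cy5.5 positivity gates. A minimal sketch of one common gating heuristic (thresholding at a high quantile of a plain-SWNT/unstained control); the study's actual gates were set in FlowJo, so this is illustrative only:

```python
def percent_positive(sample, control, quantile=0.99):
    """Percent of sample events whose Cy5.5 intensity exceeds the given
    quantile of a control population (a common positivity heuristic)."""
    ctrl = sorted(control)
    idx = min(int(quantile * len(ctrl)), len(ctrl) - 1)
    cutoff = ctrl[idx]
    return 100.0 * sum(1 for v in sample if v > cutoff) / len(sample)

# Hypothetical intensities: control autofluorescence spans 0-99; three of
# four sample events sit clearly above that background.
print(percent_positive([50, 150, 200, 300], list(range(100))))  # → 75.0
```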
Flow cytometry showed that a greater percentage of the Ly-6C hi monocytes had taken up SWNTs in the atherosclerotic aorta than the spleen after 4 weeks of therapy (Fig. 2g ). There are two major mechanisms that probably explain this pattern of uptake and robust plaque accumulation. First, SWNTs are taken up by circulating monocytes and traffic to the site of vascular inflammation, as occurs during atherogenesis 17 . Second, SWNTs passively target lesional macrophages (for example, extravasation through disrupted plaque vessels). Altogether, these data indicate that SWNTs chronically accumulate in the desired plaque-resident phagocytes. Pro-efferocytic SWNTs prevent atherosclerosis To assess the therapeutic effect of SWNT-SHP1i on atherosclerosis, we employed two independent murine models of vascular disease (Extended Data Fig. 5 ). These included an accelerated inflammation model (dyslipidaemic apoE −/− mice implanted with subcutaneous angiotensin II-infusing minipumps 24 ) and a chronic atherosclerosis model ( apoE −/− mice fed a high-fat ‘Western’ diet for 11 weeks). Compared to the control treatment (SWNT-Cy5.5), treatment with SWNT-SHP1i via weekly injections resulted in a significant anti-atherosclerotic effect in both models and both sexes (Fig. 3a and Extended Data Figs. 5 and 6 ). Analysis of intraplaque SHP-1 phosphorylation (activity) confirmed that SWNT-SHP1i interrupts the key effector of antiphagocytic signalling downstream of CD47-SIRPα (Fig. 3b ) 25 . To explore efferocytosis in vivo, lesions were assessed for their phagocytic index, or the number of ACs that were either ‘free’ or associated with macrophages due to efferocytosis 6 , 26 . Consistent with findings from in vitro phagocytosis assays, the ratio of free versus macrophage-associated ACs was lower in lesions from SWNT-SHP1i animals, which indicates an enhanced efferocytic activity in the vascular bed (Fig. 3c ). 
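The in vivo efferocytosis readout above, the ratio of free to macrophage-associated apoptotic cells, can be sketched as follows (the counts are hypothetical, chosen only to illustrate the direction of the effect):

```python
def free_to_associated_ratio(free_acs, macrophage_associated_acs):
    """Phagocytic-index-style readout: ratio of 'free' apoptotic cells to
    those associated with macrophages; a lower ratio indicates better
    efferocytosis."""
    return free_acs / macrophage_associated_acs

control = free_to_associated_ratio(30, 10)  # hypothetical lesion counts
treated = free_to_associated_ratio(12, 18)
print(control, treated)  # treated < control → enhanced clearance
```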
As expected, lesions from SWNT-SHP1i-treated mice also displayed smaller necrotic cores (Fig. 3d ) and a reduced accumulation of apoptotic bodies (Fig. 3e ). These therapeutic benefits occurred independently of any changes in traditional cardiovascular risk factors, which included blood pressure, lipid and glucose levels (Extended Data Fig. 5 ). Fig. 3: Pro-efferocytic SWNTs prevent atherosclerosis. a , Mice treated with SWNT-SHP1i ( n = 19) develop significantly reduced plaque content in the aortic sinus relative to SWNT-Cy5.5 controls ( n = 17). These findings were confirmed in a second atherosclerosis model (Extended Data Fig. 5 ). ** P < 0.01 by a two-sided Mann-Whitney U test. Scale bar, 250 μm. b , Compared to the control ( n = 8), SWNT-SHP1i ( n = 9) decreases the phosphorylation of SHP-1, which indicates silencing of the antiphagocytic CD47-SIRPα signal. * P < 0.05 by an unpaired two-tailed t -test. Scale bars, 100 μm. p-SHP1, phosphorylated SHP-1. c – e , Lesions from mice treated with pro-efferocytic SWNTs are more likely to have ACs (indicated by arrows) that have been ingested by lesional macrophages ( n = 9 biologically independent animals per group; scale bar, 25 μm) ( c ), develop smaller necrotic cores (indicated by dotted lines, n = 16 biologically independent animals per group; scale bar, 50 μm) ( d ) and accumulate less apoptotic debris (as assessed by the percentage of cleaved caspase-3 + area in the plaque, indicated by stars, n = 9 biologically independent animals per group; scale bar, 50 μm) ( e ). ** P < 0.01 by an unpaired two-tailed t-test in c and by a two-sided Mann-Whitney U test in d and e . f , 18 F-FDG PET/CT imaging demonstrates that SWNT-SHP1i significantly reduces the vascular inflammation (Supplementary Video 1 ). For all graphs, the data are expressed as the mean and s.e.m. ORO, Oil Red O; DAPI, 4,6-diamidino-2-phenylindole; SUVmean, mean standardized uptake value. 
Full size image Efficient efferocytosis also acts to resolve inflammation and prevent the secondary necrosis of dead cells 2 , 27 . To assess whether SWNT-SHP1i prevented the inflammatory consequences of defective efferocytosis, we next performed in vivo 18 F-fluorodeoxyglucose positron emission tomography/computed tomography ( 18 F-FDG PET/CT) imaging (Supplementary Video 1 ) 28 . Mice treated with SWNT-SHP1i displayed a reduced aortic uptake of 18 F-FDG after treatment compared to the controls (Fig. 3f ), which indicates decreased arterial inflammation. As persistent inflammation is known to promote plaque vulnerability and the risk for acute cardiovascular events, the apparent ability of pro-efferocytic SWNTs to combat inflammation is particularly intriguing. Single-cell RNA sequencing reveals an anti-inflammatory signature of SWNT-exposed macrophages To assess the impact of chronic efferocytosis stimulation on lesional macrophages, large-scale single-cell RNA sequencing (scRNA-seq) was performed on leukocytes from the aortae of SWNT-Cy5.5- and SWNT-SHP1i-treated mice. After fluorescence-activated cell sorting (FACS) for Cy5.5 + versus Cy5.5 − cells, single-cell transcriptional profiles were obtained using droplet-based sequencing (Fig. 4a and Extended Data Fig. 7 ). After quality control and filtering, we analysed ~1,500 immune cells with a mean of ~90,000 sequencing reads per cell and expression quantified across 15,309 genes (Extended Data Fig. 7 ). Unsupervised clustering grouped cells according to their expression pattern and detected seven distinct leukocyte clusters in the combined datasets from the aortae of SWNT-Cy5.5- and SWNT-SHP1i-treated mice (Fig. 4b,c ). The major cell types were defined according to established immune cell markers and cluster-specific marker genes (Supplementary Table 1 and Extended Data Fig. 
7 ), which identified macrophages (Cluster 1), memory T cells (Cluster 2), dendritic cells (Cluster 3), monocytes (Cluster 4), granulocytes (Cluster 6) and a mix of CD4 + /CD8 + cells (Cluster 7). Cluster 5 contained a ‘macrophage-like’ cell type that expressed myeloid-macrophage markers ( Cd68 , Lgals3 and Trem2 ) and genes associated with SMCs and adventitial cells ( Spp1 , Acta2 and Mgp ) 29 . Fig. 4: Single-cell transcriptomics reveal genes and key molecular pathways modulated by a chronic CD47-SIRPα blockade in lesional macrophages. a , Workflow for scRNA-seq which includes aortic cell isolation, drop sequencing and downstream analyses. b , Unsupervised dimensionality reduction identifies seven major cell types with a similar gene expression from the combined SWNT-Cy5.5 control and SWNT-SHP1i datasets ( n = 4 biologically independent animals per group). Data is visualized using t -distributed stochastic neighbour embedding (t-SNE) plots that show the seven distinct cell clusters (left) and SWNT detection in each cell (right). SWNT-positive cells are the most prevalent in lesional macrophages (Cluster 1) and macrophage-like cells (Cluster 5; Extended Data Fig. 7 ). Tc, T cells; DC, dendritic cells. c , Heat map showing the gene expression of ten cluster-defining genes and leukocyte markers (see Supplementary Table 1 for a full list of cluster markers). d , Single-cell differential gene expression analysis identifies the genes regulated by SWNT-SHP1i specifically in lesional macrophages ( n = 4 biologically independent animals per group). GO enrichment and pathway analyses reveal that the CD47-SIRPα blockade results in an increase in the expression of genes related to antigen processing and presentation, and the downregulation of genes associated with monocyte chemotaxis, chemokine signalling and the cellular response to the pro-inflammatory cytokines, IL-1 and interferon-γ (IFN-γ). 
The subclasses of the top GO biological processes (fold enrichment >10, adjusted P value <10 − 2 ) are shown. The sizes of circles are proportional to the enrichment of each biological process. Functional enrichment was assessed using a two-sided Fisher’s exact test with P -value adjustment by Bonferroni correction. MHC, major histocompatibility complex. Full size image After validation of the SWNT selectivity for macrophages by assessing the frequency of Cy5.5 + cells in each cluster (Fig. 4b and Extended Data Fig. 7 ), differential expression analysis was performed to investigate the SWNT-SHP1i-dependent transcriptional response. Gene expression changes in FACS-sorted SWNT-positive cells were compared between treatment groups. We found that SWNT-SHP1i elicited numerous changes in lesional macrophages, which included a decrease in pro-inflammatory transcripts ( Ccl2 , Ccl7 , Ccl8 and Pf4 ) and an upregulation of genes linked to inflammation resolution ( Socs3 and Zfp36) 30 , 31 (Supplementary Table 2 ). Using the identified differential expression genes, we then applied a bioinformatics approach to explore the upstream regulators and functional significance of such alterations. As expected, both SIRPA ( P = 3.26 × 10 − 3 ) and the SHP-1 encoding gene, PTPN6 ( P = 4.05 × 10 − 2 ) were predicted to mediate the observed transcriptional changes. Lesional SWNT-SHP1i-treated macrophages were enriched for genes associated with phagocytosis ( P = 1.78 × 10 − 7 ) and antigen presentation ( P = 1.63 × 10 − 7 ), a process known to be upregulated in macrophages engaged in necrotic cell clearance 32 (Supplementary Tables 3 and 4 ). Pathway analyses also revealed that SWNT-SHP1i induced an expression signature in macrophages that reflects a decreased inflammatory response ( P = 5.5 × 10 –13 ) and reduced chemotaxis of the mononuclear leukocytes ( P = 2.6 × 10 − 6 ). 
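Enrichment P values of this kind come from over-representation tests (the Fig. 4 caption names a two-sided Fisher's exact test with Bonferroni correction). A minimal one-sided hypergeometric-tail sketch with hypothetical gene counts, using the 15,309 quantified genes reported above as the background:

```python
from math import comb

def hypergeom_enrichment_p(k, n_de, K_term, N_genes):
    """One-sided over-representation P(X >= k): k of n_de differentially
    expressed genes fall in a term annotating K_term of N_genes genes."""
    return sum(comb(K_term, i) * comb(N_genes - K_term, n_de - i)
               for i in range(k, min(K_term, n_de) + 1)) / comb(N_genes, n_de)

def bonferroni(p, n_tests):
    """Bonferroni family-wise correction, capped at 1."""
    return min(1.0, p * n_tests)

# Hypothetical term: 12 of 100 DE genes hit a term annotating 150 of the
# 15,309 quantified genes, corrected across 200 tested terms.
p = hypergeom_enrichment_p(12, 100, 150, 15309)
print(bonferroni(p, 200))
```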
Interestingly, Gene Ontology (GO) enrichment analysis further showed that macrophages downregulated genes implicated in the response to interleukin-1 ( P = 8.1 × 10 − 3 ) and interferon-γ ( P = 7.85 × 10 − 4 ) (Fig. 4d and Supplementary Tables 5 and 6 ). In accordance with our observations from PET/CT imaging, it appears that targeted efferocytosis stimulation may reduce vascular inflammation without resulting in serious complications, such as immunosuppression, as described below. Pro-efferocytic SWNTs have a favourable safety profile in vivo Lastly, given that pro-efferocytic antibodies are compromised by adverse effects such as anaemia, the safety profile of SWNT-SHP1i was formally assessed. Previous studies showed that similarly PEG-functionalized SWNTs do not cause acute or chronic toxicities in mice, which encouraged further exploration of their applications in medicine 14 , 15 . SHP1i-conjugated SWNTs also appeared to be biocompatible and well-tolerated (Extended Data Fig. 8 ). Clinical haematology and chemistry results from SWNT-SHP1i-treated mice demonstrated no significant alterations, although there was a decrease in the platelet indices, platelet/large cell ratio and mean platelet volume (Extended Data Figs. 8 and 9 ). These indices are generally interpreted clinically in the context of thrombocytopenia or thrombocytosis 33 . Platelet levels of SWNT-SHP1i-treated animals, however, were in the normal range and there was no difference in bleeding or clotting events observed between treatment groups. SWNT-SHP1i treatment was also not associated with an increase in leukopenia, neutropenia or clinical infections. In addition, SWNT-SHP1i therapy was associated with a reduction in high-sensitivity C-reactive protein levels, a marker of inflammation and cardiovascular risk 34 . Importantly, SWNT-SHP1i treatment was not associated with anaemia, the major complication that impedes the translation of pro-efferocytic antibodies (Fig. 5a ). 
Mice did not develop a compensatory reticulocytosis or splenomegaly (Fig. 5b,c ), as occurs in response to an indiscriminate (systemic) CD47 blockade and the erythrophagocytosis of opsonized RBCs 5 , 6 , 35 . Of note, SHP-1 is primarily expressed in haematopoietic cells, where it negatively regulates multiple pathways in the immune response 36 . Global SHP-1 deficiency is known to cause defects in haematopoiesis and early mortality due to severe interstitial pneumonitis and glomerulonephritis 37 . Given that SWNT-SHP1i treatment did not demonstrate any of these potential toxicities (Extended Data Figs. 8 and 9 ), these data are consistent with the ability of SWNTs to avoid off-target effects due to their specific accumulation within monocytes and macrophages. Fig. 5: Pro-efferocytic SWNTs do not induce clearance of healthy tissue. a , b , Mice treated with SWNT-SHP1i ( n = 22 biologically independent animals) do not develop anaemia ( a ) or a compensatory reticulocytosis ( b ), which occurs in response to anti-CD47-antibody treatment due to the off-target elimination of opsonized RBCs. ** P < 0.01, **** P < 0.0001 by an unpaired two-tailed t -test. c , No significant difference is observed for the weight of the spleen between groups, suggestive of the lack of RBC clearance due to Fc-dependent erythrophagocytosis ( n = 23 biologically independent animals per group, P = 0.065). The IgG and anti-CD47 antibody data in a and b were reported previously ( n = 11 biologically independent animals per group) 6 . For all graphs, data are expressed as the mean and s.e.m. Full size image Conclusions Cardiovascular disease remains the world’s leading killer. Most currently available therapies only target traditional risk factors (such as hypertension and hyperlipidaemia) and do not specifically inhibit the intrinsic, disease-causing pathways known to be active in the vessel wall. 
As the ‘inflammatory hypothesis’ of atherosclerosis is now definitively established 34 , and because robust genetic causation studies implicate defective efferocytosis as a key driver of plaque expansion 38 , new orthogonal therapies for these risk-factor-independent pathways are being sought. Although major progress has been made in developing agents that can suppress lesional inflammation (for example, anti-interleukin-1β (IL-1β) antibodies) and/or reactivate the engulfment of apoptotic debris in the necrotic core (for example, anti-CD47 antibodies), each of these approaches has an Achilles heel that may limit its translational relevance. For example, the CANTOS trial revealed that systemic inhibition of the IL-1β pathway potently reduced inflammation and recurrent major cardiovascular events (without altering the lipid levels), but unfortunately these benefits were offset by a concomitant increase in fatal infections 34 . Similarly, the first human trial of a pro-efferocytic therapy recently provided tantalizing evidence that an anti-CD47 antibody might slow the progression of Hodgkin’s lymphoma, but also came at a cost of increased anaemia, as expected 35 . Accordingly, a more precise targeting of these processes is required if such cutting-edge therapies are to be broadly translated into the cardiovascular realm. The advent of modifiable, macrophage-specific NPs therefore represents a significant advance in the fight against atherosclerosis. Although NPs have been developed for imaging and the treatment of atherosclerosis, the lack of sufficient selectivity of the NP to the target cell (for example, inflammatory monocytes) and desired end organ has hampered their efficacy and utility 39 , 40 . By combining innovations in vascular biology and nanotechnology, we engineered a Trojan horse system that accumulates in the lesional phagocyte, reactivates efferocytosis locally and reduces the plaque burden without inducing a significant off-target toxicity.
SWNTs have also proved to be safe and non-immunogenic in non-human primates 41 , and mechanistic studies have revealed that SWNTs undergo elimination by immune cell peroxidases, such as myeloperoxidase, in a matter of weeks 42 , 43 . This biocompatibility is of crucial importance for the safety of SWNTs. Moreover, our scRNA-seq data indicate that pro-efferocytic SWNTs have the unexpected benefit of suppressing cytokine-dependent vascular inflammation (without the undesirable immunosuppression associated with systemic anti-IL-1β therapy). Although our current and previous studies 16 demonstrate the remarkable selectivity of SWNTs for monocytes and macrophages, further understanding of the mechanism of SWNT selectivity and incorporation of molecular targeting ligands may enable a more efficient delivery to the diseased site, or even to specific macrophage subsets. As the SWNT backbone can be modified to deliver multiple therapeutic agents into the same cell, future studies should determine whether bispecific nanoimmunotherapies that simultaneously target efferocytosis and other aspects of macrophage biology (for example, cholesterol efflux and macrophage skewing) might have a synergistic effect. In addition, unchecked inflammation in atherosclerosis results from the defective clearance of cells that have undergone multiple forms of cell death, such as necroptosis and pyroptosis 44 . Targeting the CD47-SHP1i pathway could thus restore the phagocytosis of apoptotic and non-AC debris that contribute to inflamed and unstable lesions. Future studies should address whether such pro-efferocytic nanotherapies may promote plaque stabilization in advanced disease. Indeed, nanotherapeutics that promote local inflammation resolution have been shown to improve fibrous cap thickness and have a potent atheroprotective effect 45 . 
Given the parallels between plaque-resident and tumour-associated macrophages, it will be interesting to determine whether this platform could also be adapted as a precision therapeutic for the field of immuno-oncology. Methods Preparation and characterization of SWNT-SHP1i The functionalized SWNTs were prepared as previously reported 16 , with slight modifications as follows. Raw HiPco (high-pressure catalytic decomposition of carbon) SWNTs (diameter 0.8–1.2 nm; Unidym) were added to an aqueous solution of DSPE–PEG 5000 –amine (NOF Corp), sonicated for at least 1 h and then centrifuged at 100,000 g for 1 h to obtain PEGylated SWNTs. Unbound surfactant was washed by repeated filtration through 100 kDa filters (Millipore). For conjugation of Cy5.5 Mono NHS Ester (GE Healthcare) to SWNT-PEG, Cy5.5 Mono NHS Ester was incubated with SWNT-PEG solution (10:1 mole ratio) for 2 h. Excess Cy5.5 dye was removed by five to six rounds of centrifugal filtration until the filtrate became clear (Extended Data Fig. 1 ). SWNT concentrations were determined spectrophotometrically with an extinction coefficient of 7.9 × 10 6 M −1 cm −1 at 808 nm (refs 46 , 47 ). For SHP1i loading, the SHP1i solution was added to stirred SWNT-Cy5.5 at 4 °C and pH 7.4 overnight to form SWNT-SHP1i. After 24 h of stirring, SWNT-SHP1i was dialysed against PBS for another 24 h to remove unbound SHP1i molecules. The concentration of the loaded SHP1i was measured using a NanoDrop (Nanodrop2000; Thermo Scientific) at its absorption peak of 320 nm. To verify the synthesis of SWNT-SHP1i, after each step of the synthesis, ultraviolet–visible spectroscopy and attenuated total reflectance infrared spectroscopy in the 4,000–500 cm −1 region (Nicolet iS50 FT/IR Spectrometer) were performed for PEGylated-SWNTs, SWNT-Cy5.5, SWNT-SHP1i and SHP1i. The surface charge of SWNT-Cy5.5 and SWNT-SHP1i was recorded in deionized water using a ZetaSizer Nano ZS (Malvern Instruments).
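The spectrophotometric concentration determination described in the Methods follows the Beer-Lambert law with the quoted extinction coefficient. A minimal sketch:

```python
def swnt_molar_concentration(a_808nm, epsilon=7.9e6, path_cm=1.0):
    """Beer-Lambert law, c = A / (epsilon * l), with the extinction
    coefficient quoted in the Methods (7.9e6 M^-1 cm^-1 at 808 nm)."""
    return a_808nm / (epsilon * path_cm)

# An absorbance of 0.79 at 808 nm in a 1 cm cuvette corresponds to a
# 100 nM SWNT solution.
print(swnt_molar_concentration(0.79) * 1e9, "nM")
```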
Further SWNT characterization methods are given in the Supplementary Information . Preparation and characterization of 89 Zr-SWNTs Sulfo-SMCC solution (2 mg ml − 1 , 20 μl) was added to 0.5 ml of SWNT-Cy5.5 (1 μM) and stirred at room temperature for 2 h. Afterward, excess Sulfo-SMCC was removed by multiple washes using centrifugal filtration (100 kDa). A p -isothiocyanatobenzyl-deferoxamine (DFO) (2 mg ml − 1 , 200 μl) solution in DMSO was then added to SWNT-Cy5.5-Sulfo-SMCC and incubated for 24 h. Extra chelators were washed by repeating the washing steps using centrifugal filtration (100 kDa). 89 Zr-oxalate (Stanford Cyclotron & Radiochemistry Facility) was diluted with PBS (pH 7.4) and a fraction of this solution was added to 0.5 ml of DFO-conjugated SWNT-Cy5.5 and incubated for 1 h at 37 °C with constant shaking. Excess 89 Zr was removed by centrifugal filtration (100 kDa) for 6–8 min at 4,000 g . Instant thin-layer chromatography was used to determine the radiolabelling yield. A Capintec (CRC-15R) dose calibrator and Hidex gamma counter were used to measure the radioactivity of 89 Zr-SWNT-Cy5.5. The radiochemical purity was 100%. Serum stability experiments were performed at 37 °C in fresh mouse serum (Extended Data Fig. 3 ). TEM of SWNTs For SWNT-PEG negative staining, 10 µl of 10 nM SWNT-PEG were drop cast onto ultrathin lacey carbon 400 mesh TEM grids (Ted Pella. Inc.) and incubated for 10 min. The grids were then washed with ultrapure water and negatively stained with 1% uranyl acetate for 30 s and subsequently dried on Whatman grade 1 filter paper. A Cs-corrected Titan TEM (Thermo Fisher Scientific) was operated with an acceleration voltage of 80 kV and a monochromator excitation value of 1. High-resolution TEM images were taken on a Gatan OneView camera via digital micrograph. 
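Gamma-counter readings in the biodistribution analysis are background and decay corrected to the injection time before computing %ID g −1 (as described later in the Methods). A minimal sketch, assuming the physical half-life of 89 Zr (≈78.4 h, a standard value not stated in the text) and hypothetical activities:

```python
ZR89_HALF_LIFE_H = 78.4  # physical half-life of 89Zr in hours (standard value)

def decay_correct(measured_mbq, hours_post_injection, half_life_h=ZR89_HALF_LIFE_H):
    """Back-correct a measured activity to the injection time:
    A0 = A * 2 ** (t / t1/2)."""
    return measured_mbq * 2.0 ** (hours_post_injection / half_life_h)

def percent_id_per_g(corrected_mbq, injected_mbq, organ_wet_weight_g):
    """%ID/g: decay-corrected organ activity normalized to the injected
    dose and the organ's wet weight."""
    return 100.0 * corrected_mbq / injected_mbq / organ_wet_weight_g

# Hypothetical sample counted exactly one half-life after a 5.5 MBq injection.
a0 = decay_correct(1.0, ZR89_HALF_LIFE_H)  # → 2.0 MBq at injection time
print(round(percent_id_per_g(a0, 5.5, 1.0), 2))  # → 36.36
```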
Cell culture Mouse macrophages (RAW264.7, ATCC TIB-71) and mouse yolk sac endothelial cells (C166, ATCC CRL-2581) were grown in DMEM with 10% fetal bovine serum, whereas the human monocyte cell line (THP-1, ATCC TIB-202) was grown in RPMI-1640 medium that contained 10% fetal bovine serum and 0.05 mM 2-mercaptoethanol. Primary vascular SMCs were harvested from the aortae of C57Bl/6 mice and propagated in DMEM with 10% fetal bovine serum 38 . Human coronary artery SMCs (Lonza CC-2583) and human aortic endothelial cells (Lonza CC-2535) were cultured and maintained according to the manufacturer’s (Lonza) instructions. All the cells were cultured in a humidified 5% CO 2 incubator at 37 °C. The cell lines were authenticated by the supplier. None of the cell lines were tested for mycoplasma contamination. SWNT in vitro uptake assay Cells were plated in 24-well plates (Corning) until approximately 70% confluent and then incubated with SWNT-Cy5.5 (4 nM) for 3 h in serum-free media at 37 °C. SWNT-PEG and PBS-treated cells served as negative controls. After washing the cells with PBS, they were collected and analysed by flow cytometry (Scanford cell analyser, Stanford Shared FACS facility). Dead cells were excluded using SYTOX Blue stain (S34837; Invitrogen). The rate of SWNT uptake was evaluated by quantifying the percentage of Cy5.5 + cells using FlowJo10.1.r5 (Tree Star, Inc.). Efferocytosis assay In vitro phagocytosis assays were performed as previously described 6 , 38 . Briefly, RAW264.7 macrophages were labelled with CellTracker Red (1 μM; Life Technologies) and pretreated with SWNT (4 nM), SWNT-Cy5.5 (4 nM), SWNT-SHP1i (4 nM) or SHP1i (300 nM) for 30 min. For target cells, RAW264.7 cells or primary vascular SMCs were labelled with CellTracker Orange (1.25 μM; Life Technologies) and incubated with tumour necrosis factor-α (50 ng ml − 1 ; R&D) for 24 h to induce apoptosis. ACs were plated in 24-well dishes at a density of 1.5 × 10 5 cells per well.
RAW264.7 cells were added to cultured ACs at 3 × 10 5 cells per well and co-incubated for 2 h in serum-free media. Anti-CD47 antibody (10 μg ml − 1 MIAP410; BioXcell) was also tested as a positive control 6 . Cells were washed with PBS, dissociated from wells and analysed by flow cytometry (Scanford cell analyser). Efferocytic activity was evaluated as the percentage of phagocytes that were double-positive cells using FlowJo10.1.r5. Experimental animals apoE − / − mice on a C57BL/6 background (Jackson Laboratory) were used in the following studies. A total of 136 male and female apoE − / − mice were included. All the animals were randomly assigned to the experimental groups. Animal studies were approved by the Stanford University Administrative Panel on Laboratory Animal Care (protocol 27279) and conformed to the National Institutes of Health (NIH) guidelines for the use of laboratory animals. For the biodistribution studies, male apoE −/− mice were initiated on a high-fat Western diet (21% anhydrous milk fat, 19% casein and 0.15% cholesterol; Dyets Inc.) at 20–24 weeks of age and maintained on this for 4 weeks. In the main atherosclerosis intervention studies, 8–10-week-old apoE −/− mice were implanted with subcutaneous osmotic minipumps (model 2004; Alzet) that contained angiotensin II (1,000 ng kg –1 min –1 ; VWR) and initiated on a high-fat Western for the ensuing 4 weeks, as previously described (Extended Data Fig. 5 ) 24 . SWNT therapy began one day before osmotic pump implantation and continued weekly for the duration of the study. Both male and female animals were included as per recent NIH policy (Consideration of Sex as a Biological Variable, NOD-15-102). The ‘angiotensin infusion’ model was also used in the cellular specificity studies, 18 F-FDG PET/CT imaging and scRNA-seq. 
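The efferocytosis assay described above scores the percentage of phagocytes that are double-positive for both cell trackers. A minimal sketch with hypothetical flow-cytometry events:

```python
def efferocytosis_percent(events):
    """events: (phagocyte_tracker_red, target_tracker_orange) booleans per
    flow-cytometry event; returns the percentage of phagocytes that are
    double-positive, i.e. have engulfed an apoptotic cell."""
    phagocytes = [orange for red, orange in events if red]
    return 100.0 * sum(phagocytes) / len(phagocytes)

# Hypothetical events: four phagocytes, two of which engulfed a target cell;
# the final event is a free apoptotic cell, excluded from the denominator.
events = [(True, True), (True, False), (True, True), (True, False), (False, True)]
print(efferocytosis_percent(events))  # → 50.0
```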
In the chronic atherosclerosis studies, 8-week-old male apoE − / − mice were initiated on a high-fat diet and continued on this for the ensuing 11 weeks (without angiotensin II infusion). At 10 weeks of age, mice were injected as described above for a total of 9 weeks, and were euthanized at the age of 19 weeks. Flow cytometry of organs apoE −/− mice were injected a single dose of SWNT-SHP1i, SWNT-Cy5.5 or plain SWNTs (PEGylated but without Cy5.5) via the tail vein at a dose previously studied (200 μl of 0.068 mg ml − 1 SWNTs) 16 . Mice were euthanized after 7 days, and the peripheral blood, bone marrow, aortae and visceral organs were collected. RBCs were removed from the peripheral blood with ammonium-chloride-potassium lysis buffer (Life Technologies). The aortae and visceral organs were homogenized and digested with Liberase TM (2 U ml − 1 ; Roche) and Elastase (2 U ml − 1 ; Worthington) in Hank’s balanced salt solution at 37 °C for 1 h. Digested tissue was passed through a 70 μm strainer to obtain single cell suspensions in 1% BSA/PBS and stained with SYTOX Blue. Fluorescence was detected by flow cytometry (Scanford cell analyser) and analysed using FlowJo10.1.r5. Cell populations were first gated for non-debris (forward scatter versus side scatter), then gated for singlets (forward scatter versus forward scatter width) and viable cells (SYTOX Blue negative ) (Extended Data Fig. 3 ). The viable, single cells were analysed for Cy5.5 median fluorescence intensity, as well as Cy5.5 positivity to determine the percentage of Cy5.5 + cells in each sample. The Cy5.5 median fluorescence intensity was normalized to the autofluorescence of each tissue type, as determined using samples from plain SWNT-injected mice. Pharmacokinetics and biodistribution studies The biodistribution studies were carried out at the treatment dose described above with 5–6 MBq of 89 Zr-labelled SWNTs. apoE −/− animals were sacrificed 7 days postinjection ( n = 8). 
The organs were collected into a preweighed vial and wet weighed. The blood t1/2 was measured by drawing 10 µl of blood from the retro-orbital plexus at prespecified time points (1 h, 2 h, 4 h, 6 h, 8 h, 24 h and 48 h; n = 4–5 per time point). Pharmacokinetic analyses were performed by a first-order exponential decay fitting. All blood t1/2 and biodistribution samples were analysed for ⁸⁹Zr activity using a gamma counter (Hidex Automatic Gamma Counter) and then background and decay corrected to the injection time, converted into megabecquerels using calibrated standards, and the %ID g⁻¹ determined by normalization to the total activity injected. A SpectraMax iD3 (Molecular Devices) was used for the fluorescence-based blood t1/2 study (excitation/emission, 678 nm/718 nm).
SWNT cellular uptake profile
Single-cell suspensions from the aortae and spleen were obtained as described above and incubated with anti-CD16/32 (553142; BD Biosciences) and stained on ice for 30 min with the following antibodies: Alexa Fluor 594-anti-Vimentin (clone EPR3776, ab154207; Abcam), APC-anti-CD31 (clone 390, 17-0311-80; Invitrogen), FITC-anti-Ly-6C (clone AL-21, 553104; BD Biosciences), PE-Cy5-labelled anti-CD5 (clone 53-7.3, 100609; BioLegend), PE-Cy7-anti-Gr-1 (clone RB6-8C5, 25-5931-81; Invitrogen), APC-Cy7-anti-CD11b (clone M1/70, 101225; BioLegend) and Pacific Blue-anti-F4/80 (clone BM8, 123123; BioLegend). For intracellular staining, cells were fixed and permeabilized with buffers (BD Phosflow Fix Buffer I and Perm Buffer III) according to the manufacturer's instructions, then stained with Alexa Fluor 488-anti-alpha-smooth muscle actin (clone 1A4, 50-112-4644; eBioscience). Cell suspensions were subjected to flow cytometry (Becton Dickinson LSR II) and analysed using FlowJo 10.1.r5. Macrophages were identified as CD11b+/Ly-6Clow/F4/80+ cells. Ly-6Chi monocytes were identified as CD11b+/Ly-6Chi/F4/80low cells. Neutrophils were identified as CD11b+/Gr-1hi cells.
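The first-order exponential decay fitting used for the blood t1/2 above amounts to a linear least-squares fit on log-transformed activity. A minimal sketch, assuming a mono-exponential (single-compartment) model; the data below are synthetic, not study measurements:

```python
import math

def blood_half_life(times_h, activities):
    """Fit A(t) = A0 * exp(-k t) by least squares on ln(A);
    return the elimination half-life t1/2 = ln(2) / k, in hours."""
    ys = [math.log(a) for a in activities]
    n = len(times_h)
    mx = sum(times_h) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times_h, ys))
             / sum((x - mx) ** 2 for x in times_h))
    return math.log(2) / -slope  # k = -slope of the log-linear fit

# Synthetic mono-exponential data with a known 6 h half-life,
# sampled at the study's blood-draw time points
t = [1, 2, 4, 6, 8, 24, 48]
a = [100 * 0.5 ** (ti / 6.0) for ti in t]
print(round(blood_half_life(t, a), 2))  # 6.0
```

With noisy real data the same fit applies; weighted or nonlinear fitting (as in dedicated PK software) would refine the estimate.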
Atherosclerosis intervention studies
To evaluate the therapeutic effect of pro-efferocytic SWNTs, apoE−/− mice were treated with either SWNT-Cy5.5 or SWNT-SHP1i. Mice were treated weekly for four weeks in the angiotensin infusion model and for nine weeks in the chronic atherosclerosis studies (see timeline in Extended Data Fig. 5 ). Body weights were evaluated before and after treatment. Animals were observed daily, and in the case of premature sudden death, necropsy was performed to determine the cause of death. Blood pressure was measured in conscious mice at baseline and on a weekly basis throughout the study period (Visitech Systems). After treatment, mice were killed after an overnight fast, with their aortae, peripheral blood and visceral organs collected. The weights of the spleen, heart and kidney were recorded, as well as those of any unusually sized organs. Complete blood count, metabolic panel, high-sensitivity C-reactive protein and lipid profile determinations were performed by the Animal Diagnostic Laboratory in the Stanford Veterinary Service Center.
Tissue preparation and immunohistochemical analysis
For the aortic root analysis, mice were perfused with PBS via a cardiac puncture in the left ventricle and then perfusion fixed with phosphate-buffered paraformaldehyde (4%). Aortic roots and visceral organs were collected, embedded in OCT and sectioned at 7 μm thickness, starting from the base of the aortic root and covering the entire aortic sinus area. Four tissue sections at 100 μm intervals were collected from each mouse and stained with ORO (O1516; Sigma Aldrich). The lesion area was quantified from the luminal aspect of the blood vessel through the plaque to the internal elastic lamina (for example, lipid in the neointima was quantified) and was normalized to the total vessel area by encircling the external elastic lamina of the aortic wall. Necrotic core size and lesional collagen content were assessed with Masson's trichrome (Sigma Aldrich).
The necrotic core was quantified by measuring the total acellular area within each plaque. Immunohistochemical staining for alpha-smooth muscle actin (ab5694, 1:300; Abcam) was performed for the analysis of the SMC content in the fibrous cap, with detection by the Vulcan Fast Red Chromogen kit (Biocare Medical). To assess the lesional SHP-1 activity, sections were co-stained with phospho-SHP1 (ab131500, 1:50; Abcam) and Mac-3 (BD 550292, 1:100; BD Biosciences), followed by Alexa Fluor 488 and 594 (1:250; Life Technologies), respectively. The phospho-SHP1 area was quantified and normalized to the Mac-3 area 6 . To assess ACs in lesions, sections were stained for cleaved caspase-3 (Cell Signaling, 9661, 1:200), followed by Alexa Fluor 488 goat anti-rabbit (Life Technologies, 1:250). The percentage of cleaved caspase-3+ area was calculated and divided by the total atherosclerotic plaque area measured by ORO in serial sections. To study the phagocytosis of ACs by macrophages, the in vivo phagocytic index was calculated 6 , 26 . Sections were co-stained with cleaved caspase-3 and Mac-3, followed by Alexa Fluor secondary antibodies. The phagocytic index was determined by manually counting the number of free ACs versus phagocytosed (macrophage-associated) ACs. For the detection of SWNTs, sections were stained with anti-PEG (PEG-B-47, ab51257, 1:100; Abcam). Frozen lung sections were stained with haematoxylin and eosin (H&E; Richard-Allan). C3 deposition in the kidney was assessed by staining sections with anti-mouse C3 (ab11862, 1:100; Abcam). Lesional SWNT co-localization images were taken on an inverted Zeiss LSM 880 laser scanning confocal microscope. All the other images were taken with a Nikon digital camera mounted on a fluorescent microscope and analysed using Adobe Photoshop CS6 in a blinded fashion.
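The in vivo phagocytic index from the manual AC counts above can be expressed as a one-line ratio. A sketch; the convention used here (macrophage-associated over free ACs) follows the cited efferocytosis literature but is an assumption, and the counts are illustrative:

```python
def phagocytic_index(phagocytosed_acs, free_acs):
    """Ratio of macrophage-associated (phagocytosed) apoptotic cells
    to free apoptotic cells counted in lesion sections."""
    if free_acs == 0:
        raise ValueError("no free ACs counted")
    return phagocytosed_acs / free_acs

# Illustrative counts from one section: 30 associated vs 20 free ACs
print(phagocytic_index(30, 20))  # 1.5
```

A higher index indicates more effective lesional efferocytosis, which is the readout the SWNT-SHP1i intervention is designed to improve.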
In vivo PET/CT imaging
¹⁸F-FDG PET/CT imaging was used to assess the changes in atherosclerotic inflammation in response to treatment with SWNT-SHP1i or control SWNT-Cy5.5 (n = 8 per group) 28 . The mice were fasted overnight prior to the scan. Special precautions were taken during the isoflurane-induced anaesthesia to maintain body temperature (before injection, after injection and during the scan). The radiotracer (15–20 MBq of ¹⁸F-FDG; Stanford Cyclotron & Radiochemistry Facility) was administered intravenously to the mice. In addition, a long-circulating formulation of iodinated triglyceride (Fenestra VC; MediLumine) was used as contrast agent. The mice were placed on the bed of a dedicated small-animal positron emission tomography/computed tomography (PET/CT) scanner (Inveon PET/CT; Siemens Medical Solution) 3 h after the ¹⁸F-FDG administration, and a 30-min static PET scan was obtained. All the images were reconstructed using the OSEM algorithm. The same acquisition bed was used for the CT scan. The CT system was calibrated to acquire 360 projections (voltage 80 kV, current 500 µA). The voxel size was 0.206 × 0.206 × 0.206 mm³. Region-of-interest analysis was performed using IRW software (Inveon Research Workplace; Siemens). ¹⁸F-FDG uptake in the thoracic aorta was quantified by drawing three-dimensional regions of interest on the axial slices from the CT scan. The standardized uptake values were calculated and the mean value was used.
Aortic single-cell preparation for scRNA-seq
Aortae (including the aortic sinus and aortic arch) were carefully dissected free from the perivascular adipose tissue and cardiac muscle, and then digested into single-cell suspensions as described above. Cells were pooled from mice treated with SWNT-SHP1i (n = 4) and SWNT-Cy5.5 (n = 4), and stained with SYTOX Blue to discriminate and exclude non-viable cells.
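The decay correction and standardized uptake value (SUV) computation behind the ROI analysis above can be sketched in a few lines. The 18F half-life is the physical constant; the dose and ROI numbers are illustrative assumptions:

```python
F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18 (minutes)

def decay_correct(activity, elapsed_min):
    """Correct a measured activity back to the injection time."""
    return activity * 2 ** (elapsed_min / F18_HALF_LIFE_MIN)

def suv_mean(roi_conc_kbq_per_ml, injected_mbq, body_weight_g):
    """SUV = tissue concentration / (injected activity / body weight),
    taking 1 g of tissue as ~1 ml (the usual simplification)."""
    return roi_conc_kbq_per_ml / (injected_mbq * 1000.0 / body_weight_g)

# Illustrative numbers: 18 MBq injected into a 25 g mouse, and a thoracic
# aorta ROI reading 720 kBq/ml after decay correction
print(suv_mean(720.0, 18.0, 25.0))  # 1.0
```

The scanner workstation (IRW in this study) performs these steps internally; the sketch only shows why SUV is dimensionless and comparable across animals of different weights.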
Viable cells (SYTOX Blue−) were sorted with a 100 μm nozzle into Cy5.5+ and Cy5.5− populations using a BD Aria II and collected in PBS + 0.04% BSA.
scRNA-seq and data analysis
Samples were resuspended to a concentration of 600–1,000 cells μl⁻¹ in PBS + 0.04% BSA and loaded into the 10x Chromium system to generate single-cell barcoded droplets using the 10x Single Cell 3′ reagent kit v2 (10x Genomics), according to the manufacturer's protocol. The resulting libraries were sequenced on an Illumina HiSeq4000 platform. Detailed methods on library preparation and sequencing are given in the Supplementary Information. Single-cell RNA-sequencing data were preprocessed using 10x Cell Ranger software (Cell Ranger v3.0.2), which included data demultiplexing, barcode processing, alignment and single-cell 3′ gene counting, as previously described 48 . Reads that were confidently mapped to the reference mouse genome (UCSC mm10) were used to generate a gene-barcode matrix for downstream analysis. The filtered gene-barcode matrices that contained only cell-associated barcodes were merged into a combined matrix from the above control (SWNT-Cy5.5) and treated (SWNT-SHP1i) datasets. Genes expressed in <5 cells, cells with <200 or >4,000 detected genes, and cells with a percentage of mitochondrial genes >6% were filtered out. After additionally filtering adventitial cells, 1,274 immune cells were included to assess the effect of chronic inhibition of the CD47-SIRPα-SHP1 axis. The resulting data were log-normalized, scaled and regressed on the number of unique molecular identifiers per cell and the percentage of mitochondrial gene content. Principal component analysis was performed for dimensionality reduction using the top 1,000 variable genes ranked by their dispersion from the combined datasets, followed by unbiased clustering analysis based on the identified principal components, with t-SNE used for data visualization.
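The QC thresholds quoted above (genes in <5 cells; cells with <200 or >4,000 detected genes; >6% mitochondrial counts) can be applied as a plain filter. A hypothetical sketch over a toy gene-barcode matrix, not the Seurat workflow actually used; the toy call relaxes the thresholds so the example stays small:

```python
# Sketch of the scRNA-seq QC filter described in the methods. Plain dicts
# stand in for the Cell Ranger / Seurat objects actually used.

def qc_filter(cells, gene_names, min_cells_per_gene=5,
              min_genes=200, max_genes=4000, max_pct_mito=6.0):
    """cells: list of per-cell dicts {gene: count}.
    Returns (kept_cells, kept_genes) passing the thresholds."""
    keep_genes = [g for g in gene_names
                  if sum(1 for c in cells if c.get(g, 0) > 0) >= min_cells_per_gene]
    kept = []
    for cell in cells:
        n_genes = sum(1 for g in keep_genes if cell.get(g, 0) > 0)
        total = sum(cell.values())
        mito = sum(v for g, v in cell.items() if g.startswith("mt-"))
        pct_mito = 100.0 * mito / total if total else 0.0
        if min_genes <= n_genes <= max_genes and pct_mito <= max_pct_mito:
            kept.append(cell)
    return kept, keep_genes

# Toy matrix with deliberately relaxed thresholds
cells = [
    {"Cd68": 5, "Lyz2": 3, "mt-Co1": 1},  # ~11% mitochondrial -> dropped
    {"Cd68": 4, "Lyz2": 2},               # passes
    {"Rare1": 1},                          # Rare1 seen in 1 cell -> gene dropped
]
kept, genes = qc_filter(cells, ["Cd68", "Lyz2", "mt-Co1", "Rare1"],
                        min_cells_per_gene=2, min_genes=1)
print(len(kept), genes)  # 1 ['Cd68', 'Lyz2']
```

The mitochondrial fraction is computed on the cell's raw counts (before gene filtering), mirroring the usual QC convention.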
To identify cell-type-specific responses to the SWNT-SHP1i treatment, differential expression tests were performed for cell clusters to compare the samples from mice treated with SWNT-SHP1i and those with SWNT-Cy5.5. Differentially expressed genes with P < 0.05 based on the Wilcoxon rank-sum test were considered statistically significant. All downstream analyses were performed with the Seurat R package v3.0 (ref. 49 ).
Pathway analysis
Pathway analysis was performed using significantly upregulated and downregulated genes between the SWNT-SHP1i and SWNT-Cy5.5 datasets. Genes were input for pathway analysis by Qiagen Ingenuity Pathway Analysis for the upstream regulator analysis and assessment of the enriched canonical pathways, diseases and functions, and PANTHER Pathway was used for the GO term enrichment analysis 50 (PANTHER overrepresentation test released on 13 November, GO ontology database released on 1 January 2019). GOPlot 1.0.2 was used for the visualization of results from the GO enrichment analysis.
Statistical analysis
Categorical data were compared using Fisher's exact test. Continuous data are presented as mean ± s.e.m. and were tested for normality using the D'Agostino–Pearson or Shapiro–Wilk test. Groups were compared using the two-tailed Student's t-test for parametric data and the Mann–Whitney U test for non-parametric data. When comparing more than two groups, data were analysed using ANOVA followed by Tukey post hoc tests. Measurements were taken from distinct samples. For atherosclerosis intervention studies, survival analysis was performed using the Kaplan–Meier method, with the log-rank test used to compare time-to-mortality curves. A P value < 0.05 was considered to indicate statistical significance. Statistical analyses were performed using GraphPad Prism 7 (GraphPad Inc.).
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
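The Wilcoxon rank-sum test used for the differential-expression calls in this methods section can be approximated in a few lines via the normal approximation. This sketch uses average ranks for ties but omits the tie and continuity corrections that production implementations (for example Seurat's) apply:

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum P via the normal approximation.
    Average ranks for ties; no tie/continuity correction (a simplification)."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0  # mean of 1-based ranks i+1..j
        i = j
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[v] for v in x)                      # rank sum of group x
    mu = n1 * (n1 + n2 + 1) / 2.0                      # null mean of r1
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # null s.d. of r1
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))            # two-sided P

print(rank_sum_p([1, 6], [2, 5]))  # 1.0 (perfectly balanced ranks)
print(rank_sum_p(list(range(10)), list(range(10, 20))) < 0.05)  # True
```

For small clusters, exact-permutation P values are preferable to the normal approximation; the study's Seurat pipeline handles this internally.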
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Code availability
The R code used for the analysis of the scRNA-seq data can be accessed by contacting A.M.F. at [email protected].
"We could deliver a small molecule inside the macrophages to tell them to begin eating again." This approach also has applications beyond atherosclerosis, he added. "We were able to marry a groundbreaking finding in atherosclerosis by our collaborators with the state-of-the-art selectivity and delivery capabilities of our advanced nanomaterial platform. We demonstrated the nanomaterials were able to selectively seek out and deliver a message to the very cells needed," Smith said. "It gives a particular energy to our future work, which will include clinical translation of these nanomaterials using large animal models and human tissue tests. We believe it is better than previous methods." Smith has filed a provisional patent and will begin marketing it later this year. | 10.1038/s41565-019-0619-3 |
Medicine | Study suggests tumor mutational load may be useful metric to predict response to checkpoint-inhibitor immunotherapy | Robert M. Samstein et al. Tumor mutational load predicts survival after immunotherapy across multiple cancer types, Nature Genetics (2019). DOI: 10.1038/s41588-018-0312-8 Journal information: Nature Genetics | http://dx.doi.org/10.1038/s41588-018-0312-8 | https://medicalxpress.com/news/2019-01-tumor-mutational-metric-response-checkpoint-inhibitor.html | Abstract Immune checkpoint inhibitor (ICI) treatments benefit some patients with metastatic cancers, but predictive biomarkers are needed. Findings in selected cancer types suggest that tumor mutational burden (TMB) may predict clinical response to ICI. To examine this association more broadly, we analyzed the clinical and genomic data of 1,662 advanced cancer patients treated with ICI, and 5,371 non-ICI-treated patients, whose tumors underwent targeted next-generation sequencing (MSK-IMPACT). Among all patients, higher somatic TMB (highest 20% in each histology) was associated with better overall survival. For most cancer histologies, an association between higher TMB and improved survival was observed. The TMB cutpoints associated with improved survival varied markedly between cancer types. These data indicate that TMB is associated with improved survival in patients receiving ICI across a wide variety of cancer types, but that there may not be one universal definition of high TMB. Main In recent years, ICI therapy has revolutionized the treatment of patients with advanced-stage cancers. These agents include antibodies that target CTLA-4 or PD-1/PD-L1 1 . Durable benefit, however, is limited to a minority of patients. Recently, several large phase 3 trials have reported negative results in both unselected patients and selected groups, highlighting the clinical need to identify better predictive biomarkers 2 , 3 , 4 , 5 . 
Early reports have suggested that PD-L1 immunohistochemistry, T-cell infiltration levels, T-cell receptor clonality, gene expression signatures and peripheral blood markers may correlate with clinical response 6 . Additionally, an association between high mutational load and clinical benefit was observed in small cohorts of patients with melanoma treated with CTLA-4 blockade 7 , 8 , and in patients with non-small cell lung cancer (NSCLC), melanoma and bladder cancer treated with PD-1/PD-L1 inhibitors 9 , 10 , 11 . However, it is unclear whether TMB is robustly predictive of clinical benefit across diverse human cancers, or outside of these specific clinical trial populations. In previous studies, mutation load was determined by using whole-exome sequencing, which is not widely utilized in routine clinical care. Currently, the majority of precision oncology platforms use next-generation sequencing of targeted gene panels. At Memorial Sloan Kettering Cancer Center (MSK), as part of clinical care, patients undergo genomic profiling with the Food and Drug Administration (FDA)-authorized Integrated Mutation Profiling of Actionable Cancer Targets (MSK-IMPACT) assay 12 . This test is performed in a Clinical Laboratory Improvement Amendments (CLIA)-certified laboratory environment and identifies somatic exonic mutations in a predefined subset of 468 cancer-related genes (earlier versions included 341 or 410 genes), using both tumor-derived DNA and matched germline normal DNA. We examined the association between nonsynonymous somatic TMB, as measured by MSK-IMPACT, and overall survival after treatment with ICI. The cohort included 1,662 patients whose tumors were profiled by next-generation sequencing and who had received at least one dose of ICI therapy, representing a variety of cancer types with a sufficient number of patients for analysis (Supplementary Fig. 1 ).
Patients who had received atezolizumab, avelumab, durvalumab, ipilimumab, nivolumab, pembrolizumab or tremelimumab as monotherapy or in combination were included in the study. Most patients (1,446, representing 94% of tumors excluding glioma) had stage IV or metastatic disease. A small number of patients had locoregionally recurrent disease (n = 10) or were melanoma patients with regionally advanced unresectable disease (stage III, n = 989) (Supplementary Table 1 ). In total, 146 patients received anti-CTLA-4, 1,256 received anti-PD-1 or anti-PD-L1, and 260 received a combination of anti-CTLA-4 and anti-PD-1/PD-L1 therapies. A large number of patients had cancers for which ICI is FDA-approved, including 350 NSCLCs, 321 melanomas, 151 renal cell carcinomas, 214 bladder cancers and 138 head and neck squamous cell cancers (Supplementary Table 2 ). To calculate TMB, the total number of somatic nonsynonymous mutations was normalized to the total number of megabases sequenced. Overall survival was measured from the date of first ICI treatment to the time of death or most recent follow-up. The median follow-up was 19 months (range 0–80), with 830 (50%) patients alive and censored at most recent follow-up. We defined TMB subgroups by percentile within each histology. We took this approach because the median and range of mutational load have been shown to vary across tumor types 13 ; therefore, a universal cutoff for 'high TMB' would be enriched for tumor types with higher mutation load. Across the entire cohort, stratifying tumors by TMB decile within histology showed that a higher number of mutations was associated with improved overall survival. This significant association, stratified by histology, was seen across a variety of cutpoints chosen to define the high-TMB group (ranging from the top 10–50%; Fig. 1a and Supplementary Figs. 3 and 4 ).
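The TMB definition and per-histology cutoff described above can be sketched directly: mutations are normalized to the panel's sequenced megabases, and the TMB-high flag is assigned within each histology rather than globally. The panel footprint and sample values below are illustrative assumptions, not study data:

```python
def tmb(n_nonsyn, panel_mb):
    """Nonsynonymous mutations per megabase sequenced."""
    return n_nonsyn / panel_mb

def top_quintile_flags(samples, pct=20.0):
    """samples: list of (histology, tmb). Flags the top `pct` percent of
    TMB *within each histology*, mirroring the per-histology cutoff."""
    by_hist = {}
    for hist, val in samples:
        by_hist.setdefault(hist, []).append(val)
    cutoffs = {}
    for hist, vals in by_hist.items():
        vals = sorted(vals)
        k = max(1, round(len(vals) * pct / 100.0))  # size of the high group
        cutoffs[hist] = vals[-k]
    return [(h, v, v >= cutoffs[h]) for h, v in samples]

# The 468-gene MSK-IMPACT panel covers roughly 1.22 Mb of coding
# sequence (approximate figure); 14 nonsynonymous mutations then give:
print(round(tmb(14, 1.22), 1))  # 11.5 mutations/Mb

samples = [("melanoma", t) for t in (2, 8, 15, 40, 60)] + \
          [("glioma", t) for t in (1, 2, 3, 4, 10)]
for row in top_quintile_flags(samples):
    print(row)
```

Note how the toy cutoffs differ by histology (60 for the melanoma group, 10 for the glioma group), which is exactly why a single universal TMB-high threshold would be dominated by high-TMB tumor types.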
A clear trend toward decreasing hazard ratio (HR) of death with increasing TMB cutoff was observed across cancer types, demonstrating increasing benefit from ICI with higher TMB (Fig. 1b and Supplementary Fig. 3 ) 14 .
Fig. 1: Effect of mutational load on overall survival after ICI treatment. a, Kaplan–Meier curves for patients with tumors falling into the depicted deciles of TMB within each histology. Overall survival is from the first dose of ICI. Two-sided log-rank P values are indicated for all patients, with univariate Cox regression HR of 0.76 (95% confidence interval (CI) 0.62–0.94) and 0.52 (95% CI 0.42–0.64) for the 10–20% and top 10% groups, respectively, compared with the bottom 80% group. b, Cox regression hazard ratios for overall survival of 1,662 patients, at the depicted percentile cutoffs of TMB across all cancer subtypes. Solid black circles represent HRs with P < 0.05 (two-sided log-rank P value).
To confirm that these results were present across multiple cancer types, we performed two additional analyses. First, a multivariable analysis across the entire cohort using Cox proportional-hazards regression demonstrated that the tumor mutation burden was significantly associated with overall survival both as a continuous variable (HR = 0.985, P = 3.4 × 10⁻⁷) and with a binary cutoff (top 20% of each histology, HR = 0.61, P = 1.3 × 10⁻⁷), with adjustment for cancer type, age, drug class of ICI and year of ICI start (Table 1 ). Furthermore, this association remained significant with removal of melanoma and NSCLC patients from the cohort (Supplementary Table 2 ), thus indicating that this effect was not solely driven by these histologies.
Table 1: Multivariable analysis of factors associated with overall survival.
We also performed a stratified analysis within each cancer type by selecting the highest mutation load quintile (top 20%) in each histology as the TMB-high group.
Using this approach, we observed a similar association of longer overall survival with higher TMB (top 20% within each histology) across multiple cancer types (Fig. 2 and Supplementary Fig. 5 ). Although the effect for some individual cancers did not reach statistical significance, possibly because of smaller sample size, the numerical trend of better overall survival (HR < 1) was observed in nearly all cancer types, with glioma the clearest exception. Together, these data indicate that the association between TMB and improved survival after ICI is likely to be present in most cancer histologies.
Fig. 2: Effect of nonsynonymous mutational load on overall survival after ICI treatment, by cancer subtype and drug class. Forest plot for all patients in the identified cohort or individual cancer subtypes. Indicated are the number of patients and the HR comparing overall survival after ICI in patients in the highest twentieth-percentile TMB within each histology. Bars represent the 95% CI. The cutoff defining the top 20% of normalized mutational burden from MSK-IMPACT for each cancer type is shown, as well as the two-sided log-rank P value for the comparison of high and low mutational burden survival curves. ER, estrogen receptor. All cancer types in the analysis are displayed.
Consistent with varying distributions of TMB across histologies, the TMB cutoff associated with the top 20% of each cancer type varied markedly (Fig. 2 ). Importantly, this result suggests that there is not likely to be a universal number defining high TMB that is predictive of clinical benefit to ICI across all cancer types, and that the optimal cutpoint is likely to vary for different cancers. A similar numerical trend was observed for longer overall survival with TMB measured as a continuous variable across many histologies, concordant with the number of patients in the subgroup (Supplementary Fig. 6 ).
In agreement with the differences in overall survival, we also observed similar associations between TMB and rates of objective response/clinical benefit to ICI, or progression-free survival, in patients with cancer types for which response data were available: NSCLC, melanoma, esophagogastric, head and neck, and renal cell cancer 15–17 (Supplementary Figs. 7 and 8 ). To investigate the possibility that the observed survival differences among patients with higher-TMB tumors might simply be attributable to a general prognostic benefit of high mutational load, unrelated to ICI, we analyzed the outcomes of 5,371 patients with metastatic cancers who did not receive ICI and whose tumors were sequenced with MSK-IMPACT. In these patients, there was no association between higher TMB and improved overall survival (HR = 1.12, P = 0.11). This lack of prognostic benefit was also observed within each histology (Supplementary Figs. 5 and 9 ). Of note, the TMB cutpoint for the top 20% of colorectal cancer patients was high (52.2/Mb), potentially consistent with many MSI-high colorectal tumors receiving ICI treatment. To evaluate the possibility that the ICI-treated cohort of patients might be enriched for those with higher TMB (if, for example, clinicians were more likely to triage higher-TMB patients to ICI therapy), we repeated the survival analyses, instead calculating the top 20% of TMB among all (both ICI- and non-ICI-treated) patients. The TMB cutpoints in other cancer types were not changed with this calculation, and the associations with survival in each cancer type remained very similar in both the ICI- and non-ICI-treated cohorts (Supplementary Figs. 10 and 11 ). Distinctly from the other cancer types, there was no association between higher TMB and improved survival in patients with glioma; in fact, the trend was toward poorer survival.
Although there have been case reports of dramatic responses to ICI in patients with glioblastoma associated with childhood biallelic mismatch repair deficiency 18 , mismatch repair deficiency is very rare in glioblastoma, and higher TMB in many glioma patients may reflect previous exposure to the alkylating agent temozolomide, which can promote the expansion of less immunogenic subclonal mutations 19 . Alternatively, antitumor immune responses in the central nervous system may be distinct and less dependent on TMB. As would be expected in a large multicancer analysis of tumors sequenced as part of clinical care, the patients included were heterogeneous: some had been heavily pretreated, whereas others were treated with a variety of combination therapies. The timing of MSK-IMPACT testing relative to ICI start was also variable. Nevertheless, the finding of a significant association with overall survival in a heterogeneous cohort underscores the robustness of TMB as a predictive biomarker, thus suggesting that it is likely to be clinically meaningful. TMB, as measured by targeted next-generation sequencing panels such as MSK-IMPACT, has previously been shown to have a high correlation with total mutational burden as measured by whole-exome sequencing 20 , 21 , 22 , 23 , 24 . MSK-IMPACT offers the advantage of matched normal germline sequencing for each patient, permitting precise identification of true somatic mutations. Although TMB measured in exome sequencing is highly correlated with measurements in targeted sequencing, it is important to note that numerical cutpoints may differ across platforms. Additionally, we note that TMB cutoffs for individual histologies may not represent the ideal values for clinical use, and they are shown primarily to demonstrate that a relationship exists between TMB and survival for each histology. We chose a top-twentieth-percentile cutoff for TMB to dichotomize our data, but this does not imply any clinical significance to this threshold.
The variable threshold of TMB across histologies can probably be attributed to distinct tumor microenvironments as well as the numerous other factors shown to independently predict response to ICI, including clonality, immune infiltration, immune cell exclusion, human leukocyte antigen genotype and alterations and expression levels of checkpoint molecules, as well as others 19 , 25 , 26 , 27 , 28 . Our data overall suggest that TMB is associated with increasing overall survival in a dose-dependent fashion. The pancancer nature of this biomarker probably reflects fundamental mechanisms by which ICI functions. Our data are also consistent with the hypothesis that higher mutation load is associated with a higher number of tumor neoantigens presented on major histocompatibility complex molecules that facilitate immune recognition as foreign and the development of an antitumor immune response 29 , 30 . This finding is in line with the observation that patients with hypermutated tumors as a result of defective mismatch repair have high response rates to pembrolizumab, a finding that had led to the FDA’s tissue/site-agnostic approval of this agent for microsatellite-instability-high or mismatch-repair-deficient tumors 31 . Further elucidation of appropriate mutational-load cutoffs with integration of relevant clinical variables within each cancer type will be necessary, probably in the context of prospective clinical studies, to allow for implementation of TMB as a predictive biomarker. Our study addresses several fundamentally important questions in immuno-oncology. Mutational load can predict survival across diverse types of human cancers and is relevant in patients treated with either anti-CTLA-4 or anti-PD-1 therapies. Second, previous studies on the association between mutational load and survival after ICI had examined small cohorts, and therefore the effects of TMB on clinical benefit could not be quantified in a precise manner. 
This study presents genomic data from the largest cohort of patients treated with ICI to date and demonstrates the continuous association between higher TMB and superior overall survival. Capturing as little as 3% of the coding exome by using targeted panels such as MSK-IMPACT appears to provide a sufficient estimation of total tumor mutational load, conferring predictive value for patients in whom ICI treatment is being considered. Finally, the mutational number defining TMB-high appears to vary across cancer types, and there is unlikely to be a universal number that defines the likelihood of benefit from ICI across all histologies. Given the potential toxicities of immunotherapy and the highly variable response to ICI, as well as the significant economic cost of these agents, there is an urgent need for biomarkers that can predict immunotherapy response. Future studies that integrate other genomic or pathologic biomarkers may allow for the development of an even more optimized predictive test to inform clinical decisions on the use of ICI.
Methods
Patient selection
After receiving institutional review board (IRB) approval at MSK, institutional pharmacy records were used to identify patients who had received at least one dose of immunotherapy (atezolizumab, avelumab, durvalumab, ipilimumab, nivolumab, pembrolizumab or tremelimumab), and these were then cross-referenced with patients who had MSK-IMPACT testing done in the context of routine clinical care. Cancer types with more than 35 patients on initial collection were selected for further analysis in the cohort. Most patients who received MSK-IMPACT testing on tumor tissue are enrolled in an IRB-approved institutional research protocol (NCT01775072), with the remaining patients receiving testing as part of routine clinical care; all patients provided informed consent, permitting return of results from sequencing analyses and broader characterization of banked specimens for research.
Details of tissue processing and next-generation sequencing and analysis were previously described 11 . Importantly, concurrent sequencing of germline DNA from peripheral blood was performed for all samples to identify somatic tumor mutations. Patients enrolled in ongoing clinical trials for which publication of outcomes data was prohibited were removed, as were a small proportion of patients with localized disease, including those treated in the neoadjuvant setting (n = 9). Other preceding or concurrent non-ICI treatments were not recorded or accounted for in the analysis. The timing of the tissue pathology on which MSK-IMPACT was performed relative to ICI administration was also heterogeneous, with a small portion of patients receiving testing after ICI administration.
Mutational-load assessment and statistical analysis
The total number of somatic mutations identified was normalized to the exonic coverage of the respective MSK-IMPACT panel in megabases. Mutations in driver oncogenes were not excluded from the analysis. Overall survival analysis on ICI patients was performed from the date of first infusion of any ICI. For patients who received multiple courses of ICI, the first treatment was used for analysis. Patients were censored at the date of the most recently attended appointment at MSK if death was not recorded in the electronic medical record. For analysis of patients who did not receive ICI, all patients for whom MSK-IMPACT data were available across all histologies were included. Overall survival analysis was performed from the date of first infusional chemotherapy. Kaplan–Meier survival analysis was performed, and log-rank P values are reported. Multivariable analysis was performed with Cox proportional-hazards regression with inclusion of variables significant on univariate regression, including normalized TMB, cancer type, age, ICI drug class and year of ICI administration.
The year of ICI administration was included to avoid any possible differences in patients treated in the early years when MSK-IMPACT testing was available. For each histology, we subsequently identified cases in the top twentieth percentile of TMB and determined the log-rank P value for difference in overall survival and the direction of the effect with a HR determined from a coxph model. Additional analyses were performed with the TMB cutoff ranging from 10% to 50%, as well as with the TMB cutoff instead defined among all patients (both ICI- and non-ICI-treated patients). Response data for individual histologies were obtained from published analyses of clinical outcome in the cohorts of patients with NSCLC or esophagogastric cancer 15 , 16 . For patients with head and neck cancer, radiology records were reviewed manually to determine evidence of progression or tumor response. In these tumor types, clinical benefit was defined as any partial/complete response, or evidence of stable disease for ≥6 months. For renal cell carcinoma, time to next treatment was recorded manually for all patients. Statistical analysis was performed in R by using the survival package. GraphPad Prism was used for basic analysis and generating graphs. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Data necessary to reproduce the figures are provided in Supplementary Data. All data are publicly available at . | A large team of researchers affiliated with Sloan Kettering Cancer Center, Weill Cornell Medical Center and Columbia University Medical Center has found that the mutational load of a tumor may be a useful way to predict a response to checkpoint-inhibitor immunotherapy across different types of cancer.
In their paper published in the journal Nature Genetics, the group describes their study of over 1,500 patients with advanced cancer who had undergone checkpoint-inhibitor immunotherapy, and what they found. Checkpoint-inhibitor immunotherapy is a type of treatment for cancer patients whereby an attempt is made to prevent cancer cells from suppressing the body's natural immune response, allowing it to fight tumor development. Unfortunately, it does not work as well for some patients as for others. In their search to understand why this is the case, medical scientists have also been trying to figure out which patients would benefit from such treatment and which would not. Doing so would save precious time for those patients who will not benefit from it, allowing doctors to prescribe a more effective treatment. In this new effort, the researchers found what they believe is a reliable way to test patients prior to administration of treatment: testing their mutational load. A mutational load, also known as tumor mutation burden, is a number that quantifies the rate of DNA faults in a tumor. The study consisted of sequencing cells from a very large number of cancer patients, both those who had undergone checkpoint-inhibitor immunotherapy and those who had not, and looking for a pattern that would indicate differences. They found that those patients with higher mutational loads responded better to checkpoint-inhibitor immunotherapy than those with lower readings. They also found that different types of cancers had different load thresholds. The team notes that they do not know why a high mutation rate makes patients better candidates for checkpoint-inhibitor immunotherapy, but they do have a theory. They think it might be because higher mutation rates tend to result in cell proteins that are more mangled and easier for the immune system to recognize. | 10.1038/s41588-018-0312-8
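The mutational-load assessment in the Methods above (mutation counts normalized to the panel's exonic coverage in megabases, with TMB-high defined per histology at the top 20th percentile) can be sketched in pure Python. This is a minimal illustration, not the authors' pipeline; the ~1.14 Mb panel size and the mutation counts are hypothetical values chosen for the example:

```python
from statistics import quantiles

def tmb_per_mb(mutation_count, panel_mb):
    """Normalize a somatic mutation count to the panel's exonic coverage (Mb)."""
    return mutation_count / panel_mb

def top_quintile_cutoff(tmb_values):
    """TMB value marking the top 20th percentile within one histology."""
    # quantiles(..., n=5) returns the 20/40/60/80th-percentile cut points;
    # the last cut point is the threshold for "TMB-high" in this sketch.
    return quantiles(tmb_values, n=5)[-1]

# hypothetical cohort: raw mutation counts on an assumed ~1.14 Mb panel
counts = [3, 4, 5, 6, 8, 9, 12, 15, 20, 40]
tmbs = [tmb_per_mb(c, 1.14) for c in counts]
cutoff = top_quintile_cutoff(tmbs)
tmb_high = [t for t in tmbs if t >= cutoff]
```

Computing the cutoff separately within each cancer type mirrors the paper's observation that no universal TMB-high threshold applies across histologies.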
Medicine | How one of the oldest natural insecticides keeps mosquitoes away | Feng Liu et al, A dual-target molecular mechanism of pyrethrum repellency against mosquitoes, Nature Communications (2021). DOI: 10.1038/s41467-021-22847-0 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-021-22847-0 | https://medicalxpress.com/news/2021-05-oldest-natural-insecticides-mosquitoes.html | Abstract Pyrethrum extracts from flower heads of Chrysanthemum spp. have been used worldwide in insecticides and repellents. While the molecular mechanisms of its insecticidal action are known, the molecular basis of pyrethrum repellency remains a mystery. In this study, we find that the principal components of pyrethrum, pyrethrins, and a minor component, (E)-β-farnesene (EBF), each activate a specific type of olfactory receptor neurons in Aedes aegypti mosquitoes. We identify Ae. aegypti odorant receptor 31 (AaOr31) as a cognate Or for EBF and find that Or31-mediated repellency is significantly synergized by pyrethrin-induced activation of voltage-gated sodium channels. Thus, pyrethrum exerts spatial repellency through a novel, dual-target mechanism. Elucidation of this two-target mechanism may have potential implications in the design and development of a new generation of synthetic repellents against major mosquito vectors of infectious diseases. Introduction Mosquito-transmitted human diseases, such as malaria and dengue fever, represent significant burdens to global human health and cause considerable human suffering. One of the most effective measures to reduce disease transmission involves the use of insect repellents to prevent human contacts with mosquitoes. Pyrethrum extract from the dried and crushed flower heads of Chrysanthemum (Tanacetum cinerariifolium ) began to be used as an insect repellent against biting arthropods thousands of years ago 1 , 2 . 
Today, T. cinerariifolium plants are grown commercially in many parts of the world, particularly in East Africa and Australia, for extraction of pyrethrum 3 , 4 , and pyrethrum and its synthetic analogs, pyrethroids, are key active ingredients in a variety of commercial insect-repelling products, such as mosquito coils, emanators, vaporizer mats, textile finishes, hessian strips/ribbons 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 . Pyrethrins, the major insecticidal components, are used to control a wide range of pests in both agricultural and non-agricultural settings and may pose harmful effects on bees when used outdoors. In addition, pyrethrum-producing Chrysanthemum spp. are recommended as “companion” plants to repel pest insects 14 . Pyrethrins and pyrethroids exert potent insecticidal activities by hyper-activating insect voltage-gated sodium channels, thereby causing rapid paralysis, known as knockdown, and eventual lethality 15 , 16 , 17 . While the molecular mechanism of insecticidal lethal action of these compounds is well-established, the molecular mechanism(s) underlying the repellency elicited by pyrethrum and pyrethroids, at sublethal levels, remains a mystery. Various possibilities have been proposed, including contact-based repellency, spatial repellency, neuronal irritation and olfactory responses 11 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 . However, it is not clear how these possibilities relate to each other, and, most importantly, if olfactory responses are involved, what is the identity of the odorant receptor(s) responsible for pyrethrum repellency. In this study, we took a combination of molecular genetics, electrophysiological and behavioral approaches to gain insights into pyrethrum repellency in Aedes aegypti , the primary vector of viruses that cause dengue fever, Zika, yellow fever and chikungunya. We aimed to address the following key questions: (i) Is pyrethrum repellency odorant receptor (Or)-mediated?
(ii) If yes, which Or(s) is activated by pyrethrum? (iii) What component(s) of pyrethrum extract activates the Or(s)? and (iv) Is there a synergistic interaction among pyrethrum components in repellency? Our study resolved all these questions and revealed a novel, dual-target mechanism for insect repellency, involving dual activation of olfactory repellency pathways and voltage-gated sodium channels. The discovery has significant basic and practical implications in understanding the mechanisms and development of insect repellents against mosquitoes and other human disease vectors. Results Pyrethrum elicits spatial repellency First, we determined whether pyrethrum elicits spatial repellency using a hand-in-cage assay 27 . In this behavioral assay, mosquitoes are attracted to a human hand in a modified glove (Fig. 1a ) that has a screened window on the back of the glove. Mosquitoes land on a piece of mesh secured on top of the window. Between the top mesh and the hand is a second mesh, which is treated with a test compound or solvent; mosquitoes cannot make direct contact with this second mesh (Fig. 1a ). The landing frequency of female Rockefeller mosquitoes (wild-type Ae. aegypti ) onto the top mesh was significantly reduced when the second mesh was treated with pyrethrum (Sigma-Aldrich), and pyrethrum repellency increased in a dose-dependent manner (Fig. 1b ). Similar spatial pyrethrum repellency was also observed in female Kisumu mosquitoes (wild-type Anopheles gambiae ) (Supplementary Fig. 1a ). These results clearly show contact-independent, spatial repellency of pyrethrum against mosquitoes. Fig. 1: Pyrethrum elicits spatial repellency in Ae. aegypti mosquitoes. a A schematic depiction of the setup for the hand-in-cage assay as previously described 27 .
b Dose-dependent pyrethrum repellency in Rockefeller (wild-type) mosquitoes ( n = 7 cages for control; n = 9 cages for 10 −3 (v v −1 ); n = 8 cages for 10 −2 (v v −1 ) from 3 batches of mosquitoes; t = 2.68, df = 14, P = 0.0181 for control vs. pyrethrum at 10 −3 and t = 9.58, df = 13, P < 0.0001 for control vs. pyrethrum at 10 −2 ). c Representative EAG response traces of Orlando (wild-type) mosquitoes to pyrethrum (10 −2 v v −1 ). d Repellency of pyrethrum (10 −2 v v −1 ) against Orlando or orco −/− mutant mosquitoes ( n = 6 cages for each of the controls; n = 8 cages for pyrethrum in Orlando, n = 7 cages for pyrethrum in orco −/− , from 3 batches of mosquitoes; t = 11.74, df = 12, P < 0.0001 for control vs. pyrethrum in Orlando, U = 3, P = 0.008 for control vs. pyrethrum in orco −/− , and t = 6.53, df = 13, P < 0.0001, for Orlando vs. orco −/− for pyrethrum). e Representative EAG response traces of orco −/− mosquitoes to pyrethrum (10 −2 v v −1 ). f Comparison of EAG responses in Orlando and orco −/− ; t = 4.61, df = 14, P = 0.0004, n = 5 antennae for orco −/− , and n = 11 antennae for Orlando. Two-tailed unpaired Student's t-test or two-tailed Mann-Whitney Rank Sum test was used to compare each of two sets of data. Data are plotted as mean ± s.e.m. and dots denote the value of each repeat. Full size image Next, we examined whether spatial repellency by pyrethrum depends on the mosquito olfaction system. We found that pyrethrum elicits electroantennogram (EAG) responses in Ae. aegypti and An. gambiae mosquitoes (Fig. 1c ; Supplementary Fig. 1b ). Pyrethrum-induced EAG response in Ae. aegypti was also reported recently in another study 24 . In contrast, no response to pyrethrum was detected from mosquito maxillary palps, another major olfactory organ in insects (Supplementary Fig. 1c ). These results indicate that the olfactory receptor neurons (ORNs) in the antennae of Ae. aegypti and An. gambiae mosquitoes are likely responsible for sensing pyrethrum.
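The behavioral readout behind these repellency figures reduces to a percent reduction in mean landing frequency relative to the solvent control. A minimal sketch of that computation; the per-cage landing counts below are invented, not data from the paper:

```python
def mean(xs):
    """Arithmetic mean of a non-empty sequence."""
    return sum(xs) / len(xs)

def repellency_pct(control_landings, treated_landings):
    """Percent reduction in mean landings on the top mesh vs. solvent control."""
    c = mean(control_landings)
    t = mean(treated_landings)
    return 100.0 * (c - t) / c

# hypothetical per-cage landing counts for a solvent control and a treatment
control = [42, 38, 45, 40]
treated = [12, 9, 15, 10]
score = repellency_pct(control, treated)
```

In the paper, group differences like these are then compared with a two-tailed unpaired Student's t-test or a Mann-Whitney test, depending on the data.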
In insects, in addition to individual Ors that are responsible for recognizing specific odorants, an obligate co-receptor (Orco) is required for detection of diverse odorants 28 , 29 , 30 and can be used to infer whether insect attraction or avoidance response is Or-mediated. No obvious degenerate DAPI-stained ORNs were detected from orco −/− Anopheles mosquitoes 31 . We found that pyrethrum repellency was significantly reduced in orco −/− Ae. aegypti mosquitoes, compared to the wild-type Ae. aegypti strain Orlando, from which the orco −/− mutant was generated 32 (Fig. 1d ). The EAG signals from orco −/− mutant mosquitoes were also dramatically reduced compared to those from Orlando mosquitoes (Fig. 1e, f ). These results indicated that spatial repellency by pyrethrum is largely Or-mediated. Pyrethrum activates specific olfactory receptor neurons and odorant receptors Insect Ors are mainly expressed in olfactory receptor neurons (ORNs) in antennae 29 , 33 , 34 , 35 . Three major morphologically distinct types of antennal trichodea sensilla are recognized in Ae. aegypti antennae: short sharp-tipped (sst), long sharp-tipped (lst), and short blunt-tipped (sbt) 36 , 37 . Each Ae. aegypti sensillum houses two neurons: the neuron that generates larger spikes (i.e., action potentials) is called the A neuron and the neuron that produces smaller spikes is called the B neuron. To identify which mosquito ORN(s) responds to pyrethrum, we performed single sensillum recordings (SSR) of antennal olfactory sensilla of Rockefeller mosquitoes in response to a panel of odorants including pyrethrum, (±)-citronellal, geranyl acetate and (-)-borneol (Fig. 2a ; Supplementary Fig. 2 ), most of which are plant-derived mosquito repellents. We identified three responsive sst sensilla, sst-1 to sst-3, and six responsive sbt sensilla, sbt-1 to sbt-6, based on their response profiles to the odorants in our panel (Supplementary Fig. 2 ). lst sensilla did not respond to any odorants in our panel.
Fig. 2: Identification of pyrethrum-responsive sensilla in Ae. aegypti antennae. a A schematic drawing illustrating single sensillum recording (SSR) from Ae. aegypti mosquito antennae. b SSR responses of short sharp-tipped (sst) and short blunt-tipped (sbt) sensilla to pyrethrum (10 −2 v v −1 ) ( n = 8 sensilla for sbt-1; n = 7 for sbt-2, sst-1 and sst-2; n = 3 for sbt-3 and sbt-4; n = 2 for sbt-5 and sbt-6; n = 9 for sst-3). c Representative SSR traces indicating increased firing of A neurons of sbt-1 sensilla ( n = 8 sensilla) or sst-1 sensilla ( n = 7 sensilla) in response to pyrethrum (10 −2 v v −1 ) in Rockefeller mosquitoes. Data are plotted as mean ± s.e.m. and dots denote the value of each repeat. Full size image We noted that sbt-1 and sst-1 sensilla were most responsive to pyrethrum (Fig. 2b ). In particular, the A neuron in the sbt-1 sensilla was excited by pyrethrum, while the B neuron was not (Fig. 2c ). Interestingly, the sbt-1A neuron also exhibited strong excitatory responses to (±)-citronellal and geranyl acetate (Supplementary Fig. 2 ), which are key components of other natural insect repellents. In contrast, the sbt-1B neurons were highly sensitive to indole and toluene (Supplementary Fig. 2 ). For sst-1 sensilla, pyrethrum specifically activated sst-1A neurons, but not sst-1B (Fig. 2c ). In addition, sst-1A was also activated by (±)-citronellal, camphor and eucalyptol (Supplementary Fig. 2 ), which are known repellents against mosquitoes. Furthermore, similar response profiles of sbt-1 and sst-1 to pyrethrum and other volatiles were also detected from Orlando mosquitoes (Supplementary Fig. 3 ). In addition, pyrethrum also weakly activated sbt-4A and sst-2A (Fig. 2b ), suggesting that the olfactory response to pyrethrum is likely not exclusively dependent on sbt-1A and sst-1A. Furthermore, additional sensilla type(s) besides the ones we identified in Supplementary Fig. 2 could respond to pyrethrum. 
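The SSR analysis above rests on two simple operations: assigning each spike to the A or B unit by amplitude, and scoring a response as the stimulus-evoked change in firing rate. A toy sketch of both steps; the amplitude threshold, spike times and window are invented for illustration:

```python
def split_by_amplitude(spikes, threshold):
    """Assign each (time_s, amplitude) spike to the A (large) or B (small) unit."""
    a_times = [t for t, amp in spikes if amp >= threshold]
    b_times = [t for t, amp in spikes if amp < threshold]
    return a_times, b_times

def evoked_rate_change(spike_times, stim_on, window_s=0.5):
    """Firing-rate change: spikes/s during the stimulus minus spikes/s just before."""
    before = sum(stim_on - window_s <= t < stim_on for t in spike_times)
    during = sum(stim_on <= t < stim_on + window_s for t in spike_times)
    return (during - before) / window_s

# illustrative spike train: (time in s, amplitude in arbitrary units)
spikes = [(0.1, 2.0), (0.2, 0.5), (0.6, 2.1), (0.7, 2.2), (0.9, 0.4)]
a_times, b_times = split_by_amplitude(spikes, threshold=1.0)
delta = evoked_rate_change(a_times, stim_on=0.5)  # A-unit response to the stimulus
```

A positive `delta` for the A unit with an unchanged B unit corresponds to the sbt-1A/sst-1A excitation pattern described in the text.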
Having identified sbt-1 and sst-1 as primary ORNs that are responsive to pyrethrum, we next attempted the challenging goal of identifying specific Or(s) that are involved. For this purpose, we took advantage of the availability of an indexed library of An. gambiae Ors (AgOrs) 38 , and expressed each of the 50 AgOrs in the “ab3 empty neuron” system of Drosophila melanogaster , in which the endogenous Or gene Or22a in the ab3 sensillum was deleted 39 . We examined SSR response from the resulting chimeric ab3 sensillum expressing individual AgOrs in the D. melanogaster antenna to the same panel of odorants used in the analysis of Ae. aegypti sensilla. We found that pyrethrum activated AgOr31 (Fig. 3a ), AgOr20, AgOr53 and AgOr76 (Supplementary Fig. 4a ). As shown in Fig. 3b, c (also Supplementary Fig. 4b ), the odorant response profile of the sbt-1A neuron matches remarkably well with that of AgOr31. The odorant response profiles of the other three pyrethrum-activated AgOrs, AgOr20, AgOr53 and AgOr76, did not match with the response profiles of the sbt-1A or sst-1A neurons (Supplementary Fig. 4b, c ). Phylogenetic analysis of Or protein sequences from seven mosquito species, including Ae. aegypti , Ae. albopictus , An. gambiae , An. coluzzii , An. dirus , An. albimanus and Culex quinquefasciatus , revealed that AaOr31 from Ae. aegypti , AgOr31 from An. gambiae and related Ors from the other five mosquito species fall into a single distinct clade (Supplementary Fig. 5 ). Therefore, AgOr31 and AaOr31 are most likely orthologous based on matching odorant response profiles (Fig. 3b, c ) and the close phylogenetic relationship. In contrast, no orthologues of AgOr20 , AgOr53 or AgOr76 are found in Ae. aegypti , Ae. albopictus or Cx. quinquefasciatus , suggesting that Or31 represents a widely conserved mosquito Or gene, whereas AgOr20 , AgOr53 and AgOr76 likely belong to a family of Anopheles -specific Or genes (Supplementary Fig. 5 ).
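The orthology grouping above comes from comparing Or protein sequences and building a tree. As a toy stand-in for that alignment-based workflow, an ungapped percent-identity screen can illustrate the idea; the sequences and names below are invented, not real Or proteins:

```python
def percent_identity(seq_a, seq_b):
    """Naive ungapped identity over paired positions; real pipelines align
    sequences (e.g., multiple-sequence alignment) before tree building."""
    n = min(len(seq_a), len(seq_b))
    if n == 0:
        return 0.0
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / n

def closest_match(query, candidates):
    """Name of the candidate sequence with the highest identity to the query."""
    return max(candidates, key=lambda name: percent_identity(query, candidates[name]))

# invented mini-library keyed by hypothetical receptor names
library = {"speciesA_Or31": "MKTVLLSA", "speciesA_OrX": "GGAPQRNN"}
best = closest_match("MKTVLLSG", library)
```

A clade like the Or31 group would show mutually high identity across species, while lineage-specific genes (the AgOr20/53/76 case) would lack close matches outside their genus.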
Future research using Drosophila empty neurons expressing AaOr31 and Or31 orthologs from the other five mosquito species could further examine the proposed orthologous relationship across the major mosquito species. Fig. 3: Identification of pyrethrum-responsive AgOr31 from An. gambiae . a A representative single sensillum recording (SSR) trace ( n = 8 sensilla) from Drosophila ab3A empty neurons expressing AgOr31 to pyrethrum (10 −2 v v −1 ). An. gambiae AgOrs were expressed heterologously in the empty neuron (i.e., the ab3A neuron of the ab3 sensilla) in D. melanogaster . b Odorant response profiles from AgOr31 ( n = 4 sensilla). c Odorant response profiles from sbt-1A neuron in Ae. aegypti ( n = 16 sensilla/mosquitoes). Data are plotted as mean ± s.e.m. and dots denote the value of each repeat. Full size image ( E )-β-farnesene (EBF) activates pyrethrum-responsive Or31 Given that pyrethrum is an extract containing six insecticidal esters (collectively known as pyrethrins) as major components and several phytochemicals as minor components, including ( E )-β-farnesene (EBF), β-cubebene, ethyl palmitate and ethyl linoleate 40 , 41 , 42 , we next addressed the important question of which component(s) in pyrethrum activates Or31 by conducting SSR of AgOr31-expressing chimeric ab3 sensilla to these individual components. Remarkably, EBF (Sigma-Aldrich) activated not only AgOr31 expressed in Drosophila ab3 empty neurons (Supplementary Fig. 6a ), but also sbt-1A neurons in Ae. aegypti (Fig. 4a ). None of the insecticidal esters, ethyl palmitate, or ethyl linoleate activated AgOr31 (Supplementary Fig. 6b ). β-cubebene was not commercially available to be examined in our study. Furthermore, EBF elicited spatial repellency at the 10 −2 dilution (i.e., 114 μg cm −2 ) (Fig. 4b ). It should be pointed out that a previous study 43 did not observe repellency by EBF at a concentration of ~10 μg cm −2 .
Consistent with their result, we did not detect repellency by EBF at the 10 −3 dilution (i.e., 11.4 μg cm −2 ) (Supplementary Fig. 6c ), indicating that EBF repellency is concentration-dependent. Fig. 4: AaOr31-mediated (E)-β-farnesene (EBF)/pyrethrum repellency in Ae. aegypti . a Representative SSR traces from short blunt-tipped (sbt)-1 sensilla in Rockefeller (wild-type) and AaOr31 −/− mosquitoes in response to EBF ( n = 10 sensilla for Rockefeller and n = 7 sensilla for AaOr31 −/− ), (±)-citronellal ( n = 10 sensilla for Rockefeller and n = 8 sensilla for AaOr31 −/− ), geranyl acetate ( n = 10 sensilla for Rockefeller and n = 8 sensilla for AaOr31 −/− ) and pyrethrum ( n = 10 sensilla for Rockefeller and n = 8 sensilla for AaOr31 −/− ) at 10 −2 dilution. b Repellency in AaOr31 −/− compared with that of Rockefeller mosquitoes to EBF ( t = 3.38, df = 11, P = 0.0061; n = 6 cages for AaOr31 −/− and n = 7 cages for Rockefeller from 3 batches of mosquitoes), (±)-citronellal ( t = 2.26, df = 16, P = 0.0384, n = 9 cages for AaOr31 −/− and n = 9 cages for Rockefeller from 4 batches of mosquitoes), geranyl acetate ( U = 8, P = 0.008, n = 8 cages for Rockefeller, n = 9 cages for AaOr31 −/− from 4 batches of mosquitoes) and pyrethrum ( U = 1, P = 0.001, n = 6 cages for AaOr31 −/− , n = 8 cages for Rockefeller from 3 batches of mosquitoes) at 10 −2 dilution. Two-tailed unpaired Student's t-test or two-tailed Mann–Whitney Rank Sum test was used to compare each of two sets of data. Data are plotted as mean ± s.e.m. and dots denote the value of each repeat. Full size image Knockout of AaOr31 reduced pyrethrum and ( E )-β-farnesene repellency Next, we asked whether Ae. aegypti Or31 (AaOr31) contributes to mosquito avoidance behavior to pyrethrum, EBF and other volatiles that activate AaOr31. For this purpose, we generated AaOr31 knockout ( AaOr31 −/− ) mosquitoes in Ae. aegypti strain Rockefeller using the CRISPR-Cas9 technology.
Genotyping revealed that the AaOr31 gene of AaOr31 −/− mosquitoes carried two deletions in exon 2 of AaOr31 , which resulted in a premature stop codon (Supplementary Fig. 7a ). In AaOr31 −/− mosquitoes, we were able to identify sbt-1 sensilla by the response of sbt-1B neurons to both indole and toluene since none of the other types of sbt sensilla (sbt-2 to sbt-6) responds to both indole and toluene (Supplementary Fig. 2 ). As expected, the response of sbt-1B neurons in AaOr31 −/− mosquitoes to both indole and toluene remained intact (Supplementary Fig. 7b ). Strikingly, the response of sbt-1A neurons to EBF, geranyl acetate or (±)-citronellal was completely abolished in AaOr31 −/− mosquitoes (Fig. 4a ). Furthermore, repellency by EBF, (±)-citronellal and geranyl acetate was significantly reduced in AaOr31 −/− mosquitoes compared to wild-type mosquitoes (Fig. 4b ). Similarly, knockout of AaOr31 abolished the response of sbt-1A neurons to pyrethrum (Fig. 4a ) and reduced pyrethrum repellency (Fig. 4b ). Knockout of AaOr31 did not alter the response profile of sst-1A to pyrethrum and other odorants examined (Supplementary Fig. 7c ). Collectively, these data showed that the AaOr31-mediated olfactory pathway is an important component of the pyrethrum repellency mechanism. Pyrethrins elicit repellency by activating specific olfactory receptor neurons and sodium channels We noted, however, that more than 40% of pyrethrum repellency remained in AaOr31 −/− mosquitoes, as shown in Fig. 4b . Furthermore, the percentage of EBF in commercial pyrethrum extracts is generally very low (ranging from 1.25% to 1.97% based on our analysis of the pyrethrum extracts used in this study), which alone would not be sufficient to evoke repellency. These results suggest additional mechanism(s) underlying pyrethrum repellency. As shown in Fig. 2b, c , besides sbt-1, sst-1 was the only other type of sensillum that strongly responded to pyrethrum.
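The genotyping logic above (deletions in exon 2 producing a premature stop codon) can be sketched as a frame-aware scan for the first in-frame stop after applying a deletion. The sequences here are made up for illustration and are not the AaOr31 coding sequence:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def apply_deletion(cds, start, length):
    """Remove `length` bases starting at 0-based position `start` of a CDS."""
    return cds[:start] + cds[start + length:]

def first_stop(cds):
    """0-based codon index of the first in-frame stop codon, or None."""
    for i in range(0, len(cds) - 2, 3):
        if cds[i:i + 3] in STOP_CODONS:
            return i // 3
    return None

# invented wild-type CDS: ATG CAT AAC GGC TGA (natural stop at codon 4)
wild = "ATGCATAACGGCTGA"
mutant = apply_deletion(wild, 3, 2)  # 2-bp deletion shifts the reading frame
```

Here the frameshift moves the first in-frame stop from codon 4 to codon 1, the kind of premature truncation that confirms a loss-of-function allele.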
We predicted that additional component(s) in pyrethrum likely elicit repellency by activating the sst-1-associated olfactory pathway. Indeed, we found that pyrethrin I (P-I) and pyrethrin II (P-II), two major components purified from pyrethrum extracts (Supplementary Fig. 8 ), activate sst-1A neurons in Rockefeller mosquitoes (Fig. 5a ). A similar response of sst-1A neurons to pyrethrin I was also observed in AaOr31 −/− mosquitoes (Supplementary Fig. 7c ). Both P-I and P-II elicited spatial repellency (Fig. 5b ) and pyrethrin repellency was significantly reduced in orco −/− mosquitoes (Fig. 5b ), suggesting an important role of sst-1A neurons in pyrethrin-induced repellency. In addition, because pyrethrins are known to hyper-activate voltage-gated sodium channels that are critical for electrical signaling 44 , 45 , we examined a possible involvement of activation of sodium channels in pyrethrin repellency. For this purpose, we used the KDR:ROCK mutant line, which is near isogenic to the wild-type Rockefeller strain and resistant to pyrethrum due to two defined mutations in the sodium channel gene 46 (Supplementary Fig. 9a ). Compared with wild-type Rockefeller mosquitoes, we found that pyrethrin/pyrethrum repellency was reduced in KDR:ROCK mosquitoes (Fig. 5c ; Supplementary Fig. 9b ). However, geranyl acetate repellency was not reduced in KDR:ROCK mosquitoes (Fig. 5c ). These results indicate the involvement of sodium channel activation in pyrethrin repellency. Thus, pyrethrins contribute to pyrethrum repellency via two mechanisms: (i) activation of the sst-1A-associated olfactory pathway and (ii) hyper-activation of voltage-gated sodium channels. Fig. 5: Pyrethrin repellency is the result of activation of both Or(s) and sodium channels. a Representative single sensillum recording (SSR) traces of short sharp-tipped (sst)-1 sensilla responding to pyrethrin I (P-I; n = 8 sensilla) and pyrethrin II (P-II; n = 6 sensilla) at the 10 −1 dilution from Rockefeller mosquitoes.
b Repellency elicited by P-I and P-II at the 10 −3 dilution in Orlando (wild-type) and orco −/− mosquitoes ( n = 10 cages for Orlando control and orco −/− control from 2 batches of mosquitoes; P-I: U = 31, P = 0.034, n = 11 cages for Orlando and n = 12 cages for orco −/− from 3 batches of mosquitoes; P-II: U = 15, P < 0.001, n = 14 cages for Orlando and n = 15 cages for orco −/− from 4 batches of mosquitoes). c Repellency by pyrethrum and geranyl acetate in KDR:ROCK mosquitoes compared to that in Rockefeller mosquitoes ( n = 10 cages for control in Rockefeller and n = 9 cages for control in KDR:ROCK from 2 batches of mosquitoes; pyrethrum at the 10 −3 dilution: t = 2.96, df = 28, P = 0.0062, n = 15 cages for Rockefeller and n = 15 cages for KDR:ROCK from 4 batches of mosquitoes; pyrethrum at the 10 −2 dilution: t = 6.56, df = 31, P < 0.0001, n = 17 cages for Rockefeller and n = 16 cages for KDR:ROCK from 4 batches of mosquitoes; geranyl acetate: t = 1.32, df = 16, P = 0.204, n = 9 cages for each strain from 2 batches of mosquitoes). Two-tailed unpaired Student's t-test or two-tailed Mann-Whitney Rank Sum test was used to compare each of two sets of data. Data are plotted as mean ± s.e.m. and dots denote the value of each repeat. Full size image Synergism between AaOr31-mediated repellency and sodium channel-activated repellency Identification of sst-1A and sbt-1A ORNs and voltage-gated sodium channels as targets of pyrethrum repellency raised a fundamental question with respect to the relationship between activation of sodium channels and olfactory pathways in pyrethrum-mediated mosquito repulsion. To address this question, we examined mosquito repellency in response to pyrethrins I/II or EBF alone or in combination. Remarkably, EBF enhanced pyrethrin I or pyrethrin II repellency at a concentration as low as 4 ppm (Fig. 6a ), whereas EBF alone at these concentrations did not evoke any repellency (Supplementary Fig. 6c ).
Importantly, the enhancement was specific to pyrethrins, as no enhancement in repellency was observed between EBF and another plant-derived repellent, (-)-borneol (Fig. 6a ). Although, like pyrethrins, (-)-borneol activates sst-1A neurons (Supplementary Fig. 2 ) and elicits Orco-dependent repellency (Supplementary Fig. 10a ), it cannot activate sodium channels (Supplementary Fig. 10b ), suggesting that EBF enhancement may be specific to sodium channel-activating chemicals, such as pyrethrins. Furthermore, the enhancement was abolished in AaOr31 −/− mosquitoes (Fig. 6b ), indicating that pyrethrins enhance AaOr31-dependent repellency evoked by EBF. Importantly, the synergism between pyrethrins and EBF could be observed even when the concentration of pyrethrin I was reduced to 10 ppm and 1 ppm, which alone did not elicit repellency (Fig. 6c ). Collectively, these results strongly suggest that hyper-activation of sodium channels by pyrethrins, together with AaOr31-mediated repellency by EBF, produces a potent synergism that could explain the pyrethrum repellency against mosquitoes. The potentiation of AaOr31-mediated repellency by pyrethrins raises the possibility of a broad effect of sodium channel hyper-activation on the mosquito olfactory system. We therefore examined a possible effect of pyrethrins on the activities of odorants other than EBF. We found that camphor and eucalyptol, which each activate multiple types of Ae. aegypti sensilla in both Rockefeller and Orlando mosquitoes (Supplementary Fig. 2 ), also evoked spatial repellency in both strains (Supplementary Fig. 11c and 12 ). The magnitude of repellency by eucalyptol and camphor, as well as by P-I and P-II (Figs. 5 and 6 ), was greater in Orlando mosquitoes compared with that in Rockefeller mosquitoes, which likely reflects natural variation between the two strains. Repellency by camphor and eucalyptol was Orco-dependent as reduced repellency was observed in orco −/− mosquitoes (Supplementary Fig. 10c ).
Furthermore, P-I enhanced repellency by camphor and eucalyptol (Supplementary Fig. 11 ), confirming a broad effect of pyrethrins on repellency by other mosquito repellents. Next, we investigated whether the pyrethrin/EBF synergism occurs at the olfactory sensory level. Specifically, we evaluated the activity of sbt-1 neurons in response to EBF alone or in co-application with P-I. As expected, EBF evoked increased firing of sbt-1A neurons, but co-application of P-I and EBF did not enhance the EBF-evoked activity (Supplementary Fig. 12a, b ). In addition, P-I did not alter the baseline firing of sbt-1 neurons in Rockefeller or AaOr31 −/− mosquitoes (Supplementary Fig. 12c ), suggesting that sodium channels in these ORNs are not sensitive to P-I. Collectively, these results suggest that pyrethrin potentiation of Or-mediated repellency does not occur at the olfactory sensory level, pointing to its effect at another step, possibly at the central neural processing level. Fig. 6: Pyrethrin-induced repellency is synergized by (E)-β-farnesene (EBF) via activation of both AaOr31 and sodium channels. a Effect of EBF (4 ppm) on pyrethrin I (P-I; 1000 ppm), pyrethrin II (P-II; 1000 ppm) and (-)-borneol (100 ppm) repellency in Rockefeller ( n = 10 cages for control from 2 batches of mosquitoes; U = 6, P = 0.001; n = 8 cages for P-I + EBF and n = 12 cages for P-I alone from 3 batches of mosquitoes; t = 3.82, df = 16, P = 0.0015; n = 10 cages for P-II alone and n = 8 cages for P-II + EBF from 3 batches of mosquitoes; and t = 0.49, df = 21, P = 0.626, n = 13 cages for (-)-borneol alone and n = 10 cages for (-)-borneol + EBF from 3 batches of mosquitoes). b No effect of EBF (4 ppm) on P-II (1000 ppm) repellency in AaOr31 −/− mosquitoes ( n = 10 cages for control from 2 batches of mosquitoes; t = 0.11, df = 14, P = 0.916, n = 8 cages for P-II alone and n = 8 for P-II + EBF from 2 batches of mosquitoes).
c Effect of EBF (10 ppm) on repellency by P-I (1 ppm and 10 ppm) in Rockefeller ( n = 10 cages for control; P-I at 1 ppm: t = 7.34, df = 18, P < 0.0001, n = 10 for P-I alone and n = 10 for P-I + EBF from 2 batches of mosquitoes; P-I at 10 ppm: t = 3.32, df = 18, P = 0.0038, n = 10 cages for P-I alone and n = 10 cages for P-I + EBF from 2 batches of mosquitoes). EBF alone at 4 ppm and 10 ppm did not elicit repellency in Rockefeller or AaOr31 −/− mosquitoes (Supplementary Fig. 6d ). Two-tailed unpaired Student's t-test or two-tailed Mann-Whitney Rank Sum test was used to compare each of two sets of data. Data are plotted as mean ± s.e.m. and dots denote the value of each repeat. Full size image Discussion In this study, we provide strong evidence that pyrethrum exerts spatial repellency through a novel, dual-target mechanism. A minor component of pyrethrum, ( E )-β-farnesene (EBF), activates AaOr31, while Or31-mediated repellency is significantly synergized by pyrethrins, which are activators of voltage-gated sodium channels and the principal components of pyrethrum. It is remarkable that, centuries ago, humans unknowingly exploited a hidden potent synergism between activation of sodium channels by pyrethrins and activation of AaOr31 by EBF in a natural plant extract against insect bites. Elucidation of this two-target mechanism for a popular natural insect repellent has significant implications in the design of a new generation of synthetic repellent mixtures against major mosquito vectors of infectious human diseases, including malaria, dengue and Zika. AaOr31 is conserved in all three major disease-transmitting mosquito genera, Aedes, Anopheles and Culex (Supplementary Fig. 5 ). To our knowledge, Or31 is the first conserved mosquito odorant receptor that has been demonstrated to mediate repellency.
Furthermore, for the first time, we show that, besides acting on sodium channels, pyrethrins also directly activate Or(s) located in sbt-1A neurons, although the identity of the pyrethrin-activated Or(s) remains to be determined. We speculate that the EBF–pyrethrins synergism in pyrethrum repellency likely results from the action of pyrethrins on hypersensitive sodium channels, which either directly enhances EBF/AaOr31-mediated repellency and/or affects another neural circuit(s) that controls host finding. Exactly how hyper-activation of sodium channels enhances Or-mediated repellency and/or inhibits host finding awaits future investigations. Previous research revealed extensive alternative splicing and RNA editing of the sodium channel transcript, which produces a wide range of functional diversity of insect sodium channels in Drosophila and cockroach, including differential sensitivities to pyrethroids 47 . It is likely that pyrethrins at non-lethal concentrations activate pyrethrin-hypersensitive sodium channel variants expressed in certain neural circuits, which leads to depolarization of the membrane potential and influences the activity of neural circuits in response to EBF and other Or-activating repellents. Our SSR recordings did not detect any effect of pyrethrin I on the EBF-elicited response of sbt-1A neurons, suggesting that sodium channels in the ORNs per se were not the targets of pyrethrins. Functional identification and localization of such pyrethrin-hypersensitive sodium channel variants will be necessary to advance further mechanistic understanding of the pyrethrin–EBF synergism in repellency. The discovery of a two-target mechanism in pyrethrum repellency will likely stimulate future development of a new generation of durable and wide-spectrum insect repellents.
Indeed, citronellyl cyclobutanecarboxylate (CCBC) was recently developed as a promising synthetic citronellol-derived repellent with improved stability and other physiochemical properties. As shown in Supplementary Fig. 13 , we discovered that CCBC activates AgOr31 and sbt-1A neurons and that CCBC repellency is significantly reduced in AaOr31 −/− mosquitoes. Therefore, we have serendipitously identified AaOr31 as a major target of CCBC, in addition to pyrethrum. This illustrates the promising nature of Or31 for new repellent discoveries. Methods Mosquito strains Five Ae. aegypti mosquito strains were used: Rockefeller and Orlando are two wild-type strains, and orco −/− ( orco 16 ) is a mutant strain with the orco gene mutated 32 (BEI Resources, NIAID, NIH). KDR:ROCK is a pyrethroid-resistant strain 46 . The AaOr31 −/− mutant strain was generated in this study using CRISPR-Cas9 technology and was backcrossed with parental Rockefeller mosquitoes for four generations to generate a near-isogenic line for functional analysis. One An. gambiae strain was used: Kisumu (BEI Resources, NIAID, NIH). Hand-in-cage assay The hand-in-cage assay was similar to the hand-in-glove assay by Boyle et al. 27 . The setup for the hand-in-cage assay includes a 30 cm × 30 cm × 30 cm mosquito cage (BioQuip, Rancho Dominguez, CA), with a mounted digital camera and a human hand in a modified glove with a screened window. The digital camera (e-con Systems Inc, San Jose, CA, model: e-CAM51A) for video recording is mounted on the cage top and connected to a laptop computer. A nitrile rubber glove (Ansell Protective Products, Coshocton, OH, part number: 37-155) was cut on the back side to create a window (6 cm × 5 cm) (Fig. 1a ). A piece of magnetic frame (slightly larger than the dimensions of the window) was glued onto the cut window and used as a base for stacking more magnetic window frames (Fig. 1a ; also further explained below).
One piece of test compound-treated polyester netting (Shason Textile Inc., part number: WS-B532-111, white; 6.5 cm × 5.5 cm) was placed on this fixed magnetic frame, which was ~3.0 mm above the glove. The second piece of netting was untreated and placed ~8.0 mm above the treated net using a stack of four magnetic frames. The stacked magnetic frames were further secured with a binder clip. The stacking creates sufficient space between the treated net and the untreated net so that mosquitoes that land on the open window are not able to contact the treated net or to contact and pierce the skin of the hand in the glove. The hand makes no contact with the treated net. The assay was run in a room with relative humidity around 50% and temperature between 27 °C and 30 °C. Twenty-four hours before an assay, four- to nine-day-old females (about 40, mated, non-blood-fed) were transferred into a mosquito cage. The cage was kept in an incubator where mosquitoes were provided only with water in a cotton ball placed on the top of the cage. Immediately before the assay, one researcher (i.e., tester) treated a piece of netting with 500 µl of test compound dissolved in acetone in a glass Petri dish in an adjacent room. Acetone served as a control. After letting the acetone evaporate (~7 min), the researcher assembled and put on the modified glove. In the meantime, a lab assistant transferred the prepared cage from the incubator to a bench in the assay room. Both personnel avoided use of any hand lotions and cosmetic products and wore white lab coats and gloves. The hand in the modified glove was introduced into the cage to initiate the assay. Mosquitoes landing on the test window were recorded by the digital camera for five minutes. The number of mosquitoes landing during the second to fifth minutes was counted and recorded. For each cage, the solvent (acetone) control was tested first, followed by a treatment.
The time interval between control and treatment assays was at least 1.5 h, allowing the mosquitoes to fully recover and residual vapors from experiments to be ventilated out of the room. Controls, which represent mosquito baseline activity in response to solvent, were from two trials of solvent 1.5 h apart to make sure that mosquitoes continued landing in the second trial at the same rate. Data from any cage that gave a low landing number in a control trial were discarded. Percentage repellency was determined for each cage using the following equation: percentage repellency = [1 − (cumulative number of mosquitoes that landed on the window during treatment / cumulative number of mosquitoes that landed on the window during solvent treatment)] × 100. Each experiment was repeated by at least two different testers. Once the assay was done, the cages were immediately sprayed with ethanol (99%), followed by a thorough rinse with distilled water to remove any residual chemicals, and then a second ethanol spray before the cages were left to air dry. The modified glove and its magnetic frames were soaked in ethanol (99%) in a container, rinsed with distilled water, given a second ethanol rinse, and left to air dry. Single sensillum recording Single sensillum recording was conducted as described in Liu et al. 48 . Female mosquitoes 4 days after eclosion were anaesthetized (2–3 min on ice) and mounted on a microscope slide (76 × 26 mm) 48 . An antenna was fixed using double-sided tape to a cover slip resting on a small ball of dental wax to facilitate manipulation. The cover slip was placed at an appropriate angle to the mosquito head. Once mounted, the specimen was placed under a microscope (Eclipse FN1, Japan) and the antenna viewed at high magnification (1000×). Two tungsten microelectrodes were sharpened in 10% KNO 2 at 2–10 V.
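The percentage-repellency equation above is simple enough to encode directly. A minimal sketch in Python; the function name is ours, not part of the published protocol:

```python
def percentage_repellency(landings_treatment, landings_control):
    """Percentage repellency per cage, as defined for the hand-in-cage assay:
    100 * (1 - cumulative landings on the treated window /
               cumulative landings on the solvent-control window)."""
    if landings_control == 0:
        raise ValueError("control landings must be non-zero")
    return (1 - landings_treatment / landings_control) * 100

# Example: 8 landings with treatment versus 40 with solvent control.
print(percentage_repellency(8, 40))   # 80.0
```

Note that cages with a low landing number in the control trial are discarded before this calculation, which avoids dividing by small, noisy denominators.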
The reference electrode, which was connected to ground, was inserted into the compound eye of the mosquito; the other electrode was connected to the preamplifier (10×, Syntech, Kirchzarten, Germany) and inserted into the shaft of an olfactory sensillum (mainly in the 8th to 13th flagellomeres) to complete the electrical circuit for extracellular recording of ORN potentials 49 . Controlled manipulation of the electrodes was performed using a micromanipulator (Burleigh PCS-6000, CA). The preamplifier was connected to an analog-to-digital signal converter (IDAC-4, Syntech, Germany), which in turn was connected to a computer for signal recording and visualization in the software AutoSpike v3.1. The activity of co-located ORNs in each sensillum was assessed based on differences in spike amplitude. The ORN with the larger spike amplitude was designated cell A, and the ORN with the smaller spike amplitude cell B 35 . Signals were recorded for 10 s starting 1 s before stimulation, and the action potentials were counted off-line over a 500-ms period before and after stimulation. The spontaneous firing rates observed in the preceding 500 ms were subtracted from the total spike rates observed during the 500-ms stimulation, and counts were recorded in units of spikes s −1 . Eighteen compounds from different chemical classes besides pyrethrum (Supplementary Fig. 2 ) were selected for functional classification of olfactory sensilla with various morphological shapes. Each compound was diluted in dimethyl sulfoxide (DMSO) to a stock solution with a concentration of 100 μg μl −1 . Subsequently, a series of 10-fold dilutions was made from each of the stock solutions for each compound tested. For each dilution, a 10 μl portion was dispersed onto a filter paper strip (4 × 30 mm), which was then inserted into a Pasteur pipette to create the stimulus cartridge. A sample containing the solvent alone served as control.
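The spike-counting step described above (the spontaneous count in the 500 ms before stimulation subtracted from the count during the 500-ms stimulation, reported in spikes s −1 ) can be sketched as follows. The helper name and the toy spike times are ours; real analyses work from sorted spike timestamps extracted by AutoSpike:

```python
def net_spike_rate(spike_times, stim_onset, window=0.5):
    """Net ORN response in spikes per second: spikes counted in a 500-ms
    window after stimulus onset minus the spontaneous count in the 500-ms
    window immediately before, scaled by the window length."""
    pre = sum(stim_onset - window <= t < stim_onset for t in spike_times)
    post = sum(stim_onset <= t < stim_onset + window for t in spike_times)
    return (post - pre) / window

# 3 spontaneous spikes before stimulus onset at t = 1.0 s, then 12 spikes
# during the 500-ms stimulation.
spikes = [0.6, 0.75, 0.9] + [1.0 + 0.04 * i for i in range(12)]
print(net_spike_rate(spikes, stim_onset=1.0))   # 18.0
```

Averaging this quantity across sensilla, with each replicate taken from a different mosquito, gives the per-compound responses reported in the text.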
The airflow across the antennae was maintained constant at 20 ml s −1 throughout the experiment. Purified and humidified air was delivered to the preparation through a glass tube (10-mm inner diameter) perforated by a small hole 10 cm away from the end of the tube, into which the tip of the Pasteur pipette could be inserted. The stimulus was delivered to the sensilla by inserting the tip of the stimulus cartridge into this hole and diverting a portion of the air stream (0.5 l min −1 ) to flow through the stimulus cartridge for 500 ms using a stimulus controller (Syntech, Germany). The distance between the end of the glass tube and the antennae was ≤1 cm. The number of spikes s −1 was obtained by averaging the results for each sensillum/compound combination. Recordings for each replicate/sensillum were made using different mosquitoes. Electroantennogram The electroantennogram procedure followed that described in Pelletier et al. 50 with minor modifications. Briefly, the head of an adult Ae. aegypti female was excised and mounted on an EAG platform equipped with two micromanipulators and a high-impedance AC/DC preamplifier (Syntech, Germany). Chlorinated silver wires in glass capillaries filled with 0.1% KCl and 0.5% polyvinylpyrrolidone (PVP) were used for both reference and recording electrodes. One antenna, with its tip cut, was accommodated into the recording electrode. The preparation was bathed in a high-humidity air stream flowing at 20 ml s −1 , to which a stimulus pulse of 2 ml s −1 was delivered for 500 ms. Any change in antennal deflection induced by the stimuli or control puffs was recorded for 10 s. All compounds were dissolved in paraffin oil to make a stock solution, from which decadic (10-fold serial) dilutions were made. An aliquot (10 μl) of a tested compound was loaded onto a filter paper strip (4 × 30 mm), which was immediately inserted into a Pasteur pipette for evaporation. Solvent (paraffin oil) alone served as control.
For each compound, EAG responses of 5 to 11 female mosquitoes were recorded. Chemicals Compounds that were used in electrophysiological recordings and behavioral assays are listed in Supplementary Table 1 . The empty neuron system We followed the method of Gonzalez et al. 51 for heterologous expression of AgOrs in the ab3 empty neurons. The Gal4 line ( w; Cyo/Δhalo; Or22a-Gal4 ) was kindly provided by John Carlson (Yale Univ.), and 50 UAS-AgOr lines were obtained from the Bloomington Drosophila stock center. Flies from each UAS-AgOr line with red eyes and curly wings were crossed with the Gal4 line, and progeny with red eyes and straight wings were selected for single sensillum recording. sgRNA design and production The procedure for sgRNA synthesis followed the description of Li et al. 52 with minor modifications. Two guide RNAs (Supplementary Table 2 ) were designed by searching the sense and antisense strands of the AaOr31 gene (AAEL013217) for the presence of protospacer-adjacent motifs (PAMs) with the sequence NGG using the CHOPCHOP online tool. Linear double-stranded DNA templates for all sgRNAs were generated by template-free polymerase chain reaction (PCR) using NEB Q5 high-fidelity DNA polymerase (catalog # M0491S) and a sense primer AaOR31crisprF-1 or AaOR31crisprF-2 paired with an antisense primer AaOR31crisprR (Supplementary Table 2 ). PCR reactions were heated to 98 °C for 30 s, followed by 35 cycles of 98 °C for 10 s, 58 °C for 10 s, and 72 °C for 10 s, then 72 °C for 2 min. PCR products were purified using the Promega Wizard SV Gel and PCR Clean-up System (catalog # A9281). Following PCR, sgRNAs were synthesized using the Ambion Megascript T7 in vitro transcription kit (catalog # AM1334, Life Technologies) according to the manufacturer's protocol using 300 ng of purified DNA template.
Following in vitro transcription, the sgRNAs were purified using the MegaClear Kit (catalog # AM1908, Life Technologies), diluted to 1000 ng μl −1 in nuclease-free water and stored in aliquots at −80 °C. Recombinant Cas9 protein from Streptococcus pyogenes was obtained commercially (CP01, PNA Bio Inc), diluted to 1000 ng μl −1 in nuclease-free water and stored in aliquots at −80 °C. CRISPR-mediated microinjections Embryo collection and CRISPR microinjections were performed following the procedure described by Li et al. 52 . Briefly, Rockefeller mosquitoes were blood-fed 5 days before egg collection. An ovicup filled with ddH 2 O and lined with filter paper was placed into a cage and female mosquitoes were allowed to lay eggs in the ovicup in the dark. After 15–30 min, the ovicup was taken out and unmelanized eggs were transferred onto a glass slide. The eggs were quickly aligned on a wet piece of filter paper. Aluminosilicate needles were pulled on a Sutter P-1000 needle puller and beveled using a Sutter BV-10 beveler. An Eppendorf FemtoJet was used for power injections under a compound microscope at 100× magnification. About 50 eggs were injected each time, immediately after fresh eggs were collected. The concentrations of components used in the study were as follows: Cas9 protein at 300 ng μl −1 and each sgRNA at 40 ng μl −1 . After injection, eggs were placed in a cup filled with water and allowed to hatch and develop into adults. To identify mutants, we performed genotyping at each generation of mosquitoes after injection by cutting one hind leg off for genomic DNA isolation. Genomic DNA was extracted using the DNeasy Blood & Tissue Kit (QIAGEN) following the manufacturer’s protocol. Target loci were amplified by PCR using the primers AaOR31F (5’-ATTGGCATGCGCTACTTTTATT-3’) and AaOR31R (5’-ATAACATCCTTTAGCCAGTGCC-3’) (also in Supplementary Table 2 ).
PCR products were gel purified and sent directly for Sanger sequencing using the forward primer used in the PCR reactions. Data analysis, statistics and experimental repeats All statistical analyses were done using Prism 5 (GraphPad Software). Data are presented as mean ± s.e.m. Unpaired Student’s t -tests were used to compare two sets of data. If the data did not meet the normality or equality-of-variance assumptions needed for Student’s t -tests, the equivalent Mann–Whitney rank-sum test was used instead. Significance for all tests was set at P < 0.05. Each hand-in-cage experiment and SSR recording was repeated by two or more researchers. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support all experimental findings of this study are available within the paper and its Supplementary Information files. Raw data necessary to reproduce all statistical analyses and results in the paper, as well as P values for all figures, are provided in the source data file. Source data are provided with this paper. VectorBase and the NCBI website were used.
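The unpaired Student's t statistic used throughout the figures can be computed from first principles. A sketch in pure Python, assuming equal variances (pooled); the paper used Prism, and turning t into a P value additionally requires the t distribution's CDF:

```python
from math import sqrt

def unpaired_t(x, y):
    """Two-sample unpaired Student's t statistic with pooled variance.
    Returns (t, df) with df = n1 + n2 - 2, as reported in the figure legends."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    ss1 = sum((v - m1) ** 2 for v in x)
    ss2 = sum((v - m2) ** 2 for v in y)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)                 # pooled variance
    t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Made-up per-cage repellency values for treatment versus control:
t, df = unpaired_t([70, 75, 80, 85], [40, 45, 50, 55])
print(round(t, 2), df)   # 6.57 6
```

When the normality or equal-variance assumptions fail, the rank-based Mann–Whitney test is the appropriate substitute, as the Methods state.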
Pyrethrum breaks down quickly in sunlight and isn't readily absorbed through the skin, so the insecticide has long been considered one of the safer options for use around children and pets. What makes pyrethrum toxic to mosquitoes has been known for some time. It works by binding to tiny pores in the insects' nerve cells and paralyzing them on contact. But it has another property whose mode of action is more of a mystery. At lower concentrations it protects not by killing mosquitoes but by preventing them from getting close enough to land and bite in the first place. Led by biology professor Ke Dong, who recently joined the faculty at Duke University, the team did a variety of tests to understand how mosquitoes detect and avoid pyrethrum, and which of the extract's chemical components help them do it. First, they had people don a special rubber glove and put their hand in a cage holding 50 hungry mosquitoes. The glove had a window screen on the back made of two layers of loose-fitting mesh. The top layer acts as a barrier that mosquitoes are unable to bite through. Normally, mosquitoes find the heat and aroma of human skin wafting through the mesh irresistible, and are quick to land and check it out. But when the bottom layer of mesh closest to the skin was treated with pyrethrum, they lost interest. These early experiments confirmed that mosquitoes don't have to get close enough to taste or touch pyrethrum-treated skin or clothing to stay away. To find out if smell was involved, the researchers attached tiny wire electrodes to the small hairs covering the mosquitoes' antennae and measured their electrical responses to puffs of air containing chemicals released by pyrethrum and other repellents. A mosquito's ability to smell comes from special receptors embedded in nerve cells on the insect's antennae and mouth parts. Once odor molecules wafting through the air stimulate these receptors, the nerve cells send a message to the brain, which identifies the smell. 
Dong and her colleagues were able to pinpoint a specific ingredient in pyrethrum flower extracts, called EBF, which activates a smell receptor in the mosquito's antenna called Or31. They found that EBF works together with other components called pyrethrins to make an especially off-putting bouquet. Even tiny doses that mosquitoes barely seem to notice when the compounds occur alone—fewer than five odor molecules per million molecules of air—can send the insects flying or crawling away when they occur in combination. While the researchers focused on the mosquito species Aedes aegypti—which spreads viruses such as Zika, yellow fever and dengue—they also found Or31 odor receptors with strikingly similar protein sequences in six other mosquito species. More than 200 types of mosquitoes live in the United States alone, about a dozen of which spread germs that can make people sick. With mosquitoes becoming increasingly resistant to our best chemical defenses, researchers are constantly on the lookout for new ways to fight them. These findings, published May 5 in the journal Nature Communications, could help researchers develop new broad-spectrum repellents to keep a variety of mosquitoes at bay, and by extension stop them from biting people and spreading disease. | 10.1038/s41467-021-22847-0 
Medicine | Signs of cancer that occur years before diagnosis could lead to earlier cancer detection | The evolutionary history of 2,658 cancers, Nature (2020). DOI: 10.1038/s41586-019-1907-7 , www.nature.com/articles/s41586-019-1907-7 Journal information: Nature | http://dx.doi.org/10.1038/s41586-019-1907-7 | https://medicalxpress.com/news/2020-02-cancer-years-diagnosis-earlier.html | Abstract Cancer develops through a process of somatic evolution 1 , 2 . Sequencing data from a single biopsy represent a snapshot of this process that can reveal the timing of specific genomic aberrations and the changing influence of mutational processes 3 . Here, by whole-genome sequencing analysis of 2,658 cancers as part of the Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium of the International Cancer Genome Consortium (ICGC) and The Cancer Genome Atlas (TCGA) 4 , we reconstruct the life history and evolution of mutational processes and driver mutation sequences of 38 types of cancer. Early oncogenesis is characterized by mutations in a constrained set of driver genes, and specific copy number gains, such as trisomy 7 in glioblastoma and isochromosome 17q in medulloblastoma. The mutational spectrum changes significantly throughout tumour evolution in 40% of samples. A nearly fourfold diversification of driver genes and increased genomic instability are features of later stages. Copy number alterations often occur in mitotic crises, and lead to simultaneous gains of chromosomal segments. Timing analyses suggest that driver mutations often precede diagnosis by many years, if not decades. Together, these results determine the evolutionary trajectories of cancer, and highlight opportunities for early cancer detection. Main Similar to the evolution in species, the approximately 10 14 cells in the human body are subject to the forces of mutation and selection 1 . 
This process of somatic evolution begins in the zygote and only comes to rest at death, as cells are constantly exposed to mutagenic stresses, introducing 1–10 mutations per cell division 2 . These mutagenic forces lead to a gradual accumulation of point mutations throughout life, observed in a range of healthy tissues 5 , 6 , 7 , 8 , 9 , 10 , 11 and cancers 12 . Although these mutations are predominantly selectively neutral passenger mutations, some are proliferatively advantageous driver mutations 13 . The types of mutation in cancer genomes are well studied, but little is known about the times when these lesions arise during somatic evolution and where the boundary between normal evolution and cancer progression should be drawn. Sequencing of bulk tumour samples enables partial reconstruction of the evolutionary history of individual tumours, based on the catalogue of somatic mutations they have accumulated 3 , 14 , 15 . These inferences include timing of chromosomal gains during early somatic evolution 16 , phylogenetic analysis of late cancer evolution using matched primary and metastatic tumour samples from individual patients 17 , 18 , 19 , 20 , and temporal ordering of driver mutations across many samples 21 , 22 . The PCAWG Consortium has aggregated whole-genome sequencing data from 2,658 cancers 4 , generated by the ICGC and TCGA, and produced high-accuracy somatic variant calls, driver mutations, and mutational signatures 4 , 23 , 24 (Methods and Supplementary Information ). Here, we leverage the PCAWG dataset to characterize the evolutionary history of 2,778 cancer samples from 2,658 unique donors across 38 cancer types. We infer timing and patterns of chromosomal evolution and learn typical sequences of mutations across samples of each cancer type. We then define broad periods of tumour evolution and examine how drivers and mutational signatures vary between these epochs. 
Using clock-like mutational processes, we map mutation timing estimates into approximate real time. Combined, these analyses allow us to sketch out the typical evolutionary trajectories of cancer, and map them in real time relative to the point of diagnosis. Reconstructing the life history of tumours The genome of a cancer cell is shaped by the cumulative somatic aberrations that have arisen during its evolutionary past, and part of this history can be reconstructed from whole-genome sequencing data 3 (Fig. 1a ). Initially, each point mutation occurs on a single chromosome in a single cell, which gives rise to a lineage of cells bearing the same mutation. If that chromosomal locus is subsequently duplicated, any point mutation on this allele preceding the gain will subsequently be present on the two resulting allelic copies, unlike mutations succeeding the gain, or mutations on the other allele. As sequencing data enable the measurement of the number of allelic copies, one can define categories of early and late clonal variants, preceding or succeeding copy number gains, as well as unspecified clonal variants, which are common to all cancer cells, but cannot be timed further. Lastly, we identify subclonal mutations, which are present in only a subset of cells and have occurred after the most recent common ancestor (MRCA) of all cancer cells in the tumour sample ( Supplementary Information ). Fig. 1: Timing clonal copy number gains using allele frequencies of point mutations. a , Principles of timing mutations and copy number gains based on whole-genome sequencing. The number of sequencing reads reporting point mutations can be used to discriminate variants as early or late clonal (green or purple, respectively) in cases of specific copy number gains, as well as clonal (blue) or subclonal (red) in cases without. 
b , Annotated point mutations in one sample based on VAF (top), copy number (CN) state and structural variants (middle), and resulting timing estimates (bottom). LOH, loss of heterozygosity. c , Overview of the molecular timing distribution of copy number gains across cancer types. Pie charts depict the distribution of the inferred mutation time for a given copy number gain in a cancer type. Green denotes early clonal gains, with a gradient to purple for late gains. The size of each chart is proportional to the recurrence of this event. Abbreviations for each cancer type are defined in Supplementary Table 1 . d , Heat maps representing molecular timing estimates of gains on different chromosome arms ( x axis) for individual samples ( y axis) for selected tumour types. e , Temporal patterns of two near-diploid cases illustrating synchronous gains (top) and asynchronous gains (bottom). f , Left, distribution of synchronous and asynchronous gain patterns across samples, split by WGD status. Uninformative samples have too few or too small gains for accurate timing. Right, the enrichment of synchronous gains in near-diploid samples is shown by systematic permutation tests. g , Proportion of copy number segments ( n = 90,387) with secondary gains. Error bars denote 95% credible intervals. ND, near diploid. h , Distribution of the relative latency of n = 824 secondary gains with available timing information, scaled to the time after the first gain and aggregated per chromosome. Source data Full size image The ratio of duplicated to non-duplicated mutations within a gained region can be used to estimate the time point when the gain happened during clonal evolution, referred to here as molecular time, which measures the time of occurrence relative to the total number of (clonal) mutations. 
For example, there would be few, if any, co-amplified early clonal mutations if the gain had occurred right after fertilization, whereas a gain that happened towards the end of clonal tumour evolution would contain many duplicated mutations 14 (Fig. 1a , Methods). These analyses are illustrated in Fig. 1b . As expected, the variant allele frequencies (VAFs) of somatic point mutations cluster around the values imposed by the purity of the sample, local copy number configuration and identified subclonal populations. The depicted clear cell renal cell carcinoma has gained chromosome arm 5q at an early molecular time as part of an unbalanced translocation t(3p;5q), which confirms the notion that this lesion often occurs in adolescence in this cancer type 16 . At a later time point, the sample underwent a whole genome duplication (WGD) event, duplicating all alleles, including the derivative chromosome, in a single event, as evidenced by the mutation time estimates of all copy number gains clustering around a single time point, independently of the exact copy number state. Timing patterns of copy number gains To systematically examine the mutational timing of chromosomal gains throughout the evolution of tumours in the PCAWG dataset, we applied this analysis to the 2,116 samples with copy number gains suitable for timing ( Supplementary Information ). We find that chromosomal gains occur across a wide range of molecular times (median molecular time 0.60, interquartile range (IQR) 0.10–0.87), with systematic differences between tumour types, whereas within tumour types, different chromosomes typically show similar distributions (Fig. 1c , Extended Data Figs. 1 , 2 , Supplementary Information ). In glioblastoma and medulloblastoma, a substantial fraction of gains occurs early in molecular time. By contrast, in lung cancers, melanomas and papillary kidney cancers, gains arise towards the end of the molecular timescale. 
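The ratio-based timing of a single-copy gain described above can be written down in closed form. A minimal sketch, assuming a constant mutation rate per allele copy and a final 2+1 copy number state (mutations at multiplicity 2 preceded and were duplicated by the gain); the function name is ours, and the PCAWG analysis itself uses a fuller probabilistic model:

```python
def molecular_time_single_gain(n_mult2, n_mult1):
    """Molecular time of a single-allele gain (final copy number 2+1).

    Under a constant per-copy mutation rate, multiplicity-2 SNVs accumulate
    in proportion to the gain time t, while multiplicity-1 SNVs accumulate
    on the other allele over the whole clonal period plus on both gained
    copies after the gain (proportional to 3 - 2t).  Solving gives
    t = 3*n2 / (2*n2 + n1); noisy counts are clamped to [0, 1]."""
    t = 3 * n_mult2 / (2 * n_mult2 + n_mult1)
    return min(1.0, t)

# Few duplicated mutations -> the gain occurred early in molecular time:
print(molecular_time_single_gain(2, 600))    # ~0.01
# Many duplicated mutations -> the gain occurred late:
print(molecular_time_single_gain(40, 60))    # ~0.86
```

The first example mirrors the trisomy-7 case in glioblastoma, where typically fewer than 3 of 600 chromosome-wide SNVs precede the gain.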
Most tumour types, including breast, ovarian and colorectal cancers, show relatively broad periods of chromosomal instability, indicating a very variable timing of gains across samples. There are, however, certain tumour types with consistently early or late gains of specific chromosomal regions. Most pronounced is glioblastoma, in which 90% of tumours contain single copy gains of chromosome 7, 19 or 20 (Fig. 1c, d ). Notably, these gains are consistently timed within the first 10% of molecular time, which suggests that they arise very early in a patient’s lifetime. In the case of trisomy 7, typically less than 3 out of 600 single nucleotide variants (SNVs) on the whole chromosome precede the gain (Extended Data Fig. 3a, b ). On the basis of a mutation rate of µ = 4.8 × 10 −10 to 3.0 × 10 −9 SNVs per base pair per division 25 , this indicates that the trisomy occurs within the first 6–39 cell divisions, suggesting a possible early developmental origin, in agreement with somatic mosaicisms observed in the healthy brain 26 . Similarly, the duplications leading to isochromosome 17q in medulloblastoma are timed exceptionally early (Extended Data Fig. 3c, d ). Notably, we observed that gains in the same tumour often appear to occur at a similar molecular time, pointing towards punctuated bursts of copy number gains involving most gained segments (Fig. 1e ). Although this is expected in tumours with WGD (Fig. 1b ), it may seem surprising to observe synchronous gains in near-diploid tumours, particularly as only 6% of co-amplified chromosomal segments were linked by a direct inter-chromosomal structural variant. Still, synchronous gains are frequent, occurring in 57% (468 out of 815) of informative near-diploid tumours, 61% more frequently than expected by chance ( P < 0.01, permutation test; Fig. 1f ). Because most arm-level gains increment the allele-specific copy number by 1 (80–90%; Fig. 
1g ), it seems that these gains arise through mis-segregation of single copies during anaphase. This notion is further supported by the observation that in about 85% of segments with two gains of the same allele, the second gain appears with noticeable latency after the first (Fig. 1h ). Therefore, the extensive chromosome-scale copy number aberrations observed in many cancer genomes are seemingly caused by a limited number of events—possibly by merotelic attachments of chromosomes to multipolar mitotic spindles 27 , or as a consequence of negative selection of individual aneuploidies 28 —offering an explanation for observations of punctuated evolution in breast and colorectal cancer 29 , 30 . Timing of point mutations in driver genes As outlined above, point mutations (SNVs and insertions and deletions (indels)) can be qualitatively assigned to different epochs, allowing the timing of driver mutations. Out of the 47 million point mutations in 2,583 unique samples, 22% were early clonal, 7% late clonal, 53% unspecified clonal and 17% subclonal (Fig. 2a ). Among a panel of 453 cancer driver genes, 5,913 oncogenic point mutations were identified 4 , of which 29% were early clonal, 5% late clonal, 56% unspecified clonal and 8% subclonal. It thus emerges that common drivers are enriched in the early clonal and unspecified clonal categories and depleted in the late clonal and subclonal ones, indicating a preferential early timing (Fig. 2b ). For example, driver mutations in TP53 and KRAS are 12 and 8 times enriched in early clonal stages, respectively. For TP53 , this trend is independent of tumour type (Fig. 2c ). Mutations in PIK3CA are two times more frequently clonal than expected, and non-coding changes near the TERT gene are three times more frequently early clonal. Fig. 2: Timing of point mutations shows that recurrent driver gene mutations occur early. a , Top, distribution of point mutations over different mutation periods in n = 2,778 samples. 
Middle, timing distribution of driver mutations in the 50 most recurrent lesions across n = 2,583 white-listed samples from unique donors. Bottom, distribution of driver mutations across cancer types; colour as defined in the inset. b , Relative timing of the 50 most recurrent driver lesions, calculated as the odds ratio of early versus late clonal driver mutations versus background, or clonal versus subclonal. Error bars denote 95% confidence intervals derived from bootstrap resampling. Odds ratios overlapping 1 in less than 5% of bootstrap samples are considered significant (coloured). The underlying number of samples with a given mutation is shown in a . c , Relative timing of TP53 mutations across cancer types, as in b . The number of samples is defined in the x -axis labels. d , Estimated number of unique lesions (genes) contributing 50% of all driver mutations in different timing epochs across n = 2,583 unique samples, containing n = 5,756 driver mutations with available timing information. Error bars denote the range between 0 and 1 pseudocounts; bars denote the average of the two values. NA, not applicable; NS, not significant. Aggregating the clonal status of all driver point mutations over time reveals an increased diversity of driver genes mutated at later stages of tumour development: 50% of all early clonal driver mutations occur in just 9 genes, whereas 50% of late and subclonal mutations occur in approximately 35 different genes each, a nearly fourfold increase (Fig. 2d ). Consistent with previous studies of individual tumour types 31 , 32 , 33 , 34 , these results suggest that, in general, the very early events in cancer evolution occur in a constrained set of common drivers, and a more diverse array of drivers is involved in late tumour development.
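The earlier estimate that trisomy 7 arises within the first 6–39 cell divisions is simple arithmetic on the pre-gain SNV count and the per-division mutation rate. A minimal sketch, assuming a chromosome 7 length of roughly 159 Mb (our assumption; the text gives only the rate range and the SNV count):

```python
def divisions_before_gain(n_snvs, mu, chrom_len_bp):
    """Rough number of cell divisions before a trisomy, given the count
    of SNVs preceding the gain, a per-base-pair per-division mutation
    rate mu, and the chromosome length in base pairs."""
    snvs_per_division = mu * chrom_len_bp
    return n_snvs / snvs_per_division

CHR7_LEN = 159e6  # approximate length of chromosome 7 in bp (assumption)

# With <=3 pre-gain SNVs and mu = 4.8e-10 .. 3.0e-9 SNVs/bp/division:
low = divisions_before_gain(3, 3.0e-9, CHR7_LEN)    # fastest mutation rate
high = divisions_before_gain(3, 4.8e-10, CHR7_LEN)  # slowest mutation rate
```

Running this reproduces the quoted range of roughly 6 to 39 divisions.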
Relative timing of somatic driver events Although timing estimates of individual events reflect evolutionary periods that differ from one sample to another, they define in part the order in which driver mutations and copy number alterations have occurred in each sample (Fig. 3a–d ). As confirmed by simulations, aggregating these orderings across samples defines a probabilistic ranking of lesions (Fig. 3a ), recapitulating whether each mutation occurs preferentially early or late during tumour evolution (Extended Data Figs. 4 , 5 , Supplementary Information ). Fig. 3: Aggregating single-sample ordering reveals typical timing of driver mutations. a , Schematic representation of the ordering process. b – d , Examples of individual patient trajectories (partial ordering relationships), the constituent data for the ordering model process. e – g , Preferential ordering diagrams for colorectal adenocarcinoma (ColoRect–AdenoCA) ( e ), pancreatic neuroendocrine cancer (Panc–Endocrine) ( f ) and glioblastoma (CNS–GBM) ( g ). Probability distributions show the uncertainty of timing for specific events in the cohort. Events with odds above 10 (either earlier or later) are highlighted. The prevalence of the event type in the cohort is displayed as a bar plot on the right. In colorectal adenocarcinoma, for example, we find APC mutations to have the highest odds of occurring early, followed by KRAS , loss of 17p and TP53 , and SMAD4 (Fig. 3b , e). Whole-genome duplications occur after tumours have accumulated several driver mutations, and many chromosomal gains and losses are typically late. These results are in agreement with the classical APC-KRAS-TP53 progression model of Fearon and Vogelstein 35 , but add considerable detail. In many cancer types, the sequence of events during cancer progression has not previously been determined in detail.
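Aggregating per-sample "earlier/later" relations into a cohort-level ranking, as described above, can be mimicked with a Bradley-Terry model fitted by minorization-maximization. This is only a hypothetical stand-in for the league model used in the paper (the Methods describe it only as a "sports statistics model"); the event indices and counts below are illustrative:

```python
def bradley_terry(wins, n_items, n_iter=500):
    """Minimal Bradley-Terry fit by minorization-maximization.
    wins[(i, j)] = number of samples in which event i preceded event j.
    Returns one score per event; a higher score means typically earlier."""
    p = [1.0] * n_items
    for _ in range(n_iter):
        new = []
        for i in range(n_items):
            # total number of pairwise "wins" (earlier placements) for i
            w_i = sum(w for (a, _), w in wins.items() if a == i)
            denom = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0)) / (p[i] + p[j])
                for j in range(n_items) if j != i
            )
            new.append(w_i / denom if denom > 0 else p[i])
        s = sum(new)
        p = [x * n_items / s for x in new]  # fix the overall scale
    return p

# Toy cohort (indices: 0=APC, 1=KRAS, 2=TP53): APC before KRAS in 9 of 10
# informative samples, KRAS before TP53 in 7 of 10.
scores = bradley_terry({(0, 1): 9, (1, 0): 1, (1, 2): 7, (2, 1): 3}, 3)
```

The fitted scores recover the expected APC > KRAS > TP53 earliness ordering from noisy, incomplete pairwise data.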
For example, in pancreatic neuroendocrine cancers, we find that many chromosomal losses, including those of chromosomes 2, 6, 11 and 16, are among the earliest events, followed by driver mutations in MEN1 and DAXX (Fig. 3c, f ). WGD events occur later, after many of these tumours have reached a pseudo-haploid state due to widespread chromosomal losses. In glioblastoma, we find that the loss of chromosome 10, and driver mutations in TP53 and EGFR are very early, often preceding early gains of chromosomes 7, 19 and 20 (Fig. 3d, g ). Mutations in the TERT promoter tend to occur at early to intermediate time points, whereas other driver mutations and copy number changes tend to be later events. Across cancer types, we typically find TP53 mutations among the earliest events, as well as losses of chromosome 17 ( Supplementary Information ). WGD events usually have an intermediate ranking, and most copy number changes occur later. Losses typically precede gains, and consistent with the results above, common drivers typically occur before rare drivers. Timing of mutational signatures The cancer genome is shaped by various mutational processes over its lifetime, stemming from exogenous and cell-intrinsic DNA damage, and error-prone DNA replication, leaving behind characteristic mutational spectra, termed mutational signatures 24 , 36 . Stratifying mutations by their clonal allelic status, we find evidence for a changing mutational spectrum between early and late clonal time points in 29% (530 out of 1,852) of informative samples ( P < 0.05, Bonferroni-adjusted likelihood-ratio test), typically changing the spectrum by 19% (median absolute difference; range 4–66%) (Fig. 4a, b , Extended Data Fig. 6 ). Similarly, 30% of informative samples (729 out of 2,387) displayed changes of their mutation spectrum between the clonal and subclonal state, with median difference of 21% (range 3–72%). 
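A change between two mutation spectra can be tested with a multinomial likelihood-ratio test, as mentioned in the Methods; the implementation below is a generic sketch, not the PCAWG code:

```python
import math

def spectrum_lrt_stat(counts_a, counts_b):
    """2*log likelihood ratio comparing 'two separate spectra' against
    'one shared spectrum' for two count vectors over the same mutation
    categories. Under the null it is approximately chi-square with
    (k - 1) degrees of freedom for k categories."""
    def ll(counts, probs):
        return sum(c * math.log(p) for c, p in zip(counts, probs) if c > 0)
    ta, tb = sum(counts_a), sum(counts_b)
    pa = [c / ta for c in counts_a]
    pb = [c / tb for c in counts_b]
    pooled = [(a + b) / (ta + tb) for a, b in zip(counts_a, counts_b)]
    return 2 * (ll(counts_a, pa) + ll(counts_b, pb)
                - ll(counts_a, pooled) - ll(counts_b, pooled))

same = spectrum_lrt_stat([10, 20, 30], [20, 40, 60])   # identical spectra
diff = spectrum_lrt_stat([90, 10], [10, 90])           # inverted spectrum
```

Identical proportions give a statistic of essentially zero, while a strongly shifted spectrum gives a large statistic; in practice the statistic would be compared against a chi-square distribution (e.g. via scipy.stats.chi2) with a Bonferroni correction, as in the text.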
Combined, the mutation spectrum changes throughout tumour evolution in 40% of samples (1,069 out of 2,688). Fig. 4: Dynamic mutational processes during early and late clonal tumour evolution. a , Example of tumours with substantial changes between mutation spectra of early (left) and late (right) clonal time points. The attribution of mutations to the most characteristic signatures is shown. b , Example of clonal-to-subclonal mutation spectrum change. c , Fold changes between relative proportions of early and late clonal mutations attributed to individual mutational signatures. Points are coloured by tissue type. Data are shown for samples ( n = 530) with measurable changes in their overall mutation spectra and restricted to signatures active in at least 10 samples. Box plots demarcate the first and third quartiles of the distribution, with the median shown in the centre and whiskers covering data within 1.5× the IQR from the box. d , Fold changes between clonal and subclonal periods in samples ( n = 729) with measurable changes in their mutation spectra, analogous to c . To quantify whether the observed temporal changes can be attributed to known and suspected mutational processes, we decomposed the mutational spectra at each time point into a catalogue of 57 mutational signatures, including double base substitution and indel signatures 24 (Methods). In general, these mutational signatures display a predominantly undirected temporal variability over several orders of magnitude (Fig. 4c, d , Extended Data Fig. 7 ). In addition, several signatures demonstrate distinct temporal trends. As one may expect, signatures of exogenous mutagens are predominantly active in the early clonal stages of tumorigenesis.
These include tobacco smoking in lung adenocarcinoma (signature SBS4, median fold change 0.43, IQR 0.31–0.72), consistent with previous reports 37 , 38 , and ultraviolet light exposure in melanoma (SBS7; median fold change 0.16, IQR 0.09–0.43). Another strong decrease over time is found for a signature of unknown aetiology, SBS12, which acts mostly in liver cancers (median fold change 0.22, IQR 0.06–0.41). In chronic lymphoid leukaemia, there was a 20-fold relative decrease in mutations associated with somatic hypermutation (SBS9; median fold change 0.05, IQR 0.02–0.43) from clonal to subclonal stages. Some mutational processes tend to increase throughout cancer evolution. For example, we see that APOBEC mutagenesis (SBS2 and SBS13) increases in many cancer types from the early to late clonal stages (median fold change 2.0, IQR 0.8–3.6), as does a newly described signature SBS38 (median fold 3.6, IQR 1.8–11). Signatures of defective mismatch repair (SBS6, 14, 15, 20, 21, 26 and 44) increase from clonal to subclonal stages (median fold 1.8, IQR 1.2–3.0). Chronological time estimates The molecular timing data presented above do not measure the occurrence of events in chronological time. If the rate at which mutations are acquired per year in each sample was constant, the chronological time would simply be the product of the estimated molecular timing and age at diagnosis. However, this relation will be nonlinear if the mutation rate changes over time, and is inflated by acquired mutational processes, as suggested by the analysis in the previous section. Some of these issues can be mitigated by counting only mutations contributed by endogenous and less variable mutational processes, such as CpG-to-TpG mutations (hereafter CpG>TpG) caused by spontaneous deamination of 5-methyl-cytosine to thymine at CpG dinucleotides, which have been proposed as a molecular clock 12 . 
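If the CpG>TpG rate were constant, chronological time would simply be molecular time multiplied by age; a late rate increase makes the mapping piecewise. A sketch under an assumed step-change model (the 5× acceleration and 15-year window are illustrative scenario parameters, not values fixed by the text):

```python
def molecular_to_chronological(f, age, accel=5.0, window=15.0):
    """Convert a molecular-time fraction f (fraction of clonal CpG>TpG
    mutations accumulated) into an approximate chronological age,
    assuming a constant baseline rate that is multiplied by `accel`
    over the final `window` years before diagnosis at `age`."""
    t_switch = age - window   # age at which the rate increases
    early = t_switch          # mutation "mass" at baseline rate 1
    late = accel * window     # mutation mass in the accelerated phase
    m = f * (early + late)    # mass accumulated by molecular fraction f
    if m <= early:
        return m              # still in the baseline phase
    return t_switch + (m - early) / accel

# With a 5x increase over the last 15 years and diagnosis at 60, half of
# all clonal mutations were already present by age 48.
```

The mapping shows how a late rate increase compresses the second half of molecular time into the last few years before diagnosis, which is why naive (constant-rate) extrapolation would overestimate event latencies.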
Our supplementary analysis suggests that, although the baseline CpG>TpG mutation rate in cancers is very close to that in normal cells, there appears to be a moderate increase (1–10 times, adding between 20 and 40% of mutations) in cancers (Extended Data Fig. 8 ). As this shifts chronological timing estimates, we model different scenarios of the evolution of the CpG>TpG mutation rate (Fig. 5a ). Fig. 5: Approximate chronological timing inference suggests a timescale of cancer evolution of several years. a , Mapping of molecular timing estimates to chronological time under different scenarios of increases in the CpG>TpG mutation rate. A greater increase before diagnosis indicates an inflation of the mutation timescale. b , Median latency between WGDs and the last detectable subclone before diagnosis under different scenarios of CpG>TpG mutation rate increases for n = 569 non-hypermutant cancers with at least 100 informative SNVs, low tumour in normal contamination and at least five samples per tumour histology. c , Median latency between the MRCA and the last detectable subclone before diagnosis for different CpG>TpG mutation rate changes in n = 1,921 non-hypermutant samples with low tumour in normal contamination and at least 5 cases per cancer type. Applying this logic to time WGDs, which yield sufficient numbers of CpG>TpG mutations, demonstrates that they occur several years and possibly even a decade or more before diagnosis in some cancer types, under a range of scenarios of mutation rate increase (Fig. 5b , Extended Data Fig. 9 ). A notable example is ovarian adenocarcinoma, which appears to have a median latency of more than 10 years. This holds true even under a scenario of a CpG>TpG rate increase of 20-fold, which would be far beyond the 7.5-fold rate increase observed in matched primary and relapse samples 39 (Extended Data Fig. 8f ).
Notably, these results suggest WGD may occur throughout the entire female reproductive life (Extended Data Fig. 9b ). The latency between the MRCA and the last detectable subclone is shorter, typically several months to years (Fig. 5c ). These timescales of cancer evolution are further supported by the fact that progression of most known precancerous lesions to carcinomas usually spans many years, if not decades 40 , 41 , 42 , 43 , 44 , 45 . Our data corroborate these timescales and extend them to cancer types without detectable premalignant conditions, raising the hope that these tumours could also be detected in less malignant stages. Discussion To our knowledge, our study presents the first large-scale genome-wide reconstruction of the evolutionary history of cancers, reconstructing both early (pre-cancer) and later stages of 38 cancer types. This is facilitated by the timing of copy number gains relative to all other events in the genome, through multiplicity and clonal status of co-amplified point mutations. However, several limitations exist ( Supplementary Information ). Perhaps most importantly, molecular timing is based on point mutations and is therefore subject to changes in mutation rate. Notably, healthy tissues acquire point mutations at rates not too dissimilar from those seen in cancers, particularly when considering only endogenous mutational processes, and furthermore, some tissues are riddled with microscopic clonal expansions of driver gene mutations 5 , 6 , 7 , 8 , 9 , 11 . This is direct evidence that the life history of almost every cell in the human body, including those that develop into cancer, is driven by somatic evolution. Together, the data presented here enable us to draw approximate timelines summarizing the typical evolutionary history of each cancer type (Fig. 6 , Supplementary Information for all other cancer types). 
These make use of the qualitative timing of point mutations and copy number alterations, as well as signature activities, which can be interleaved with the chronological estimates of WGD and the appearance of the MRCA. Fig. 6: Typical timelines of tumour development. a – d , Timelines representing the length of time, in years, between the fertilized egg and the median age of diagnosis for colorectal adenocarcinoma ( a ), squamous cell lung cancer ( b ), ovarian adenocarcinoma ( c ) and pancreatic adenocarcinoma ( d ). Real-time estimates for major events, such as WGD and the emergence of the MRCA, are used to define early, variable, late and subclonal stages of tumour evolution approximately in chronological time. The range of chronological time estimates according to varying clock mutation acceleration rates is shown as well, with tick marks corresponding to 1×, 2.5×, 5×, 7.5×, 10× and 20×. Driver mutations and copy number alterations (CNA) are shown in each stage according to their preferential timing, as defined by relative ordering. Mutational signatures (Sigs) that, on average, change over the course of tumour evolution, or are substantially active but not changing, are shown in the epoch in which their activity is greatest. DBS, double base substitution; SBS, single base substitution. Where applicable, lesions with a known timing from the literature are annotated; a dagger symbol denotes events that were found to have a different timing; an asterisk denotes events that agree with our timing. It is remarkable that the evolution of practically all cancers displays some level of order, which agrees very well with, and adds much detail to, established models of cancer progression 35 , 46 . For example, TP53 with accompanying 17p deletion is one of the most frequent initiating mutations in a variety of cancers, including ovarian cancer, in which it is the hallmark of its precancerous precursor lesions 47 .
Furthermore, the list of typically early drivers includes most other highly recurrent cancer genes, such as KRAS , TERT and CDKN2A , indicating a preferred role in early and possibly even pre-cancer evolution. This initially constrained set of genes broadens at later stages of cancer development, suggesting an epistatic fitness landscape canalizing the first steps of cancer evolution. Over time, as tumours evolve, they follow increasingly diverse paths driven by individually rare driver mutations, and by copy number alterations. However, none of these trends is absolute, and the evolutionary paths of individual tumours are highly variable, showing that cancer evolution follows trends, but is far from deterministic. Our study sheds light on the typical timescales of in vivo tumour development, with initial driver events seemingly occurring up to decades before diagnosis, demonstrating how cancer genomes are shaped by a lifelong process of somatic evolution, with fluid boundaries between normal ageing processes 5 , 6 , 7 , 8 , 9 , 10 , 11 and cancer evolution. Nevertheless, the presence of genetic aberrations with such long latency raises hopes that aberrant clones could be detected early, before reaching their full malignant potential. Methods Dataset The PCAWG series consists of 2,778 tumour samples (2,703 white-listed, 75 grey-listed) from 2,658 donors. All samples in this dataset underwent whole-genome sequencing (minimum average coverage 30× in the tumour, 25× in the matched normal samples), and were processed with a set of project-specific pipelines for alignment, variant calling, and quality control 4 . Copy number calls were established by combining the output of six individual callers into a consensus using a multi-tier approach, resulting in a copy number profile, a purity and ploidy value and whether the tumour has undergone a WGD ( Supplementary Information ).
Consensus subclonal architectures have been obtained by integrating the output of 11 subclonal reconstruction callers, after which all SNVs, indels and structural variants are assigned to a mutation cluster using the MutationTimer.R approach ( Supplementary Information ). Driver calls have been defined by the PCAWG Driver Working Group 4 , and mutational signatures are defined by the PCAWG Signatures Working Group 24 . A more detailed description can be found in Supplementary Information, section 1 . Data accrual was based on sequencing experiments performed by individual member groups of the ICGC and TCGA, as described in an associated study 4 . As this is a meta-analysis of existing data, power calculations were not performed and the investigators were not blinded to cancer diagnoses. Timing of gains We used three related approaches to calculate the timing of copy number gains (see Supplementary Information, section 2 ). In brief, the common feature is that the expected number of reads reporting a mutation, E[X], is related to the underlying number of alleles carrying the mutation according to the formula: E[X] = n·m·f·ρ/[N(1 − ρ) + C·ρ], in which X is the number of variant reads, n denotes the coverage of the locus, the mutation copy number m is the number of alleles carrying the mutation (which is usually inferred), f is the frequency of the clone carrying the given mutation ( f = 1 for clonal mutations), N is the normal copy number (2 on autosomes, 1 or 2 for chromosome X and 0 or 1 for chromosome Y), C is the total copy number of the tumour, and ρ is the purity of the sample. The number of mutations n_m at each allelic copy number m then informs about the time when the gain has occurred.
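The read-count formula translates directly into a small helper; the function name and the example parameters are ours:

```python
def expected_variant_reads(n, m, f, rho, C, N=2):
    """E[X] = n*m*f*rho / (N*(1 - rho) + C*rho): expected number of reads
    carrying a mutation at coverage n, mutation copy number m, clone
    frequency f, tumour purity rho, tumour total copy number C and
    normal copy number N."""
    return n * m * f * rho / (N * (1 - rho) + C * rho)

# A clonal mutation (f=1) present on both copies of a gained allele (m=2)
# in a C=3 region, at purity 0.8 and 100x coverage:
x = expected_variant_reads(100, 2, 1.0, 0.8, 3)  # ~57 mutant reads
```

In practice the inference runs the other way: the observed variant read count is used to estimate the mutation copy number m and clone frequency f, which then feed into the gain-timing formulae.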
The basic formulae for timing each gain are, depending on the copy number configuration: copy number 2+1: T = 3n_2/(2n_2 + n_1); copy number 2+2: T = 2n_2/(2n_2 + n_1); copy number 2+0: T = 2n_2/(2n_2 + n_1), in which 2+1 refers to major and minor copy numbers of 2 and 1, respectively, and n_2 and n_1 are the numbers of clonal mutations at allelic copy number 2 and 1. Methods differ slightly in how the number of mutations present on each allele is calculated and how uncertainty is handled ( Supplementary Information ). Timing of mutations The mutation copy number m and the clonal frequency f are calculated according to the principles indicated above. Details can be found in Supplementary Information, section 2 . Mutations with f = 1 are denoted as ‘clonal’, and mutations with f < 1 as ‘subclonal’. Mutations with f = 1 and m > 1 are denoted as ‘early clonal’ (co-amplified). In cases with f = 1, m = 1 and C > 2, mutations were annotated as ‘late clonal’ if the minor copy number was 0, and otherwise as ‘clonal’ (unspecified). Timing of driver mutations A catalogue of driver point mutations (SNVs and indels) was provided by the PCAWG Drivers and Functional Interpretation Group 4 . The timing category was calculated as above. From the four timing categories, the odds ratios of early/late clonal and clonal (early, late or unspecified clonal)/subclonal were calculated for driver mutations against the distribution of all other mutations present in fragments with the same copy number composition in the samples with each particular driver. The background distribution of these odds ratios was assessed with 1,000 bootstraps ( Supplementary Information, section 4.1 ). Integrative timing For each pair of driver point mutations and recurrent copy number alterations, an ordering was established (earlier, later or unspecified).
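The gain-timing formulae above translate directly into code; feeding in mutation counts simulated from a known gain time recovers that time:

```python
def time_of_gain(n2, n1, major, minor):
    """Molecular time T of a copy number gain, from the counts of clonal
    mutations at multiplicity 2 (n2, co-amplified) and multiplicity 1 (n1)."""
    if (major, minor) == (2, 1):
        return 3 * n2 / (2 * n2 + n1)
    if (major, minor) in ((2, 2), (2, 0)):
        return 2 * n2 / (2 * n2 + n1)
    raise ValueError("formula only covers the 2+1, 2+2 and 2+0 states")

# Sanity check: a 2+1 gain at T = 0.4 yields multiplicity-2 mutations in
# proportion T and multiplicity-1 mutations in proportion 3 - 2T = 2.2,
# e.g. 40 and 220 mutations.
t = time_of_gain(40, 220, 2, 1)  # -> 0.4
```

The same check works for the 2+2 (WGD) configuration, where pre-gain mutations accumulate on two alleles and post-gain mutations on four.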
The information underlying this decision was derived from the timing of each driver point mutation, as well as from the timing status of clonal and subclonal copy number segments. These tables were aggregated across all samples and a sports statistics model was employed to calculate the overall ranking of driver mutations. A full description is given in Supplementary Information, section 4.2 . Timing of mutational signatures Mutational trinucleotide substitution signatures, as defined by the PCAWG Mutational Signatures Working Group 24 , were fit to samples with observed signature activity, after splitting point mutations into the four epochs. A likelihood ratio test based on the multinomial distribution was used to test for differences in the mutation spectra between time points. Time-resolved exposures were calculated using non-negative linear least squares. Full details are given in Supplementary Information, section 5 . Real-time estimation of WGD and MRCA CpG>TpG mutations were counted in an NpCpG context, except for skin–melanoma, in which CpCpG and TpCpG were excluded owing to the overlapping UV mutation spectrum. For visual comparison, the number of mutations was scaled to the effective genome size, defined as 1/mean(m_i/C_i), in which m_i is the estimated number of allelic copies of each mutation, and C_i is the total copy number at that locus, thereby scaling to the final copy number and the time of change. A hierarchical Bayesian linear regression was fit to relate the age at diagnosis to the scaled number of mutations, ensuring positive slope and intercept through a shared gamma distribution across cancer types. For tumours with several time points, the set of mutations shared between diagnosis and relapse (n_D) and those specific to the relapse (n_R) was calculated. The rate acceleration was calculated as: a = (n_R/n_D) × (t_D/t_R). This analysis was performed separately for all substitutions and for CpG>TpG mutations.
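The rate acceleration from paired diagnosis/relapse samples is simple arithmetic; the numbers below are illustrative:

```python
def rate_acceleration(n_d, n_r, t_d, t_r):
    """a = (n_R / n_D) * (t_D / t_R): ratio of the per-year mutation rate
    after diagnosis (n_r relapse-specific mutations over t_r years) to
    the lifelong rate before it (n_d shared mutations over t_d years)."""
    return (n_r / n_d) * (t_d / t_r)

# E.g. 10,000 mutations shared between diagnosis (age 60) and relapse,
# plus 2,500 relapse-specific mutations acquired over 2 years:
a = rate_acceleration(10_000, 2_500, 60, 2)  # -> 7.5
```

This illustrative value happens to match the 7.5-fold increase quoted earlier for matched primary and relapse samples.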
On the basis of these analyses, a typical increase of 5× for most cancer types was chosen, with a lower value of 2.5× for brain cancers and a value of 7.5× for ovarian cancer. The correction for transforming an estimate of a copy number gain in mutation time into chronological time depends not only on the rate acceleration, but also on the time at which this acceleration occurred. As this is generally unknown, we performed Monte Carlo simulations of rate accelerations spanning an interval of 15 years before diagnosis, corresponding roughly to 25% of time for a diagnosis at 60 years of age, noting that a 5× rate increase over this duration yields an offset of about 33% of mutations, compatible with our data. Subclonal mutations were assumed to occur at full acceleration. The proportion of subclonal mutations was divided by the number of identified subclones, thus conservatively assuming branching evolution. Full details are given in Supplementary Information, section 6 . Cancer timelines The results from each of the different timing analyses are combined in timelines of cancer evolution for each tumour type (Fig. 6 and Supplementary Information ). Each timeline begins at the fertilized egg, and spans up to the median age of diagnosis within each cohort. Real-time estimates for WGD and the MRCA act as anchor points, allowing us to roughly map the four broadly defined time periods (early clonal, intermediate, late clonal and subclonal) to chronological time during a patient’s lifespan. Specific driver mutations or copy number alterations can be placed within each of these time frames based on their ordering from the league model analysis. Signatures are shown if they typically change over time (95% confidence intervals of mean change not overlapping 0), and if they are strongly active (contributing at least 10% mutations to one time point). Signatures are shown on the timeline in the epoch of their greatest activity. 
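The quoted "offset of about 33% of mutations" from a 5× rate increase can be reproduced with a small Monte Carlo; drawing the acceleration onset uniformly over the 15-year interval is our reading of the simulation setup, not a detail stated in the text:

```python
import random

def accelerated_fraction(age=60.0, accel=5.0, window=15.0,
                         n=100_000, seed=0):
    """Average fraction of all mutations contributed by a rate
    acceleration whose onset is drawn uniformly within `window` years
    of diagnosis, relative to a baseline rate of 1 from birth to
    diagnosis at `age`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.uniform(0.0, window)   # years spent at the raised rate
        extra = (accel - 1.0) * u      # mutations beyond baseline
        total += extra / (age + extra)
    return total / n

f = accelerated_fraction()  # roughly a third of all mutations
```

Under these assumptions the accelerated phase contributes on the order of 30% of mutations, consistent with the ~33% offset mentioned above.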
Where an event found in our study has a known timing in the literature, the agreement is annotated on the timeline, with an asterisk denoting an agreed timing and a dagger symbol denoting a timing that differs from our results. Full details are given in Supplementary Information, section 7 . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Somatic and germline variant calls, mutational signatures, subclonal reconstructions, transcript abundance, splice calls and other core data generated by the ICGC/TCGA PCAWG Consortium are described elsewhere 4 and available for download at . Further information on accessing the data, including raw read files, can be found at . In accordance with the data access policies of the ICGC and TCGA projects, most molecular, clinical and specimen data are in an open tier that does not require access approval. To access information that could potentially identify participants, such as germline alleles and underlying sequencing data, researchers will need to apply to the TCGA Data Access Committee (DAC) via dbGaP ( ) for access to the TCGA portion of the dataset, and to the ICGC Data Access Compliance Office (DACO; ) for the ICGC portion. In addition, to access somatic SNVs derived from TCGA donors, researchers will also need to obtain dbGaP authorization. Datasets used and results presented in this study, including timing estimates for copy number gains, chronological estimates of WGD and MRCA, as well as mutation signature changes, are described in Supplementary Note 3 and are available at . Code availability The core computational pipelines used by the PCAWG Consortium for alignment, quality control and variant calling are available to the public at under the GNU General Public License v3.0, which allows for reuse and distribution. Analysis code presented in this study is available through the GitHub repository .
This archive contains relevant software and analysis workflows as submodules, which include code for timing copy number gains, point mutations and mutation signatures, real-time timing and evolutionary league model analysis, as well as scripts to generate the figures presented: CancerTiming (v.3.1.8), MutationTimeR (v.0.1), PhylogicNDT (v.1.1) and a series of custom scripts (v. 1.0), with detailed versions of other packages used. Change history 25 January 2023 A Correction to this paper has been published: | Early signs of cancer can appear years before diagnosis and developing tests for these genetic signs could provide new ways to spot cancer early, according to new research led by the Crick and EMBL-EBI, and supported by an international cancer genomics consortium, the Pan-Cancer Analysis of Whole Genomes Project. The collaboration included Wellcome Sanger Institute, Broad Institute of MIT and Harvard, Big Data Institute at the University of Oxford and Oregon Health & Science University. The study, published in Nature, looked at 47 million genetic changes in more than 2,500 human tumours, across 38 cancer types. By looking at how many times a single change had been replicated and copied across chromosomes, the researchers were able to determine the order in which they happened and the relative timing between them. Using this method, they found that just over 20% of mutations can be considered early events in a tumour's development, with some of these changes taking place years, even decades, before the cancer is found. Of these early mutations, half fall within the same nine genes, meaning there is a small number of genes that are common drivers of early cancer development. "We've developed the first timelines of genetic mutations across the spectrum of cancer types. For more than 30 cancers, we now know what specific genetic changes are likely to happen, and when these are likely to take place. 
Unlocking these patterns means it should now be possible to develop new diagnostic tests, that pick up signs of cancer much earlier," says Peter Van Loo, co-lead author and group leader in the Cancer Genomics Laboratory at the Crick. As cells in the body grow and divide, errors can be introduced in their DNA. While most of these changes do not significantly alter our cells, some are harmful and are associated with the formation and growth of tumours. These DNA errors continue to accumulate in cancerous cells, so a tumour can ultimately be made up of cells with many different genetic mutations. "We have learned that cancer is the endpoint of a lifelong evolutionary process that drives our cells. This process is fuelled by mutations in the cells' genomes," says Moritz Gerstung, co-lead author and Group Leader at EMBL's European Bioinformatics Institute (EMBL-EBI). "These mutations occur as we age. Usually, there are no consequences to these mutations, but sometimes, the consequences can be dramatic. This process usually culminates within the decades prior to cancer diagnosis, but in some cases, we have been able to identify alterations as old as the patient." The research identified cancer types in which mutations tend to happen particularly early, for example, ovarian cancer and two types of brain tumours, glioblastoma and medulloblastoma. It also revealed the specific changes that are likely to happen early in each of the more than 30 cancer types. One of the most common early changes in many cancers, including ovarian cancer, affects a gene called TP53. In glioblastoma, an extra copy of chromosome 7 is very frequently gained early, while for pancreatic neuroendocrine cancer a number of chromosomes are lost in the initial stages of tumour development. 
"What's extraordinary is how some of the genetic changes appear to have occurred many years before diagnosis, long before any other signs that a cancer may develop, and perhaps even in apparently normal tissue," says Clemency Jolly, co-lead author and Ph.D. student in the Cancer Genomics Laboratory at the Crick. In a separate paper from the international consortium, also published in Nature today, Crick researchers identified the cancers that are likely to have many different mutations enter their DNA at the same time, as well as the timing of these events and the genes likely to be affected. For example, 22% of the 2,500 tumours studied were found to have experienced an event called chromothripsis, where a DNA strand breaks in many places at once and the pieces are rearranged incorrectly. This process was found to be an important and critically early event in the evolution of most cancers, particularly in melanomas. These two papers are part of a collection published in Nature from the Pan-Cancer Analysis of Whole Genomes Consortium. | 10.1038/s41586-019-1907-7 |
Space | Study shows first evidence of winds outside black holes throughout their mealtimes | Strong disk winds traced throughout outbursts in black-hole X-ray binaries, Nature (2018). nature.com/articles/doi:10.1038/nature25159 Journal information: Nature | http://nature.com/articles/doi:10.1038/nature25159 | https://phys.org/news/2018-01-evidence-black-holes-mealtimes.html | Abstract Recurring outbursts associated with matter flowing onto compact stellar remnants (such as black holes, neutron stars and white dwarfs) in close binary systems provide a way of constraining the poorly understood accretion process. The light curves of these outbursts are shaped by the efficiency of angular-momentum (and thus mass) transport in the accretion disks, which has traditionally been encoded in a viscosity parameter, α . Numerical simulations 1 , 2 , 3 of the magneto-rotational instability that is believed to be the physical mechanism behind this transport yield values of α of roughly 0.1–0.2, consistent with values determined from observations of accreting white dwarfs 4 . Equivalent viscosity parameters have hitherto not been estimated for disks around neutron stars or black holes. Here we report the results of an analysis of archival X-ray light curves of 21 outbursts in black-hole X-ray binaries. By applying a Bayesian approach to a model of accretion, we determine corresponding values of α of around 0.2–1.0. These high values may be interpreted as an indication either of a very high intrinsic rate of angular-momentum transport in the disk, which could be sustained by the magneto-rotational instability only if a large-scale magnetic field threads the disk 5 , 6 , 7 , or that mass is being lost from the disk through substantial outflows, which strongly shape the outburst in the black-hole X-ray binary. 
The lack of correlation between our estimates of α and the accretion state of the binaries implies that such outflows can remove a substantial fraction of the disk mass in all accretion states and therefore suggests that the outflows correspond to magnetically driven disk winds rather than thermally driven ones, which require specific radiative conditions 8 . Main The disk-instability model 9 , 10 , 11 , 12 , 13 , 14 was developed to explain outbursts in compact binaries in which a white dwarf accretes from a low-mass companion 15 . A cool (neutral) quiescent disk is built up through steady mass transfer from the companion star, causing the temperature of the disk to increase. At some radius (called the ignition radius), the temperature of the disk will eventually reach the temperature at which hydrogen ionizes. This triggers a thermal–viscous instability within the disk, due to the steep temperature dependence of opacity in this temperature range. As a result, the disk cycles between a hot, ionized, outburst state and a cold, neutral, quiescent state. The growth of the thermal–viscous instability at the ignition radius results in two heating fronts propagating inwards and outwards through the disk. This propagation brings the disk into a hot state, causing rapid in-fall of matter onto the compact object and a bright optical and ultraviolet outburst. As the disk is depleted over time (because mass falls onto the compact stellar remnant at a higher rate than it is being transferred from the companion star), the temperature and mass-accretion rate in the outer radii will eventually be reduced to the point at which hydrogen can recombine. This triggers the formation and propagation of a cooling front that returns the disk to its quiescent (neutral) state. This predicted behaviour, which is characterized by alternating periods of outbursts and quiescence, matches observations of accreting white dwarfs well. 
However, changes to the theory are needed for the close binaries known as low-mass X-ray binaries, which contain more compact stellar remnants (such as neutron stars and stellar-mass black holes). There are 18 confirmed black-hole low-mass X-ray binaries in our Galaxy, identified through bright X-ray outbursts which indicate rapid accretion episodes 16 . These outbursts 16 last considerably longer, and recur much less frequently, than those from many types of accreting white dwarf, owing to heating of the outer disk by X-rays emitted in the inner regions of the accretion flow 17 . X-ray irradiation keeps the accretion disk in its hot (ionized) state over the viscous timescale. This timescale, which is encoded in observed outburst light curves, is related to the efficiency of angular-momentum transport directly and thus provides a means of measuring this efficiency. See Methods and Extended Data Fig. 1 for a detailed discussion of the characteristic three-stage outburst decay profile present in the light curve of a black-hole low-mass X-ray binary. The magneto-rotational instability is thought to provide the physical mechanism behind angular-momentum (and mass) transport in accretion disks 18 . The effective viscosity in these disks, which is commonly parameterized using the α viscosity prescription 19 , encapsulates the efficiency of this transport process. Physically, the viscosity parameter α sets the viscous time of the accretion flow through the disk and thus, according to the disk-instability model, is encoded within the decay profile of the light curve of an outburst. A disk with higher viscosity (higher α ) in outburst will accrete mass more quickly, resulting in shorter decay times and shorter outburst durations 20 . 
The α viscosity has been inferred only in (non-irradiated) disks around accreting white dwarfs, by comparing the outburst timescales from observed light curves to synthetic model light curves created by numerical disk codes for different α inputs 4 . Values of α have not previously been measured in irradiated accretion disks, such as those around stellar-mass black holes in low-mass X-ray binaries. The assertion 21 that α ≈ 0.2–0.4 in such systems was deduced from calculations 20 of “detailed models of complete light curves”, not from detailed comparison of models with observations. Note that we learned of a recent study 22 of the black-hole low-mass X-ray binary 4U 1543–475 after acceptance of this manuscript. Accordingly, we have built a Bayesian approach that characterizes the angular-momentum (and mass) transport that occurs in disks in low-mass X-ray binary systems. The α viscosity parameter in a hot, outbursting disk ( α h ) sets the timescale on which matter moves through the hot (ionized) portion of the disk and thus controls the duration of the first stage of the decay profile observed in an X-ray light curve (see Methods for details). This (viscous) timescale varies depending on the mass of the compact object and the size of the accretion disk, with the size of the disk itself governed by the ratio of component masses in the system and the orbital period of the binary. To reconcile the multi-level, interconnected relationships that exist between these parameters that define the properties of the accretion flow, we use Bayesian hierarchical modelling (see Methods for details). This technique allows us to derive: (i) the timescales associated with individual stages of the decay of the outburst, and (ii) the rate of mass accretion through the disk during, and the time of occurrence of, the transitions between the individual decay stages.
Ultimately, the Bayesian technique allows us to take into account effectively our prior knowledge of the orbital parameters that define a low-mass X-ray binary system (black-hole mass, companion mass and orbital period) and thus to sample α h directly from its observed X-ray light curve. We analysed a representative sample of X-ray light curves of 21 individual outbursts of 12 black-hole low-mass X-ray binary systems from the WATCHDOG project 16 ( Extended Data Table 1 ). In Fig. 1 we show examples of the analytical irradiated-disk-instability model fitted to observed data. In this figure, we overlay predicted decay profiles that illustrate the way in which varying α h changes the predicted light-curve decay profile. For these 21 outbursts, we derive 0.19 < α h < 0.99 (see Fig. 2 and Extended Data Table 2 ). These results represent a derivation of α in accretion disks of low-mass X-ray binaries from a fit to the observed light curves of outbursts in such systems. Figure 1: Example light curves of outbursts in low-mass X-ray binaries. The figure displays the observed bolometric X-ray light curves for the 2000 (left) and 2001–2002 (right) outbursts of the low-mass X-ray binary XTE J1550−564, which harbours a black hole with a mass of 10.4 ± 2.3 solar masses ( M ⊙ ) 16 . In this source, which has undergone multiple outbursts in the past two decades, we measure an extremely high value of the viscosity parameter α . Shading in the background indicates the accretion state of the source during the outbursts: blue, hard; yellow, intermediate; red, soft. Although XTE J1550−564 transitions from the soft to hard accretion state during the decay of the 2000 outburst, the light curve shows no signature of that transition. Disk outflows have been observed only in the soft and intermediate states, or at high flux levels (greater than 10% Eddington) 27 , above the grey bar (left) in the hard state. 
Coloured circles represent data from different X-ray instruments: the Proportional Counter Array aboard the Rossi X-ray Timing Explorer (RXTE/PCA), or the Advanced CCD Imaging Spectrometer aboard the Chandra X-ray Observatory (Chandra/ACIS). Translucent data indicate the rise of the outburst, which was not included in the fits. Error bars show the statistical uncertainties of the instruments. The insets show the outbursts on a linear scale. The best-fitting analytical model (solid black line) and residuals (lower panel) are displayed in both panels. We measure α h = 0.96 ± 0.15 and from the light curves of the 2000 and 2001–2002 outbursts, respectively. We over-plot the resulting decay profiles corresponding to α h = 0.7 (dot-dashed line) and α h = 0.5 (dashed line), demonstrating the way in which the shape of the light curve changes with different values of α . MJD, modified Julian date. Figure 2: Characterization of the mass-transport process in accretion disks. The α viscosity parameter in the hot, ionized zone of the disk ( α h ), which encompasses the efficiency of angular-momentum (and mass) transport in accretion disks, as derived using our Bayesian methodology, is plotted against the orbital period ( P orb ) of the binary system, for the 21 individual outbursts that occurred in our sample of 12 Galactic black-hole low-mass X-ray binaries with measured orbital periods. The different colours represent individual sources. Error bars represent the 68% confidence interval. The values of α h are derived both for outbursts during which the source cycles through all accretion states (hard, intermediate and soft; circles) and for those during which the source remains in the hard accretion state (triangles). There are two possible explanations for the high values of α that we measure. The first is that we really are measuring the intrinsic value of α for these disks.
The only way to reproduce such high intrinsic values of α in accretion-disk simulations is for a net magnetic field to thread the disk, with concurrent mass outflows strongly shaping the outburst as a whole. Simulations of angular-momentum transport driven by the magneto-rotational instability, carried out in vertically stratified boxes that represent a local patch of the disk (shearing box), typically yield α ≈ 0.02 without a net magnetic flux 23 , 24 . Convection enhances transport to yield α ≈ 0.2 in the conditions appropriate to accreting white dwarfs 1 , 2 , 3 . This value is consistent with those deduced from observations of outbursts in the non-irradiated disks around these objects 4 , but is insufficient to explain the higher values of α ≳ 0.2 that we measure from outbursts in black holes. However, when the shearing box is threaded by a net magnetic flux, simulations show that α scales as β −1/2 (where β is the ratio of the thermal pressure to the imposed magnetic pressure), reaching values as high as α ≈ 1 when β is roughly 1–10 3 , with β = 1 the lower limit for the magneto-rotational instability to operate in a thin disk 5 , 6 , 7 . Hence, strong intrinsic angular-momentum transport indicates the presence of a large-scale field in the accretion disk, origin and evolution of which have yet to be determined in black-hole transients. Moreover, simulations that reproduce high intrinsic α also display strong outflows, which actually do not remove much angular momentum; thus, angular-momentum transport is still primarily driven by the magneto-rotational instability. The second possibility is that the intrinsic values of α for accretion disks of black-hole low-mass X-ray binaries are smaller than we measure (for example, comparable to 0.2) and that unspecified strong mass outflows are shaping the overall light-curve profiles that we observed. 
Figure 3 illustrates how including a term to account for mass (and angular-momentum) loss within the irradiated-disk-instability model results in a model that mimics the effect that high α has on the light-curve decay profile. Figure 3: Toy model of a disk ‘wind’. Two model light curves for an irradiated disk with α h = 0.2 around a 6 M ⊙ black hole are shown: dashed line, assuming no mass loss; solid line, including a term to account for mass loss during the outburst. The latter is computed assuming mass loss is proportional to the central mass-accretion rate onto the black hole during the decay (meant to be representative of a disk-wind-type outflow). Although the shape of the profile remains the same, the effective timescale τ e is reduced to (1 − ε w ) τ e . Thus, as the fraction of mass lost increases, τ e decreases, mimicking the effect an arbitrarily large value of α has on the light-curve profile (that is, high α corresponds to fast decay). A measurement of α = 1 would correspond to a disk with α = 0.2 and ε w = 0.8 in the toy model, indicative of a substantial outflow. Note that, although this model assumes that the local outflow rate is related to the local accretion rate in the disk, this need not be the case. Further, this simplifying assumption, used purely to solve for the light curve numerically, limits what we can say about how much mass is lost in the outflow. This model requires an outflow rate no larger than the central mass-accretion rate, but it is certainly possible that the outflow rate is larger than the central mass-accretion rate onto the black hole. The transition (which occurs at a flux f t and time t break ) between the viscous and irradiation-controlled stages of the decay in each light curve is indicated by a black filled circle. In both cases, substantial outflows appear to have a key role in regulating the disk-accretion process.
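The degeneracy in this toy model between a larger α and a wind can be sketched numerically. This is an illustration of the scaling quoted above only, not the paper's numerical light-curve solution:

```python
# Illustrative sketch of the toy-model degeneracy: a wind that removes a
# fraction eps_w of the disk mass shortens the effective viscous timescale
# to (1 - eps_w) * tau_e, mimicking a larger alpha.

def effective_timescale(tau_e, eps_w):
    """Effective exponential decay timescale with wind mass-loss fraction eps_w."""
    return (1.0 - eps_w) * tau_e

def mimicked_alpha(alpha_true, eps_w):
    """The alpha inferred from a light curve scales inversely with the
    effective timescale (tau_e is proportional to 1/alpha), so a wind makes
    the intrinsic alpha appear larger by a factor 1/(1 - eps_w)."""
    return alpha_true / (1.0 - eps_w)

# As in the text: a disk with intrinsic alpha = 0.2 losing 80% of its mass
# to a wind (eps_w = 0.8) is fitted as alpha = 1.0.
```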
Strong mass outflows have been observed in outbursting low-mass X-ray binaries in the soft or intermediate accretion states, or at high flux levels (greater than 10% Eddington) in the hard accretion state 25 , 26 , 27 , in the form of accretion-disk winds. These outflows have been attributed to thermal winds driven by X-ray irradiation or to magnetic winds driven by centrifugal acceleration along magnetic field lines anchored in the disk 8 , 28 . It has recently been shown that thermally driven winds (such as Compton-heated winds 26 ) can be produced only in the soft accretion state, because the ionization state of the wind becomes unstable in the hard accretion state (see, for example, refs 29 , 30 ). The absence of a correlation between the values of α and the X-ray flux or accretion state in our outburst sample suggests that the outflow mechanism is generic, and favours magnetically driven over thermally driven outflows. Methods Archival X-ray data collection and reduction We have collected all outburst data available since 1996, for each of the 12 systems in our source sample, from (i) the Proportional Counter Array (PCA) aboard the Rossi X-ray Timing Explorer (RXTE), (ii) the X-ray Telescope (XRT) aboard the Swift Observatory, (iii) the Gas-Slit Camera (GSC) aboard the Monitor of All-sky Image (MAXI) Telescope, (iv) the Advanced CCD Imaging Spectrometer (ACIS-S) and High Resolution Camera (HRC-S) aboard the Chandra X-ray Observatory, and (v) the European Photon Imaging Camera (EPIC) aboard XMM-Newton. All X-ray light-curve data from RXTE/PCA was collected from the WATCHDOG project 16 . These authors compiled all available good pointed PCA observations (no scans or slews) from the HEASARC archive, for 77 black-hole X-ray binary sources in the Galaxy, over the entire 16-year RXTE mission. 
For each individual source in our sample, we use scripts from the WATCHDOG project, including the rex script within the Heasoft Software Package, to reduce and extract (mission-long) daily time-binned, background-subtracted light curves in the 2–10-keV band from the PCA Std2 data available on that source in the WATCHDOG database. We also compiled all available MAXI/GSC data using the WATCHDOG project’s online light-curve tool. This tool compiles all of the publicly available data from the MAXI archive in three standard bands (2–4 keV, 4–10 keV, 10–20 keV) and runs it through the WATCHDOG processing pipeline 16 . Using this tool, we extracted (mission-long) daily time-binned, background-subtracted light curves in the 2–10-keV band for each individual source (where available). In addition, we use the Swift/XRT online product builder 31 , 32 to compile (mission-long) daily time-binned, background-subtracted light curves in the 2–10-keV band, using all available windowed-timing- and photon-counting-mode XRT pointed observations. Last, we collected all available Chandra/ACIS-S, Chandra/HRC-S and XMM-Newton/EPIC pointed observations from the literature for individual outbursts (where available). We then convert individual instrument count rates to fluxes in the 2–10-keV band using PIMMS v4.8c and the spectral information available in the literature. Conversion from count rate to bolometric flux We use crabs as a baseline unit of flux to calculate approximate count-rate equivalences in the 2–10-keV-band data from RXTE/PCA, Swift/XRT and MAXI/GSC. Integration of the now-accepted, ‘canonical’, simple power-law spectrum of the Crab Nebula 33 over the 2–10-keV band gives us a straightforward method for converting between count rate and flux in this band. Assuming that a source spectrum is Crab-like in nature results in uncertainty in the computed source flux.
However, given that it has been found that assuming a Crab-like spectral shape in narrow X-ray energy bands (such as the 2–10-keV band we make use of here) produces no more than a 20% (and typically less than 10%) error in the source flux for a flat power law versus a blackbody 16 , this approach is justified. To convert flux in the 2–10-keV band to bolometric flux, we use the following bolometric corrections (BCs), estimated for each individual accretion state 34 that occurs during outbursts of black-hole low-mass X-ray binaries: hard state, BC = 5; soft and intermediate states, BC = 1.25. By combining the bolometric corrections discussed above with the daily accretion-state information for each outburst, obtained from the WATCHDOG project’s 16 online Accretion-State-By-Day tool, we are able to compute daily time-binned bolometric light curves. Markov chain Monte Carlo (MCMC) fitting algorithm We use a Bayesian approach to estimate the five parameters that describe the shape of an observed light-curve decay profile: (i) the exponential (viscous) decay timescale ( τ e ), (ii) the linear (irradiation-controlled) decay timescale ( τ l ), (iii) the X-ray flux of the system at the transition between exponential and linear decay stages ( f t ), (iv) the time after the outburst peak when the transition between exponential and linear decay stages occurs ( t break ), and (v) the X-ray flux limit of the exponential decay stage ( f 2 ); see Extended Data Fig. 1 . Using the emcee python package 35 , an implementation of the Affine Invariant MCMC Ensemble Sampler 36 , we apply an MCMC algorithm to fit the exponential (viscous) and linear (irradiation-controlled) stages of each decay simultaneously (as described in the main text and Extended Data Fig. 1 ), where applicable. Before fitting occurs, secondary maxima and other rebrightening events 20 , 37 , 38 that contaminate the decays are removed by hand.
These data are not included in the fits; analysis of these rebrightening events will be presented in a later paper. The removal of these rebrightening events has no effect on the determination of α from the X-ray light curves. The remaining data are then fitted in logarithmic (bolometric) flux space with our five-parameter analytical model (see below for details). The emcee python package runs a modified version of the Metropolis–Hastings algorithm, in which an ensemble of ‘walkers’ move simultaneously through and explore the parameter space. To fit each light curve, we use 50 walkers, 10 times the dimension of the model. For emcee to run optimally, we first set the initial positions of our ensemble of walkers appropriately in parameter space. To do so, we use pyHarmonySearch 39 , an implementation of the harmony search global optimization algorithm, to perform an initial survey of our parameter space. pyHarmonySearch acts essentially as a less-time-consuming version of a brute-force grid-search method, allowing us to place our ensemble of walkers in a tight ball around the best guess that it finds. This best guess provides a starting point for the MCMC algorithm. Prior distributions for each of the five parameters are also set from the results of pyHarmonySearch . In the case of a well-sampled light curve (near-continuous daily data throughout the outburst), a Gaussian prior for each parameter with a mean set by the results of pyHarmonySearch is used. In the case in which only scattered data are available on only a portion of the full decay, wide flat priors (based on expectations from other outbursts of the same source, or outbursts from sources with similar orbital periods) are used for each parameter. After initialization, we begin running the MCMC algorithm on each light curve with a 500-step burn-in phase.
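The burn-in-then-sample workflow can be illustrated with a minimal single-chain random-walk Metropolis sketch. The paper uses emcee's 50-walker affine-invariant ensemble sampler; this stand-in samples a simple 1D Gaussian posterior purely to show the mechanics of discarding burn-in steps before keeping samples:

```python
import random
import math

# Minimal random-walk Metropolis sampler on a standard-normal posterior.
# Not the authors' code: emcee evolves an ensemble of walkers with
# affine-invariant moves, but the burn-in/sampling structure is the same.

def log_posterior(x):
    return -0.5 * x * x  # standard normal, up to an additive constant

def metropolis(n_steps, burn_in, step=0.5, seed=42):
    rng = random.Random(seed)
    x, logp = 0.0, log_posterior(0.0)
    chain = []
    for i in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        logp_prop = log_posterior(prop)
        if math.log(rng.random()) < logp_prop - logp:
            x, logp = prop, logp_prop   # accept the proposed move
        if i >= burn_in:                # discard the burn-in phase
            chain.append(x)
    return chain

chain = metropolis(20000, 500)
mean = sum(chain) / len(chain)  # should be close to the posterior mean, 0
```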
Here the ensemble of walkers are evolved over a series of steps to ensure that the initial configuration that we have set enables the walkers to explore the parameter space sufficiently. At the end of the burn-in phase, if the initial configuration is appropriate for the problem, the walkers will have ended up in a high-probability region, a place in parameter space in which the states of the Markov chain are more representative of the distribution being sampled. After this phase, the MCMC algorithm is restarted, with the walkers starting at the final position they acquired during the burn-in phase, and run until convergence. The number of steps required for convergence depends on the amount of data available and the complexity of the decay profile of the outburst. After likelihood maximization is performed, the MCMC algorithm outputs the converged solution in the form of posterior distributions of each parameter. We take the best-fit result (the best-fit value along with the upper and lower limits on this value) as the median and 1 σ (68%) confidence interval of each posterior distribution, respectively. The analytical outburst decay model Extended Data Fig. 1 shows the predicted characteristic three-stage decay profile shape present in the light curve of a black-hole low-mass X-ray binary 20 , 37 , 40 . In the first stage (viscous decay), X-ray irradiation keeps the whole disk in a hot (ionized) state, preventing the formation of a cooling front. As more mass is accreted onto the black hole than is transferred from the companion at this time, the disk is drained by viscous accretion of matter only, resulting in an exponential-shaped decay profile on the viscous timescale. Eventually, as the mass in the disk and the mass transfer rate decrease, the dimming X-ray irradiation can no longer keep the outer regions of the disk in the hot (ionized) state and a cooling front forms, behind which the cold matter slows its inward flow drastically. 
At this point, the system enters the second stage (irradiation-controlled decay), during which the propagation of the cooling front is controlled by the decay of the irradiating X-ray flux. The hot (ionized) portion of the disk continues to flow and accrete, but gradually shrinks in size, causing a linear-shaped decay profile. Eventually, the mass accretion rate onto the black hole becomes small enough that X-ray irradiation no longer has a role. In this third and final stage (thermal decay), the cooling front propagates inward freely on a thermal–viscous timescale, resulting in a steeper linear decay in the light curve down to the quiescent accretion level. The analytical model that we use to describe the outburst decay profiles in the light curves of black-hole low-mass X-ray binaries, predicted by the (irradiated) disk-instability model, is rooted in the ‘classic’ King and Ritter formalism 37 . This formalism combines knowledge of the peak X-ray flux and of the outer radius of the irradiated disk to predict the shape that the decay of an X-ray light curve of a transient low-mass X-ray binary system would follow. The temperature of most of the accretion disk in transient low-mass X-ray binaries during outburst is dominated by X-ray heating from the inner accretion region. The X-ray light curve will show an exponential decline if irradiation by the central X-ray source is able to ionize the entire disk, keeping it in the hot (ionized) state and preventing the formation of the cooling front 41 . The X-ray light curve will show a linear decline if irradiation by the central X-ray source is able to keep only a portion of the entire disk in the hot (ionized) state. In this case, the central X-ray flux will no longer be able to keep the outer regions of the disk above the hydrogen ionization temperature (around 10 4 K) and a cooling front will appear and propagate down the disk. 
Because the cooling front cannot move inward on a viscous timescale (the farthest it can move inward is set by the radius at which T = 10 4 K), a linear-shaped decline is observed in the light curve. By assuming, as do many studies of X-ray irradiated disks in close binary systems, an isothermal disk model (that is, the disk is assumed to be vertically isothermal because it is irradiated, with the central mid-plane temperature equal to the effective temperature set by the X-ray irradiation flux at the disk surface 42 ), the critical X-ray luminosity for a disk radius R 11 (in units of 10 11 cm) has been derived 37 ; above this critical luminosity the light curve should display an exponential decay shape, and below it a linear decay shape. In this formalism, a well-sampled light curve (in both time and amplitude) should show a combination of exponential- and linear-shaped stages in the decay profile. The exponential decay is replaced with a linear decay when the X-ray flux has decreased sufficiently, resulting in a distinct break in slope in the light-curve shape. By deriving analytical expressions for the shape that light-curve decays of transient low-mass X-ray binary systems take, the timescales of the exponential and linear stages of a decay, the peak mass-accretion rate (and in turn X-ray luminosity for a given accretion efficiency) and the time at which the exponential decay is replaced by the linear decay have been predicted 37 . This approach has since been validated by smooth-particle-hydrodynamics accretion simulations 43 and applied to observations of various classes of X-ray binaries 44 , 45 , 46 , 47 , 48 , 49 with varied success. However, although the King and Ritter formalism has, coincidentally, been found to agree relatively well with observations, it oversimplifies the physics of the X-ray-irradiated disks to which it is applied 41 .
Therefore, we instead use a modified version of the King and Ritter formalism. In this modified version we (i) include the effects of continuing mass transfer from the donor star 47 , 48 , and (ii) use a disk structure 20 , 40 in which X-ray irradiation that affects the disks of black-hole low-mass X-ray binaries is modelled using a general irradiation law of the form σ T irr 4 = C irr L X /(4π R 2 ). Here, the irradiation parameter C irr is defined as the fraction of the central X-ray luminosity ( L X = η Ṁ c 2 for accretion efficiency η ) that heats up the disk. Because C irr contains information about the illumination and disk geometry, and the temperature profile of the X-ray irradiation, it effectively parameterizes our ignorance of how these disks are actually irradiated. Physically, C irr controls the timescale of the linear decay stage (and the overall outburst duration) and when the transition between decay stages occurs, and sets a limit on the amount of mass that can be accreted during the outburst. Stronger irradiation (larger C irr ) increases the duration of the outburst and thus the relative amount of matter able to be accreted during an outburst. Consequently, if more matter is accreted during outburst, the subsequent time in quiescence will lengthen because the disk will require more time to build up again.
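A general irradiation law of this kind is commonly written σ T_irr⁴ = C_irr L_X /(4π R²) in irradiated-disk modelling. A sketch with illustrative numbers (the values of C_irr, L_X and R below are placeholders, not the paper's fitted values):

```python
import math

SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_irr(c_irr, l_x_watts, r_m):
    """Irradiation temperature at disk radius r_m (metres) for a central
    luminosity l_x_watts, from sigma * T_irr^4 = C_irr * L_X / (4 pi R^2)."""
    return (c_irr * l_x_watts / (4.0 * math.pi * SIGMA_SB * r_m ** 2)) ** 0.25

# With C_irr ~ 5e-3, L_X ~ 1e31 W (~1e38 erg/s) and R ~ 1e10 m, the outer
# disk sits at a few thousand kelvin, comparable to the ~10^4 K
# hydrogen-ionization temperature that separates the hot and cold states.
```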
Following the procedure outlined in previous work 47 , 48 , but instead using the general irradiation law defined above, yields an analytical form for the flux of a black-hole low-mass X-ray binary as a function of time during the exponential (viscous) and linear (irradiation-controlled) stages of the decay. Here τ e and τ l are defined as the viscous (exponential) decay timescale in the hot (ionized) zone of the disk and the linear decay timescale, respectively; f 2 is the flux limit of the exponential decay, which depends on the mass-transfer rate from the companion and the distance to the source d ; t break is defined as the time when the temperature of the outer edge of the disk is just sufficient to remain in a hot (ionized) state; and f t is the corresponding X-ray flux of the system at time t break . We perform fits in flux, as opposed to luminosity, space to avoid the correlated errors (due to an uncertain distance) that would arise if we were to fit the latter; the uncertain distance (and other parameters) are incorporated below. By fitting this model to our sample of observed X-ray light curves we can derive the viscous decay timescales in black-hole low-mass X-ray binaries to range between roughly 50 and 190 days, consistent with previous conclusions 50 ; see Extended Data Table 2 for fit results. The Bayesian hierarchical methodology We quantify the angular-momentum (and mass) transport that occurs in the irradiated accretion disks in low-mass X-ray binary systems using the α viscosity parameter. In the current form of the disk-instability model, the use of this simple parameter results from the inability of current numerical simulations to follow ab initio turbulent transport driven by the magneto-rotational instability on viscous timescales in a global model of the accretion disk. This parameter is encoded within the viscous (exponential) stage of the light-curve decay profile.
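The exponential-then-linear decay shape controlled by the five fitted parameters can be sketched as a piecewise function. This is a schematic only: the paper's exact analytical expressions, which incorporate the irradiation physics, differ in detail:

```python
import math

def decay_profile(t, tau_e, tau_l, f_t, t_break, f_2):
    """Schematic bolometric flux at time t (days after outburst peak):
    exponential (viscous) decay before t_break, linear (irradiation-
    controlled) decay after, continuous at the break point (t_break, f_t)."""
    if t <= t_break:
        # Viscous stage: exponential decay on timescale tau_e, with f_2
        # acting as a flux scale set by continuing mass transfer.
        return f_2 + (f_t - f_2) * math.exp((t_break - t) / tau_e)
    # Irradiation-controlled stage: linear decay on timescale tau_l.
    return max(f_t * (1.0 - (t - t_break) / tau_l), 0.0)
```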
During this first stage of the decay, irradiation of the disk traps it in a hot (ionized) state that allows a decay of the central mass-accretion rate only on a viscous timescale: τ e ≈ R h,disk 2 /(3 ν KR ), where ν KR is the Shakura–Sunyaev viscosity 19 , the average value of the kinematic viscosity coefficient near the outer edge of the disk 37 , and R h,disk is the radius of the hot (ionized) zone of the disk. For Keplerian disks, the Shakura–Sunyaev viscosity is related to the dimensionless viscosity parameter in the hot disk α h by ν KR = α h c s 2 /Ω k , where Ω k is the Keplerian angular velocity and c s is the sound speed in a disk (proportional to T c 1/2 , where T c is the central mid-plane temperature of the disk). Therefore, using Ω k = ( GM 1 / R 3 ) 1/2 , the viscous timescale in the disk can be written as a function of α h , the mass of the compact object M 1 and the radius of the accretion disk R h,disk : τ e ≈ m h ( GM 1 R h,disk ) 1/2 /(3 α h γ k B T c ), where G is the gravitational constant, m h is the mass of a hydrogen atom, γ is the adiabatic index (the ratio of the specific heats of a gas at a constant pressure and a gas at a constant volume) and k B is the Boltzmann constant. Because T c is only weakly dependent on viscosity and X-ray irradiation in irradiated disks, we can approximate its value as a constant, 16,300 K (ref. 51 ). Solving for α h yields α h ≈ m h ( GM 1 R h,disk ) 1/2 /(3 γ k B T c τ e ). Because α h depends on parameters that characterize the outburst decay profile of a low-mass X-ray binary (that is, observed data) as well as on the orbital parameters that define the binary system (that is, parameters that we have prior knowledge of, namely, M 1 and R h,disk , which is itself dependent on M 1 , the mass of the companion star in the system and the orbital period P orb ), we require a multi-level Bayesian statistical sampling technique to effectively sample α h . We therefore built a Bayesian hierarchical model.
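Under the standard thin-disk relations (ν = α_h c_s²/Ω_k, τ_e ≈ R²/(3ν), a pure-hydrogen sound speed c_s² = γ k_B T_c/m_h), the inversion for α_h can be written out numerically. The factor of 3 and the pure-hydrogen composition are assumptions of this sketch, so results should be read as order-of-magnitude only:

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23      # Boltzmann constant, J K^-1
M_H = 1.673e-27      # hydrogen atom mass, kg
M_SUN = 1.989e30     # solar mass, kg
GAMMA = 5.0 / 3.0    # adiabatic index for a monatomic gas
T_C = 16300.0        # mid-plane temperature approximated as constant (K)

def alpha_h(tau_e_days, m1_msun, r_disk_m):
    """Viscosity parameter from the fitted viscous decay timescale,
    black-hole mass and hot-zone disk radius (order-of-magnitude sketch)."""
    tau = tau_e_days * 86400.0                 # days -> seconds
    c_s2 = GAMMA * K_B * T_C / M_H             # sound speed squared
    return math.sqrt(G * m1_msun * M_SUN * r_disk_m) / (3.0 * c_s2 * tau)

# A 10 solar-mass black hole, a 10^12 cm (= 10^10 m) hot-zone radius and a
# 100-day viscous timescale give alpha_h of a few tenths, in the measured
# 0.19-0.99 range.
```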
A Bayesian hierarchical model is a multi-level statistical model that enables the posterior distribution of some quantity to be estimated by integrating a combination of known prior distributions with observed data. In our case, the established orbital parameters of the binary ( M 1 , binary mass ratio q , P orb ) for a system act as the known priors, and the quantitative outburst decay properties derived from fitting the light curves of a low-mass X-ray binary system with the analytical version of the irradiated disk-instability model that we developed ( τ e ) act as the observed data. Using emcee 35 (see above for details), our hierarchical model samples α h simultaneously for all outbursts of each of the 12 sources in our sample using 240 walkers, 10 times the dimension of our model: 12 of these dimensions correspond to 6 established measurements of black-hole mass, 4 known binary mass ratios and 2 observationally based Galactic statistical population distributions (the Özel black-hole mass distribution 52 and the distribution of binary mass ratios for the dynamically confirmed stellar-mass black holes in the Galaxy 16 ); the remaining 12 dimensions correspond to the accretion disk radii for each system. Initialization is accomplished by placing our ensemble of walkers in a tight ball around a best guess that corresponds to the best known estimates of the binary parameters ( M 1 , q , P orb ) for each system. If a reliable estimate of M 1 is not known for a system, then the mean of the Özel mass distribution 52 is used. Similarly, if q is not known for a system, then the median of the uniform distribution between the minimum and maximum of the known values of mass ratio for all dynamically confirmed black holes in the Galaxy 16 is used.
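The prior choices used in the hierarchical model can be sketched as follows. The function and dictionary names are illustrative, not the authors' code, and the radius range below is a placeholder:

```python
import random

# Sketch of the prior construction: a Gaussian prior when a constrained
# measurement (value +/- uncertainty) exists, and a uniform prior when only
# a range is known, as for the disk radius between the circularization
# radius R_circ and the compact object's Roche-lobe radius R_1.

rng = random.Random(0)

def sample_prior(param):
    if "sigma" in param:
        # constrained measurement with uncertainty -> Gaussian prior
        return rng.gauss(param["value"], param["sigma"])
    # only a range quoted in the literature -> uniform prior
    return rng.uniform(param["lo"], param["hi"])

# e.g. a black-hole mass measured as 10.4 +/- 2.3 solar masses (XTE J1550-564)
m1 = sample_prior({"value": 10.4, "sigma": 2.3})
# disk radius drawn uniformly between R_circ and R_1 (placeholder values, cm)
r_disk = sample_prior({"lo": 3.0e11, "hi": 1.0e12})
```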
Our hierarchical model samples accretion disk radii from a uniform distribution between the circularization radius ( R circ ) and the radius of the Roche lobe of the compact object ( R 1 ) in the system, both of which depend on only M 1 , q and P orb . Initial values of R h,disk are set as the median of the uniform distribution between R circ and R 1 for each system, calculated using the best guess for M 1 and q (discussed above), and the known P orb . The prior distributions for each of the 24 parameters are also set using the best-guess orbital parameters for each binary system. If a constrained measurement of the parameter exists (that is, a value with uncertainty), then a Gaussian prior based on this measurement and its uncertainty is used. If only a range is quoted in the literature for a parameter, then a uniform prior is used. The prior distributions for R h,disk are taken as the uniform distribution between R circ and R 1 for each system. After initialization, we begin running emcee on the observed data ( τ e ) with a 500-step burn-in phase. After this phase, emcee is restarted, with the walkers starting from their final burn-in positions, and run until convergence. Ultimately, emcee outputs the converged solution in the form of posterior distributions of α h for each outburst or system. The converged value and upper and lower limits on this value are taken as the median and 1 σ confidence interval of each posterior distribution, respectively; see Extended Data Table 2 . Data availability The datasets generated and analysed during this study are available from the corresponding author on reasonable request.
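The final reduction step described above, taking the median of each converged posterior as the reported value and its 1σ interval as the limits, can be sketched in a few lines. This is a generic illustration, not the authors' code; the function name and the nearest-rank percentile rule (16th/84th percentiles as the 1σ bounds) are assumptions.

```python
import statistics

def summarize_posterior(samples):
    """Reduce a converged, flattened MCMC chain to (lower, median, upper):
    the point estimate is the median and the limits span the central ~68%
    (16th/84th percentiles), i.e. a 1-sigma credible interval."""
    s = sorted(samples)
    n = len(s)

    def percentile(p):
        # nearest-rank percentile; adequate for a long flattened chain
        return s[min(n - 1, max(0, round(p / 100 * (n - 1))))]

    return percentile(16), statistics.median(s), percentile(84)

lo, med, hi = summarize_posterior(list(range(101)))  # toy "chain" 0..100
```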
The research was conducted by an international team of researchers, led by scientists in the University of Alberta's Department of Physics. Using data from three international space agencies spanning 20 years, the scientists used new statistical techniques to study outbursts from stellar-mass black hole X-ray binary systems. Their results show evidence of consistent and strong winds surrounding black holes throughout outbursts. Until now, strong winds had only been seen in limited parts of these events. "Winds must blow away a large fraction of the matter a black hole could eat," said Bailey Tetarenko, PhD student and lead author on the study. "In one of our models, the winds removed 80 per cent of the black hole's potential meal." Depending on their size, stellar-mass black holes have the capacity to consume everything within a 3 to 150 kilometre radius. "Not even light can escape from this close to a black hole," explained Gregory Sivakoff, associate professor of physics and co-author. Other, much larger black holes, called supermassive black holes, appear to have affected the formation of entire galaxies. "But even supermassive black holes are smaller than our solar system. While they are small, black holes can have surprisingly large effects," explained Sivakoff. So, what exactly causes these winds in space? For now, it remains a mystery. "We think magnetic fields play a key role. But we'll need to do a great deal of future investigation to understand these winds," explained Craig Heinke, associate professor of physics and co-author. "Strong disk winds traced throughout outbursts in black-hole X-ray binaries" will be published online January 22 in Nature. | nature.com/articles/doi:10.1038/nature25159 |
Biology | Jaws hold crucial knowledge on the fate of sharks | Alice Manuzzi et al, Retrospective genomics highlights changes in genetic composition of tiger sharks (Galeocerdo cuvier) and potential loss of a south-eastern Australia population, Scientific Reports (2022). DOI: 10.1038/s41598-022-10529-w Journal information: Scientific Reports | https://dx.doi.org/10.1038/s41598-022-10529-w | https://phys.org/news/2022-05-jaws-crucial-knowledge-fate-sharks.html | Abstract Over the last century, many shark populations have declined, primarily due to overexploitation in commercial, artisanal and recreational fisheries. In addition, in some locations the use of shark control programs also has had an impact on shark numbers. Still, there is a general perception that populations of large ocean predators cover wide areas and therefore their diversity is less susceptible to local anthropogenic disturbance. Here we report on temporal genomic analyses of tiger shark ( Galeocerdo cuvier ) DNA samples that were collected from eastern Australia over the past century. Using Single Nucleotide Polymorphism (SNP) loci, we documented a significant change in genetic composition of tiger sharks born between ~1939 and 2015. The change was most likely due to a shift over time in the relative contribution of two well-differentiated, but hitherto cryptic populations. Our data strongly indicate a dramatic shift in the relative contribution of these two populations to the overall tiger shark abundance on the east coast of Australia, possibly associated with differences in direct or indirect exploitation rates. Introduction Intraspecific genetic diversity is essential for long-term population persistence to avoid the negative effects of inbreeding 1 , to buffer against environmental variation in time and space 2 , and to ensure adaptability to a changing environment 3 . 
High genetic diversity may also have positive ecosystem effects by promoting productivity, abundance, and stability of community structure 4 , 5 , 6 . Thus, species with many discrete populations with different genetic adaptations, including life history strategies, are assumed to be better at withstanding exploitation and environmental variation than a single panmictic population. Intraspecific diversity protection is a specified objective of the Convention on Biological Diversity (CBD), but is rarely an integral part of monitoring activities, in particular for species that are not critically endangered 7 , 8 . Indeed, the lack of dedicated sampling programs and the many species requiring monitoring make it logistically impossible to oversee intra-specific diversity in most instances. In addition, the difficulties associated with the use of high-resolution genetic methods when sample sizes are low or when tissue samples vary in age, composition and provenance 9 also hinder our ability to assess intra-specific diversity. The use of DNA extracted from museum specimens, in combination with modern molecular analytical tools, has revolutionized the ability to assess genetic changes at contemporary time scales 10 , 11 . Studies on processes such as intra-population loss of diversity, adaptive change caused by evolutionary drivers in the environment, as well as the movement, decline, or extirpation of populations through time and space can now be undertaken 5 , 12 . In the marine realm, there are numerous examples of local population reductions and extinctions, in particular for large sharks and rays 13 , 14 , but closely coupling these occurrences with genetic and genomic data is challenging due to the lack of temporal genetic data for elasmobranch species. Spatiotemporal genetic samples (i.e. 
obtained across several locations and years 15 ) can be used to test for changes in distribution of elasmobranch populations including potential extirpations and replacements, as well as intra-population changes in levels of genetic variability, and putative adaptive genetic changes in the timeframe of recent environmental changes or exploitation. This is of significant interest with respect to populations of large sharks, where there is a strong need for identification of genetic populations and their historical trajectories, as they represent the relevant unit for both evolution and management 9 , 16 , contributing to short-term sustainable exploitation as well as long-term protection of intra-specific biodiversity. Failing to identify and incorporate biologically-discrete populations in management may cause overharvesting of particular (cryptic) populations or population segments, resulting in extinction or reduction of genetic diversity within populations 4 , 7 . The tiger shark ( Galeocerdo cuvier ) is one of the world’s largest sharks, with a circumglobal distribution in tropical and sub-tropical waters 17 , 18 . Throughout its range, the species is exploited by multiple users, from target and bycatch in commercial, artisanal, and recreational fisheries 19 , 20 , to shark control operations to improve bather safety 21 , 22 , which have been linked to on-going population decline and reduced average size of individuals 22 , 23 , 24 . The species is globally listed as Near Threatened on the International Union for the Conservation of Nature’s (IUCN) Red List of Threatened Species due to a suspected decline of ~ 30% over the past ~ 60 years 25 . This general decline covers variable regional population trends ranging from relatively stable abundances, e.g. North-western Atlantic 26 , to severe local depletions as observed in the Red Sea, Eritrea, Iranian Gulf, as well as India and Pakistan 27 . 
In the Arabian Seas, ongoing reported local declines of 30–50% over three generations have led the species to be locally assessed as Vulnerable 27 . Of greater concern, however, is the reported 74% decline in catch per unit effort (CPUE) for tiger sharks off Queensland (eastern Australia) over the past 25 years, which was accompanied by a 21% decline in the average size of individuals 24 . Thus, despite its cosmopolitan nature and dispersal potential, local depletions appear to be relatively common in the species, suggesting some degree of population isolation could occur. However, there is still a lack of adequate spatially resolved data to understand regional population trends of tiger sharks as they are commonly caught in unreported fisheries. Current assessments of the tiger shark population structure have been based on microsatellite and mitochondrial genetic markers, and have demonstrated a clear inter-basin genetic split between the Atlantic and Indo-Pacific Oceans, but a general lack of intra-basin population structuring 28 , 29 , 30 . Although this may be a true biological characteristic of the species, it could also reflect analytical challenges related to obtaining sufficient samples with good spatial coverage and the application of low-resolution genetic markers. Thus, in a recent review of research priorities, the development of high-resolution SNPs (Single Nucleotide Polymorphisms) to further resolve tiger shark meta-population structure was strongly recommended 31 . Moreover, temporal genetic information to assess the anthropogenic effects of exploitation is virtually absent. Such temporal genetic data are necessary to determine not only intra-population changes in diversity by genetic drift, migration and selection, but also the extent of population displacement or extirpation. Due to its Near Threatened status, potential for over-exploitation and risk to human lives, tiger shark management and conservation actions are high on the public agenda 32 . 
As large sharks are highly mobile, management has been primarily focused on international or regional policies, with limited value seen in enforcing strong local “domestic” regulations 16 . However, the detection of fine-scale population structure and the effects of local depletion or extirpation on intraspecific genetic diversity could lead to a general paradigm shift in large shark management from a global or regional perspective to a much stronger local focus. Here, we investigated a possible link between the reported decline in tiger shark numbers and local genetic diversity off the east coast of Australia. While the mating system can affect effective population size 33 , 34 and declines in genetic diversity can be moderated by gene flow 35 , genetic diversity has been shown to be linked to effective population size. Indeed, a reduction in population size (e.g. bottleneck) can lead to a rapid decline in heterozygosity and allele diversity 36 . Such a relationship has also been reported in empirical studies (e.g. 37 , 38 ). We chose east coast Australia because of documented reductions in tiger shark CPUE in that region and because samples from this region are relatively abundant across both spatial and temporal scales; i.e. tiger shark samples were available along the eastern Australian coastline and from as early as ~1939 to 2015. This provided an unparalleled opportunity to assess how the alleged population decline may have affected the genetic composition of the species over the past century. In order to collect genetic data over the widest spatial and temporal range, we extracted DNA from tissue samples taken from shark jaws archived in museums and private collections, mostly retained as trophies from fishing competitions, and performed retrospective genomic analyses 39 . As these trophies are listed with local game-fishing clubs, their size, catch date and location are well documented. 
Tiger sharks in Australia are thought to be part of a large Indo-Pacific population 28 , and satellite-tag tracking studies have revealed extensive movements of up to several thousand kilometres in the region 40 , 41 , with some individuals seen moving as far as New Caledonia (maximum reported distances of 1141 and 1800 km) 42 , 43 . In this context, we hypothesised that tiger sharks in eastern Australia form a single panmictic population, from tropical Queensland to temperate Victoria 42 , 44 . Therefore, by employing a local spatiotemporal approach, our study aimed to assess if (i) tiger sharks in eastern Australia are indeed panmictic, (ii) reported declines are related to possible population structure in the species and, if so, (iii) whether population distributions and relative abundances have remained stable over the last 100 years. This is the first study of its kind, not only for the region and tiger sharks, but also for elasmobranchs in general. Results Bioinformatics pipeline and data filtering Sequencing yielded an average of 2,833,138 reads per individual, with slightly lower numbers for historical jaw samples (median 1,973,493) than contemporary tissue samples (median 2,301,001). After running the bioinformatics pipeline, 78% of the original sequences went into transcriptome mapping. All samples showed a very low percentage of contaminants, confirming the validity of the capture strategy for sequencing enrichment with shark DNA. An average of 0.12% of the cleaned reads was of mitochondrial origin and excluded from further analysis, except for the few cases when it was used for species confirmation. Out of the 20,000 catshark-derived baits, 4544 (22.72%) were post-hoc mapped back to the tiger shark transcriptome 45 covering 4137 scaffolds. Of these, 4143 had captured reads with a depth of coverage higher than non-target regions. 
Scaffolds with a bait had an average coverage of 68.5×, while scaffolds from off-target regions had an overall average depth of 40.7× (35.8× for historical and 43.4× for contemporary samples). Coverage was higher and less variable in contemporary than in historical samples. Single nucleotide polymorphisms (SNPs) were called from all transcriptomic sequences and we identified 35,061 raw SNP variants for 122 samples in 2978 reference scaffolds. After filtering, 4580 SNPs remained. SNPs with significant departures from Hardy–Weinberg Equilibrium (HWE) were removed to produce a final dataset consisting of 1840 SNP loci genotyped in 115 samples from G. cuvier specimens caught between ~1939 and 2015. In addition, we removed samples that had higher levels of missing data (below a 70% call rate), lacked the length/weight information needed to calculate age, or had erroneous species labelling (identified as a different species), which could indicate possible contamination with reads from other shark species. The final dataset thus contained 106 samples (Fig. 1 , Table S1 ). Very low levels of DNA damage were observed, confirming DNA was well preserved in jaws 39 , at least for the relatively short time period (~ 80 years) compared to true ancient DNA (aDNA) studies. Thus, it is highly unlikely that the final SNPs represent artefacts due to deamination or other DNA damage in the historical jaw samples. Figure 1 Sampling locations and distribution through time and space. ( a ) Sample distribution along the east coast of Australia. Samples are grouped by decades of birth (1910–1960, 1970–1990 and 2000) as explained in the text and the colours identify the three time-periods. Group names refer to the three major regions where samples were collected: Gulf of Carpentaria (GCA), Coral Sea (CRS) and Tasman Sea (TAS). 
( b ) The histogram shows the difference between decade of catch and calculated decade of birth and associated sample numbers, as reported above each bar. Grey bars identify the calculated years of birth, while black numbers refer to the years of catch. Full size image Data analysis for temporal and spatial genomic variability To examine the stability of patterns over time, we used a temporal genetic analysis based on the back-calculated decade of birth of tiger shark individuals, which showed clear evidence of temporal genetic differentiation. This was apparent for the temporal dataset as a whole (Table 1 ), where pairwise F ST estimates increased with time. Temporal differentiation was also apparent for the Tasman Sea samples alone, where the majority of the oldest historical samples originated (Table S2 ). In contrast, contemporary tiger shark samples (2000 and 2010) from the Gulf of Carpentaria (GCA), Coral Sea (CRS) and Tasman Sea (TAS), showed little evidence of genetic structuring, with estimated non-significant pairwise F ST values of 0.006 between GCA and CRS, and 0.001 between CRS and TAS (Table S3 ). A Principal Coordinates Analysis (PCoA) of spatiotemporal F ST values (Fig. 2 ) showed a separation of samples along axis 1, explaining 88.7% of the variation. The 1910–1960 TAS samples were the most distinct, with the 1970–1990 TAS samples intermediate between the TAS oldest samples and a cluster of the remaining samples. The Principal Component Analysis (PCA) of all spatiotemporally collected individuals (Fig. 3 a) supported the genetic differentiation of historical samples as they clustered differently, with a proportion of the individuals forming a relatively distinct cluster from the contemporary samples. In contrast to Fig. 3 a, an individual based PCA of spatially-collected contemporary samples only (Fig. 3 b) did not display any apparent clustering of individuals according to location. 
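The sample- and SNP-level filters described in the Results above (a minimum 70% per-sample call rate, at most 5% missing data per SNP, and a minimum allele count of three) can be sketched as follows. The 0/1/2 genotype encoding with None for missing calls, and the function itself, are illustrative assumptions rather than the authors' pipeline.

```python
def filter_genotypes(geno, min_sample_call_rate=0.70,
                     max_snp_missing=0.05, min_allele_count=3):
    """geno: list of per-sample genotype lists; entries are 0/1/2 alt-allele
    counts or None for missing. Returns (kept_sample_idx, kept_snp_idx)."""
    n_snps = len(geno[0])
    # 1) drop samples genotyped at fewer than 70% of loci
    samples = [i for i, row in enumerate(geno)
               if sum(g is not None for g in row) / n_snps >= min_sample_call_rate]
    snps = []
    for j in range(n_snps):
        col = [geno[i][j] for i in samples]
        called = [g for g in col if g is not None]
        if not called:
            continue
        if 1 - len(called) / len(col) > max_snp_missing:
            continue  # 2) too much missing data at this locus
        alt = sum(called)              # alt-allele count among called genotypes
        ref = 2 * len(called) - alt    # ref-allele count
        if min(alt, ref) < min_allele_count:
            continue  # 3) minor allele count below threshold
        snps.append(j)
    return samples, snps

# tiny toy matrix: 3 fully genotyped samples x 3 SNPs
kept_samples, kept_snps = filter_genotypes([[2, 0, 1], [1, 0, 1], [2, 0, 1]])
```

In the toy matrix only the third SNP survives: the first two are monomorphic or nearly so, so their minor allele count falls below three.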
The presence of a distinct group of individuals in the historical samples was supported by the results from the Discriminant Analysis of Principal Components (DAPC), where the Bayesian Information Criterion (BIC) analysis and the k-means algorithm identified K = 2 as the number of clusters that best explains the structure in the dataset (Fig. 4 ). Further analytical support for two populations was given by the results of the snmf algorithm (LEA package), which also identified K = 2 as the most likely number of clusters (Fig. S1 ). The DAPC analysis showed ~ 70% (13 of 19) of ‘cluster 2’ individuals in the 1910–1960 TAS samples, ~ 36% (5 of 14) in the 1970–1990 samples and none among the TAS samples from 2000. For the GCA samples, only a single ‘cluster 2’ individual (1 of 14) was found (2000). Likewise, only one ‘cluster 2’ individual (1 of 14) was found among the CRS 2000 samples. The Fisher exact test confirmed that the change over time was statistically significant, with all comparisons between historical and contemporary collections (TAS_1910–1960 vs TAS 1970–1990, TAS 1910–1960 vs TAS_2000, and TAS_1970–1990 vs TAS_2000) being significant (p-val = 0.041, 0.00001 and 0.003, respectively). Patterns of Site Frequency Spectra (SFS) and the pi and Watterson theta indices (Fig. S2 , Table S4 ) showed that older collections and contemporary collections behaved similarly with no evident bias (e.g. excess of singletons) in older samples. When pooling all putative ‘cluster 1’ and ‘cluster 2’ individuals across samples, the resulting mean F ST between the two population groups was 0.013, thus substantially higher than between any pair of spatiotemporal samples, further supporting a mixed populations hypothesis. The distribution of F ST values across loci showed that a high number of loci (Fig. 
S3 ) contributed to the differentiation, suggesting that genetic differences between the two groups were not caused by technical artefacts or contemporary evolution at one or a few loci. However, while BayeScan did not detect any outliers, pcadapt identified 14 possible outlier loci between the two clusters. Most of these loci (9 of 14) showed an overall higher allele frequency for ‘cluster 2’. The mean level of heterozygosity in the two groups was statistically different (Fig. 5 ), with 0.23 (± 0.01) for ‘cluster 1’ and 0.26 (± 0.01) for ‘cluster 2’. Average missing data was lower for ‘cluster 1’ than ‘cluster 2’ (0.35% and 1.14%, respectively), likely reflecting the average age of samples. However, there was no correlation between mean individual heterozygosity and proportion of missing data using a linear model ( p = 0.25; R 2 = 0.003). Table 1 Pairwise F ST values between temporal collections. Full size table Figure 2 Principal Coordinates Analysis (PCoA) of mean pairwise FST’s between spatiotemporal tiger shark samples from eastern Australia. Groups are based on back-calculated ages and refer to: Gulf of Carpentaria (GCA), Coral Sea (CRS) and Tasman Sea (TAS). The axes report the percentage of variance explained. Full size image Figure 3 Principal Component Analysis (PCA) by time periods and locations. ( a ) PCA of all individual genotypes for the samples with back-calculated age of birth and ( b ) PCA of contemporary samples based on decade of catch (2000–2010) covering the Gulf of Carpentaria (GCA), Coral Sea (CRS) and Tasman Sea (TAS). Full size image Figure 4 Discriminant Analysis of Principal Components (DAPC) for K = 2. The plot illustrates the spatiotemporal occurrence of individuals from the two hypothesized clusters in time and space. Samples grouped by time and space are labelled along the x-axis, only collections encompassing more than six samples were included. 
The y-axis reports the membership probability of each sample belonging to either cluster (‘1’ in blue and ‘2’ in green). Full size image Figure 5 Boxplot of the average proportion of heterozygous SNP loci for the two clusters. Average proportion of heterozygous SNP loci over total loci genotyped for the two identified clusters. Cluster 1 is composed of mainly contemporary and northern samples, while cluster 2 individuals are almost exclusively found in southern historical samples (see Fig. 4 for explanation). Full size image Discussion This is the first study to demonstrate that archived samples in combination with modern genomic tools can reveal temporal changes in biodiversity in large sharks, which otherwise would have remained unnoticed. We identified genomic differences, consistent with two hitherto unidentified populations, at a relatively local scale (Eastern Australia) for a large migratory shark, the tiger shark. However, this pattern was only evident when historical samples were included, suggesting the displacement or extirpation of a local population. This conclusion is based on the findings that the genetic clustering of our spatiotemporal genotypes provided the best statistical support for two population groups, with downstream analysis confirming that these units are genetically distinct and contain different levels of genetic diversity. One of the identified populations was most abundant in the oldest and southern-most samples (i.e. Tasman Sea), and almost completely absent from contemporary (including southern samples) and northern-most samples. We propose that the most parsimonious explanation for this absence is a shift in the relative population abundance likely associated with human fishing activities. Our results mirror the findings of Brown and Roff 46 that reported a major decline in abundance of tiger sharks over three generations off the Queensland coast, with greater declines detected at the southern sites. 
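The cluster comparison above (average proportion of heterozygous SNP loci over total loci genotyped, per individual) can be sketched as below; the genotype encoding (0/1/2 alt-allele counts, None for missing) is an assumption for illustration, not the authors' code.

```python
import statistics

def individual_heterozygosity(genotypes):
    """Proportion of heterozygous SNP loci over loci genotyped for one
    individual; genotypes are 0/1/2 alt-allele counts, None = missing."""
    called = [g for g in genotypes if g is not None]
    return sum(g == 1 for g in called) / len(called)

def cluster_mean_heterozygosity(cluster):
    """Mean individual heterozygosity across all individuals in a cluster."""
    return statistics.mean(individual_heterozygosity(ind) for ind in cluster)

h = individual_heterozygosity([0, 1, 2, 1, None])  # 2 het of 4 called loci
```

Missing calls are excluded from the denominator, matching the "over total loci genotyped" definition in the figure caption.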
However, we recognise that a number of factors could have partially contributed to our observations and uncertainties around this apparent dramatic shift in abundance, which we discuss below. Our findings are unlikely to be the result of technical issues associated with the use of historical DNA (hDNA). In general, capture sequencing of hDNA has a lower sequencing depth and coverage leading to fewer mapped reads and thus more missing data with increasing sample age 47 , both of which could affect downstream population genomic inferences 48 , 49 , 50 . Although our historical samples presented fewer reads than the contemporary samples, the level of missing data was generally low, even for the oldest samples (maximum level of missing data allowed was 5% per SNP). For example, in ‘cluster 2’ all but two individuals had less than 5% missing data (average of 1.14%, median 0.14%), while all but two individuals in ‘cluster 1’ had less than 1% missing data (average 0.35%, median 0.00%). ‘Cluster 2’ individuals generally showed a higher level of heterozygosity than individuals from ‘cluster 1’, which is the opposite of the expected pattern of reduced individual heterozygosity due to low coverage caused by allelic drop-out 42 . The opposite pattern of increased diversity due to a high occurrence of singletons caused by DNA degradation was also implausible due to the filtering, only including SNPs with a minimum allele count of three in the dataset. In addition, the genetic differentiation that we observed between clusters over time was not caused by a few spurious high-differentiated loci (which could be the result of a technical issue in the genomic pipeline), but was found to be spread across the transcriptome. Moreover, if ‘cluster 2’ genotypes were artefacts created by poor quality DNA, this would still fail to explain the complete absence of those genotypes in the historical northern samples with equal DNA quality (GCA). 
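The per-locus differentiation discussed above is typically summarized as a mean pairwise F_ST across loci (e.g., the 0.013 reported between the pooled clusters). The paper does not specify its estimator in this excerpt, so the sketch below uses Hudson's two-population estimator, combined as a ratio of sums over loci, purely as a stand-in illustration.

```python
def hudson_fst(p1, p2, n1, n2):
    """Mean pairwise F_ST between two population samples (Hudson's
    estimator, combined as a ratio of sums over loci).
    p1, p2: per-locus alt-allele frequencies; n1, n2: allele copies sampled."""
    num = den = 0.0
    for a, b in zip(p1, p2):
        # numerator corrects the squared frequency difference for sampling noise
        num += (a - b) ** 2 - a * (1 - a) / (n1 - 1) - b * (1 - b) / (n2 - 1)
        den += a * (1 - b) + b * (1 - a)
    return num / den

weak = hudson_fst([0.50, 0.52], [0.50, 0.48], 100, 100)   # near-identical pops
strong = hudson_fst([0.9], [0.1], 100, 100)               # well-differentiated
```

Near-identical allele frequencies give an estimate near zero (slightly negative is possible, by construction of the sampling correction), while strongly diverged frequencies give a large positive value.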
The observed pattern of differentiation is consistent with a scenario of genetic drift accumulated across the genome, at an evolutionary time scale in two semi-independent populations of tiger sharks, as clearly indicated by the genetic cluster analysis and the transcriptome-wide patterns of differentiation (i.e. differentiation was driven by multiple loci spread throughout the transcriptome). Further strength to this hypothesis comes from the results obtained by the two independent clustering methods (DAPC and the snmf function in LEA), representing different algorithms and population genetic models/assumptions. Both approaches consistently identified the most likely presence of a distinct cluster in the older southern samples, which was not detectable in the most recent/northern collections. Thus, the relatively large differentiation between the two putative population clusters, the differences in their levels of heterozygosity, and the finding that not all historical Tasman Sea samples clustered together, renders the alternative hypothesis of very strong short-term genetic drift in a single panmictic population distributed across the central Indo-Pacific less plausible. In summary, the reported temporal genetic differentiation is unlikely to be caused by artefacts in sequencing and in the bioinformatics pipeline, by historical genetic drift in a single panmictic population, or by strong temporal genetic signatures of intra-population environmental selection (which would imply differentiation to be driven by a few loci). The use of material from historical collections together with genomic-scale analyses provided a relatively large sample size and sufficient statistical power for individual-based cluster analyses, presenting a unique window to explore population composition of tiger sharks in the past. 
The collection of historical and contemporary samples from the east coast of Australia involved a huge logistical effort and, at least for the oldest specimens, represents the majority of high quality samples available and practically possible to sample in the community to date. Our samples thus provide relatively solid evidence of changes in population abundance, but still are unlikely to provide the full picture of genetic variation across space and time in tiger sharks off eastern Australia, in particular with respect to population mixing. Although location of catch was accurate, the sampling effort was mainly opportunistic, and thus variation in composition of samples with respect to age and sex, as well as season of sampling and method, may have influenced our results. Unfortunately, there is little information on sex for the historical samples and thus this assumption cannot be directly proven. Previous tagging studies investigating tiger shark dispersal have shown complex movement patterns including large-scale migrations, seasonal residency, and sex- and size-based dispersal or partial migrations 42 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 . In eastern Australia, the movements of tiger sharks are closely related to water temperature 42 , 58 . Prey densities associated with these gradients may also drive adult shark movements to regularly return to specific areas to feed, i.e. established turtle nesting sites 58 . Because the species undergoes an ontogenetic diet shift to larger-bodied prey items 45 as individuals mature 59 , this could also partly explain a higher occurrence of the local (‘cluster 2’) population in the historical Tasman Sea samples, which were predominantly large trophy specimens. 
However, little is still known about the use of space by tiger sharks, particularly those that are sexually mature, and there is some suggestion that reproductive cues associated with changes in thermal gradient may also drive movements of large females toward the insular regions of the Pacific Ocean 58 . Sampling the same individual twice or kin sampling might skew genetic diversity estimates 60 . However, neither was an issue in our study as, firstly, samples were obtained from dead individuals, and secondly, our kinship analysis did not show related individuals in our samples. Despite possible sampling-associated uncertainties, the observed patterns of temporal genetic divergence are clear. Firstly, our results based on contemporary samples are consistent with two recent studies that detected a single genetic population off the Australian east coast 28 , 29 and are also compatible with the present understanding of population structure in the Indo-Pacific 29 , 30 . Secondly, the presence of the two identified cryptic groups was not random in space and time. Individuals from ‘cluster 2’ were almost exclusively detected in the southern part of the species’ distribution, most abundant in the oldest samples, and almost absent from contemporary samples. Consequently, we hypothesize that tiger sharks in east Australian waters consisted of at least two populations in the past, but likely comprise a single population now. This may sound counterintuitive in the light of satellite tag-tracking studies that have shown evidence of individuals migrating over 1000 km 23 , 43 , 61 . However, large migrations and local populations at fine geographical scales are not mutually exclusive since dispersal events may be related to foraging rather than reproduction, hence contributing differently to a population's genetic make-up. 
Thus, evidence of fine-scale structure could be linked to basic “triangle migrations” 16 , 62 of fishes between parturition sites and juvenile and adult habitats 63 . Partial migration, ubiquitous among animal populations, and ontogenetic variation in dispersive behaviour, documented in tiger sharks 53 , 54 , 64 , might have also affected estimates of genetic diversity. Indeed, pooling of life stages, i.e. across sizes, during data analysis can obscure estimates of stock structure and dispersal 65 . This is, however, unlikely to be an issue in our study because we were able to identify clear stock structure regardless of the majority of our samples being from large tiger sharks known to be the most dispersive 54 , 57 , 64 . The observed pattern of population structure suggests a southern Australian distribution of one of the populations (‘cluster 2’), with the apparently more abundant population (‘cluster 1’) currently found throughout the entire distribution, with a previous limited intrusion to southern New South Wales (based on the small number of individuals in ‘cluster 1’ from the historical TAS samples). It is possible that the southerly population (‘cluster 2’) was a coastal and more resident ecotype and the other population is currently more widespread, more offshore and more migratory, as found in bony fishes such as Atlantic cod ( Gadus morhua ) 66 and European anchovy ( Engraulis encrasicolus ) 67 and in marine mammals like the bottlenose dolphin ( Tursiops truncatus ) 68 . Intraspecific differences in movement and residency patterns are also increasingly being reported in sharks 69 , 70 , 71 , including in large predatory species such as tiger sharks 70 . Importantly, our findings suggest that the abundance of the putative southerly population has declined significantly compared to pre-1990s levels. 
The apparent local depletion of tiger sharks at the south-eastern distribution of the species in Australia supports previous studies showing a reduction in the abundance and mean size of tiger sharks caught in this region 23 , 24 , 46 . Although the species is not commonly commercially targeted off eastern Australia, it is caught as bycatch in Queensland and New South Wales’ shark control programs and in recreational fisheries off New South Wales. Off Queensland, tiger shark catch-per-unit-effort has dropped by 74% over the past 25 years, while the average size declined by 21% 24 , with most of the decline occurring in the southern part of the state 23 . Catch-per-unit-effort also declined in the NSW beach meshing program 22 . This reported strong reduction in tiger shark abundance, which coincides with the increase in lethal shark mitigation measures, is of concern and implicates ongoing lethal mitigation measures as possible drivers of the observed population declines in the region. Recent estimates of tiger shark bycatch obtained from commercial logbook records indicate between 5 and 10 t routinely caught off Queensland, and approximately 3 t per year off New South Wales. However, New South Wales also accounts for approximately 10 t of tiger shark recreational catches 72 , 73 . Further, since 1936, game fishing in New South Wales has targeted large tiger sharks for capture point scores and continues to do so over several competitions annually 71 . Illegal foreign fishing vessels that target large sharks for their fins have been apprehended in the Tasman Sea region and north into the Coral Sea, with tiger sharks found to comprise about 20% of the total biomass of sharks caught 74 . Overall, these activities may have selectively removed the southern population (‘cluster 2’), most likely through exerting a higher level of exploitation than on the more widespread northern population. 
Ongoing climate change can also contribute to shifting distributions and abundance of marine fish species and populations 63 , particularly in Australia 75 , 76 , since the region has been identified as one of the global hotspots for ocean warming 77 . Thus, it is possible that increasing sea temperatures could have negatively affected the putative southern population or contributed to the increase of the northern population in southern waters. This ‘tropicalization’ of southern Australian waters is well studied, with several ‘northern’ species shifting their distribution ranges further south and increasing in abundance in the Tasman Sea (e.g. 78 ). Nevertheless, in light of the recent dramatic demographic changes in tiger sharks and other large shark populations in the region, it is likely that fishing has played a significant role in the reported changes in abundance of these two genetically differentiated populations. Still, genetic analysis of more contemporary samples from along the east coast of Australia and the Pacific Ocean 28 , using e.g. a targeted genotyping approach focusing on the most informative SNPs 79 , could help to further elucidate the apparent change of abundance of the two populations of tiger sharks in eastern Australian waters. The apparent occurrence (and loss) of localized cryptic populations of tiger sharks, at a finer geographical level than previously believed, raises a number of concerns regarding identification and monitoring of intraspecific biodiversity in large sharks. Our study suggests that localized populations may be more common than anticipated from recent genetic studies using markers with lower resolution 28 , 80 . Thus, reported local, in particular coastal, depletions of large sharks 14 could also be associated with severe reductions or losses of entire populations that have remained unnoticed due to replacement with individuals from other and possibly more wide-ranging populations. 
This eradication of semi-independent evolutionary lineages will not only have a local effect on population abundance, but also affect the evolutionary potential of the species as a whole and the ecosystem services they provide 4 . This highlights the importance of developing high-resolution genomic resources for elasmobranchs and other high gene flow marine organisms 81 , 82 , which can provide information on a large number of variable sites in the genome (e.g. SNPs). By genotyping many individuals for a high number of SNPs, it may be possible to identify putative populations in species with generally low levels of genetic differentiation, such as sharks. More importantly, our work also points to significant challenges regarding the scale of current management and biodiversity protection schemes for large sharks. Sustainable management of local populations, through matching the scale of governance with population structure, is important for the protection of the evolutionary legacy of the species and the potential for adapting to future environmental changes 83 . It is also important for the maintenance of healthy marine ecosystems that could provide services to human society 84 . Accordingly, management focus will need to include localized protection measures, such as local seasonal closures or marine reserves, to properly match the geographic scale of the population 85 . Specifically, for the east coast of Australia, we recommend further investigation of our findings, to elucidate the current abundance and distribution of the two populations and establish measures to protect the putative southern population component, which appears to have faced a significant historical decline, primarily driven by direct and indirect exploitation. 
Overall, our work highlights the need to include historical samples in studies of population structure of large migratory marine species, as these harbour a treasure trove of information and can shed light on complex demographic patterns and aid in the development of accurate conservation actions at relevant geographical scales. Methods Sample collection Tiger shark specimens were caught over a time-span of close to 80 years (~1939–2015). Samples originated from north-eastern and eastern Australia, extending from the Gulf of Carpentaria (GCA), through the Coral Sea (CRS) to the Tasman Sea (TAS) (Fig. 1 a). Contemporary tissue samples (2000–2015) were obtained as fin-clips from sharks caught in the Queensland Shark Control Program, the New South Wales Shark Meshing Program, commercial and recreational landings, and sharks caught for tagging and tracking research purposes 86 . Historical samples (~1939–1999) comprised dried tiger shark jaws and vertebrae obtained from museum collections, fishers, and other private or public collections. The initial dataset consisted of 115 unique sharks. As tiger sharks are long-lived and the sampled individuals were highly variable in age, we estimated the year of birth for each sample to allow for a more accurate temporal genetic comparison (Fig. 1 b). For example, a large shark sampled in 2010 could have been born the same year as a smaller shark sampled in 1990. Individual year of birth was estimated using a locally derived relationship between total length (L T ) and age. Age (t) was estimated from vertebral aging for both females and males using the von Bertalanffy growth function (VBGF) ( 1 ) as 76 : $$ t = \ln \left( (L_{\infty} - L_{\text{T}})/(L_{\infty} - L_{0}) \right) / (-k), $$ (1) where L 0 and L ∞ represent the length-at-birth and theoretical asymptotic length, respectively, and k represents the growth coefficient. 
We assumed different parameters for males and females (males: L ∞ = 441.1 cm, k = 0.08, and L 0 = 123.4 cm; females: L ∞ = 379.9 cm, k = 0.06, and L 0 = 116.8 cm), and a combined set where information on sex was not available (L ∞ = 433.7 cm, k = 0.06, and L 0 = 121.5 cm). For individuals with only fork length (L F ) available, total length was calculated using the relationship L T = 22.607 + 1.096 L F 87 . For the 20 sharks without length information, total weight (W T ) was used to first obtain L F using a regression equation parameterized for tiger sharks in the north-western Atlantic 88 , the closest population to our target species from which data are available (L F = (W T / (2.5281 × 10 −6 )) ^ (1/3.2603) ). DNA extraction and target capture Historical tissue material was collected following the protocol described in Nielsen et al. 39 and involved the collection of “bio-swarf” produced when drilling a 3.5-mm hole in the calcified cartilage of jaws or vertebrae. Extraction of DNA from the bio-swarf and contemporary fin tissue was performed with the Bioline ISOLATE II Genomic DNA kit according to the manufacturer’s protocol, using 18–37 mg (average 27 mg) of tissue per extraction. For genomic library preparation, DNA from contemporary samples was sheared to an average fragment size of 200 bp with a M220 focused ultrasonicator (Covaris, USA). DNA from historical material was fragmented due to degradation over time and therefore used directly. Genomic-capture libraries were prepared using the KAPA Hyper Prep Kit (Kapa Biosystems, USA) according to the manufacturer’s instructions. A total of 50 ng input DNA per sample was used with a one in five dilution of the TruSeq DNA HT dual-index adaptors (Illumina, USA). Ten PCR cycles for library amplification were used for the contemporary samples and twelve for the historical. 
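The back-calculation chain described above (weight to fork length, fork length to total length, and total length to age via the inverse VBGF in equation (1)) can be sketched in a few lines. This is a minimal illustration: the function names are ours, and the weight-to-length step assumes the printed expression is read as L F = (W T / (2.5281 × 10⁻⁶))^(1/3.2603), which we could not verify against the original regression.

```python
import math

# Published VBGF parameters (L_inf, k, L0) quoted in the Methods;
# "combined" is the set used when sex was unknown.
VBGF_PARAMS = {
    "male": (441.1, 0.08, 123.4),
    "female": (379.9, 0.06, 116.8),
    "combined": (433.7, 0.06, 121.5),
}

def age_from_total_length(lt_cm, sex="combined"):
    """Invert L(t) = L_inf - (L_inf - L0) * exp(-k t) to get age t in years."""
    l_inf, k, l0 = VBGF_PARAMS[sex]
    return math.log((l_inf - lt_cm) / (l_inf - l0)) / -k

def total_from_fork_length(lf_cm):
    """Fork length -> total length via the study's regression LT = 22.607 + 1.096 LF."""
    return 22.607 + 1.096 * lf_cm

def fork_length_from_weight(wt_kg):
    """Weight -> fork length; our reading of the garbled printed expression
    (assumption): LF = (WT / 2.5281e-6) ** (1 / 3.2603)."""
    return (wt_kg / 2.5281e-6) ** (1 / 3.2603)

def birth_year(year_sampled, lt_cm, sex="combined"):
    """Estimated year of birth = sampling year minus estimated age."""
    return year_sampled - age_from_total_length(lt_cm, sex)
```

By construction, a shark at its length-at-birth (L T = L 0) gets age zero, and the inversion round-trips exactly against the forward VBGF.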
Selected regions of genomic DNA were captured with a MyBaits (MYcroarray) target enrichment kit, consisting of 20,000 biotinylated RNA baits (120 bp each) developed from pancreas-, liver- and brain-derived transcriptome sequences 89 of the small-spotted catshark ( Scyliorhinus canicula ). At the time of bait development, this was the most taxonomically similar species for which a large genomic resource was available for bait design that would likely capture tiger shark transcribed regions. For more details about DNA extraction, library preparation, and bait design see Nielsen et al. 39 . All DNA samples were captured individually, using 135 ng of DNA library as input, which had previously been treated with a 1× AMPure XP beads clean-up. Hybridization capture of tiger shark DNA was conducted for 24 h at 60 °C in solution for subsequent paired-end (2 × 125 bp) sequencing on an Illumina HiSeq2000 v4. Prior to sequencing, the captured libraries were amplified using thirteen PCR cycles and purified using 0.8× AMPure XP beads. Quality control (QC) steps were performed using a Bioanalyzer (Agilent Technologies, CA, USA) so that the final sequencing libraries could be pooled at equal nM concentrations. Samples were sequenced in two lanes: one “historical” lane consisting of 38 jaw samples and 5 fin samples, and a “contemporary” lane with 75 fin samples and 2 jaw samples. The difference in sample number per lane accounted for more variable template numbers among historical samples and thus secured a higher minimum number of sequences per individual. The two jaw samples were sequenced in both lanes, as was one of the contemporary tissue samples, which was included in the historical lane, allowing for estimation of “lane effects” and evaluation of the reproducibility of multilocus genotypes through the molecular, bioinformatics and population genomics pipelines. 
Bioinformatics pipeline and data filtering We customized a bioinformatics pipeline to ensure removal of potential contaminants, artefacts and low-quality reads 90 before proceeding with the downstream analysis. Briefly, the de-multiplexed reads were controlled for quality using FastQC 91 . All adaptors were removed using AdapterRemoval 92 and reads were filtered by length and quality, with a minimum length of 30 bp and base quality of 28. Filtered reads were merged using FLASH 93 with default parameters, and checked for contaminants using Kraken2 94 . Both unpaired and concordantly merged reads were mapped against possible sources of contaminants (bacteria, fungi) using the Bowtie2 95 “sensitive” option. The cleaned reads were also mapped against the mitochondrial genome of tiger shark (NCBI Reference Sequence: NC_022193.1 96 ). Previous studies have shown that “off-target capture” is common, i.e. capture of genomic regions of both nuclear and mitochondrial origin not matching the baits. For example, in highly degraded samples target template DNA may not bind and amplify as well as in good-quality samples and templates, resulting in more amplification of non-targeted regions of the genome 10 , 11 . Moreover, mtDNA sequences are commonly captured, or directly sequenced, due to the high copy number of mitochondrial DNA compared to nuclear DNA 82 , 97 . This phenomenon may even be desirable, as it allows assessment of mtDNA diversity. After removal of mitochondrial sequences, the reads were mapped against the transcriptome of tiger shark 45 , using the BWA-mem algorithm. This transcriptome includes 179,867 unique contigs greater than 200 bp in length (average 822 bp, maximum 15,892 bp). After mapping, PCR duplicates were removed using Picard-tools. We checked the patterns of DNA damage of the remaining reads using mapDamage2.0 98 . 
Coverage and depth of the target regions were estimated using Samtools 99 , and finally we called SNPs using Freebayes 100 with default parameters. The raw SNPs obtained were further filtered to keep only biallelic SNPs with quality above 30 and a minimum allele count of three. Only SNPs with a maximum level of missing data of 20% were maintained. Additionally, we filtered for excess depth to reduce the possible presence of paralogs and multi-copy loci. Linkage disequilibrium (LD) between SNPs within bins of 800 bp (maximum length of the merged reads plus 150 bp each side) was estimated using the prune function in bcftools (SAMtools package), by calculating the squared correlation between alleles of each pair of loci, r 2 101 , and keeping only SNPs with an r 2 < 0.25. To test the reliability of our final SNPs, we compared genotypes for duplicate control samples and only maintained SNPs that matched in more than 80% of the pairwise comparisons (to allow for missing data). Finally, we filtered for significant departure from HWE ( p < 0.05) to remove systematic genotyping errors. All filtering steps but the LD pruning were done using VCFtools 102 . Data analysis for temporal and spatial genomic variability Back-calculated year of birth ranged between 1917 (the oldest) and 2012 (the youngest). For all downstream analysis, we grouped samples into three time periods based on their estimated decade of birth: 1910–1960, 1970–1990 and 2000, the latter comprising all contemporary samples (2000–2015). These date ranges were used as named time periods throughout the manuscript. The three periods were also associated with different catch rates in the study area, with the highest catch rate between 1960 and 1980, which significantly decreased after 2000 24 , especially in the southern part of Queensland 9 , 103 . 
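The r² threshold used in the LD pruning step above has a simple closed form. As an intuition-building sketch (not the bcftools implementation, and with a function name of our own choosing), the squared allele correlation between two biallelic loci can be computed directly from haplotype and allele frequencies:

```python
def ld_r_squared(p_ab, p_a, p_b):
    """Squared correlation (r^2) between alleles at two biallelic loci.

    p_a, p_b : frequencies of allele A at locus 1 and allele B at locus 2
    p_ab     : frequency of the A-B haplotype
    With D = p_ab - p_a * p_b (the classical LD coefficient),
    r^2 = D^2 / (p_a (1 - p_a) p_b (1 - p_b)).
    """
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
```

Two loci in perfect coupling (e.g. p_a = p_b = 0.5 with p_ab = 0.5) give r² = 1 and would be pruned; independent loci (p_ab = p_a p_b) give r² = 0 and pass the r² < 0.25 filter.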
For the temporal analysis we estimated temporal genetic differentiation using all samples (n = 106), including individuals belonging to different spatiotemporal sample groups, namely GCA_1970-1990 (n = 12), GCA_2000 (n = 14), CRS_1970-1990 (n = 7), CRS_2000 (n = 15), TAS_1910-1960 (n = 19), TAS_1970-1990 (n = 14) and TAS_2000 (n = 26). All estimations of pairwise F ST 104 between spatial and temporal samples were performed using the R package StAMPP 105 and their significance was assessed with 1,000 permutations over loci. For all F ST comparisons, the p-values were adjusted using a False Discovery Rate (FDR) method. A Principal Coordinates Analysis (PCoA) was applied to the pairwise F ST matrix to summarize and plot the differences reported in the table, using the pcoa function in the ape 5.0 package 106 in R. We performed a Principal Component Analysis (PCA) to explore the spatial and temporal structure of individual genotypes using the R package adegenet 107 . An initial PCA of all samples revealed two identical genotypes (two types of archived tissue from the same individual) and one of them was subsequently removed. A PCA of the contemporary samples revealed two extreme outliers, one of which was identified as another species (spinner shark; Carcharhinus brevipinna ) based on mtDNA sequences. Both samples were removed from further analysis. For the temporal estimates the dataset (n = 106) was divided into the three periods: 1910–1960 (n = 19), 1970–1990 (n = 33) and 2000 (n = 55). In order to assess current spatial genetic differentiation, a subset of 74 contemporary individuals (based on year of catch) from the three different areas, GCA (n = 26), CRS (n = 21) and TAS (n = 27), was selected. 
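Pairwise F ST, the quantity at the core of these comparisons, measures how allele frequencies diverge between two samples. The study used the Weir and Cockerham estimator via StAMPP; purely as an intuition-building sketch (our function name, and a different, simpler estimator than the one in the paper), here is Hudson's single-locus estimator with a small-sample correction:

```python
def hudson_fst(p1, p2, n1, n2):
    """Hudson-style F_ST estimator for one biallelic locus, two populations.

    p1, p2 : sample allele frequencies in populations 1 and 2
    n1, n2 : number of sampled alleles in each population
    Note: this is NOT the Weir & Cockerham estimator used in the study;
    it is shown only to illustrate the quantity being estimated.
    """
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den
```

A fixed difference between the populations gives F ST = 1, while identical allele frequencies give a value near zero (slightly negative because of the sampling correction), which is why significance is assessed by permutation rather than by the point estimate alone.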
In order to test for possible differences in individual heterozygosity among population groups, we used an ad-hoc R script to calculate the proportion of heterozygous loci out of all genotyped loci for an individual, thereby accounting for differences in the total number of genotyped loci (i.e. missing data) across individuals. The boxplot was generated using the ggplot2 package in R 108 to highlight the median of each cluster. The distribution of F ST across loci was plotted as a function of heterozygosity using ggplot2. Estimates of Weir and Cockerham 104 F ST and heterozygosity were obtained using VCFtools. A test for selection was performed using pcadapt 109 , with a q-value of 0.1 as the cut-off to identify putative selective outliers, and bayescan 99 with parameters ‐n 5000 ‐thin 10 ‐nbp 20 ‐pilot 5000 ‐burn 50000 ‐pr_odds 100. To detect the presence of the possible genetic clusters identified by the pairwise F ST analysis, a Discriminant Analysis of Principal Components (DAPC) was applied, as it better detects variability and division among populations compared to other software for population structure analysis (e.g. STRUCTURE 110 ), being free of assumptions about any underlying population genetics model. For the DAPC we used the adegenet package in R. Since the results of a DAPC analysis can be highly affected by the number of discriminant functions used, we first ran the xvalDapc function to select the best number of discriminant functions to represent the data. The model was corrected for overfitting by using the a-score estimate, with 26 PCs retained, representing the optimal trade-off between power of discrimination and over-fitting. The recommended number of PCs obtained using this approach (26) was used to run a Bayesian information criterion (BIC) analysis and the k-means algorithm to identify the most likely number of clusters that could explain the variability within our dataset. 
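The individual-heterozygosity calculation described at the start of this passage (proportion of heterozygous calls among an individual's genotyped loci, with missing calls excluded from the denominator) can be sketched as follows. This mirrors the described ad-hoc R script in Python; the genotype encoding and function name are our own illustration, not the authors' code:

```python
def individual_heterozygosity(genotypes):
    """Proportion of heterozygous calls among non-missing diploid genotypes.

    genotypes : iterable of allele pairs such as (0, 1); None marks a
    missing call. Excluding missing loci from the denominator keeps
    individuals with different call rates comparable, as described in
    the text.
    """
    typed = [g for g in genotypes if g is not None]
    if not typed:
        return float("nan")
    het = sum(1 for a, b in typed if a != b)
    return het / len(typed)
```

For example, an individual genotyped at four loci with two heterozygous calls scores 0.5 regardless of how many additional loci failed to genotype.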
For comparison, the snmf function implemented in the LEA package 111 was also tested, since it represents a different, model-based approach for cluster identification in large datasets. The value of K that best explains the data was selected using the cross-entropy criterion, with 100 repetitions per K value. A Fisher's exact test was applied to pairwise comparisons between different temporal collections to determine whether any possible change in relative proportion among groups was statistically significant. Estimates of relatedness were performed using VCFtools to exclude kin sampling that could skew the estimates of genetic diversity. Finally, site frequency spectra per temporal and spatial collection were generated in ANGSD 112 to further investigate whether our results could be affected by technical biases (e.g. excess of singletons). Due to the lack of knowledge on the ancestral state, the folded version of the “realSFS” function was used to polarize the allele frequencies on the reference used to produce the vcf file (-anc -fold 1). In addition, Pi and Watterson's theta were estimated for each temporal and spatial group using the “sfsr” function. | Jaws was the only word needed to give the iconic 1970s thriller about a great white with a preference for humans its eerie title. Though a strong and important player at the top of the food chain, sharks face a range of enemies: overfishing, habitat loss, pollution, climate change and human fear resulting in the use of shark control programs in some locations. The fear of and fascination with sharks have made people collect shark jaws for decades. These collections of shark jaws from museums, national fishery institutes and personal collections, together with modern samples from fishery institutes, represent a great opportunity for scientists. 
Using genomic data retrieved from historical tiger shark jaws, an international group of scientists, including Professor Einar Eg from the Technical University of Denmark, has found evidence of the disappearance of a local southeastern Australian population of tiger sharks. The disappearance is associated with a documented local decline in abundance of tiger sharks, likely caused by the ongoing shark control program. An international study that highlights changes in the genetic composition of tiger sharks (Galeocerdo cuvier) and the potential loss of a south-eastern Australia population has just been published in the journal Scientific Reports. "Our study shows that tiger sharks can have local and genetically isolated populations at a restricted geographical scale—such as the south-eastern Australian coast—and that these local populations are vulnerable to direct exploitation and shark control programs," says Eg. Top predator controls the ecosystem balance The study shows that there are still tiger sharks in the area. However, these individuals belong to an apparently more widespread population found across the east/north coast of Australia. "When we, through genetic analysis, better understand the distribution and migration of shark populations and their responses to human activities over historical time, we are better able to design proper management plans and actions at the appropriate geographical scale. Not only for the benefit of sharks, but for marine ecosystems as a whole," says Eg. "Sharks are top predators. They control the abundance of other species below them in the food chain and remove sick fish, ensuring species diversity. That is, they are important for maintaining ecosystem balance. They are generally long-lived and slow reproducers, so a healthy shark fauna signals a healthy ocean and ecosystem." Genetic diversity is the fuel that drives future evolution Before the new study, it was believed that tiger sharks did not display local population structure. 
Thus, genetic differences among tiger shark populations were only found at a basin-wide scale, such as between tiger sharks in the Pacific and Atlantic oceans. Accordingly, tiger sharks were expected to display low vulnerability towards local depletion. Therefore, management of the species at a large geographical scale was in focus. "From our samples alone, it appears that the historical local population has been extirpated or significantly reduced. This means that management of the species also has to focus on regional processes and exploitation patterns in order to protect local populations and the biodiversity of the species as a whole," says Eg. "Genetic diversity within a species is the fuel that drives future evolution and adaptation to the environment, e.g. climate change. Without historical genetic/genomic data, there is no way of assessing the loss of genetic diversity within a species." Fear and facts—are sharks moving north? With regard to the shark control programs having an impact on shark numbers, the obvious question arises: How afraid should one actually be to go swimming in Australia or South Africa? "In 2021, there were 73 cases of unprovoked shark bites worldwide, with a total of 11 fatalities. Most attacks were related to surfing and board sports. In Australia, there were three fatalities, and one in SA. So, the chance of being attacked and killed by a shark is almost non-existent. One should definitely be more afraid of texting while driving," says Eg. As climate change causes sea temperatures to rise, some researchers say that we may be looking into a future with large sharks entering Danish/European waters. However, Eg stresses that though changed temperature conditions could allow for more large sharks occurring in Danish/European waters, many other factors determine the distribution of a species. 
"The Mediterranean, for instance, is very suitable for large sharks, but we do not see large assemblages of white, tiger, mako sharks there. If they come, it is highly unlikely that this would result in any bather-shark conflicts. As an example, there were no reported shark bites in Europe for 2021," says Eg. A future for sharks On a global scale, the tiger shark is near threatened. According to Professor Eg, that covers a significant species depletion in some areas, while they're doing OK in other regions of the world: "We need to shift tiger shark management conceptually from an exclusive species view to also include the local population aspect. For example, saving global populations has to go through protection and proper management of local populations," says Eg. "Now, by having our temporal genetic data, we can study the genetic impact of anthropogenic pressure on marine species, enabling us to improve management in order to secure biodiversity." So, how can genetic research continue and help improve shark control and hunting in favor of sharks? "Genetic research can help to elucidate the proper biological units (genetic populations), which should be the target for fisheries management, conservation and biodiversity protection," says Eg. "Studies like ours can illustrate the likely consequences of local over-exploitation in relation to shark control and make us realize what we can lose by not paying attention to the distribution of genetic variation within a species." | 10.1038/s41598-022-10529-w 
Medicine | New pathway reveals how immune system is regulated, gives hope for chronic diseases | Homeostatic regulation of T cell trafficking by a B cell-derived peptide is impaired in autoimmune and chronic inflammatory disease, Nature Medicine, DOI: 10.1038/nm.3842 Journal information: Nature Medicine | http://dx.doi.org/10.1038/nm.3842 | https://medicalxpress.com/news/2015-04-pathway-reveals-immune-chronic-diseases.html | Abstract During an inflammatory response, lymphocyte recruitment into tissue must be tightly controlled because dysregulated trafficking contributes to the pathogenesis of chronic disease. Here we show that during inflammation and in response to adiponectin, B cells tonically inhibit T cell trafficking by secreting a peptide (PEPITEM) proteolytically derived from 14.3.3 zeta delta (14.3.3.ζδ) protein. PEPITEM binds cadherin-15 on endothelial cells, promoting synthesis and release of sphingosine-1 phosphate, which inhibits trafficking of T cells without affecting recruitment of other leukocytes. Expression of adiponectin receptors on B cells and adiponectin-induced PEPITEM secretion wanes with age, implying immune senescence of the pathway. Additionally, these changes are evident in individuals with type 1 diabetes or rheumatoid arthritis, and circulating PEPITEM in patient serum is reduced compared to that of healthy age-matched donors. In both diseases, tonic inhibition of T cell trafficking across inflamed endothelium is lost. Control of patient T cell trafficking is re-established by treatment with exogenous PEPITEM. Moreover, in animal models of peritonitis, hepatic ischemia-reperfusion injury, Salmonella infection, uveitis and Sjögren's syndrome, PEPITEM reduced T cell recruitment into inflamed tissues. Main In vertebrates, a lymphocyte (T cell and B cell)-based adaptive immune system has evolved to augment innate immunity. 
Adaptive responses require lymphocyte trafficking between the bone marrow, lymphoid organs and peripheral tissues using blood as a vehicle for dispersal 1 . Our knowledge of the trafficking process is still incomplete. However, unregulated T cell recruitment during inflammation is pathogenic and contributes to chronic disease 2 , 3 . Here we reveal the function of a homeostatic pathway, which imposes a tonic inhibition on T cell trafficking during inflammation. Identification of this pathway arose through studies on the circulating adipokine adiponectin. Adiponectin affects both metabolic and immune pathways 4 , 5 , 6 , 7 , including the recruitment of leukocytes during an inflammatory response 6 , and plasma concentrations of adiponectin are low in a number of chronic diseases, including diabetes 4 . We tested the hypothesis that adiponectin might regulate lymphocyte trafficking and that changes in adiponectin function might contribute to pathogenic lymphocyte recruitment in chronic inflammatory and autoimmune diseases. We started by observing lymphocyte trafficking in vitro across isolated human endothelial cells, which are the gatekeepers to the tissues for circulating leukocytes. To enter inflamed tissue, T cells migrate through endothelial cells lining the post-capillary venules 8 , 9 , and this has been modeled both in vitro and in vivo 10 , 11 , 12 , 13 , 14 , 15 . Thus, memory T cells moving rapidly in the flowing blood are preferentially recruited by endothelial cells activated by cytokines (for example, tumor necrosis factor α (TNF-α) and/or interferon γ (IFN-γ)). Tethering from flow and rolling adhesion are supported by E-selectin and vascular cell adhesion molecule 1 (VCAM-1) 16 , whereas integrin-mediated stable adhesion and migration are supported by sequential signals from chemokines and prostaglandin D 2 (PGD 2 ; refs. 17 , 18 , 19 , 20 , 21 , 22 , 23 ). 
Here we show that in the presence of adiponectin, B cells recruited to the endothelial cell surface during inflammation reduce the efficiency of memory T cell migration by imposing a tonic inhibition on this process. Thus, we believe that the adaptive immune system has evolved a robust strategy for regulating inappropriate or excessive activity, thereby limiting the possibility of establishing a chronic inflammatory response that might contribute to disease. Results Adiponectin regulates T cell migration In vitro, adiponectin dose-dependently inhibited the TNF-α– and IFN-γ–induced trans-endothelial migration of human peripheral blood lymphocytes (PBLs) with a half-maximal effective concentration (EC 50 ) of 2.6 nM (0.94 μg/ml) ( Fig. 1a and Supplementary Fig. 1a ), with the most marked effects seen at physiological circulating levels observed in healthy humans (5−15 μg/ml). Although migration was reduced compared to the untreated control so that more cells were firmly adherent to the apical surface of the endothelium ( Supplementary Fig. 1b ), the number of lymphocytes recruited was unaffected by adiponectin ( Supplementary Fig. 1c ). The effects of adiponectin on PBL migration were seen in both a static system ( Fig. 1a ) and under conditions of flow ( Fig. 1b ), and they were evident on human umbilical vein endothelial cells (HUVECs) and human dermal microvascular endothelial cells (HDMECs) ( Fig. 1c ). The majority of transmigrating PBLs were CD3 + CD45RO + memory T cells, as expected for this model (ref. 17 and data not shown). Adiponectin did not alter the expression and/or function of lymphocyte integrins (α 4 β 1 and α L β 2 ), the chemokine receptor CXCR3, or the PGD 2 receptor DP-2 on PBLs ( Supplementary Fig. 1d ). Moreover, chemotactic responses to CXCL12, CXCL10, or PGD 2 were unaltered by adiponectin ( Supplementary Fig. 1e ). 
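Two small consistency checks on the numbers quoted above can be sketched in code. A Hill-type dose-response gives half-maximal effect at the EC 50 by construction, and the paired figures (EC 50 = 2.6 nM, i.e. 0.94 μg/ml) imply an effective molar mass of roughly 360 kDa, consistent with high-molecular-weight adiponectin multimers. The Hill model and the 361 kDa value below are our own inference from those two numbers, not quantities taken from the paper:

```python
def hill_effect(conc, ec50, hill=1.0):
    """Fractional effect under a simple Hill model: c^h / (c^h + EC50^h).

    Illustrative only; the paper reports an EC50 but not its fitting model.
    """
    return conc ** hill / (conc ** hill + ec50 ** hill)

def nanomolar_to_ug_per_ml(conc_nm, molar_mass_g_per_mol):
    """Convert nM to ug/ml: (nM * 1e-9 mol/L) * (g/mol) = g/L = 1000 ug/ml."""
    return conc_nm * molar_mass_g_per_mol * 1e-6

# Assumed effective molar mass (our inference from 2.6 nM ~ 0.94 ug/ml):
ADIPONECTIN_MULTIMER_DA = 3.61e5
```

At the reported EC 50 the modeled effect is exactly 50%, and 2.6 nM converts to about 0.94 μg/ml under the assumed multimer mass, matching the figures in the text.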
Fewer than 5% of T cells (CD3 + cells), including memory and naïve subsets, expressed adiponectin receptors (adipoR1 and adipoR2) ( Fig. 1d–f ). However, circulating B cells (CD19 + cells) expressed both receptors abundantly ( Fig. 1d–f ). We also found that endothelial cells expressed both adiponectin receptors ( Supplementary Fig. 2 ). However, adiponectin did not directly target endothelial cells in our system, as treated PBLs are washed to remove any adiponectin before their addition to the endothelial cells. To ensure that any residual carryover of this agent did not influence lymphocyte recruitment, we verified that adiponectin did not modulate the gene expression of adhesion molecules and chemokines in TNF-α– and IFN-γ–stimulated endothelial cells ( Supplementary Table 1 ). As T cells lack adiponectin receptors but show altered patterns of migration in response to adiponectin, we postulated that another lymphocyte population mediated the inhibition of T cell trafficking. Upon depleting B cells from the PBL mixture, T cells were released from the inhibitory effects of adiponectin ( Fig. 1g ). Adding back purified B cells to isolated T cells could reconstitute the adiponectin-dependent inhibition of T cell migration, and using supernatants from adiponectin-stimulated B cells was as effective as adding B cells themselves ( Fig. 1g ). The ability of B cell supernatants to impair T cell migration was lost when B cells were activated with adiponectin in the presence of an inhibitor of protein secretion, brefeldin-A ( Fig. 1g ). These experiments suggest that B cells release a soluble factor in response to stimulation by adiponectin that regulates migration of T cells. Figure 1: T cell migration across endothelial cells is regulated by a soluble agent released from B cells stimulated with adiponectin. 
( a–c ) The effects of adiponectin (0−15 μg/ml) on T cell migration across TNF-α– and IFN-γ–treated HUVECs under static ( n = 3−7) ( a ) or flow conditions ( n = 6) ( b ) or on HDMECs under static conditions ( n = 3−4) ( c ). ( d ) Representative plots of adiponectin receptor 1 and adiponectin receptor 2 (adipoR1 and adipoR2) expression on T cells (CD3 + ) and B cells (CD19 + ), as measured by flow cytometry. ( e , f ) Percentage of adiponectin receptor–positive cells on lymphocyte subpopulations (measured by flow cytometry), n = 6−13. ( g ) Transmigration of T cells after B cell depletion, reconstitution of depleted preparations with B cells or B cell supernatant, or inhibition of B cell secretion by brefeldin-A (10 μg/ml), n = 3−7. Data are from at least three independent experiments using three donors for both PBLs and HUVECs/HDMECs and are shown as means ± s.e.m. NS, not significant, * P ≤ 0.05, ** P ≤ 0.01, *** P ≤ 0.001 by linear regression ( a ), paired t -test compared to untreated (no adiponectin) control ( b–c , g ), or Dunnett post-test compared to B cells ( e , f ). Adiponectin induces PEPITEM secretion by B cells We used an unbiased proteomic screen to identify the agent(s) secreted by purified B cells after stimulation with or without adiponectin. A comparative analysis identified a 14-amino acid peptide, SVTEQGAELSNEER, specific to the supernatants of adiponectin-stimulated B cells ( Fig. 2a ). By comparing this to an in silico library of published and predicted sequences, we found that the peptide demonstrated 100% sequence homology to a single human protein, and that it represents amino acids 28−41 of the 14.3.3.ζδ protein, which is a 245-amino acid product of the YWHAZ gene ( Fig. 2b ). Figure 2: PEPITEM inhibits T cell transmigration by binding to cadherin-15 on endothelial cells. ( a ) Tandem mass spectrometric analysis of ion m / z = 774.8 with the predicted sequence for PEPITEM, data are representative of n = 3.
( b ) Amino acid sequence of 14.3.3.ζδ with PEPITEM highlighted in blue. ( c ) The effect of PEPITEM (0−10 ng/ml), scrambled control peptide or irrelevant peptides (proinsulin chain A and tetanus toxoid peptide at 10 ng/ml) on PBL transmigration, n = 3−4. ( d ) The effect of PEPITEM on the transmigration of pre-sorted leukocyte populations. Data are normalized to scrambled control peptide, n = 3−6. NA, no adhesion. ( e , f ) The effects of control or cadherin-15 ( CDH15 )-specific siRNA on the mRNA expression of CDH15 in HDMECs ( n = 6) ( e ) and on T cell transmigration across HDMECs ( n = 6) ( f ). Data are normalized to unstimulated ( e ) or untreated and no-siRNA control ( f ). ( g , h ) The effects of control or CDH15 -specific siRNA on the protein expression of CDH15 determined by western blotting ( g ) or confocal microscopy in HDMECs and skeletal muscle cells (SkMCs) ( h ). Scale bars, 30 μm. Data are mean ± s.e.m. * P ≤ 0.05, ** P ≤ 0.01, *** P ≤ 0.001 compared to no-adiponectin/PEPITEM control ( c ), scrambled control ( d ) and unstimulated control or no siRNA ( e , f ) by Dunnett post-test. NS, not significant. Proteolytic release of the peptide from the parent protein was confirmed when a tryptic digestion of recombinant 14.3.3.ζδ generated the same 14-amino acid product ( Supplementary Table 2 ). A time course of the 14.3.3.ζδ-derived peptide secretion from adiponectin-stimulated B cells showed rapid and sustained release ( Supplementary Fig. 3a ). A synthetic peptide exhibited an identical mass:charge ( m / z ) ratio to that of the native peptide ( m / z = 774.88), suggesting that the B cell–derived product was not subject to post-translational modification before secretion ( Supplementary Fig. 3b, c ). Synthetic peptide showed a dose-dependent inhibitory effect on the trafficking of PBLs across TNF-α– and IFN-γ–stimulated endothelial cells in vitro ( Fig. 2c ), with a 19 pM EC 50 (28 pg/ml) ( Supplementary Fig. 4a ).
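The identification rests on matching an observed ion to the predicted peptide. Using standard monoisotopic residue masses (taken from standard tables, not from the paper), the doubly protonated SVTEQGAELSNEER ion can be computed directly and agrees with the reported m/z of ~774.8:

```python
# Monoisotopic residue masses (Da) for the amino acids present in PEPITEM.
RESIDUE_MASS = {'S': 87.03203, 'V': 99.06841, 'T': 101.04768, 'E': 129.04259,
                'Q': 128.05858, 'G': 57.02146, 'A': 71.03711, 'L': 113.08406,
                'N': 114.04293, 'R': 156.10111}
WATER, PROTON = 18.010565, 1.007276

def peptide_mz(seq: str, charge: int) -> float:
    """Monoisotopic m/z of a peptide ion [M + zH]z+."""
    neutral_mass = sum(RESIDUE_MASS[aa] for aa in seq) + WATER
    return (neutral_mass + charge * PROTON) / charge

print(round(peptide_mz("SVTEQGAELSNEER", 2), 2))  # ~774.86, close to the observed 774.88
# The neutral monoisotopic mass (~1547.7 Da) also makes the 19 pM EC50
# correspond to ~29 pg/ml, consistent with the quoted 28 pg/ml.
```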
A similar response was observed in the absence of bovine serum albumin, which removed any source of arachidonic acid that might be used to generate bioactive eicosanoids such as PGD 2, which are known to regulate T cell trafficking 23 ( Supplementary Fig. 4b ). Similarly to adiponectin, the peptide had no effect on the number of lymphocytes adherent to the endothelial cells ( Supplementary Fig. 4c ), but it increased the number of surface adherent cells, as migration through the monolayer was inhibited ( Supplementary Fig. 4d ). Neither a scrambled peptide, containing the same amino acids in random order, nor other peptides of unrelated sequence but with known biological activity (proinsulin chain A peptide and tetanus toxoid peptide) had an effect on T cell migration ( Fig. 2c ). As the peptide inhibited T cell migration, we named it PEPtide Inhibitor of Trans-Endothelial Migration or PEPITEM. PEPITEM specifically inhibited the migration of CD4 + and CD8 + memory T cells, without affecting either the transmigration of neutrophils or monocytes ( Fig. 2d ) or the total adhesion of any leukocyte subset ( Supplementary Tables 3 and 4 for adiponectin). In addition, PEPITEM inhibited T cell transmigration on both HDMECs and HUVECs ( Supplementary Fig. 4e ). PEPITEM was ineffective if PBLs were pre-treated, and it only had inhibitory effects when pre-incubated with endothelial cells, implying that PEPITEM operated by stimulating a receptor on these cells ( Supplementary Fig. 4f ). When we fractionated B cells into their subsets, we found that plasma cells (CD38 +++ IgD − IgM − CD27 − ) were able to secrete higher quantities of PEPITEM compared to naïve (CD38 + IgD + IgM + ) or memory (CD38 − IgD − IgM − CD27 − ) B cells after incubation with adiponectin ( Supplementary Table 5 ). There was also an enrichment of plasma cells on cytokine-stimulated endothelial cells ( Supplementary Fig. 4g) . 
These data imply a dominant role of circulating plasma cells in the regulation of memory T cell trafficking. The endothelial cell receptor for PEPITEM is cadherin-15 To identify a PEPITEM receptor on endothelial cells, we used PEPITEM with a biotin conjugate on its N terminus as 'bait' to 'fish' for binding partners on the endothelial cell surface. This peptide showed full efficacy in a functional migration assay ( Supplementary Fig. 5a ). PEPITEM-bound molecules were co-immobilized on neutravidin columns after endothelial cell lysis. Proteins eluted from the neutravidin columns were subjected to analysis by mass spectrometry for identification. This strategy yielded a candidate with a strong statistical score (Mascot score >30), cadherin-15 (CDH15; M-cadherin). CDH15 was efficiently knocked down in endothelial cells by specific siRNA oligomers, but not by control sequences ( Fig. 2e and Supplementary Fig. 5b ). Silencing of CDH15 did not alter the baseline efficiency of T cell migration across the endothelium ( Supplementary Fig. 5c ). However, it did release T cells from the inhibitory effects of PEPITEM ( Fig. 2f ). Binding data indicates that PEPITEM is able to bind recombinant CDH15 in a Biacore assay ( K d = 108.9 μM) ( Supplementary Fig. 5d ). CDH15 has not previously been described in endothelial cells, and here we show that mRNA for CDH15 is endogenously expressed, as well as being upregulated when endothelial cells are stimulated with inflammatory cytokines ( Fig. 2e ). In addition, we detected CDH15 expression in HDMECs at the protein level using western blotting, either after siRNA knockdown or upregulation by cytokines ( Fig. 2g ). Confocal microscopy also showed expression in HDMECs and skeletal muscle cells (SkMCs, used as a positive control) using fluorescently labeled reagents, and staining was reduced after siRNA-mediated silencing of CDH15 , thereby demonstrating the specificity of the antibody ( Fig. 2g,h and Supplementary Fig. 5b ). 
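A K<sub>d</sub> of 108.9 μM implies, for a simple 1:1 interaction, a hyperbolic relationship between PEPITEM concentration and the fraction of CDH15 occupied. A minimal sketch, assuming the standard single-site binding isotherm θ = [L] / (K<sub>d</sub> + [L]) applies to this interaction:

```python
KD_PEPITEM_CDH15 = 108.9e-6  # reported Biacore Kd, mol/L

def fraction_bound(ligand_conc_m: float, kd_m: float) -> float:
    """Equilibrium fractional occupancy for a simple 1:1 binding isotherm."""
    return ligand_conc_m / (kd_m + ligand_conc_m)

# At a ligand concentration equal to Kd, occupancy is 50% by definition.
print(fraction_bound(KD_PEPITEM_CDH15, KD_PEPITEM_CDH15))
```

Note that this is only the equilibrium picture of the surface-binding measurement; it says nothing about the downstream signalling that follows receptor engagement.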
Endothelial cell sphingosine-1-phosphate impairs migration Sphingosine-1-phosphate (S1P) is a biologically active sphingolipid generated in the cytosol from sphingosine, which is phosphorylated by the sphingosine kinases (SPHKs) SPHK1 or SPHK2 (ref. 19 ). The endothelial cell transporter spinster homolog 2 (SPNS2) is also necessary for translocation of S1P into extracellular fluid 24 , 25 , 26 . S1P has an important regulatory role in the movement of T cells from inflamed tissue to afferent lymphatics 27 and in T cell egress from secondary lymphoid organs 28 , 29 , 30 , 31 . Here we found that the S1P receptor antagonist W146 (trifluoroacetate salt) released T cells from the inhibitory effects of both adiponectin and PEPITEM ( Fig. 3a,b ). Moreover, the effects of adiponectin and PEPITEM on T cell migration could be mimicked dose-dependently when exogenous S1P was added to purified T cells ( Fig. 3c ). We confirmed that S1P was of endothelial origin by pre-treating endothelial cells with the SPHK1 inhibitor (2,2-dimethyl-4 S -(1-oxo-2-hexadecyn-1-yl)-1,1-dimethylester-3-oxazolidinecarboxylic acid) or the SPHK1 and SPHK2 (SPHK1/2) inhibitor ( N , N -dimethylsphingosine), which in both cases abolished the ability of PEPITEM to inhibit T cell migration ( Fig. 3d,e ). In agreement with the patterns of activity of these inhibitors, we found that SPHK1, but not SPHK2, was highly expressed in endothelial cells but not in B cells ( Supplementary Fig. 6a ), and that mRNA levels for SPHK1 , but not SPHK2 , were increased upon stimulation of endothelial cells with TNF-α and IFN-γ compared to controls ( Fig. 3f ). Knockdown of endothelial SPNS2 mRNA also released T cells from the inhibitory effects of PEPITEM ( Fig. 3g and Supplementary Fig. 6b–d ). The addition of S1P to endothelial cells at a concentration equivalent to the total circulating concentration in plasma ( ∼ 0.2 μM) (ref. 32 ) inhibited T cell transmigration efficiently ( Fig. 3c ).
However, in the circulation S1P is bound to plasma proteins such as albumin and lipoproteins 32 , 33 , which limit its availability. Published data estimates that biologically active S1P circulates at approximately 5 nM (refs. 33 , 34 ). In the presence of this concentration of S1P, the effects of PEPITEM on migration were readily observable ( Supplementary Fig. 6e–g ). Moreover, PEPITEM inhibition of T cell trafficking in the presence of 5 nM S1P could be reversed by treating endothelial cells with SPHK inhibitors or by SPNS2 knockdown in these cells, showing that additional and functional S1P was released in response to PEPITEM stimulation ( Supplementary Fig. 6e–g ). Figure 3: PEPITEM induces the S1P release from endothelial cells, which inhibits T cell migration. ( a , b ) The effects of an S1PR antagonist (10 μM) on T cell migration in the presence or absence of adiponectin ( n = 3−5) ( a ) or PEPITEM ( n = 3−7) ( b ). ( c ) The effects on T cell transmigration of adding S1P to B cell–depleted PBLs, n = 3−6. ( d , e ) The effects of SPHK1-specific inhibitor (5 μM) ( d ) or SPHK1/2 inhibitor (5 μM) ( e ) on T cell transmigration in the presence of PEPITEM, n = 3. ( f ) The expression of SPHK1 and SPHK2 mRNA in endothelial cells, n = 7−8. ( g ) The effect of SPNS2 -specific siRNA on T cell transmigration in the presence of PEPITEM, n = 4. ( h , i ) The expression of S1PR1 on memory T cells (CD3 + CD45RO + ) on stimulated endothelial cells ( n = 3) ( h ) and on plated ICAM-1 stimulated with CXCL10 ( n = 6) ( i ). ( j ) The effects of S1P on the expression of the LFA-1 activation epitope (KIM127) on ICAM-1 adherent memory T cells (CD3 + CD45RO + ) stimulated with CXCL10, n = 4. Data are means ± s.e.m. and normalized to control ( g – j ). 
* P ≤ 0.05, ** P ≤ 0.01, *** P ≤ 0.001 compared to untreated control by analysis of variance (ANOVA) ( a–g ) and Dunnett ( a , b , d , e , g ) or B cell depletion no-adiponectin control ( c ) or to unstimulated (0) control ( f ) or Bonferroni post-test ( j ) or paired t -test on raw data ( h ) or Wilcoxon signed-rank test ( i ). The process of T cell egress from lymph nodes into blood results in rapid S1P-mediated internalization of S1P receptors (S1PR1 and S1PR4) from the surface of circulating T cells 30 , 31 . Thus, for S1P to be an effective mediator during T cell migration across endothelial cells, S1P receptors on T cells require upregulation. We observed a robust increase in the expression of S1PR1 on the cell surface of memory T cells that had been recruited by cytokine-stimulated endothelial cells ( Fig. 3h ). Moreover, we observed rapid upregulation of surface S1PR1 when memory T cells adherent to immobilized, recombinant ICAM-1 were stimulated by CXCL10 ( Fig. 3i and Supplementary Fig. 7a ). This mode of T cell activation could override the desensitization pathways for S1PR1, as high expression was maintained even in the presence of 10 μM exogenous S1P ( Supplementary Fig. 7a ). Notably, this was not due to the recruitment of a subset of cells with high S1PR1 expression, as these were not evident in the isolated T cells before addition to the ICAM-1 ( Supplementary Fig. 7a ). Thus, during the process of recruitment, chemokines presented by endothelial cells rapidly induce sustained expression of S1PR1 on memory T cells. S1P downregulates the affinity of the T cell integrin adhesion receptor α L β 2 (LFA-1), reducing binding to ICAM-1 after stimulation with the chemokine CXCL10 ( Fig. 3j and Supplementary Fig. 7b ). This change in integrin function did not alter the levels of T cell recruitment, which is dependent on E-selectin and VLA-4, in this model of in vitro transmigration (ref. 17 and data not shown).
However, in the presence of adiponectin (and therefore S1P) there was a modest increase in the number of cells rolling on the endothelial cell surface compared to the untreated control ( Supplementary Fig. 7c ). Taken together, these data imply that S1P-mediated changes in the function of LFA-1 are able to interrupt the process of LFA-1–mediated trans-endothelial migration. PEPITEM is functional in vivo As PEPITEM is derived from B cells, we conducted a series of studies in the BALB/c B cell–deficient mouse 35 . In a mouse model of zymosan-induced peritonitis, more T cells trafficked into the peritoneal cavity in B cell–deficient mice compared to controls ( Fig. 4a ). Injection of these mice with PEPITEM during challenge with zymosan inhibited trafficking of T cells into the peritoneum ( Fig. 4a ). Additionally, we tested the efficacy of PEPITEM in a model of systemic bacteremia upon Salmonella typhimurium infection in the C57BL/6 B cell–deficient mouse. Salmonella colonizes the spleen and liver during primary infection 36 , where it promotes an inflammatory infiltrate into infectious foci. PEPITEM reduced the number of T cells resident in these infectious foci in Salmonella -infected B cell–deficient mice compared to controls ( Fig. 4b ), with a trend toward reduced T cells in PEPITEM-treated wild-type (WT) mice ( Supplementary Fig. 8a,b ). PEPITEM also abolished T cell recruitment in the hepatic sinusoids, with a concomitant increase in the number of free-flowing T cells, after acute liver ischemia and reperfusion injury in C57BL/6 WT mice, as assessed by real-time intravital microscopy ( Fig. 4c,d and Supplementary Fig. 8c ). This observation is consistent with integrin-mediated processes that support leukocyte tethering in the low-shear environment of the hepatic sinusoids 37 , 38 . Figure 4: PEPITEM inhibits T cell migration in vivo.
( a ) T cells recruited into the peritoneum of BALB/c B cell–deficient mice after induction of peritonitis using zymosan and treatment with PEPITEM or scrambled peptide. All data were normalized to the number of T cells in BALB/c WT mice treated with zymosan, n = 3−8. ( b ) Mean CD3 + T cells per infectious foci in the liver of Salmonella -infected C57BL/6 B cell–deficient mice treated with PEPITEM or PBS (control), n ≥ 4. ( c ) Adherent CD3 + T cells per mm 2 in the liver sinusoid following reperfusion in C57BL/6 WT mice treated with PEPITEM or scrambled peptide by intravital microscopy. ( d ) Representative pictures of carboxyfluorescein diacetate succinimidyl ester (CFDA-SE)-labeled T cells in the sinusoids in scrambled (top) and PEPITEM-treated mice (bottom), n = 4 per group. Scale bars, 100 μm. ( e ) CD3 + T cells in the ocular infiltrate of C57BL/6 WT mice treated with PBS (control) or PEPITEM after induction of uveitis by intravitreal injection of LPS, n = 9−10. ( f ) CD3 + T cells in the salivary glands of C57BL/6 WT mice treated with scrambled peptide (control) or PEPITEM after induction of Sjögren's syndrome by cannulation of salivary glands and injection of luciferase-encoding replication-defective adenovirus (AdV5), n = 7−8. ( g ) Representative pictures showing CD3 + T cells and CD19 + B cells in the salivary glands after scrambled peptide (left) or PEPITEM (right) treatment. Scale bars, 100 μm. Data are means ± s.e.m. * P ≤ 0.05, ** P ≤ 0.01 compared to zymosan-treated WT mice by Dunnett post-test ( a ) or compared to scrambled ( a , f ) or PBS-treated B cell–deficient mice by unpaired t -test ( b , e ) or two-way ANOVA ( c ). In a model of endotoxin-induced (lipopolysaccharide; LPS) autoimmune uveitis (ocular inflammation) in C57BL/6 WT mice, administration of PEPITEM with LPS into the eye reduced the number of T cells in the ocular infiltrate ( Fig. 4e ).
PEPITEM also reduced T cell trafficking into the salivary glands of C57BL/6 WT mice challenged with a virally induced model of tertiary lymphoid organ formation, which mimics changes observed in the autoimmune rheumatic disease Sjögren's syndrome 39 ( Fig. 4f,g ). In both uveitis and Sjögren's syndrome models, as well as in the zymosan-induced peritonitis model (data not shown), B cells were recruited to the sites of inflammation and the number of B cells recruited was not affected by PEPITEM ( Supplementary Fig. 8d,e ). In addition, in the in vivo model of acute, resolving inflammation (peritonitis), the recruitment patterns of F4/80 macrophages and CD11c + cells were not affected at the time point used to assess T cell trafficking ( Supplementary Fig. 8f,g ). The PEPITEM pathway is impaired in disease and the elderly Efficacy in in vivo models of inflammation prompted us to investigate whether the PEPITEM pathway was compromised in individuals with the T cell–driven autoimmune disease type 1 diabetes (T1D) or in individuals with rheumatoid arthritis. First, we measured the expression of adipoR1 and adipoR2 on the circulating B cells of individuals with T1D or rheumatoid arthritis, comparing these to healthy age- and gender-matched control donors (cohort statistics are shown in Supplementary Tables 6 , 7 , 8 ). The expression of both adipoR1 and adipoR2 was reduced on B cells from individuals with T1D ( Fig. 5a for the percentage of positive B cells and Supplementary Fig. 9a for representative intensity histograms) and with rheumatoid arthritis ( Fig. 5b for the percentage of positive B cells and Supplementary Fig. 9a for representative intensity histograms) compared to healthy controls. We observed no difference in the expression of CD19 on B cells and in B cell number between healthy controls and individuals with T1D or rheumatoid arthritis, suggesting that these differences reflect changes in adipoR1 and adipoR2 expression ( Supplementary Fig. 9b,c ). 
There was a positive correlation between the expression of adipoR1 and adipoR2 on B cells with the quantity of PEPITEM released by B cells upon stimulation with adiponectin ( Fig. 5c,d ). This was the case for both individuals with T1D and healthy controls (adipoR2 only), although in individuals with T1D only, low levels of PEPITEM could be detected, which reflected the paucity of expression of adiponectin receptors. In addition, we were able to detect low concentrations of PEPITEM in serum from healthy controls, and we found that this was reduced in individuals with T1D ( Fig. 5e ). Figure 5: The PEPITEM pathway is compromised in T1D and rheumatoid arthritis. ( a , b ) The expression of (left) adipoR1 or (right) adipoR2 on B cells in cohorts of healthy controls ( a , n = 19; b , n = 10) and in individuals with T1D ( n = 29) ( a ) or rheumatoid arthritis (RA) ( n = 11−12) ( b ). ( c , d ) Correlation between expression of adipoR1 ( c ) and adipoR2 ( d ) on B cells and the quantity of PEPITEM released by B cells after adiponectin stimulation, as measured by mass spectrometry in healthy controls and individuals with T1D, n = 10 in each group. ( e ) Concentrations of PEPITEM in serum from healthy controls and individuals with T1D, n = 7. ( f , g ) T cell transmigration in individuals with T1D ( n = 9−22) ( f ) or rheumatoid arthritis (RA) ( n = 7−8) ( g ) after treatment with adiponectin (15 μg/ml) or PEPITEM (10 ng/ml) or scrambled peptide (10 ng/ml). ( h , i ) Correlation between expression of adipoR1 ( h ) and adipoR2 ( i ) on B cells and age in healthy controls, n = 40−41. NS, not significant. Data are means ± s.e.m. * P ≤ 0.05, ** P ≤ 0.01, *** P ≤ 0.001 compared to healthy control by Mann-Whitney test ( a ) or unpaired t -test ( b , e ) or paired t -test for healthy controls ( f , g ) and ANOVA ( f , g ) and Dunnett post-test for individuals with T1D and rheumatoid arthritis by linear regression ( c , d , h , i ). 
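The correlation analyses reported in this section (receptor expression versus PEPITEM release, and versus age) reduce to the Pearson coefficient. A self-contained sketch of that statistic, with purely illustrative data, not the study's measurements:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical example: receptor expression rising with PEPITEM release
# gives r close to +1; expression falling with age gives r close to -1.
print(pearson_r([10, 20, 30, 40], [1.1, 2.0, 2.9, 4.2]))
```

Significance testing of r (as done in the paper by linear regression) additionally requires the sample size; the coefficient alone only describes the strength and direction of the association.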
Decreased PEPITEM secretion by B cells released T cells from the inhibitory effects of adiponectin, so that there was no longer an inhibition of T cell migration in individuals with T1D ( Fig. 5f ) or those with rheumatoid arthritis ( Fig. 5g ). The effects of adiponectin could be mimicked by the addition of exogenous PEPITEM in the ex vivo migration assay, using lymphocytes from individuals with T1D or rheumatoid arthritis ( Fig. 5f,g ), meaning that the lost tonic inhibition of T cell migration in these individuals could be readily restored by appropriate PEPITEM treatment. One of the major risk factors for developing chronic inflammatory or autoimmune diseases is age. Thus, we analyzed the expression of adipoR1 and adipoR2 in healthy donors of different ages. There was a negative correlation between the expression of adipoR1 and adipoR2 and age ( Fig. 5h,i ). Discussion The processing of an immune-regulatory 14-amino acid peptide from an intracellular protein (14.3.3ζδ) with no known association to the inflammatory response has not yet been described in, nor could it have been predicted from, any of the known pathways that regulate leukocyte trafficking. In fact, the functions of the seven members of the 14.3.3 protein family are diverse. For example, they are involved in regulating the function of cytosolic proteins that support metabolic, cell cycle, apoptotic and protein trafficking pathways 40 , 41 , 42 . Their importance in such homeostatic pathways is highlighted by their association with diseases as varied as cancer, hyper-proliferative skin disorders and Alzheimer's disease, when their function is disrupted 43 , 44 . The effects of losing the function of the PEPITEM pathway that we document here now allow us to add T1D and rheumatoid arthritis to the list of diseases in which the dysregulated function of 14.3.3 proteins has a role.
Here we show that extracellular PEPITEM impairs lymphocyte trafficking by indirectly regulating the function of the β 2 -integrin, LFA-1. Interestingly, both intracellular 14.3.3β and 14.3.3ζδ have been implicated in the regulation of adhesion dependent cellular functions in other contexts. For example, morphology and adhesion in dendrites, embryonic kidney cell lines and rat fibroblasts are associated with the 14.3.3β-dependent regulation of β 1 -integrins 45 , 46 , 47 , 48 , 49 , 50 . Moreover, 14.3.3ζδ protein in platelets regulates the function of the adhesion complex GPIb/IX/V, and it is required for efficient recruitment of platelets to von Willebrand factor (VWF) 51 . It is also notable that S1P, the terminal mediator in the PEPITEM pathway, can also regulate the function of endothelial cell–borne adhesion receptors 52 , 53 (for example, VE-cadherin) that are involved in the regulated trafficking of leukocytes. Thus, although a role in lymphocyte trafficking is novel for 14.3.3 proteins, the regulation of adhesion molecules in other cells and contexts does provide a generic link between their known biological functions and the new role described here. In the lymph node, a process of reciprocal cross-talk between B cells and T cells is known to be important in establishing an antigen-specific immune response 54 . Cross-talk between B cells and T cells also has a role in the initiation of T cell–mediated autoimmune events in T1D and rheumatoid arthritis 55 , 56 . Indeed, depletion of B cells by the monoclonal antibody rituximab has beneficial effects in these diseases 57 , 58 . It might be assumed that such a B cell depletion strategy might compromise the inhibitory effects of PEPITEM on T cell trafficking by removing the tonic 'brake' on inflammation provided by this pathway (described in detail in Fig. 6 ). However, our data implies that this pathway is no longer functional in individuals with established disease. 
Thus, it is unlikely that the benefit of removing pathogenic B cells is being achieved at the expense of the protective functions of PEPITEM-secreting B cells. However, our observation that exogenous peptide can regain control of the trafficking of T cells raises the possibility that the PEPITEM pathway may present a tractable target for the development of novel therapeutic agents with which to treat chronic inflammatory and autoimmune diseases. In T1D or rheumatoid arthritis this might be achieved using the peptide itself. However, we predict that in other disease states, alternative aspects of this pathway may be compromised. For example, the expression and proteolytic processing of 14.3.3.ζδ to yield mature peptide could be altered, as could the secretory pathways required for PEPITEM release in response to adiponectin. In endothelial cells, changes in the expression of cadherin-15 or the release of S1P in response to PEPITEM signaling could lead to inappropriate T cell traffic. Finally, T cells themselves may lose the capacity to respond to S1P by failing to upregulate S1PRs in response to inflammatory chemokines, or through alterations in the intrinsic signaling pathways lying downstream of these molecules. Many of these processes have yet to be defined mechanistically, such as the identity and localization of the protease(s) that cleave PEPITEM from 14.3.3.ζδ. However, with a fuller understanding of the biology of the PEPITEM pathway it will be important to determine whether alterations in these other steps are detectable in disease. If they are, they will offer unique opportunities to develop new therapeutic agents. Figure 6: Schematic depicting B cell–mediated regulation of T cell trafficking during inflammation. (1) Endothelial cells stimulated with pro-inflammatory cytokines such as TNF-α and IFN-γ recruit flowing T cells in the blood.
(2) β 1 - and β 2 -integrins are activated by chemokine signals from CXCL9−11 presented on the endothelial surface, which interact with CXCR3 on T cells and lead to T cell arrest. (3) At the same time, chemokine stimulation induces S1PR1/4 surface expression on T cells. T cell spreading and migration are supported by PGD 2 , generated through the metabolism of arachidonic acid (AA) by cyclooxygenases (COX), which operates through the PGD 2 receptor, DP-2. (4) T cells spread and migrate across and through vascular endothelial cells. (5) Simultaneously, adherent B cells are stimulated by circulating adiponectin through the adipoR1/2 receptors, (6) resulting in the release of PEPITEM from B cells. (7) PEPITEM binds to cadherin-15 (CDH15) on endothelial cells, stimulating the release of S1P from endothelial cells. (8) S1P binds to S1P receptors (S1P1/4) on recruited T cells, (9) which regulates the activation of the β 2 -integrin LFA-1 and inhibits trans-endothelial T cell migration. Methods General. Alta Bioscience (University of Birmingham, Birmingham, UK) synthesized PEPITEM, scrambled peptide and biotinylated PEPITEM. We purchased sphingosine-1-phosphate, W146 (trifluoroacetate salt), PGD2, SPHK1 inhibitor, 2,2-dimethyl-4 S -(1-oxo-2-hexadecyn-1-yl)-1,1-dimethylester-3-oxazolidinecarboxylic acid, SPHK1 inhibitor 5c, and the SPHK1/2 inhibitor, N , N -dimethylsphingosine from Cayman Chemicals (Michigan, USA). We purchased chemokines and cytokines from Peprotech (London, UK) and R&D Systems (Oxford, UK). Isolation of leukocytes. We obtained blood samples from healthy donors with written informed consent and approval from the University of Birmingham Local Ethical Review Committee (ERN_07-058). We isolated peripheral blood mononuclear cells (PBMCs) and neutrophils from blood using a two-step density gradient of Histopaque 1077 and 1119 (Sigma-Aldrich, Poole, UK).
Lymphocytes were purified by panning PBMCs on culture plastic for 30 min at 37 °C to remove monocytes as previously described 59 . PBLs were then counted, resuspended in M199 (Life Technologies Invitrogen Compounds, Paisley, UK) containing 0.15% bovine serum albumin (BSA; Sigma-Aldrich) at 1 × 10 6 cells per ml for transmigration assays, or in PBS containing 0.5% BSA and 2 mM EDTA (Sigma-Aldrich) for cell sorting. B cells were depleted from PBLs by positive selection using anti-CD19 beads (Miltenyi Biotec, Surrey, UK). When B cells were reconstituted into PBLs or used to generate supernatants, B cells were sorted by negative selection to yield untouched cells (StemCell, Grenoble, France). Memory and naïve CD4 + and CD8 + T cells were isolated using negative selection kits (StemCell). Monocytes and their subsets were isolated by positive selection using CD14 and CD16 beads (Miltenyi Biotec). In vitro transmigration assay. Human umbilical cords were obtained from the Human Biomaterials Resource Centre (HBRC, University of Birmingham) (09/H1010/75) which holds ethical approval and collected fully consented tissue from the Birmingham Women's Hospital NHS Trust. HUVECs were isolated from umbilical cords as previously described 60 and cultured in M199 supplemented with 20% FCS (fetal calf serum), 10 ng/ml epidermal growth factor, 35 μg/ml gentamycin, 1 μg/ml hydrocortisone (all from Sigma-Aldrich), and 2.5 μg/ml amphotericin B (Life Technologies Invitrogen Compounds). Primary HUVECs were dissociated using trypsin/EDTA (Sigma-Aldrich) and seeded on 12-well tissue culture plates (Falcon; Becton Dickinson Labware), or iBidi chamber slides (iBidi, Martinsried, Germany) for adhesion assays in low-serum medium 2% (Endothelial Basal Medium, Promocell, Heidelberg, Germany). Seeding density was chosen to yield confluent monolayers within 24 h. 
TNF-α (100 U/ml; Sigma-Aldrich) and IFN-γ (10 ng/ml; PeproTech, London, UK) were added to confluent monolayers for 24 h before adhesion assay with lymphocytes. Primary HDMECs were purchased from Promocell and cultured in the manufacturer's recommended medium (endothelial cell growth medium; Promocell). HDMECs were used after four passages. Prior to adhesion assay, 1 × 10 6 PBLs or 1 × 10 5 B cells were treated with adiponectin (0.0001–15 μg/ml) at room temperature (22 °C) under agitation for 1 h and washed before use. Static or flow-based adhesion assays were performed as previously described 18 . Endothelial cells were washed twice with M199 0.15% BSA to remove the excess cytokines. PBLs were allowed to adhere to the endothelial cells for 6 min at 37 °C before non-adherent cells were removed by washing with M199 0.15% BSA. PBL adhesion and migration were assessed using a phase-contrast videomicroscope as previously described 61 . Manipulations and microscopy were carried out at 37 °C. Recordings were digitized and analyzed offline using Image-Pro Plus software (DataCell, Finchampstead, UK). The numbers of adherent cells were counted in each field, averaged and converted to cells per mm 2 and multiplied by the known surface area of the endothelial cell monolayer to calculate the total number adherent. This number was divided by the known total number of PBLs added to obtain the percentage of the PBLs that had adhered. Each lymphocyte was classified as either: 1) phase bright and adherent to the surface of the endothelial cell; or 2) phase dark and spread and migrating below the surface of the endothelial cell monolayer. Cells were counted on endothelial cells forming a confluent monolayer. Any condition in which monolayers were disrupted was excluded, as this would affect transmigration counts. The percentage of adherent lymphocytes that had transmigrated was calculated.
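The per-field counting procedure above reduces to simple arithmetic. A sketch in Python with hypothetical field counts and areas (the real field and monolayer areas depend on the microscope setup); the second helper expresses the normalization to control used in some experiments:

```python
def percent_adherent(cells_per_field, field_area_mm2, monolayer_area_mm2, pbls_added):
    """Convert per-field counts to the percentage of added PBLs that adhered.

    Mirrors the described procedure: average the field counts, convert to
    cells per mm2, scale by the monolayer area, then divide by the input count.
    """
    mean_per_mm2 = (sum(cells_per_field) / len(cells_per_field)) / field_area_mm2
    total_adherent = mean_per_mm2 * monolayer_area_mm2
    return 100.0 * total_adherent / pbls_added

def percent_of_control(treated_pct, control_pct):
    """Normalize a treated transmigration percentage to the untreated control."""
    return 100.0 * treated_pct / control_pct

# Hypothetical numbers: three fields of 0.1 mm2, a 100 mm2 monolayer, 1e6 PBLs added.
print(percent_adherent([10, 12, 14], 0.1, 100.0, 1_000_000))  # 1.2 (% adherent)
```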
In some experiments, data were normalized to the control by dividing the percentage transmigration of the treated sample by the percentage transmigration of the control and multiplying by 100. For the B cell reconstitution experiments, B cells were negatively selected using the StemSep magnetic kit and 100,000 cells were incubated with 15 μg/ml adiponectin for 1 h at room temperature. Cells were centrifuged at 1,500 r.p.m. for 7 min and 1 ml of supernatant was added to 1 × 10⁶ B cell–depleted PBLs (one B cell per ten B cell–depleted PBLs). Negatively sorted B cells (100,000) were incubated with brefeldin-A (10 μg/ml) for 4 h and adiponectin (15 μg/ml) for the last hour. Cells were then washed, the supernatant was added back to B cell–depleted PBLs and transmigration was measured. In some experiments, lymphocytes were pre-treated with 10 μM S1PR antagonist (W146) or S1P (0.0001 to 100 μM) at room temperature under agitation and washed after 30 min, so that the treatments did not modulate HUVEC function. Alternatively, HUVECs or HDMECs were pre-treated with 5 μM SPHK1 or SPHK1/2 antagonists at 37 °C and washed after 30 min. PEPITEM was added to the PBLs or different leukocyte subsets at room temperature under agitation and washed after 30 min, so that the treatments did not modulate HUVEC function, prior to incubation of the cells on endothelial cells. Firmly adhered PBLs were collected from the endothelial cell surface using three washes with cold EDTA. Transmigrated PBLs were collected by treatment of endothelial cells with Accutase (Sigma-Aldrich). Both fractions were labeled and analyzed by flow cytometry as described below in the 'Flow cytometry' section. Flow cytometry. PBMCs were stained with the relevant antibodies for 30 min at 4 °C. Subsequently, samples were labeled with the relevant secondary conjugated antibodies for 30 min at 4 °C. Isotype controls and secondary-only conditions were used as negative controls.
Rabbit anti-human AdipoR1 (357−375) and AdipoR2 (374−386) antibodies (Phoenix Pharmaceuticals, Karlsruhe, Germany) were used at 5 μg/ml, and detected using a goat anti-rabbit Alexa Fluor 488 secondary antibody at 8 μg/ml (Life Technologies Invitrogen Compounds). Gating to measure the expression of AdipoR1 and AdipoR2 on PBLs and B cells was based on the isotype control. Isotype control frequencies were subtracted from the AdipoR1 and AdipoR2 frequencies for each subject. The following antibodies were used to stain human PBMCs: CD4-FITC (1:50) (OKT-4), CD3-PerCp-Cy5.5 (1:50) (OKT3), CD19-PECy7 (1:50) (HIB19), CD8-Pacific Blue (1:50) (OKT8), CD56-PE (1:50) (MEM188) (all from eBioscience, Hatfield, UK), CD4–Pacific orange (1:10) (clone S3.5) (Life Technologies Invitrogen Compounds), CD45RO-APC (1:20) (UCHL1) (BD Bioscience, Oxford, UK), α4β1-PE (1:100) (P5D2), αLβ2-FITC (1:100) (212701), DP-2-FITC (1:10) (301108) (R&D Systems, UK) and CXCR3-PE (1:50) (2Ar1) (US Biological). B cell subsets were labeled using CD19-PerCp-Vio700 (1:30) (LT19), IgM-PE (1:30) (PJ2-22H3), IgD-APC (1:60) (IgD26), CD38-FITC (1:150) (IB6) and CD27-APC-Vio-770 (1:10) (M-T271) (all from Miltenyi Biotec). The following antibodies were used to stain mouse PBMCs from blood and peritoneal lavage: CD3-FITC (1:50) (145-2C11), CD3-PECy7 (1:50) (145-2C11), CD4-PB (1:100) (GK1.5), CD8-TR (1:200) (5H10), CD11C-PECy7 (1:50) (N418), CD19-APC (1:50) (1D3), CD44-FITC (1:500) (IM7), CD45-PerCpCy5.5 (1:200) (30-F11), CD62L-PE (1:500) (MEL-14), B220-APCCy7 (1:100) (RA3-6B2), gp38-PE (1:200) (8.1.1) (all from eBioscience) and F4/80-APC (1:20) (CI:A3-1) (AbD Serotec, Kidlington, UK). Samples were assayed using a Cyan (Dako) with Summit software and then analyzed using FlowJo software (TreeStar, Ashland, OR). Between 10,000 and 50,000 events per sample were recorded and plotted as the frequency of positive cells or mean fluorescence intensity (MFI).
All samples analyzed were included in the study unless insufficient cells were collected for flow cytometry analysis. Confocal microscopy. Cells were grown to confluence on glass chamber slides (Becton Dickinson Falcon) at 37 °C. They were then fixed with 2% formaldehyde and 4% sucrose for 15 min, washed in PBS and stained with 10 μg/ml of either a sheep anti-human CDH15 antibody (Val22-Ala606) (R&D Systems) or sheep IgG (Southern Biotech, UK) overnight at 4 °C. Goat anti-sheep IgG1 conjugated to Alexa Fluor 488 (1:1,000) (Abcam, UK) was then applied and the slides were visualized using a Zeiss LSM 510 inverted laser-scanning confocal microscope with a 40× water-immersion objective and excitation at 488 nm and 543 nm (Zeiss, Gottingen, Germany). Constant acquisition parameters and laser power were maintained throughout individual experiments for analysis, and images were processed using Zeiss LSM Image Examiner software (Zeiss). Digital images were recorded in two separately scanned channels with no overlap in detection of emissions from the respective fluorochromes. Confocal micrographs were stored as digital arrays of 1024 × 1024 pixels with 8-bit sensitivity. Immunoprecipitation and western blotting. Whole cell lysates were extracted by suspending cells in lysis buffer containing 50 mM Tris-HCl pH 8, 150 mM NaCl, 10% glycerol, 1% (wt/vol) Nonidet P-40, 0.5% (wt/vol) sodium deoxycholate and protease inhibitor cocktail (Invitrogen). After incubation for 15 min at 4 °C, this preparation was centrifuged at 600 g for 10 min. The supernatant was subjected to immunoprecipitation by incubating for 30 min at 4 °C with protein G–Dynabeads (Invitrogen) and polyclonal sheep anti-CDH15 antibody (10 μg/ml) (Val22-Ala606) (R&D Systems). The beads were collected and washed three times with ice-cold extraction buffer and once with 50 mM Tris-HCl pH 7.5.
Proteins that were retained on the beads were eluted using 50 nM glycine pH 2.8, separated by 10% (wt/vol) SDS–PAGE and analyzed by western blot with a sheep anti-human CDH15 antibody (1 μg/ml) (Val22-Ala606) (R&D Systems). Blots were then probed with the appropriate horseradish peroxidase-conjugated anti-sheep secondary antibody (1:3,000) (Cell Signaling Technology, UK). Immunodetection was carried out using the ECL Plus kit (Amersham, GE Healthcare Life Sciences, UK) followed by exposure to X-ray film for 15 min. Controls were run in parallel with application of recombinant CDH15 (R&D Systems). siRNA transfection. HUVECs were plated in 12-well plates (87,500 cells per well) for 24 h or until about 80% confluence. The relevant siRNAs were added at a final concentration of 50 nM to 83.75 μl of Opti-MEM medium (82.5 μl for duplexes) (Life Technologies Invitrogen Compounds). 1.5 μl of RNAi Lipofectamine (Life Technologies Invitrogen Compounds) was mixed with 13.5 μl of Opti-MEM and incubated for 10 min at room temperature. 15 μl of the Lipofectamine mix was added to each siRNA singleplex or duplex, gently mixed, and incubated for a further 10 min. HUVECs were washed twice with PBS and 400 μl of Opti-MEM was added to the Lipofectamine–siRNA duplexes. After gentle agitation, the mix was added to the HUVECs and incubated at 37 °C for 4 h. The mix was then replaced with low-serum medium without antibiotics. After 48 h, HUVECs were stimulated with TNF-α and IFN-γ for 24 h before measuring PBL adhesion and migration as described previously. The PEPITEM receptor candidate was targeted by four siRNA oligomers purchased from Qiagen (Crawley, UK): CDH15 (#1: ATCGCCGACTTCATCAATGAT, #2: CACAGCCCTCATCTATGACTA, #3: CCCGATCAGCGTATCCGAGAA, #4: CAGGACGACCTTCGAGACAAT) and the S1P transporter SPNS2 (#1: CTGCACTTCTGCTGCAATCAA, #2: CCCACACAACTTGCTGGGCAA, #3: CAGCTTGGGCAACGTGCTCAA, #4: TGCCATTGGGACAATGAAGAA). Control random siRNAs were purchased from Thermo Scientific.
Real-time PCR. Total mRNA was extracted using the RNeasy Mini Kit (Qiagen, Crawley, UK) according to the manufacturer's protocol. Briefly, PBMCs were first lysed and then added to a column; after three washes, mRNA was eluted from the column with water. mRNA concentration was measured using a NanoDrop spectrophotometer (LabTech) and mRNA was stored at −80 °C. To convert mRNA to cDNA, random primers (Promega, USA) were annealed to 1 μg of mRNA for 5 min at 70 °C, after which the following mastermix was added to give a final volume of 30 μl: 10 U Superscript II Reverse Transcriptase (RT), 10 U RNaseOUT RNase inhibitor, 1× Superscript Buffer (all from Invitrogen) and 10 mM dNTPs (Promega). The reaction was run at 37 °C for 1 h, followed by 5 min at 95 °C. To analyze mRNA, FAM-labeled SPHK1, SPHK2 and SPNS2 primers and VIC-labeled 18S primers were bought as Assay on Demand kits from Applied Biosystems (Warrington, UK). Samples were amplified in duplicate using the 7500HT real-time PCR machine (Applied Biosystems) and analyzed using the software package SDS 2.2 (Applied Biosystems). Data were expressed as relative expression units relative to 18S or as fold change (2^−ΔΔCt method). Identification of PEPITEM. B cells (2 × 10⁵–5 × 10⁵) were incubated in the presence or absence of adiponectin at 15 μg/ml. Adiponectin (15 μg/ml) added in M199 was used as a negative control. The peptides from the three samples were purified using C18 solid-phase extraction columns from Supelco (DSC-18, Sigma-Aldrich). The columns were conditioned by adding 1 ml of 0.1% trifluoroacetic acid (TFA, Thermo Scientific) in acetonitrile (ACN, Thermo Scientific), which, like all additions, was allowed to drip through the column under gravity. The column was then equilibrated with 0.1% TFA/water and the sample, adjusted to 0.1% TFA, was added to the column.
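The comparative Ct (2^−ΔΔCt) fold-change calculation used in the real-time PCR analysis above can be sketched as follows; the Ct values in the comments are illustrative, not measurements from the study.

```python
# Minimal sketch of the 2^-DeltaDeltaCt fold-change calculation with 18S as
# the endogenous reference. All Ct values are illustrative placeholders.

def fold_change(ct_target_sample, ct_18s_sample,
                ct_target_control, ct_18s_control):
    """Relative expression of a target gene versus a control condition,
    normalized to 18S in both conditions."""
    delta_ct_sample = ct_target_sample - ct_18s_sample    # normalize sample
    delta_ct_control = ct_target_control - ct_18s_control  # normalize control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)
```

For example, a sample ΔCt of 10 against a control ΔCt of 12 gives ΔΔCt = −2 and a four-fold increase in relative expression.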
The column was washed with 1 ml of 0.1% TFA/water and the peptides were eluted using 0.1% TFA/acetonitrile (1 ml), which was dried under vacuum; the samples were resuspended in 20 μl of 0.1% formic acid in 2% acetonitrile. 10 μl of the purified samples was subjected to an LC-MS/MS analysis using a gradient of 2−36% ACN in 0.1% formic acid over 30 min at 350 nl/min using a Dionex Ultimate 3000 HPLC system. The HPLC column was connected to a Bruker ETD Amazon ion-trap mass spectrometer with an online nanospray source. A mass spectrometry survey scan from 350 to 1,600 m/z was performed and the five most intense ions in each survey scan were selected for collision-induced dissociation (CID) fragmentation. After ions were fragmented twice they were placed on an exclusion list for 0.5 min. The raw data were processed using the Bruker Data Analysis peak detection program to select peaks, which were then searched using the Mascot search engine (version 2.1) against the SwissProt protein database. The minimum mass accuracy for both the mass spectrometry and tandem mass spectrometry scans was set to 0.5 Da and no protease selection was used. The peptides were filtered using a minimum Mascot score of 30. The data output was analyzed via the Bruker ProteinScape software package. To identify a potential candidate in the B cell supernatant with adiponectin treatment, we applied a subtractive data analysis method. Hits from the adiponectin-stimulated sample were compared to the B cell supernatant without adiponectin and to the recombinant adiponectin controls, and common analytes were discarded. The recombinant 14-3-3ζδ protein (100 μg) (Fitzgerald Industries International) was digested using trypsin at 40 μg/ml (in 10% ACN, 90% HPLC dH₂O, 0.1% TFA) for 1 h at 37 °C and samples were purified on C18 columns with 90% ACN and analyzed using mass spectrometry and database searching as described above. Identification of PEPITEM receptor.
Endothelial cells were incubated with an N-terminally biotinylated version of PEPITEM or a biotinylated scrambled control for 4 h at 4 °C. Cells were then washed twice in cold PBS and lysed with a Triton phosphate buffer (20 mM sodium phosphate pH 7.5, 150 mM NaCl, 1% Triton X-100, protease inhibitor; all from Sigma-Aldrich). After 30 min, lysates were collected and centrifuged for 20 min at 600 g at 4 °C. Supernatants were collected and dried under vacuum and the samples resuspended in 20 μl 8 M urea/2% SDS and loading buffer (Sigma-Aldrich and Life Technologies Invitrogen Compounds). Samples were loaded onto a 4−12% SDS-PAGE gel (Life Technologies Invitrogen Compounds) and stained overnight with colloidal Coomassie staining buffer (0.08% Coomassie Brilliant Blue G250, 1.6% orthophosphoric acid, 8% ammonium sulfate, 20% methanol; all from Sigma-Aldrich). Gels were destained in 1% acetic acid in distilled water (several changes) until the background was clear. Protein bands were cut out of the gel and washed twice in 50% acetonitrile (ACN)/50 mM ammonium bicarbonate (AB, Sigma-Aldrich) for 45 min at 37 °C with agitation. The gel fragments were then incubated at 56 °C for 1 h in 50 mM DTT in 10% ACN/50 mM AB, then in 100 mM in 10% ACN/50 mM AB for 30 min at room temperature in the dark. Bands were washed twice in 10% ACN/40 mM AB for 15 min and dried under vacuum until completely dry. Trypsin (Promega, Southampton, UK) was then added to the bands at 200 μg/ml in 10% ACN/40 mM AB and left overnight at 37 °C. The supernatant was then collected and bands were washed twice in 3% formic acid for 1 h at room temperature under agitation. Supernatants were collected and pooled after all washes, and samples were dried under vacuum and resuspended in 20 μl of 0.1% formic acid in 2% ACN.
10 μl of the purified samples was subjected to an LC-MS/MS analysis using a gradient of 2−36% ACN in 0.1% formic acid over 30 min at 350 nl/min using a Dionex Ultimate 3000 HPLC system, as already described. Biacore assay. All Biacore assays were performed with the help of Dr. Catherine McDonnel (GE Healthcare) using a Biacore T200 system (GE Healthcare). To test peptide binding to CDH15, recombinant CDH15-Fc (50 μg/ml) (R&D Systems) was immobilized on protein A bound to a chip using a standard protocol (900 s, 5 μl/min). N-terminally biotinylated PEPITEM (24 μg/ml to 770 μg/ml) in HBS-P buffer (GE Healthcare) with 5 mM CaCl₂ and 0.05% P20 was flowed over the chip at a flow rate of 30 μl/min with a 60 s injection and 600 s dissociation. For this experiment, buffer alone and random peptides, as well as scrambled biotinylated PEPITEM, were used as controls for non-specific binding. Binding kinetics were measured in response units (RU) and BiaEvaluation software (GE Healthcare) was used to analyze the data traces. Quantification of PEPITEM. 500,000 negatively selected B cells were incubated in the presence or absence of adiponectin at 15 μg/ml for 1 h at room temperature. After centrifugation at 250 g for 7 min, supernatants were spiked with 10 ng of ³H (tritium)-radiolabeled PEPITEM as an internal mass standard for relative intensity quantification. Peptides were purified on DSC-18 solid-phase extraction columns as described above and analyzed by liquid chromatography–tandem mass spectrometry as described above. Owing to their identical chromatographic properties, endogenous and synthetic radiolabeled versions of PEPITEM elute at the same point on the gradient, allowing comparison in the same mass spectrum. However, owing to their different physical properties, these versions are readily resolved as separate peaks within this mass spectrum.
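The relative-intensity quantification against the 10 ng internal standard can be sketched as follows. The m/z centers and ±0.05 extraction window follow the method; the peak list, function names and the example in the usage note are illustrative, not study data.

```python
# Sketch of EIC-based relative quantification against a spiked internal
# standard of known mass. Peak lists of (m/z, intensity) are illustrative.

def eic_intensity(peaks, center_mz, window=0.05):
    """Sum the intensity of all (m/z, intensity) pairs falling within
    +/- window of the extraction center."""
    return sum(i for mz, i in peaks if abs(mz - center_mz) <= window)

def quantify_endogenous(peaks, standard_ng=10.0,
                        mz_endogenous=774.88, mz_standard=780.88):
    """Scale the endogenous EIC intensity by the known mass of the
    co-eluting radiolabeled standard to estimate the native amount (ng)."""
    i_endogenous = eic_intensity(peaks, mz_endogenous)
    i_standard = eic_intensity(peaks, mz_standard)
    return standard_ng * i_endogenous / i_standard
```

For example, an endogenous peak at half the intensity of the 10 ng standard corresponds to an estimated 5 ng of native peptide.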
Therefore, extracted ion chromatograms (EIC) comparing the intensity of the 10 ng radiolabeled standard (m/z 780.88 ± 0.05) to that of endogenous PEPITEM (m/z 774.88 ± 0.05) allowed quantification of the native peptide. This method of relative intensity quantification is well established within the field of analytical science 62 . Alternatively, B cell subsets were isolated by automated cell sorting using a MoFlo Astrios EQ (Beckman Coulter) and labeled as previously described in the 'In vitro transmigration assay' section. The B cell subsets were then incubated with 15 μg/ml of adiponectin for 1 h. Supernatants were collected and analyzed by mass spectrometry to quantify PEPITEM using the radiolabeled quantification method. Samples for the PEPITEM secretion time-course were collected at 5, 15 and 30 min and 1, 3 and 24 h after PEPITEM incubation on endothelial cells. Samples were centrifuged, purified on C18 columns at 100% ACN and analyzed by mass spectrometry using the radiolabeled quantification method. Identifying changes in the affinity of αLβ2 on lymphocytes. 96-well plates were coated with 50 μg/ml recombinant ICAM-1/Fc (R&D Systems) overnight at 4 °C. The plate was blocked using PBS 2% BSA for 1 h at room temperature and PBLs treated with CXCL10 (80 ng/ml) and/or S1P (10 μM) were added for 6 min. Excess unbound PBLs were washed away. Bound PBLs were collected using cold PBS by vigorous pipetting and labeled at 4 °C for the lymphocyte integrin αLβ2 (LFA-1) using the mouse anti-human KIM127 antibody (10 μg/ml) recognizing the intermediate-affinity epitope (N. Hogg, London) or the S1PR1 antibody (5 μg/ml; Cayman Chemicals; amino acids 241−253 (ISKASRSSEKSLA)). KIM127 was detected using a goat anti-mouse Alexa Fluor 488 secondary antibody (Invitrogen) at 8 μg/ml and S1PR1 with a donkey anti-rabbit Alexa Fluor 488 secondary antibody (Invitrogen) at 8 μg/ml.
The expression of the affinity site was measured on memory T cells (CD45RO+ CD3+ T cells) by flow cytometry as described in the 'Flow cytometry' section. To measure the expression of S1PRs on firmly adhered and transmigrated memory T cells, we washed the endothelial cells after 6 min of PBL transmigration using cold EDTA (Sigma-Aldrich). The EDTA wash was repeated until all firmly adhered cells were recovered. Transmigrated cells were collected together with the endothelial cells using Accutase for one minute at 37 °C, and firmly adhered and transmigrated cells were labeled with CD45RO-APC (1:20) (UCHL1) (BD Bioscience), CD3-PerCp-Cy5.5 (1:50) (OKT3) (eBioscience) and S1PR1, and analyzed by flow cytometry. Investigating the function of PEPITEM in mouse models of inflammation. All experiments were performed in accordance with UK Home Office regulations. For the animal studies, in each experiment B cell–deficient or wild-type (WT) animals with the same background were allocated at random to different experimental groups. Importantly, mice from the same litter were randomly distributed amongst the experimental groups and, where possible, they were equally distributed amongst experimental groups. Comparisons of peptide, scrambled peptide and carrier (PBS) were carried out on the same day with identical reagents when possible. The investigators were not blinded to allocation during experiments and outcome assessment. All samples analyzed were included in the study unless insufficient cells were collected for flow cytometry analysis. Acute peritoneal inflammation. BALB/c WT or BALB/c B cell–deficient mice (Igh-J tm1Dhu N?+N2; these mice carry a deletion of the J segments of the Ig heavy chain locus) (Taconic, New York, USA) were housed in the Biomedical Services Unit at the University of Birmingham. Mice were used between 6 and 8 weeks of age and were matched for sex; both males and females were used.
Peritonitis was induced by the intraperitoneal (i.p.) injection of 1 mg of type A zymosan (Sigma-Aldrich) as previously described 63 . In some animals zymosan was delivered with 300 μg of PEPITEM or scrambled peptide. Cells washed from the peritoneal cavity were collected in PBS 48 h after injection. Erythrocytes in the peritoneal exudates were lysed and leukocytes were stained for analysis by flow cytometry. Tubes containing the cells from the peritoneal exudates were fully acquired on the flow cytometer to count the cells accurately. Gates were set on all cells in the peritoneum and the number of T cells was determined based on CD3 expression. Blood was drawn by cardiac puncture and processed as for peritoneal exudate cells. All data were normalized to the number of T cells in WT mice treated with zymosan. Systemic bacteremia upon Salmonella infection in the liver. For Salmonella infections, attenuated Salmonella Typhimurium (strain SL3261, originally obtained from R.A. Kingsley, Wellcome Trust Sanger Institute, Cambridge 64 ) was grown overnight and bacteria were harvested from log-phase cultures as described previously 65 . C57BL/6 WT or C57BL/6 B cell–deficient mice (B cell–deficient mice were generated in house by breeding out the QM (quasi-monoclonal) IgH transgene from QM mice, which have the other IgH locus inactivated) 66 were infected by i.p. injection with 5 × 10⁵ S. Typhimurium in PBS containing 100 μg PEPITEM. Mice were used between 6 and 8 weeks of age and were matched for sex; both males and females were used. Control mice received 5 × 10⁵ S. Typhimurium in PBS only. Mice received further PEPITEM (or PBS) injections (100 μg; i.p.) daily for the next 4 d and all samples were collected at day 5 or 7 after infection. Livers were immediately frozen and were subsequently examined by immunohistochemistry (IHC) as described elsewhere 67 . All IHC analysis was performed in Tris buffer (pH 7.6) at room temperature.
Primary antibodies specific to CD3 (1:300) (145-2C11, BD Pharmingen) and F4/80 (1:500) (Cl:A3:1, AbD Serotec), and secondary antibodies (Dako Cytomation) (horseradish peroxidase-conjugated (1:300) or biotin-conjugated (1:600)) were added for 60 and 45 min, respectively. Slides were developed using 3,3′-diaminobenzidine tetrahydrochloride (Sigma-Aldrich) or alkaline phosphatase (ABComplex, Vector Laboratories) and levamisole with naphthol AS-MX phosphate and Fast Blue BB salt (all from Sigma-Aldrich), respectively. Slides were mounted in glycerol (Sigma-Aldrich) and images were acquired using a Leica CTR6000 microscope (Leica, Milton Keynes, UK) with ImageJ and QCapture software. Mean CD3+ cells per focus were quantified for a minimum of 50 foci per tissue section at 25× magnification. Acute liver ischemia and reperfusion injury. Splenic T cells were isolated from 8−10-week-old male C57BL/6 WT mice (Harlan, Oxford, UK) through negative-selection magnetic-activated cell sorting (MACS) using the Pan T cell isolation kit II (Miltenyi Biotec, Surrey, UK) according to the manufacturer's instructions. C57BL/6 mice were anesthetized by i.p. injection of ketamine (100 mg/kg; Vetalar V, Pfizer, Kent, UK) and xylazine hydrochloride (10 mg/kg; Xylacare, Animalcare, York, UK) delivered in 0.9% saline solution. The trachea and right common carotid artery were cannulated and the liver was exposed. Prior to ischemia, 250 μl of PEPITEM or scrambled peptide was injected via the carotid artery and allowed to circulate for 5 min. Ischemia of the left and median lobes of the liver was induced through application of an atraumatic vascular clamp to the hepatic artery and portal vein supplying these lobes for 90 min. After 90 min of ischemia the clamp was removed and intravital observations were carried out. Using an Olympus IX81 inverted microscope (Olympus, UK), the microvasculature of the liver was viewed through a 10× objective.
1 million fluorescently labeled T cells (CFDA-SE, 10 μM; Life Technologies, Paisley, UK) with 20 μl of PEPITEM/scrambled peptide were injected into the carotid artery at the point of clamp removal. A random field was selected every 10 min and imaged for 20 s. Five additional fields of view in a pre-defined pattern were then imaged for 20 s each. Adherent cells were defined as cells that were static for at least 20 s. Cells were counted in each field. Acute model of LPS (lipopolysaccharide)-driven uveitis (EIU, endotoxin-induced uveitis). C57BL/6J WT mice were originally obtained from Harlan UK Limited (Oxford, UK), and breeding colonies were established within the Animal Services Unit at Bristol University (Bristol, UK). Mice were housed in specific pathogen-free conditions with continuously available water and food. Female mice immunized for disease induction were aged between 6 and 8 weeks. All mice were kept in the animal house facilities of the University of Bristol. Treatment of animals conformed to United Kingdom legislation and to the Association for Research in Vision and Ophthalmology statement for the Use of Animals in Ophthalmic and Vision Research. Local administration of LPS from Salmonella Typhimurium (50 ng per eye) (Sigma-Aldrich) was performed by intravitreal injection in anesthetized B6LY5 mice as previously described 68 . Animals were anesthetized by i.p. injection of 150 μl of Vetalar (ketamine hydrochloride 100 mg/ml; Pfizer, Sandwich, UK) and Rompun (xylazine hydrochloride 20 mg/ml; Bayer, Newburg, UK) mixed with sterile water in a ratio of 0.6:1:84. The pupils of all animals were dilated using topical tropicamide 1% (Minims; Chauvin Pharmaceuticals Ltd., UK). Intravitreal injections were performed under the direct control of a surgical microscope with the tip of a 12-mm 33-gauge hypodermic needle mounted on a 5-μl syringe (Hamilton AG, Bonaduz, Switzerland).
For treatment groups, either PEPITEM (6 μg) or PBS was combined with 50 ng per eye of LPS as a single injection, administered in a total volume of 4 μl per injection. The injection site was treated with chloramphenicol ointment. At 15 h after LPS/treatment administration, eyes were enucleated and carefully cleaned to remove all extraneous connective and vascular tissue. The aqueous and vitreous humor, iris, ciliary body and retina were microscopically dissected in HBSS (Life Technologies, Paisley, UK). These ocular components were homogenized and forced through a 70-μm cell strainer with a syringe plunger to obtain a single-cell suspension, and stained for flow cytometry analysis. Cells were incubated with 24G2 cell supernatant for 10 min at 4 °C before incubation with fluorochrome-conjugated monoclonal antibodies (all from BD Bioscience, Oxford, UK) against the cell surface markers CD45 (1:1,000) (30-F11), CD4 (1:100) (RM4-5) and CD8 (1:100) (53-6.7) at 4 °C for 20 min. Cells were resuspended in 7-aminoactinomycin D (7AAD) (Molecular Probes), and dead cells were excluded from analysis by gating on 7AAD-negative cells. Cell suspensions were acquired using a three-laser BD LSR-II flow cytometer (BD Cytometry Systems, Oxford, UK) and analysis was performed using FlowJo software version 7.6.5 (TreeStar, Ashland, OR). Cell numbers were calculated by reference to a known cell standard, as previously reported 64 . Briefly, splenocytes at a range of known cell concentrations were acquired using a fixed and stable flow rate for 1 min. Based on the total cell number acquired during this time, a standard curve was generated and used to interpolate cell concentrations of ocular infiltrating cells acquired at the same flow rate and time. Virally induced Sjögren's syndrome. C57BL/6 mice were from Harlan.
Under ketamine-Domitor (Pfizer, Sandwich, UK) anesthesia, the submandibular glands of female C57BL/6 mice (8−12 weeks old) were intraductally cannulated with 10⁸−10⁹ p.f.u. of luciferase-encoding replication-defective adenovirus (AdV5), as previously described 39 . A group of five mice were administered (i.p. injection) 100 μg of PEPITEM and another group of five mice were administered 100 μg of scrambled peptide every day until day 5 after cannulation. Salivary glands from cannulated mice treated with PEPITEM and scrambled peptides were harvested and snap frozen in OCT (Sakura, UK) over liquid nitrogen. Frozen sections 7 μm in thickness were cut and left to dry overnight at room temperature; the next day they were stored at −80 °C until use. For immunofluorescence analysis, slides were allowed to come to room temperature, fixed for 20 min in ice-cold acetone, left to dry and then hydrated in PBS. For immunofluorescence staining, all dilutions of reagents and antibodies were made in PBS with 1% BSA. First, to block endogenous biotin, sections were treated with 0.05% avidin (Sigma-Aldrich) and 0.005% biotin (Sigma-Aldrich) for 15 min each, with a 5 min PBS wash between the two incubations. This was followed by blocking with 10% horse serum (Sigma-Aldrich) for 10 min. Slides were then incubated for 60 min with 'cocktails' containing the following primary antibodies in PBS (1% BSA): CD19–Alexa Fluor 647 (1:50) (eBio1D3) and CD3ɛ–biotin (1:50) (ebio500A2) (both from eBioscience). Biotinylated CD3ɛ was detected using streptavidin–Alexa Fluor 555 (1:500) (Molecular Probes). Hoechst (1:1,000) (Molecular Probes) was used for nuclear staining. All secondary antibodies were incubated for 30 min. Slides were mounted with ProLong Gold Antifade reagent (Life Technologies). Images were acquired on a Zeiss 780 upright confocal head with a Zeiss Axio Imager Z1 microscope and viewed through a 10× objective.
Digital images were recorded in three separately scanned channels with no overlap in detection of emissions from the respective fluorochromes. Confocal micrographs were stored as digital arrays of 1024 × 1024 pixels with 8-bit sensitivity; detectors were routinely set so that intensities in each channel spanned the 0−255 scale optimally. The Zen 2010 software (Zeiss) was used to process these images. Cannulated salivary glands were harvested, chopped into small pieces and digested for 20 min at 37 °C with gentle stirring in 2 ml of RPMI 1640 medium (Sigma-Aldrich) containing collagenase and dispase (250 μg/ml; from Roche, Welwyn, UK), DNase I (25 μg/ml; from Sigma-Aldrich) and 2% (vol/vol) FCS. The suspension was gently pipetted to break up aggregates. During the final pipetting, EDTA (Sigma-Aldrich) was added to a final concentration of 10 mM to further reduce cell aggregates. Cells were then passed through a 70-μm mesh with a syringe, washed twice and resuspended in PBS (with 0.5% BSA and 2 mM EDTA). Single-cell suspensions were stained for 30 min in PBS (with 0.5% BSA and 2 mM EDTA) with cocktails of the following antibodies: CD3ɛ–PECy7 (1:200) (145-2C11) and CD19–APC-Cy7 (1:100) (eBio1D3) (from eBioscience). Cells were then washed twice in PBS (with 0.5% BSA and 2 mM EDTA), resuspended in PBS (with 0.5% BSA and 2 mM EDTA) and analyzed using a Cyan-ADP (Dako) with forward/side scatter gates set to exclude nonviable cells. Data were analyzed with FlowJo software (TreeStar, Oregon, USA). Patient studies. Sample sizes for the patient studies were guided by the variance from a previous study in our laboratory in which we determined the frequency of adiponectin receptors on monocytes from individuals with T1D versus matched healthy controls 7 . Randomization was not required for this study. All patients and controls were analyzed in parallel on the same flow cytometer over a period of weeks, as they became available from clinic.
The expression of the adiponectin receptors was always determined relative to the isotype control for each sample. The same batch numbers of flow cytometry reagents were used throughout to ensure standardization and reduce inter-experimental variation. Type 1 diabetes. Individuals with T1D were recruited from University Hospital Birmingham Department of Diabetes outpatient clinics. Individuals had received a diagnosis of T1D fulfilling the 1999 World Health Organization (WHO) framework 69 . Blood samples were obtained with written informed consent and approval from the Birmingham, East, North and Solihull Research Ethics Committee (06/Q2703/47). Healthy volunteers were matched to the T1D cohort on gender, age and BMI (clinical parameters in Supplementary Tables 6 and 7 ). AdipoR1/2 expression on B cells was measured in 19 healthy controls (58% males) and 29 individuals with T1D (65% males). Quantification of PEPITEM secretion by B cells under adiponectin stimulation was measured in 10 healthy controls (60% males) and 10 individuals with T1D (60% males). Lymphocyte transmigration was measured for 15 healthy controls (55% males) and 9 (for the PEPITEM cohort) or 22 (for the adiponectin cohort) individuals with T1D (60% males). Rheumatoid arthritis. Individuals with rheumatoid arthritis were recruited from the Birmingham Early Arthritis Cohort (BEACON). Rheumatoid arthritis was classified according to the 1987 American College of Rheumatology criteria. Blood samples were obtained with written informed consent and approval from the National Research Ethics Service (NRES) committee West Midlands, the Black Country (2/WM/0258). Healthy volunteers were matched to the rheumatoid arthritis cohort on gender and age (clinical parameters in Supplementary Table 8 ). AdipoR1/2 expression on B cells was measured in ten healthy controls (30% males) and 12 individuals with rheumatoid arthritis (41% males).
Lymphocyte transmigration was measured for seven healthy controls (57% males) and eight individuals with rheumatoid arthritis (12.5% males). Aging study. AdipoR1/2 expression on B cells was measured in 40 healthy volunteers ranging in age from 21 to 66 years old. Statistics. In vitro data are from at least three experiments including three separate donors for PBLs, HUVECs or HDMECs and are the mean of these different experiments ± s.e.m. or s.d. as stated. Numbers of animals in each model are stated in the figure legends. The numbers of individuals with T1D or rheumatoid arthritis, or healthy controls compared are stated in the figure legends. Differences were analyzed using GraphPad Prism software (GraphPad Software Inc., La Jolla, CA, USA) by paired or unpaired t -test or by one-way analysis of variance (ANOVA) followed by post hoc analysis for multiple group comparison (Dunnett or Bonferroni). A Dunnett post hoc analysis was used to compare all the data sets on the graph to a common control. A Bonferroni post hoc test was used to compare all data sets in a graph with each other. Normality was checked using the Kolmogorov–Smirnov test. A nonparametric test (Mann–Whitney test) was used when data did not pass the normality test. The Wilcoxon signed-rank test was used to compare a data set to a normalized control, where the data were presented as a percentage of that control (i.e., where control values were the same, e.g., 100%). P values of ≤0.05 were considered significant. | Researchers from the University of Birmingham have identified an important new way in which our immune systems are regulated, and hope that understanding it will help tackle the debilitating effects of type 1 diabetes, rheumatoid arthritis and other serious diseases. The team discovered a novel pathway that regulates the movement of pathogenic immune cells from the blood into tissue during an inflammatory response.
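The two-step testing strategy described in the Statistics section above (check normality, then choose a parametric or nonparametric comparison, with a Wilcoxon signed-rank test against a normalized control) can be sketched in a few lines. The paper used GraphPad Prism; SciPy is only a stand-in here, the group data are hypothetical, and the KS test against a sample-fitted normal is an approximate normality check, not the exact procedure used by Prism.

```python
"""Sketch of the two-step statistical workflow from the Methods:
normality check first, then parametric vs. nonparametric comparison.
SciPy stands in for GraphPad Prism; all example data are hypothetical."""
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Unpaired t-test if both samples pass a KS-based normality check,
    otherwise a two-sided Mann-Whitney test."""
    def looks_normal(x):
        x = np.asarray(x, dtype=float)
        # Kolmogorov-Smirnov test against a normal distribution fitted
        # to the sample (an approximate check, as noted in the lead-in).
        return stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha
    if looks_normal(a) and looks_normal(b):
        return "unpaired t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

def vs_normalized_control(values, control=100.0):
    """Wilcoxon signed-rank test for data expressed as a percentage of a
    common control (every control value equals 100%)."""
    diffs = np.asarray(values, dtype=float) - control
    return stats.wilcoxon(diffs).pvalue
```

A typical call would be `compare_two_groups(patient_values, control_values)`, which returns the test actually applied together with its two-sided P value.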
A healthy, efficient immune system ordinarily works to damp down inflammation and carefully regulate the magnitude of the response to infection and disease. In diseases such as diabetes and arthritis, as well as when we age, our immune system becomes less stringently regulated, and this can lead to an exaggerated inflammatory response, allowing inappropriate access of immune cells to vulnerable tissues. The new study shows that the beneficial effects of the new pathway are lost in these diseases, as well as during normal ageing. The study, published in Nature Medicine, details how a key molecule regulates this aspect of our immune response. Importantly, the team were then able to show how the addition of this molecule to immune cells from patients with diabetes and arthritis could regain control of the movement of their immune cells, thereby reversing the pathogenic changes seen in these diseases. Professor Ed Rainger, from the University of Birmingham, explained, "Our immune system becomes progressively less effective over the years and this can become harmful, leading to disease. Being able to understand the link between ageing and pathology will help us to reduce the risk of ill health associated with increasing age." "Our discovery of this new pathway is very exciting. Not only does it reveal new ways in which our bodies control inflammation, it also indicates that we may be able to design new drugs to reverse the disease- and age-specific loss of this pathway." "The fact that the new pathway is relevant to both diabetes and rheumatoid arthritis, which are quite different diseases, implies a broad applicability to many chronic inflammatory and autoimmune diseases.
This is an area of research we are keen to follow, and we will be working with doctors from other specialities to determine whether this is the case and whether new therapies might be more broadly applicable." The global healthcare landscape is undergoing a significant shift, with some populations experiencing a sharp increase in life expectancy. However, an ageing population comes with a rise in the prevalence of debilitating diseases, which in turn passes on a significant burden to the patients, their families and their health service providers. Professor Rainger added, "The link between the decline of this pathway and normal ageing is also very interesting, as this is a natural process. It means that patients with diseases such as rheumatoid arthritis may have an accelerated decline of this pathway, so that individuals as young as 20 have the immune function of 70-year-olds. If we can identify patients at risk of developing this disease, we may be able to artificially restore some vigour to their immune systems and reduce the burden of disease for the individual patient as well as their families and the NHS." The next step is to use the findings in clinical studies that will investigate the viability of treatments and therapies targeting this pathway. Professor Peter Weissberg, Medical Director at the British Heart Foundation, said: "This is a superb piece of research that appears to have identified a new way to regulate chronic inflammation. It helps to explain why autoimmune diseases like rheumatoid arthritis become more common with age." "It remains to be seen whether these findings will have any direct relevance to cardiovascular disease. However, coronary heart disease tends to be more common in people with chronic inflammatory conditions such as rheumatoid arthritis, so if this research leads to better treatments for these conditions, it might be expected that this will lead to fewer heart attacks in these patients." | 10.1038/nm.3842
Medicine | Scientists discover new method of diagnosing cancer with malaria protein | Mette Ø. Agerbæk et al, The VAR2CSA malaria protein efficiently retrieves circulating tumor cells in an EpCAM-independent manner, Nature Communications (2018). DOI: 10.1038/s41467-018-05793-2 Journal information: Nature Communications | http://dx.doi.org/10.1038/s41467-018-05793-2 | https://medicalxpress.com/news/2018-08-scientists-method-cancer-malaria-protein.html | Abstract Isolation of metastatic circulating tumor cells (CTCs) from cancer patients is of high value for disease monitoring and molecular characterization. Despite the development of many new CTC isolation platforms in the last decade, their isolation and detection has remained a challenge due to the lack of specific and sensitive markers. In this feasibility study, we present a method for CTC isolation based on the specific binding of the malaria rVAR2 protein to oncofetal chondroitin sulfate (ofCS). We show that rVAR2 efficiently captures CTCs from hepatic, lung, pancreatic, and prostate carcinoma patients with minimal contamination of peripheral blood mononuclear cells. Expression of ofCS is present on epithelial and mesenchymal cancer cells and is equally preserved during epithelial–mesenchymal transition of cancer cells. In 25 stage I–IV prostate cancer patient samples, CTC enumeration significantly correlates with disease stage. Lastly, rVAR2 targets a larger and more diverse population of CTCs compared to anti-EpCAM strategies. Introduction Metastasis, the process in which malignant cells spread from the primary tumor to distant sites, is of key importance in cancer. Up to 90% of cancer-related deaths are related to the metastatic spread of cancer cells 1 , 2 , 3 . This complex process is vital to cancer progression and involves intravasation of cancer cells into the blood stream 2 . 
The cancer cells traveling in the blood are called circulating tumor cells (CTCs) 4 , 5 , and a subset of these has increased metastatic capacity 6 . CTCs have spurred increasing clinical interest since their levels in the blood were shown to be predictive of overall outcome for patients with metastatic colorectal, breast, prostate, and lung carcinomas 7 , 8 , 9 , 10 . Furthermore, the detection and enumeration of CTCs in patient blood samples, also termed liquid biopsies, provide a non-invasive tool for real-time monitoring of treatment response and estimating risk for metastatic relapse 11 . Besides enumeration, isolation of viable CTCs from blood enables individual and longitudinal molecular characterization and downstream experimental analysis, irrespective of the availability of tumor tissue biopsies. The ability to perform cellular analysis not only of bulk CTCs but also of contained subpopulations of cells with enhanced metastatic capacity may represent a major advantage over DNA-based approaches, such as the detection of circulating tumor DNA 12 . Several CTC detection and isolation platforms have been described 13 . Many recently developed systems are based on distinct biophysical properties of CTCs, such as their theoretically larger size compared to peripheral blood mononuclear cells (PBMCs). However, studies have shown a large variation in CTC size and a considerable size overlap between CTCs and PBMCs 14 . Therefore, while these methods may provide viable CTCs, separation purely based on size could be too restrictive and introduce a considerable bias by missing metastatic cells important for the downstream analysis. Other systems for CTC isolation use antibodies to target epithelial markers, such as the epithelial cell adhesion molecule (EpCAM) cell surface protein. One of these is the CellSearch® CTC platform, which relies on detecting CTCs using anti-EpCAM antibody-coated magnetic ferrofluid nanoparticles followed by bulk magnetic enrichment 4 .
In this platform, enriched cancer cells are identified as CTCs by their cytokeratin (CK) positivity using a fluorescent-labeled antibody, and potentially contaminating PBMCs are identified by a CD45 counterstain. This system represents the current gold standard for CTC enumeration and is approved by the US Food and Drug Administration (FDA) for monitoring patients with metastatic breast, colorectal, and prostate cancers 15 . Given the heterogeneous nature of CTCs, EpCAM-based capture approaches are inherently biased toward capturing CTCs with well-preserved epithelial traits and are rarely efficient in epithelial cancers with downregulated EpCAM expression, e.g., during epithelial–mesenchymal transition (EMT), or in cancers of mesenchymal origin (i.e., sarcomas) 16 , 17 , 18 . In an attempt to include these cells, many CTC isolation methods combine several antibodies in an antibody cocktail and thereby target a larger population of CTCs 19 , 20 , 21 . Such cocktails are, however, often only applicable to specific tumor types and prone to capture more non-cancer cells including white blood cells 22 , 23 . A similar contamination issue arises when the inverse approach is taken and CTCs are enriched by depletion of CD45-positive white blood cells, most likely due to a considerable fraction of leukocytes with low-level expression of surface marker 14 , 24 , 25 . Considering the limitations of the above-described methods, it is clear that the field would benefit greatly from a specific and universally expressed cancer marker for capturing and detecting CTCs. The isolation of CTCs requires a highly specific target, which is completely absent from normal cells. In line with this, we have recently described a uniquely modified form of chondroitin sulfate (CS), termed oncofetal chondroitin sulfate (ofCS), which is expressed by placental cells and cancer cells of both epithelial and mesenchymal origin 26 . 
CS belongs to the family of glycosaminoglycans (GAGs), which are long, linear carbohydrates made up of repeated disaccharide units that can be differentially modified by sulfation. CS chains can be attached to different core proteins, forming chondroitin sulfate proteoglycans (CSPGs), on the cell membrane or in the extracellular matrix. We have shown that ofCS can be attached to more than 30 different proteoglycans 26 , 27 , 28 , 29 , 30 . A single cancer cell may display different combinations of these ofCS-modified proteoglycans; ofCS thereby serves as a more robust cancer biomarker, as it is not restricted to the expression of a single protein. CSPGs are often overexpressed in primary as well as metastatic tumors, which, in combination with the expression of the specific ofCS structure across a diverse array of tumor types as well as its strict cancer specificity, makes ofCS an appealing target for the universal and efficient isolation and detection of CTCs 31 . We recently made the exciting discovery that the unique ofCS can be detected by the VAR2CSA malaria protein 26 . In placental malaria, parasite-infected erythrocytes adhere to ofCS in the placenta using the malaria VAR2CSA protein as an anchor 32 . Testing a wide range of cells and tissues, we found that the recombinant VAR2CSA (rVAR2) protein also binds more than 95% of cancer cell lines and tissues of epithelial, mesenchymal, and hematopoietic origin, with very limited binding to non-cancerous cells or normal tissue (besides placental tissue) 26 . This suggests that expression of ofCS is vital for the cellular attributes of embryonic and cancer cells, such as rapid proliferation, migration, and invasion 26 . We have shown that ofCS plays a key role in tumor cell motility through canonical integrin signaling pathways, and thus supports the metastatic potential of cancer cells 27 .
In line with this, we found ofCS to be highly expressed in human metastatic lesions in situ and showed that rVAR2 could inhibit metastasis of cancer cells in mice 27 . As rVAR2 shows cancer-specific and origin-independent binding to ofCS both prior to and after the metastatic process, we hypothesized that rVAR2 could be a useful tool to broadly and efficiently capture rare cancer cells in complex blood samples. Here, we present a CTC isolation method based on rVAR2 conjugated to 4.5 µm streptavidin-coated magnetic CELLection™ Biotin Binder Dynabeads®. Using the IsoFlux™ System to retrieve Dynabead-bound cells, we find a markedly enhanced CTC capture compared to EpCAM-based techniques in a diverse set of clinical blood samples. Importantly, our data confirm that the additionally captured subset of EpCAM-negative CTCs indeed derives from the respective tumor site. Our data indicate that the ofCS modification is independent of tumor type and cell differentiation status, demonstrating the potentially broad applicability of the rVAR2-based CTC isolation method. Results rVAR2 binds to epithelial and mesenchymal cancer cells Based on the specific and high-affinity binding of rVAR2 to ofCS on cancer cells 26 , we sought to establish a cancer-specific and tumor-type-independent CTC isolation method. To examine the potential use of rVAR2 for binding of cancer cells in blood samples, cancer cells of breast (MDA-MB-231, MCF7, Hs578T), prostate (LNCaP, PC3, DU145), colorectal (COLO205, HT-29, SW480), and lung carcinomas (A549) as well as osteosarcoma (U2OS) and melanoma (C32) were mixed with PBMCs in a 1:1 ratio. Flow cytometry analysis showed that rVAR2 bound specifically to cancer cells of epithelial and mesenchymal origin (Table 1 ). It should be noted that while rVAR2 binding to DU145 cells was stronger than binding to PBMCs, rVAR2 binding of the other metastasis-derived prostate cancer cell lines LNCaP and PC3 was even more pronounced.
Table 1 rVAR2 binding to cancer cells or peripheral blood mononuclear cells (PBMCs) Full size table Statistically, non-specific binding will dramatically increase when the number of non-target cells increases 33 . Therefore, we further verified the ability of rVAR2 to distinguish cancer cells from PBMC, by mixing cancer cells with PBMCs in a 1:5000 ratio and analyzing the samples for rVAR2 binding using a CytoTrack scanning device. The CytoTrack system allows for high throughput, multispectral confocal imaging of CTCs in blood samples 34 . The platform does not perform any prior enrichment of CTCs and therefore the enumeration of rare cells relies solely on specific biomarker detection using specific fluorescently labeled probes. Following staining with His-tagged rVAR2 in combination with an Alexa Fluor 488-conjugated anti-penta His antibody, the CytoTrack scanning device readily detected both epithelial and mesenchymal human cancer cells in a background of normal CD45-positive PBMCs (Fig. 1a ). Fig. 1 rVAR2 binds specifically to a diverse repertoire of cancer cells. a Detection of cancer cells using the CytoTrack platform. Representative confocal microscopy images of indicated cell lines. Cancer cells were mixed with PBMCs in a 1:5000 ratio prior to analysis and stained with His-tagged rVAR2 in combination with anti-penta His Alexa Fluor 488 (green), an anti-CD45 Cy5 antibody (red), and DAPI (blue). Scale bars, 10 µm. b Flow cytometry measured fluorescence intensity of three breast cancer (left panel), three prostate cancer (middle panel), and three colorectal cancer (right panel) cell lines stained by His-tagged rVAR2 in combination with anti-penta His Alexa Fluor 488 ( y -axis) and a PE-conjugated anti-EpCAM antibody ( x -axis) Full size image These data indicate that an rVAR2-based CTC isolation method would be cancer specific and independent of the expression of epithelial markers. 
This was further confirmed by performing dual staining with rVAR2 and an anti-EpCAM antibody on nine different carcinoma cell lines, followed by flow cytometry analysis. As expected, rVAR2 binding did not correlate with the expression level of the epithelial marker EpCAM (Fig. 1b ). rVAR2 binding is unaffected by phenotypic plasticity Epithelial–mesenchymal transition (EMT) provides cancer cells with a phenotypic plasticity that is thought to be essential for the metastatic progression of primary carcinomas 35 , 36 , 37 , 38 . During this transition, epithelial surface markers, such as EpCAM, can be downregulated, rendering them less suitable targets for CTC isolation. To determine whether ofCS expression is maintained after the transition of a carcinoma cell toward a more mesenchymal phenotype, we induced EMT in the A549 lung adenocarcinoma cell line using TGF-β 39 . The induction of EMT following TGF-β treatment was confirmed by decreased expression of the epithelial marker E-cadherin, whereas expression of the mesenchymal markers vimentin and N-cadherin increased (Fig. 2a, b ). Accordingly, the cells gained an elongated morphology and showed decreased expression of pan-CK, confirming their transition (Fig. 2b ). Most importantly, rVAR2 binding was preserved following induction of EMT, as measured by flow cytometry (Fig. 2c ). Preserved rVAR2 binding to cancer cells after EMT induction was also found for the human glioblastoma U87mg cell line (Supplementary Fig. 1 ). While EMT most likely drives the escape of cancer cells from the primary tumor site, the reverse process, termed mesenchymal–epithelial transition (MET), is thought to drive the colonization at the distant metastatic site 40 , 41 . To explore whether the mesenchymal–epithelial plasticity affects the expression of ofCS on cancer cells, we removed TGF-β from the culture media of EMT-induced A549 cells 42 .
Seventy-two hours after the removal of TGF-β, mesenchymal markers, such as fibronectin and N-cadherin, were reduced, indicating that the A549 cells had returned to a more epithelial state (Supplementary Fig. 2a ). Furthermore, the Snail EMT transcription factor, which was found upregulated as a consequence of TGF-β treatment, was reduced to background levels after the simplified MET (Supplementary Fig. 2a ). Intriguingly, rVAR2 binding was maintained throughout both EMT and MET (Supplementary Fig. 2b ). Fig. 2 Epithelial–mesenchymal transition increases rVAR2 binding. a Western blot of A549 cell lysates after 24, 48, and 72 h of treatment with TGF-β or control (TGF-β dissolution buffer). Blots were incubated with rabbit anti-E-cadherin, anti-vimentin, anti-N-cadherin, or anti-GAPDH antibodies and detected by anti-rabbit HRP antibody. b Representative images of fixed A549 cells after 48 h of treatment with TGF-β or control buffer. Cells were incubated with DAPI, phalloidin Alexa Fluor 594 to stain F-actin, anti-pan Cytokeratin Alexa Fluor 488, or rabbit anti-E-cadherin and anti-vimentin in combination with anti-rabbit FITC antibodies. Scale bars, 50 µm. c Intensity of rVAR2 staining (MFI) of A549 cells treated with TGF-β or control buffer for 48 h ( P < 0.001, generalized least squares regression model). Binding of rVAR2 was detected by flow cytometry using anti-penta His Alexa Fluor 488 antibody. Three independent experiments were performed. MFI, mean fluorescence intensity Full size image rVAR2-coated beads capture cancer cells in blood samples Having confirmed that rVAR2 can detect a range of different types of cancer cells in complex blood samples and that the binding to cancer cells is maintained during EMT/MET, we developed an rVAR2-based method to isolate CTCs. To allow easy isolation of viable CTCs, we used magnetic CELLection™ Biotin Binder Dynabeads® and biotinylated rVAR2 to isolate ofCS-positive CTCs on an IsoFlux™ microfluidic isolation system (Fluxion).
Instruments combining microfluidics with magnetic micrometer-scale bead-based cell sorting have shown promising results in detecting rare cells in complex blood samples 14 , 43 . On the IsoFlux™ system, the cells are aligned within a microfluidic channel and directed across a magnetic field isolation zone at slow velocity 43 . This enables control of the time by which cells reside in the isolation zone, ensuring high recovery of magnetically labeled cells. In addition, unlabeled PBMCs are directed away from the isolation zone by gravitational and hydrodynamic forces, thereby improving the purity of the isolated CTC fraction 43 . The rVAR2 construct used in this study was based on our previously defined minimal ofCS binding region in the native full-length VAR2CSA protein, which stretches over 121 kDa 44 . This complicates direct conjugation of rVAR2 to beads, as it would likely interfere with the capacity of rVAR2 to interact with ofCS. To circumvent this problem, beads were coated with rVAR2 using a split protein conjugation system (SpyTag/SpyCatcher) in a four step process 45 , 46 . First, rVAR2 was genetically modified to include an N-terminal 13 amino acid SpyTag peptide. Second, the corresponding 13 kDa SpyCatcher protein was produced recombinantly and biotinylated. Third, SpyTagged rVAR2 and the biotinylated SpyCatcher were mixed resulting in the formation of an isopeptide bond between the two proteins. Finally, this biotinylated complex was immobilized on streptavidin-covered magnetic Dynabeads®. This procedure allowed conjugation of rVAR2 to magnetic beads without abolishing the ofCS-binding capacity of rVAR2. To test the ability of the rVAR2-coated magnetic beads to capture cancer cells, we spiked 100 PC3 prostate cancer cells into 5 mL of healthy donor blood and assessed cancer cell retrieval. First, red blood cells were eliminated by lysis, followed by a brief incubation of the rVAR2-coated beads with the cell sample. 
The cell-bead suspension was loaded onto the IsoFlux™ microfluidic cartridge and automatically processed in the IsoFlux™ instrument. Captured cells were manually transferred to slides, immobilized by a strong magnet, and stained for pan-CK, CD45, and 4′,6-diamidino-2-phenylindole (DAPI). Cancer cells, defined as CK+, CD45−, DAPI+, were enumerated using the Ariol System software. Isolated PC3 prostate cancer cells showed clear CK staining (green) and no CD45 staining (red) (Fig. 3a ). Since spike-in experiments with established cell lines rather poorly represent the heterogeneity of CTCs, low-passage pancreatic ductal adenocarcinoma (PDAC) cells from patient-derived xenografts (PDX) were used for further validation of the method. The isolated PDAC cells were easily distinguished from white blood cells by their CK+, CD45−, DAPI+ profile (Fig. 3b ). To evaluate the isolation method, we performed an exact enumeration of three separate spike-in experiments using 100 cells. On average, the isolation recovery was 83% and 90% for the PC3 and PDAC cells, respectively (Fig. 3c ). To further validate the efficiency of the rVAR2-based CTC recovery, we assessed the recovery after adding 10, 20, 50, 100, 200, or 500 PDAC cells to 5 mL blood. The average percentage of recovered cells was between 86.0 and 91.9%, and the recovery did not vary systematically with the number of cells added (coefficient of determination ( R 2 ) = 0.996) (Fig. 3d and Table 2 ). Finally, we tested the sensitivity of cancer cell recovery in a more challenging setup by spiking 5 mL of blood with only three or six GFP-expressing PDAC cells. The average recovery for five replicates was 60.0% and 76.7% for three and six cells spiked in, respectively (Table 2 ). Collectively, these results show that rVAR2-coated beads enable efficient isolation of cancer cells spiked into a complex blood sample, demonstrating the high sensitivity of the procedure. Fig. 3 rVAR2-based CTC isolation using magnetic beads.
a Representative confocal microscopy image of a PC3 cell captured by rVAR2-coated magnetic beads in combination with the IsoFlux™ device after spiking into 5 mL of healthy donor blood. Isolated cells were incubated with anti-cytokeratin FITC antibody (green), anti-CD45 PE antibody (red), and DAPI (blue). Scale bars, 10 µm. b A low-passage pancreatic ductal adenocarcinoma (PDAC) cell from a patient-derived xenograft captured by rVAR2-coated magnetic beads in combination with the IsoFlux™ device after spiking into 5 mL of healthy donor blood. Isolated cells were incubated with anti-cytokeratin FITC antibody (green), anti-CD45 PE antibody (red), and DAPI (blue). Scale bars, 10 µm. c CTC isolation efficiency by spike-in experiments; 100 PC3 cells or PDAC cells were spiked into 5 mL of blood and isolated by rVAR2-coated beads in combination with the IsoFlux™. The recovery was estimated by immunofluorescence staining of CK+, CD45−, DAPI+ cells (mean ± s. d., n = 3). d Recovery of PDAC cells spiked into 5 mL of blood. The number of spiked PDAC cells (10, 20, 50, 100, or 200 cells) is plotted versus the number of PDAC cells isolated by rVAR2-coated beads ( n = 5). Equation shows a linear regression model Full size image Table 2 Accuracy of rVAR2-based recovery of PDAC cells spiked into 5 mL of blood from healthy donors Full size table rVAR2-coated beads capture CTCs in patient blood samples In order to test whether rVAR2-coated beads enabled the isolation of CTCs from clinical samples, we analyzed blood samples from patients with pancreatic ( n = 9), hepatic ( n = 4), prostate ( n = 25), and lung ( n = 6) cancer at different stages of disease. rVAR2-coated beads captured CK+, CD45−, DAPI+ cells in all four cancer types (Fig. 4a–c ), whereas no CTCs were detected in blood samples from healthy donors ( n = 16). Fig. 4 rVAR2- and EpCAM-based CTC isolation and enumeration in cancer patients. 
a Number of CTCs isolated from 5 mL pancreatic ( n = 9), hepatocellular ( n = 4), and prostate ( n = 25) cancer patient-derived blood using rVAR2-coated beads. CTCs were enumerated by immunofluorescence stainings and defined as CK+ CD45− DAPI+. b Representative confocal microscopy image of a circulating tumor cell isolated with rVAR2 from blood derived from one of the pancreatic cancer patients (patient 4, Table 3 ). Isolated cells were stained with anti-cytokeratin FITC antibody (green), anti-CD45 PE antibody (red), and DAPI (blue). Scale bar, 10 μm. c Number of CK+ CD45− DAPI+ CTCs isolated using rVAR2 or anti-EpCAM antibody-coated beads from 5 mL blood from 15 of the stage II–III prostate cancer (PCa) patients (P < 0.02, Wilcoxon test for paired data). d Number of CK+ CD45− DAPI+ CTCs isolated using rVAR2 or anti-EpCAM antibody-coated beads from 5 mL blood from six of the stage III–IV pancreatic ductal adenocarcinoma (PDAC) patients. e Box-and-whiskers plot showing post-isolation characterization of CK+ CD45− DAPI+ CTCs using EpCAM or rVAR2 stain on CTCs isolated using rVAR2 ( n = 7) or anti-EpCAM antibody-coated ( n = 7) beads, respectively. The median is presented as the center line, whiskers as min to max values, and the 25th to 75th percentiles define the box. f Number of PBMCs contaminating the isolated CTCs from patient-matched blood samples using rVAR2 or anti-EpCAM antibody-coated beads. PBMC levels were estimated by immunofluorescence stainings and defined as CK−, CD45+, DAPI+ stained cells (P < 0.0001, Wilcoxon test for paired data) ( n = 23). Full size image Next, we sought to confirm that the isolated CK+, CD45−, DAPI+ cells originated from the diagnosed tumor of the given patient and thus were genuine CTCs. This was demonstrated in four of the samples from patients with PDAC.
As four of these patients harbored activating G12 KRAS mutations, the copies of mutated DNA in the total CTC isolate could be quantified using highly sensitive droplet digital polymerase chain reaction (ddPCR). ddPCR was performed on genomic DNA material directly retrieved from the enumeration slide by microdissection. As a positive control, we used the DNA extract from 100 primary PDAC cells carrying the same mutation plus 5000 PBMCs. Table 3 summarizes the enumeration data as well as the ddPCR mutation analysis. The G12 KRAS mutation numbers for each of the four tested samples correlated with the CTC enumeration data, indicating that the captured CTCs, defined by a CK+, CD45−, DAPI+ profile, did indeed carry the KRAS mutation and thus originated from the pancreatic tumor (Table 3 ). Table 3 KRAS mutational analysis of total DNA extract from CTC isolates from pancreatic cancer patient blood samples a Full size table rVAR2 captures more CTCs and less PBMCs Our data suggest that rVAR2 bears the potential for capturing a subset of non-epithelial CTCs in addition to the classical epithelial CTCs. To directly compare our ofCS-targeting CTC isolation method with the more common EpCAM-targeting strategy, we aimed to capture CTCs in a subset of the blood samples from lung, prostate, and pancreatic cancer patients using either rVAR2-coated or anti-EpCAM antibody-coated beads on the IsoFlux™ system. The rVAR2-based method detected higher CTC numbers than the EpCAM-based method in all patient-matched blood samples (Fig. 4d–f ). On average, the rVAR2-based CTC isolation resulted in 5.3×, 2.8×, or 6.4× higher CTC levels for lung ( n = 4), prostate ( n = 15), and pancreatic ( n = 6) cancer, respectively. To characterize the isolated prostate CTCs, we stained rVAR2-captured CTCs for EpCAM and anti-EpCAM antibody captured CTCs for ofCS (using rVAR2).
Only half of the rVAR2-captured CTCs were EpCAM positive, whereas all cells retrieved by the EpCAM-based method were ofCS positive as determined by rVAR2 staining (Fig. 4g ). As expected, these results support the notion that the rVAR2-based CTC isolation results in the capture of a broader spectrum of CTCs and that none of the EpCAM-based captured CTCs are missed by rVAR2. In order to confirm that the additional CK-positive cells isolated by rVAR2 were indeed CTCs, we compared the concentration (copies per microliter) of mutated KRAS genes in four of the PDAC isolates by performing ddPCR on the material from the CTC enumeration slide as described above. The mutational analysis was consistent with the CTC enumeration, showing an average of seven times higher levels of G12 KRAS mutation in the rVAR2 isolates compared to the EpCAM isolates (Table 4 ). Likewise, a higher average mutant allele fraction (MAF) for KRAS was found for the rVAR2-based CTC isolations (7.5%) than the EpCAM-based CTC isolations (1.1%). The higher MAF of the rVAR2-based CTC isolations most likely reflects the higher number of isolated CTCs as well as a lower PBMC contamination (Fig. 4h ). On average, the number of contaminating PBMCs was 82% less after capturing ofCS-positive CTCs compared to the EpCAM-based capture strategy ( P < 0.0001, Wilcoxon test for paired data) ( n = 23). Table 4 KRAS mutational analysis of total DNA extract from rVAR2- versus EpCAM-based CTC isolates Full size table To expand on these findings, we confirmed the presence of mutated KRAS at a single cell level. rVAR2-isolated cells from two of the pancreatic cancer blood samples were stained for EpCAM in addition to the routine CK and CD45 used for detection. To ensure high cell-picking efficiency, we applied the single cell isolation workflow developed by Neumann et al. 47 . The bead-coated cells were placed on a glass slide and processed using the automated micromanipulator CellCelector (ALS, Jena, Germany). 
A magnetic field kept the cells in situ and five EpCAM-positive (EpCAM+, CK+, CD45−, DAPI+) and five EpCAM-negative (EpCAM−, CK+, CD45−, DAPI+) single cells were selected based on morphological criteria. Preference was given to individual cells with a small, round shape and a single nucleus without signs of DNA fragmentation. Total RNA from picked single cells was isolated and used for cDNA synthesis. Following preamplification, the presence of KRAS mutations was verified by ddPCR. All EpCAM-positive and EpCAM-negative cells carried the expected KRAS mutation (Supplementary Fig. 3 ). As our results indicated that rVAR2-captured CTCs could contain a mesenchymal subpopulation, we stained CTC-enriched samples from two lung cancer patients for the mesenchymal marker vimentin. Samples from both patients contained cells that were double positive for CK and vimentin, strongly supporting our hypothesis that rVAR2 efficiently captures CTCs with an intermediate epithelial and mesenchymal phenotype (Fig. 4i ). Notably, a minor fraction of the double-negative cells (CK−, CD45−, DAPI+) was found to be vimentin positive, suggesting the presence of mesenchymal CK− CTCs within the rVAR2-enriched cell population. rVAR2 can be used for CTC isolation in early-stage cancer To test our rVAR2-based capture of prostate CTCs against the current clinical practice, four patient-matched blood samples were also analyzed on the CellSearch® CTC platform. The EpCAM capture on this platform did not detect CTCs above the pre-specified cutoff (≥2 CTCs per 7.5 mL blood) in any of the tested samples (Fig. 5a ). It should be noted, however, that the CellSearch® CTC platform was validated in metastatic prostate cancer, whereas the patients tested here had stage II disease 9 . Nevertheless, the EpCAM-based capture analyzed on the IsoFlux™ detected 5, 8, 5, and 5 CTCs in the patient-matched samples (estimated from the 5 mL capture as shown in Fig. 4d ).
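The counts per 7.5 mL quoted here are extrapolated from the 5 mL captures by linear volume scaling, matching the CellSearch® reporting volume. A minimal sketch; the function name is ours, not the paper's:

```python
# Linear extrapolation of a CTC count from the captured blood volume
# (5 mL here) to the reported volume (7.5 mL, the CellSearch® convention).
# Illustrative helper, not code from the study.

def scale_ctc_count(count, captured_ml=5.0, reported_ml=7.5):
    return count * reported_ml / captured_ml

# Hypothetical example: 8 CTCs found in a 5 mL capture
print(scale_ctc_count(8))  # -> 12.0
```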
Interestingly, the rVAR2-based detection method captured the highest number of cells with 12, 23, 15, and 23 CTCs, respectively, in the four patient samples (estimated from the 5 mL capture as shown in Fig. 4d ). Thus, considerably more CTCs were isolated using rVAR2. Fig. 5 rVAR2-capture of CTCs from prostate cancer patients. a Number of CK+ CD45− DAPI+ CTCs isolated from 7.5 mL blood from four prostate cancer patients using rVAR2 or anti-EpCAM antibody-coated beads or the CellSearch® CTC platform. b CTC enumeration using rVAR2-coated beads on blood samples from prostate cancer patients with different disease stages ( n = 25) as well as from healthy controls ( n = 16) and patients with non-malignant diseases ( n = 12). ( P = 0.0001 for association between disease severity and CTC number, Kruskal–Wallis test). UTI: urinary tract infection, BPH: benign prostatic hyperplasia Full size image To test whether the rVAR2 method had the potential to predict disease stage, rVAR2-based CTC numbers from prostate cancer patient blood samples ( n = 25 from Fig. 4a ) were classified according to the disease stage of the patient. CTCs were isolated from all patients, including all those in stage I and II ( n = 14), and the number of captured CTCs showed a statistically significant association with patient disease stage ( P = 0.0001, Kruskal–Wallis rank test) (Fig. 5b ). Importantly, no CTCs were detected in blood from any of the healthy controls ( n = 16) or patients with different non-malignant diseases ( n = 12). Intriguingly, these data suggest that the rVAR2-based method could be used for CTC detection and staging in low-grade disease stages. Discussion We have previously shown that ofCS is presented by nearly all cancer cells 26 . This suggests that ofCS could be an ideal target for the isolation of CTCs in a wide range of cancer types. In this study, we show that the rVAR2 malaria protein specifically detects ofCS on a variety of cancer cells in complex blood samples. 
Based on this, we developed a highly efficient CTC isolation protocol using the ofCS-targeting rVAR2 protein and magnetic beads in combination with the IsoFlux™ system. The analytical performance was evaluated and showed excellent recovery when cell lines and more heterogeneous primary cancer cells, ranging in numbers from 3 to 500 cells, were spiked into 5 mL blood from healthy donors. Importantly, this protocol enabled isolation of CTCs from pancreatic, hepatic, lung, and prostate cancer patients at various stages of disease, illustrating the broad applicability of the rVAR2-based CTC isolation method. Furthermore, we showed that rVAR2 resulted in a higher CTC yield in blood samples from prostate, pancreatic, and lung cancer patients as compared to EpCAM-based isolation of patient-matched samples. It is interesting that rVAR2 captures more CTCs in all tested blood samples. One explanation could be the wide distribution of ofCS on cancer cells. ofCS is a secondary modification to a wide range of different CSPGs, of which multiple are co-expressed on different cancer cells. rVAR2 isolation of CTCs is therefore not dependent on the expression of a single marker 27 . It is likely that the expression of ofCS on multiple membrane-bound proteins increases the density of target antigen on the cell surface and thereby improves the sensitivity of the rVAR2-based CTC isolation strategy compared to that of targeting a single protein, like EpCAM, which is more sensitive to protein up- or downregulation. Another explanation for the increase in CTC recovery is that ofCS is simply present on a broader spectrum of the patient CTCs. This is substantiated by the fact that only a fraction of the rVAR2-captured CTCs showed EpCAM positivity, while all the EpCAM-captured CTCs were ofCS positive. The EpCAM−, CK+, CD45− cells captured by rVAR2 were confirmed to be genuine CTCs by KRAS mutation analysis on single cells from pancreatic cancer patient blood samples.
While most carcinomas are considered EpCAM positive, the expression of EpCAM is often heterogeneous within the tumor. The intratumoral heterogeneity as well as a potential transition toward a more mesenchymal phenotype could partly explain why EpCAM-based methods only detect a fraction of the CTCs, despite the epithelial origin of the tumor. In contrast, we show that rVAR2 binding to cancer cells is maintained after induction of EMT. In line with this, we demonstrate that the rVAR2-enriched CTC population from cancer patients contains vimentin-positive cells as well as EpCAM-negative cells, indicating that the rVAR2 isolates include mesenchymal-like subpopulations. Interestingly, the observed ofCS display on mesenchymal-like carcinoma cells is in accordance with our recently published work where we demonstrated that ofCS plays a key role in cancer cell motility through integrin signaling pathways, and thus seems to be a requirement for cancer cells to invade and metastasize 27 . A complete analysis of the mesenchymal subsets of CTCs is, however, beyond the scope of this study. As CK represents a validated marker for CTC detection, we chose to stain the bead-captured cells and define CTCs based on being CK+, DAPI+, and CD45−. Therefore, the CTC enumeration used for this study is still dependent on the expression of the epithelial marker, CK. As shown for the A549 lung cancer cell line, CK is also frequently downregulated during EMT, and this has also been noted for subsets of cancer stem cells 6 . Thus, it is likely that capturing CTCs with rVAR2 followed by CK detection will miss certain subsets of CTCs. Intriguingly, vimentin+, CK−, CD45− cells were detected in two blood samples from non-small cell lung cancer (NSCLC) patients and may represent mesenchymal CTCs. Future studies will have to define and further validate the nature of the captured vimentin+, CK− putative CTCs.
Collectively, our data suggest that rVAR2 specifically binds cancer cells regardless of their state of epithelial differentiation, and thus will enable the isolation of additional subsets of CTCs. These results do not, however, rule out that rVAR2−, EpCAM+ CTCs may exist, as suggested by the weak rVAR2 staining on DU145 cells (Fig. 1b ). Future studies could test the combined use of EpCAM and rVAR2 for capturing CTCs to examine a potential cumulative effect. In addition to giving higher CTC yields, the rVAR2-based isolation also resulted in lower PBMC contamination compared to the EpCAM-based isolation on the IsoFlux™ system. The number of CD45+, CK−, DAPI+ PBMCs in the rVAR2 isolates was not affected by disease stage, and healthy subjects as well as subjects with non-malignant disease showed equal numbers of PBMCs. Despite the overexpression of CSPGs on cancer cells, CS in general is present on all cells including PBMCs. However, the decrease in PBMC contamination supports our previous findings that rVAR2 exhibits a high specificity toward the cancer-specific ofCS modification 26 . In combination with the high CTC levels, the relative purity of the CTC isolates makes the rVAR2-based method suitable for downstream analysis such as whole-genome sequencing. Currently, the only FDA-approved CTC detection platform, CellSearch®, is based on positive EpCAM selection of cells in patient blood. Our test of blood samples from four prostate cancer patients on the FDA-approved CellSearch® CTC platform yielded only one or no CTCs per sample. This is below the threshold for abnormality set forth for the instrument 4 . However, the CellSearch® CTC platform was validated in metastatic prostate cancer, whereas the patients tested here had stage II disease. Surprisingly, significantly more CTCs were isolated using the same isolation target, EpCAM, on the IsoFlux™ system.
Direct comparison of the CTC isolation by the two different platforms is, however, difficult due to the use of different antibody clones and magnetic beads for capture. The IsoFlux™ system utilizes micrometer-scale beads, which have been shown to provide a magnetic moment sufficient for capturing cells even with low target expression 43 . This is likely to increase the sensitivity compared to the nanoscale magnetic particles used in the FDA-approved CellSearch® CTC platform 43 . Also, the IsoFlux™ system uses microfluidic flow-based enrichment of magnetically labeled cells, which increases the purity of the final output compared to no-flow bulk isolation 43 . Furthermore, differences in the validation staining and the criteria used when assigning an object as a CTC could potentially affect the analytical interpretation of the result 33 . Thus, there are important technical differences between the two instruments that may explain, at least in part, the differences in CTC counts. The data from this study clearly indicate that rVAR2 combined with the IsoFlux™ system results in markedly higher assay sensitivity, which could be an important factor in the clinic, as it reduces the number of cancer cases with false-negative tests and potentially allows for CTC detection at earlier stages of disease. In addition, the increased sensitivity could make the rVAR2-based method suitable for risk monitoring in patients suffering from cancer types with reportedly low CTC levels as well as detection of minimal residual disease after therapy or surgical resection of a tumor. To further explore the clinical potential of the rVAR2-based method, CTC enumerations in 25 prostate cancer patients were associated with disease stage. Despite the relatively low number of patient samples available, the difference in levels of CTCs between the four stages of disease was remarkable. Thus, rVAR2 could potentially provide means for assessing the stage of the disease.
However, large-scale prospective clinical trials are warranted to validate the clinical sensitivity as well as prognostic value of this CTC capture method. In conclusion, we describe an efficient and cancer-specific method for the isolation of CTCs in complex blood samples using a recombinant VAR2CSA malaria protein. We show that rVAR2 specifically detects ofCS, a uniquely modified form of CS, on a wide variety of cancer cells, regardless of tumor origin. The rVAR2-based CTC isolation method showed markedly increased CTC retrieval compared to EpCAM-based techniques when testing blood from prostate, lung, and pancreatic cancer patients. Taken together, the rVAR2-based method not only provides a more sensitive and universal tool for CTC detection with the potential of a high clinical impact, but also allows for isolation of more, if not all, subsets of CTCs, which is of great value for downstream cellular analysis. This will hopefully provide further insights into the cellular characteristics of the highly metastatic subpopulation of cancer cells circulating in the blood of cancer patients, and thereby improve our understanding of metastasis formation and potentially enable the development of more efficient therapies. Methods Cell cultures SW480, COLO205, LNCaP, C32, Hs578T, and PDX-derived PDAC cells were cultured in RPMI 1640, whereas A549, PC-3, DU145, U2OS, and MDA-MB-231 were cultured in DMEM, HT-29 in McCoy’s, and MCF7 in EMEM with an additional supplement of 0.01 mg/mL insulin. All media for the cancer cell cultures were purchased from Sigma Aldrich and supplemented with 10% FBS, L-glutamine, penicillin, and streptomycin. All commercial cell lines originated from ATCC®. We regularly tested for mycoplasma contamination and none of the cell lines used in this paper have tested positive. Due to limitations of the in-house facilities, authentication of each cell line by genotyping is still in progress.
None of the cell lines used in this paper are listed in the database of commonly misidentified cell lines maintained by ICLAC. For retrieval of the PDX-derived PDAC cells, human tissue was obtained with written informed consent from all patients and expanded in vivo as PDX. PDX-354 was processed as previously described 48 . Briefly, PDX-derived tumors were minced and enzymatically digested with collagenase (STEMCELL Technologies) for 90 min at 37 °C, and after centrifugation for 5 min at 1200 rpm, cell pellets were resuspended and cultured in RPMI (Invitrogen) supplemented with 10% FBS and 50 U/mL penicillin–streptomycin. Primary cultures were tested for mycoplasma contamination every 2 weeks (MycoAlert™ Mycoplasma Detection Kit, Lonza). Production of rVAR2 The subunit DBL1-ID2a of VAR2CSA (rVAR2) was recombinantly expressed in Escherichia coli as previously described 49 . In brief, the FCR3 DBL1-ID2a with a C-terminal V5 tag, penta-His tag, and a split protein tag sequence was inserted into a modified pET15b plasmid (Novagen) and transformed into SHuffle T7 Express Competent E. coli cells (New England Biolabs, C3029H). Following lysis of the cell pellet, rVAR2 expressed in a soluble form was purified by immobilized metal affinity chromatography (IMAC) followed by size-exclusion chromatography. Purity of the protein was confirmed by SDS-PAGE and Western blot, whereas specificity toward ofCS was ensured in ELISA and on cancer cells using flow cytometry. Flow cytometry Blood samples were collected in CPDA Vacuette tubes and PBMCs were isolated using a Lymphoprep gradient. PBMCs were mixed with cancer cells in a 1:1 ratio and incubated with 250 nM rVAR2, or as otherwise indicated, for 30 min at 4 °C. Following three wash steps in PBS with 2% FBS, cells were incubated with anti-penta His Alexa Fluor 488 (Cat. No. 35310, Qiagen, 1:500) 50 and data were acquired using a FC500 flow cytometer (Beckmann Coulter).
Secondary antibody controls were included in all experiments by adding only anti-penta His Alexa Fluor 488 without the His-tagged rVAR2. EpCAM levels in cancer cell lines were detected by an anti-human EpCAM antibody [VU-1D9] PE (Cat. No. ab112068, Abcam, 1:75) 51 . Mean fluorescence intensities (MFIs) were analyzed using FlowJo™ software. Induction of EMT A549 cells were seeded at a density of 3000 cells/cm² in DMEM supplemented with 10% FBS, L-glutamine, penicillin, and streptomycin. After attachment, cells were starved in 0.5% FBS for 24 h. Cells were subsequently treated with 10 ng/mL TGF-β (Cat. No. phg9214, Life Technologies) or TGF-β suspension buffer as control (40 mM acetic acid, 0.1% BSA in ultra-pure water) for 24, 48, or 72 h to induce EMT. Transition was confirmed by morphology changes and a change in expression of epithelial and mesenchymal markers using Western blot and immunofluorescence studies. For the MET studies, TGF-β was replaced by DMEM with 0.5% FBS after 72 h of EMT induction and the partial return of the A549 cells to their epithelial state was observed for another 72 h. For Western blots, cells were lysed with EBC lysis buffer for 30 min and the protein extract was balanced using a Bradford assay. Equal amounts of protein lysate were loaded onto a NuPAGE Bis-Tris gel (ThermoFisher Scientific), after which samples were transferred to a nitrocellulose membrane (Biorad). Transfer was confirmed using Ponceau red staining and the membranes were blocked in 5% skimmed milk powder in TBS-T. Anti-GAPDH (14C10) antibody (Cat. No. 2118, Cell Signaling, 1:1000) 52 , anti-fibronectin antibody (Cat. No. 610077, BD Biosciences, 1:2000) 53 or anti-E-cadherin (1:1000), anti-vimentin (1:1000), anti-N-cadherin (1:500), anti-Snail (1:500), and anti-β-catenin (1:500) primary antibodies from the EMT Antibody Sampler Kit (Cat. No. 9782S, Cell Signaling) 54 were added to the membrane in TBS-T supplemented with 2% skimmed milk powder and incubated overnight at 4 °C.
To reduce antibody levels, the blots were cut into smaller lanes based on the molecular target size prior to antibody incubation (Supplementary Fig. 4 ). Following three washes in TBS-T, the membranes were incubated with anti-rabbit HRP (Cat. No. 9782S, Cell Signaling, 1:1000) 54 for 1 h at room temperature and the reactivity was detected using LumiGlo Reserve Chemiluminescent Substrate (KPL). For immunofluorescence studies, cells were grown on glass slides and fixed in 4% formaldehyde, washed three times in PBS, blocked with 1% BSA, 5% FBS, and 0.3% Tween in PBS for 1 h at room temperature, and incubated overnight at 4 °C with anti-pan CK Alexa Fluor 488 (Cat. No. 53-9003-82, eBioscience, 1:500) or primary anti-E-cadherin and anti-vimentin antibodies from the EMT Antibody Sampler Kit (Cat. No. 9782S, Cell Signaling, 1:200) 55 made in PBS with 1% BSA and 0.3% Triton X-100. After three washes in PBS, the fixed cells were incubated with secondary Fluorescein (FITC) Anti-Rabbit IgG Antibody (Cat. No. FI-1000, Vector Laboratories, 1:200) 56 for 1 h at room temperature. For analysis of F-actin, cells were blocked in 1% BSA and stained with Alexa Fluor® 594 Phalloidin (Cat. No. A12381, ThermoFisher, 1:40) 57 for 20 min at room temperature. All cells were stained with DAPI (Cat. No. D1306, Life Technologies) 26 and mounted using FluorSave Reagent (Merck Millipore). Staining was analyzed using a Nikon TE 2000-E confocal microscope with a 60× oil immersion objective lens (DIC). CytoTrack One hundred cancer cells were mixed with 500,000 PBMCs. Cells were incubated with 250 nM rVAR2 for 30 min at 4 °C, followed by anti-penta His Alexa Fluor 488 (Cat. No. 35310, Qiagen, 1:500) and anti-human CD45 Cy5 (Cat. No. 19-0459, eBioscience, 1:10) 34 . After fixation in 4% formaldehyde, the cells were stained with DAPI (Cat. No. D1306, Life Technologies) and mounted on glass slides using FluorSave Reagent (Merck Millipore).
rVAR2-positive cells were located using a CytoTrack CT4 Scanner. The resulting table of hotspots was subsequently analyzed for morphology and rVAR2, DAPI, and CD45 staining, as described 34 . Preparation of rVAR2- or anti-EpCAM antibody-coated beads The recombinantly expressed VAR2CSA (rVAR2) protein used in the rVAR2 CTC isolation method was designed to include a 13-amino-acid peptide (SpyTag) from the fibronectin-binding protein Fba N-terminally, which enables covalent isopeptide bond formation to a biotinylated 12 kDa SpyCatcher protein 45 . The SpyCatcher was produced in E. coli BL21 as a soluble poly-His-tagged protein, and purified by Ni²⁺ affinity chromatography. Purity was determined by SDS-PAGE and quality of the protein was ensured by testing its capacity to form an isopeptide bond to a tagged protein. The tagged rVAR2 and biotinylated SpyCatcher fragment were incubated at room temperature for 1 h. After this step, the protein was incubated with CELLection™ Biotin Binder Dynabeads® (4.5 µm) at room temperature for at least 30 min, resulting in rVAR2-coated beads (0.43 µg biotinylated protein per microliter of bead suspension). For the EpCAM-based detection, CELLection™ Pan Mouse IgG Dynabeads® (4.5 µm) in combination with an anti-EpCAM antibody [Ber-EP4] (Cat. No. ab7504, abcam) 58 were used. Anti-EpCAM antibody and beads were incubated for 30 min (0.02 µg anti-EpCAM antibody per microliter of bead suspension). Remaining protein or antibody was removed by carefully washing the beads in PBS containing 0.1% BSA three times, each time using a neodymium magnet (10 × 12 mm) to drag the beads into a pellet. Cell culture and spike-in preparations Prior to the spike-in experiments, PC3 cells or primary PDAC (PDX-derived) cells were harvested using trypsin−EDTA (Sigma-Aldrich) and resuspended in culture medium. Cell concentration was measured by manually counting the number of viable cells in a 1:1 mixture with Trypan Blue solution (Sigma-Aldrich).
The suspensions were subsequently spiked into blood to achieve the desired concentrations. GFP-expressing PDAC cells were used for the spike-in experiment with three and six cells. To ensure a precise cell count before spike-in, we did serial dilutions and placed the cells in a low-adhesion 96-well plate. After validation of the cell counts by microscopy, the cells were transferred to 5 mL of blood. These samples were then processed as described below with the exception of the CK staining. CTC isolation from blood Blood samples were collected under the Barts Cancer Tissue Biobank Ethics committee protocol and informed written consent was obtained for all enrolled subjects. Blood was received in K2 EDTA tubes and processed within 2 h. The blood samples were divided into aliquots of 5 mL and red blood cells were lysed in 45 mL Red Blood Cell (RBC) lysis buffer containing 0.155 M ammonium chloride, 0.01 M potassium hydrogen carbonate, and 0.1 mM EDTA for 10 min. After centrifugation at 400× g for 8 min, the cell pellet was gently washed in PBS once. The centrifugation step was repeated, and finally cells were resuspended in RPMI medium containing 1% FBS in addition to 1 mM CaCl₂ and 5 mM MgCl₂ and transferred to a low-retention microcentrifuge tube (Fisherbrand). Under these conditions, cells were incubated with ~1.6 × 10⁶ rVAR2- or anti-EpCAM antibody-coated magnetic beads at 4 °C. Cancer cells adhering to beads were captured by running the isolation protocol on the IsoFlux™ machine (Fluxion). Isolated cancer cells were then retrieved in 200 µL RPMI medium containing 1% FBS in addition to 1 mM CaCl₂ and 5 mM MgCl₂ and transferred to a low-retention microcentrifuge tube (Fisherbrand). A neodymium cylinder magnet was used to drag cells bound to beads toward the bottom of the tube, enabling removal of the supernatant.
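Recovery in the spike-in experiments described above reduces to a recovered-over-spiked count per replicate. A sketch with hypothetical replicate counts spanning the 3–500 cell range used in the study (the per-replicate numbers are illustrative, not study data):

```python
# Per-replicate recovery fractions for paired spike-in counts.
# The replicate numbers below are hypothetical examples, not study data.

def recovery_rates(spiked, recovered):
    return [rec / spk for spk, rec in zip(spiked, recovered)]

spiked    = [3, 6, 100, 500]   # cells added to 5 mL of donor blood
recovered = [3, 5, 86, 430]    # cells enumerated after bead isolation
rates = recovery_rates(spiked, recovered)
mean_recovery = sum(rates) / len(rates)
print([f"{r:.0%}" for r in rates], f"mean: {mean_recovery:.0%}")
```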
The cells were then fixed in 4% PFA for 10 min and added onto glass slides, on which a circle with the same size as the magnet had been drawn using a water-repellent Dako pen. When adding or removing buffer from the cells, the glass slide was placed on top of the magnet. The cells were blocked for 5 min in 10% normal donkey serum (NDS) prior to staining with PE-conjugated anti-CD45 [5B-1] antibody (Cat. No. 130-080-201, MACS Miltenyi Biotec, 1:100) 59 for 30 min at room temperature. Hereafter, cells were permeabilized using 0.2% Triton X-100 diluted in PBS containing 0.5% BSA and 2 mM EDTA. This step was followed by staining of the cells with FITC-conjugated anti-CK [CK3-6H5] antibody (Cat. No. 130-080-101, MACS Miltenyi Biotec, 1:10) 60 for 30 min at room temperature. To enable visualization of cell nuclei, the cells were stained with DAPI. The sample was mounted using Dako Faramount Aqueous Mounting Medium. Quantification of cancer cells Enumeration of cancer cells was done manually using the Ariol image analysis system (Leica Biosystems Ltd., UK) with an Olympus BX61 microscope. Cancer cells were identified as CK+, CD45−, DAPI+ cells. PBMCs were identified as CK−, CD45+, DAPI+ cells. Four-color immunofluorescence staining on cancer cells CTCs from prostate cancer patients isolated using rVAR2-coated beads were stained for EpCAM positivity using an anti-EpCAM antibody [Ber-EP4] (Cat. No. ab7504, abcam, 1:100) in combination with an Alexa Fluor 647-conjugated goat anti-Mouse IgG secondary antibody (Cat. No. A-21235, Invitrogen, 1:200). CTCs from prostate patients isolated using anti-EpCAM antibody-coated beads were stained for rVAR2 positivity using rVAR2 protein containing a V5-tag in combination with a FITC-conjugated anti-V5 antibody (Cat. No. R963-25, Invitrogen, 1:100) 26 . For these samples, the CK positivity was observed using an anti-CK (CK3-6H5) antibody (Cat. No.
130-090-866, MACS Miltenyi Biotec, 1:10) in combination with an Alexa Fluor 647-conjugated goat anti-Mouse IgG secondary antibody (Cat. No. A-21235, Invitrogen, 1:200) 61 . For vimentin staining, cells were stained with FITC-conjugated anti-CK (CK3-6H5) antibody (Cat. No. 130-080-101, MACS Miltenyi Biotec, 1:10), and anti-vimentin (EP21) antibody (Cat. No. AC0024, Epitomics, 1:50) in combination with an Alexa Fluor 647-conjugated anti-rabbit secondary antibody (Cat. No. A-31573, Invitrogen, 1:200). Verification of PDAC CTCs using ddPCR Total genomic DNA (gDNA) was extracted directly from the sample material on the slide used for enumeration (QIAamp DNA Micro Kit, Qiagen). The QX100 Droplet Digital PCR System (ddPCR, Biorad), PrimePCR KRAS mutant, and WT assays (Biorad, dHsaCP2000001 (G12D), dHsaCP2000002 (G12D WT), dHsaCP2000005 (G12V), dHsaCP2000006 (G12V WT), dHsaCP2000009 (G12R), and dHsaCP2000010 (G12R WT)) were used to detect the following KRAS mutations in gDNA: G12D, G12V, and G12R. A total of 50 ng of gDNA was used for each PCR. PDAC 215 (G12D), PDAC 247 (G12V), and PDAC JH033 (G12R) were used as positive controls and leukocytes of a healthy donor served as a negative control. In ddPCR, the samples containing gDNA were partitioned into 20,000 droplets and loaded into a thermal cycler. Following PCR amplification, droplets from each sample were streamed in single file through the droplet reader. Absolute concentrations of KRAS mutant and WT DNA copies were determined using the QuantaSoft software provided by the manufacturer. Briefly, positive droplets, which contain at least one copy of the target, exhibit increased fluorescence. The system detects mutant (HEX) and WT (FAM) alleles by counting the number of droplets positive for each fluorophore. Single cell isolation using the CellCelector CTCs were immunofluorescently stained with FITC-conjugated anti-CK [CK3-6H5] antibody (Cat. No.
130-080-101, MACS Miltenyi Biotec, 1:10), PE-conjugated anti-CD45 [5B-1] antibody (Cat. No. 130-080-201, MACS Miltenyi Biotec, 1:100), EpCAM [Ber-EP4] (Cat. No. ab7504, Abcam, 1:100) with Goat anti-Mouse Alexa Fluor® 647 (AF647) secondary antibody (Cat. No. A-21235, ThermoFisher, 1:200), and DAPI. Single cells were isolated using an automated micromanipulator, CellCelector (ALS GmbH, Jena, Germany). This system consists of an inverted fluorescent microscope (CKX41, Olympus, Tokyo, Japan) with a CCD camera system (XM10-IR, Olympus, Tokyo, Japan) and a vertical 30 µm glass capillary on a robotic arm. ALS CellCelector Software 3.0 (ALS, Jena, Germany) was used for analysis. Labeled cell solutions were transferred to a glass slide and cells were allowed to settle. The cells were visualized by bright-field (BF) and fluorescent microscopy for nuclear DAPI and CD45-PE staining to verify morphology and CD45 negativity at 20× magnification. Then CK+EpCAM+ and CK+EpCAM− target cells were detected in the FITC and AF647 channels at 40× magnification. The target cells were detected by the software following pre-defined settings, manually approved, and then fully automatically aspirated and transferred into PCR tubes containing 100 µL of lysis buffer for the Guanidine Thiocyanate (GTC) method. Total RNA was isolated by the GTC method using standard protocols 62 . The purified RNA was used for cDNA synthesis using the SuperScript VILO Master Mix according to the manufacturer’s recommendations (Cat. No. 1455280, Invitrogen), followed by preamplification using AmpliTaq Gold 360 Master Mix (Cat. No. 4398881, Applied Biosystems), and 200 nM each of forward and reverse primers for KRAS [5′-CTGAAAATGACTGAATATAAACTTGTGG-3′ (forward) and 5′-TAGCTGTATCGTCAAGGCACTC-3′ (reverse)]. Preamplified DNA was used for KRAS mutation detection by ddPCR.
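Absolute quantification in the ddPCR workflow described above relies on Poisson statistics over the droplet partitions: with a fraction p of positive droplets, the mean target copies per droplet is λ = −ln(1 − p), and dividing by the droplet volume gives copies per microliter. A minimal sketch; the per-droplet volume is an assumed typical value, not one stated in the paper:

```python
import math

# Poisson-corrected absolute quantification for droplet digital PCR.
# droplet_nl = 0.85 is an assumed typical droplet volume (in nL),
# not a value taken from the paper.

def ddpcr_copies_per_ul(positive, total, droplet_nl=0.85):
    p = positive / total
    if p >= 1.0:
        raise ValueError("all droplets positive: assay saturated")
    lam = -math.log(1.0 - p)          # mean copies per droplet
    return lam / (droplet_nl * 1e-3)  # nL -> µL

# e.g. 200 positive droplets out of 20,000 accepted droplets:
conc = ddpcr_copies_per_ul(200, 20_000)
print(f"{conc:.1f} copies/µL")
```

The logarithmic correction matters mainly at high positive fractions, where one droplet may hold several target copies; at low fractions λ ≈ p and the naive count is nearly identical.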
Consistent with the analysis of total genomic CTC extract, the PrimePCR KRAS mutant and WT assays (Biorad, dHsaCP2000001 (G12D), dHsaCP2000002 (G12D WT), dHsaCP2000005 (G12V), and dHsaCP2000006 (G12V WT)) were used to detect the G12D and G12V KRAS mutations. CellSearch The identification of CTCs utilizing the CellSearch® CTC platform (Celltracks Autoprep and Celltracks Analyzer II) was performed as per the manufacturer’s instructions. Informed consent was obtained from all subjects and blood samples were collected in CellSave tubes. The CTC assay was performed with the CellSearch® CTC Kit, which contains reagents for immunomagnetic isolation of EpCAM-positive cells. The isolated cells were stained with DAPI, PE-conjugated CK monoclonal antibodies, and APC-labeled CD45 monoclonal antibody (CELLTRACKS® AUTOPREP® System). Statistics STATA 14 software was used for all analyses. The effect of TGFβ treatment of A549 cells on rVAR2 binding was tested in three independent experiments by declaring the MFI data to be panel data and testing TGFβ treatment versus control in a generalized least squares regression model using the xtgls command, including treatment group and ln(rVAR2 concentration) as explanatory variables (Fig. 2c ). The abilities of the anti-EpCAM and rVAR2 assays to capture CTCs from patient samples were compared by the Wilcoxon signed-rank test for paired data. P values are from two-sided tests (Fig. 4d–f ). The association between cancer stage and the number of CTCs detected by rVAR2 was tested by the Kruskal–Wallis test, followed by comparing groups with the Mann–Whitney U test (Fig. 5b ). Data availability Data supporting the findings of this study are within this manuscript or available from the corresponding authors upon reasonable request.
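The Kruskal–Wallis statistic used for the stage association can be written out as H = 12/(N(N+1)) · Σᵢ Rᵢ²/nᵢ − 3(N+1) computed on average ranks. A from-scratch sketch with hypothetical CTC counts per stage group; the tie correction applied by statistical packages such as STATA is omitted for brevity:

```python
from itertools import chain

# Minimal Kruskal-Wallis H statistic (no tie correction), illustrating
# the stage-association test; the group data below are hypothetical.

def average_ranks(values):
    """Ranks 1..n with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def kruskal_h(*groups):
    data = list(chain.from_iterable(groups))
    n = len(data)
    r = average_ranks(data)
    h, idx = 0.0, 0
    for g in groups:
        rank_sum = sum(r[idx:idx + len(g)])
        h += rank_sum ** 2 / len(g)
        idx += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical CTC counts for four disease stages; in practice H is
# compared against a chi-square reference with (groups - 1) degrees
# of freedom to obtain the P value.
stages = ([1, 2, 3], [4, 6, 8], [9, 12, 15], [20, 23, 30])
h = kruskal_h(*stages)
print(f"H = {h:.2f}")
```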
Change history 07 June 2022 A Correction to this paper has been published: | In a spectacular new study, researchers from the University of Copenhagen have discovered a method of diagnosing a broad range of cancers at their early stages by utilising a particular malaria protein that sticks to cancer cells in blood samples. The researchers hope that this method can be used in cancer screenings in the near future. Each year, cancer kills approximately 9 million people worldwide, and early diagnosis is crucial to efficient treatment and survival. Now, researchers from the Faculty of Health and Medical Sciences at the University of Copenhagen have come up with a new method of diagnosing cancer in its early stages in humans by way of a malaria protein called VAR2CSA, which sticks to cancer cells. All the scientists need to determine whether or not a person has cancer is a blood sample. "We have developed a method in which we take a blood sample, and with great sensitivity and specificity, we're able to retrieve the individual cancer cells from the blood. We catch the cancer cells in greater numbers than existing methods, which offers the opportunity to detect cancer earlier and thus improve outcome. You can use this method to diagnose broadly, as it's not dependent on cancer type. We have already detected various types of cancer cells in blood samples. And if there is a cancer cell in your blood, you have a tumour somewhere in your body," says Professor Ali Salanti from the Department of Immunology and Microbiology and joint author of the study, which has just been published in the scientific journal, Nature Communications. Today, there are several ways of detecting cancer cells in blood. Most of them are based on a particular marker, which is found on the surface of tumour cells. 
However, not all tumour cells display this marker, which renders these methods unable to detect tumour cells spread to other organs such as the liver, lung and bones, as opposed to the method based on the malaria protein. A few years ago, Ali Salanti and his fellow researchers discovered a new method of treating cancer with the protein VAR2CSA, which is produced by malaria parasites. And these discoveries have formed the basis of the research group's new method of diagnosis. Among other things, they have shown that the malaria protein sticks to a specific sugar molecule, which is found in more than 95 percent of all types of cancer cells. In other words, this new method of diagnosis can be used to detect practically all types of cancer. Circulating tumour cells A cancerous tumour consists of several different cancer cell types, some of which spread by wandering through the tissue and into the blood. These cancer cells in the blood are called circulating tumour cells, and they can develop into metastases, which cause up to 90 percent of all cancer-related deaths. If cancer originating in the lungs spreads to the brain, it is called brain metastasis. The new method detects the circulating tumour cells in a blood sample by using the malaria protein. During the development of this new method, the researchers took 10 cancer cells and added them to five millilitres of blood, and were subsequently able to retrieve nine out of 10 cancer cells from the blood sample. "We count the number of cancer cells, and based on that, we're able to make a prognosis. You can, for example, decide to change a given treatment if the number of circulating tumour cells does not change during the treatment the patient is currently undergoing.
This method also enables us to retrieve live cancer cells, which we can then grow and use for testing treatments in order to determine which type of treatment the patient responds to," says Postdoc Mette Ørskov Agerbæk, Department of Immunology and Microbiology and joint author of the study. Future screening programme The researchers are following up on their results in a large clinical study in which many more patients with pancreatic cancer have been tested using this method. "We found strikingly high numbers of circulating tumour cells in every single patient with pancreatic cancer, but none in the control group," says Professor Christopher Heeschen, School of Medical Sciences, UNSW, Sydney, Australia, and joint author of the study. The researchers envision using the method to screen people at high risk of developing cancer in the future. However, they also expect that this method can be used as a biomarker indicating whether a patient with mostly vague symptoms actually has cancer or not. This will enable doctors to determine the stage of the disease. "Today, it's difficult to determine which stage cancer is at. Our method has enabled us to detect cancer at stages one, two, three and four. Based on the number of circulating tumour cells we find in someone's blood, we'll be able to determine whether it's a relatively aggressive cancer or not, and then adjust the treatment accordingly," explains Professor Ali Salanti, who adds that a much larger clinical study is needed before firm correlations to tumour staging can be made. | 10.1038/s41467-018-05793-2
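The spike-in validation above (10 tumour cells added to five millilitres of blood, nine recovered) corresponds to a 90% recovery rate. A quick sketch of the point estimate with an exact binomial confidence interval (illustrative arithmetic, not an analysis from the paper):

```python
from scipy.stats import binomtest

spiked, recovered = 10, 9
recovery = recovered / spiked  # 0.9, i.e. nine of ten spiked cells retrieved

# Exact (Clopper-Pearson) 95% confidence interval on the recovery rate
ci = binomtest(recovered, spiked).proportion_ci(confidence_level=0.95)
```

The wide interval from a single 10-cell spike-in is one reason larger clinical cohorts are needed before firm conclusions are drawn.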
Medicine | Using optogenetics and holographic projection, scientists aim to implant perceptions in brain | Alan R. Mardinly et al, Precise multimodal optical control of neural ensemble activity, Nature Neuroscience (2018). DOI: 10.1038/s41593-018-0139-8 Journal information: Nature Neuroscience | http://dx.doi.org/10.1038/s41593-018-0139-8 | https://medicalxpress.com/news/2018-04-optogenetics-holographic-scientists-aim-implant.html | Abstract Understanding brain function requires technologies that can control the activity of large populations of neurons with high fidelity in space and time. We developed a multiphoton holographic approach to activate or suppress the activity of ensembles of cortical neurons with cellular resolution and sub-millisecond precision. Since existing opsins were inadequate, we engineered new soma-targeted (ST) optogenetic tools, ST-ChroME and IRES-ST-eGtACR1, optimized for multiphoton activation and suppression. Employing a three-dimensional all-optical read–write interface, we demonstrate the ability to simultaneously photostimulate up to 50 neurons distributed in three dimensions in a 550 × 550 × 100-µm 3 volume of brain tissue. This approach allows the synthesis and editing of complex neural activity patterns needed to gain insight into the principles of neural codes. Main Neural circuits can encode information in the rate 1 , timing 2 , number 3 , and synchrony of action potentials 4 , as well as in the identity of active neurons 5 . Yet the technical inability to create or edit custom patterns of spatiotemporal neural activity is a key impediment to understanding the logic and syntax of the neural code 6 . Experimental approaches that allow high-fidelity temporal control of neural activity 7 from specific groups of neurons would make it possible to systematically vary spike rate, timing, synchrony, and number and would permit definitive tests of how neural ensembles encode information. 
Similarly, deleting action potentials from functionally defined neurons will allow experimenters to probe the elements of endogenous activity patterns that contribute to neural computations and behaviors with unprecedented precision. Optogenetics offers the basis for such a technology, but many neural computations rely on genetically similar yet functionally distinct neurons that are physically intermixed 8 , 9 , and are thus beyond the reach of conventional approaches. Two-photon (2P) optogenetics 10 , 11 , 12 , 13 allows experimenters to stimulate neurons based on their precise spatial location as well as their genetic identity. Combined with 2P calcium imaging, this allows activation of specific neurons on the basis of any desired feature 14 , 15 , 16 , 17 , 18 . However, in vivo all-optical approaches in mice have suffered from low temporal precision (>10 ms jitter) 14 , 15 , 16 , or could only photostimulate several neurons simultaneously 15 , 19 and thus could not create precise neural activity patterns. Furthermore, until recently, in vivo multiphoton suppression of neural activity was not possible 20 , critically limiting the ability of experimenters to assess the necessity of spikes originating from specific, functionally defined neurons. Several optical methods can stimulate neurons using 2P excitation, although all have limitations. A standard 2P optogenetic technique is to scan a small laser spot over neurons that express variants of slow red-shifted opsins like C1V1 T/T 10 , 13 . While this approach effectively drives spiking, the slow kinetics of the scanning laser and the opsin preclude precise control of the specific sequence of neural activity, resulting in poor trial-to-trial reproducibility 12 , 14 , 16 .
In contrast, scanless multiphoton stimulation using computer-generated holography (CGH) and temporal focusing allows simultaneous illumination of the entire somatic membrane and can provide higher temporal fidelity when used with a fast opsin 19 , 21 , 22 , 23 , 24 , 25 . However, existing excitatory opsins are too weak or too slow to drive precise neural activity patterns with scanless 2P optogenetics in vivo 15 , 20 . Additionally, only recent innovations have allowed scanless holographic optogenetics to function with high axial resolution in three dimensions (3D) 26 , 27 . Therefore, to synthesize and edit custom distributed patterns of neural ensemble activity, we engineered powerful new opsins optimized for multiphoton optogenetics. We developed ChroME, an ultrafast and highly potent opsin with 3–5× larger photocurrents than opsins commonly used for multiphoton optogenetics 12 , 13 , 28 . This allows high-fidelity sub-millisecond control of pyramidal neuron spiking. In vivo, we activated ensembles of neurons expressing ST-ChroME using 3D scanless holographic optogenetics with temporal focusing (3D-SHOT) 27 , synthesizing precise sequences of neural activity with cellular resolution and millisecond precision. By combining 3D-SHOT with volumetric 2P calcium imaging, we obtained all-optical control of distributed neural ensembles, simultaneously stimulating up to 50 neurons with high temporal precision and cellular resolution. Furthermore, to achieve all-optical suppression, we improved and employed the extremely potent inhibitory anion opsin, GtACR1 29 , which exhibits 80-fold increases in photocurrent over previously employed pump-based opsins for multiphoton optogenetic silencing 13 . Critically, we identified a strategy to prevent the cellular toxicity we observed when the transgene was expressed conventionally. 
Using this new construct, IRES-ST-eGtACR1, with 3D-SHOT, we provide electrophysiological and all-optical demonstration of high fidelity silencing of neural activity from identified neurons in vivo. Together, these data represent a novel technological approach for precise multimodal control of neural ensemble activity with high fidelity. Results Requirements for controlling neural activity with millisecond precision To control neural activity with sub-millisecond precision, we sought an opsin and a 2P stimulation approach capable of generating large currents with rapid kinetics 30 . Injecting current into patched neurons (layer (L) 2/3 pyramidal cells, used throughout this study unless otherwise noted) in brain slices, we could reliably evoke precise spike trains with brief, high-amplitude current steps. In contrast, long current injections, analogous to some spiral scanning approaches 14 , 16 , resulted in variable spike number and timing, but required lower current amplitudes (Fig. 1a,b and Supplementary Fig. 1 ). Fig. 1: ST-ChroME allows precise high-fidelity 2P activation. a , Overlay of 25 trials from a representative L2/3 pyramidal neuron showing the V m response to 5-ms current pulses or sustained current injection ( I inj ) near the neuron’s rheobase. b , Current needed to induce action potentials as a function of stimulus duration ( n = 8 L2/3 pyramidal neurons). c , Left: grand average photocurrent traces from neurons expressing ST-C1V1 T/T (black, n = 19), ST-ChrimsonR (red, n = 11), ST-Chronos (green, n = 25), or ST-ChroME (magenta, n = 11), via in utero electroporation. Right: photocurrent amplitudes elicited by CGH stimulation. Dashed line represents mean rheobase for 5-ms stimulation of L2/3 pyramidal neurons (ST-ChroME vs. others: * P < 0.0008, Kruskal–Wallis test with multiple comparisons correction; all other comparisons: P > 0.13). 
d , Top: duration of CGH stimulation needed to elicit action potentials in neurons expressing each opsin ( n = 8 ST-C1V1 T/T , n = 5 ST-ChrimsonR, n = 25 ST-Chronos, n = 8 ST-ChroME). Bottom: fraction of electroporated L2/3 neurons that could be driven at 1 Hz with best CGH stimulation. e , Traces shown in c scaled to the peak current amplitude for each. f , Ten overlaid traces from representative L2/3 neurons expressing ST opsins during 1-Hz CGH stimulation (red line indicates light pulses). g , Spike latency for 1-Hz CGH stimulation of L2/3 neurons expressing ST-opsins (ST-C1V1 T/T 25.9 ± 4 ms, n = 15; ST-ChrimsonR 18.8 ± 3.8 ms, n = 14; ST-Chronos 4.4 ± 0.65 ms, n = 23; ST-ChroME 3.48 ± 0.49 ms, n = 12; ST-ChroME vs. ST-C1V1 T/T, P = 0; vs. ST-ChrimsonR, P = 0.0028; vs. ST-Chronos, P = 0.95, by Kruskal–Wallis test with multiple comparisons correction). h , Jitter for 1-Hz CGH stimulation of neurons expressing ST-opsins (ST-C1V1 T/T 8.6 ± 1.8 ms, n = 11; ST-ChrimsonR 12 ± 6.3 ms, n = 14; ST-Chronos 1.2 ± 0.36 ms, n = 20; ST-ChroME 0.54 ± 0.1 ms, n = 10; ST-ChroME vs. ST-C1V1 T/T, P = 0.0011; vs. ST-ChrimsonR, P = 0.048; vs. ST-Chronos, P = 0.7; by Kruskal–Wallis test with multiple comparisons correction). i , 2P image of whole cell recording from L2/3 pyramidal neuron expressing ST-ChroME-mRuby2 (image representative of n = 10 ST-ChroME-mRuby2 neurons). j , Fidelity index in response to Poisson-like stimulation (ST-ChroME ( n = 7) vs. ST-C1V1 T/T ( n = 6), P = 0; vs. ST-ChrimsonR ( n = 4), P = 0; vs. ST-Chronos ( n = 9), P = 0.99; by Kruskal–Wallis test with multiple comparisons corrections). k , Left: representative traces of two simultaneously recorded ST-ChroME + neurons stimulated with an identical Poisson train for 2.5 ms with a temporal offset of 3 ms. Top middle: example light-evoked spikes in the two neurons. Bottom middle: distribution of the difference in spike times from an example pair of neurons. Right: difference in mean spike times for n = 7 pairs. 
l , Bar graph showing the fraction of neurons expressing ST-Chronos (green) or ST-ChroME (magenta) that could be optogenetically driven in vivo ( P = 0.0089, two-sided Fisher’s exact test). All data represent means ± s.e.m. Full size image To achieve fast currents using 2P excitation, we adopted scanless holographic approaches, CGH and 3D-SHOT, that can simultaneously illuminate the entire soma. We developed an experimental setup that combines a standard 2P imaging system with a custom photostimulation laser path. The photostimulation path features a high-power (20 W or 40 W) 2-MHz, 1,040-nm laser with a spatial light modulator (SLM) placed in Fourier space to simultaneously target neurons in 3D. For targeting neural ensembles in vivo with high spatial resolution, the CGH path was replaced with 3D-SHOT 27 , a new form of 3D holography with temporal focusing that we recently developed and further improved (Supplementary Fig. 2 , Supplementary Tables 1 and 2 , and Methods ). We first sought to identify the best opsin for precise temporal control of neural activity using scanless approaches. We rejected channelrhodopsin2 (ChR2), as its 2P excitation peak is centered at 920 nm and would be strongly activated by GCaMP imaging, resulting in undesirable optical crosstalk 10 . Instead we tested several red-shifted opsins: C1V1 T/T 13 , ChrimsonR, and Chronos 28 (Supplementary Fig. 3a ). CGH stimulation (5 ms, 0.4 mW/µm 2 ) of Chronos + neurons elicited small photocurrents that were typically unable to generate action potentials (205 ± 50 pA). To improve spatial resolution and photocurrent amplitude, we employed the Kv2.1 sequence tag to synthesize ST 31 , 32 variants of Chronos, C1V1 T/T , and ChrimsonR that increased photocurrents elicited by CGH stimulation (ST-Chronos: 460 ± 60 pA; Supplementary Fig. 3b ). Neurons expressing ST-opsins had photocurrent kinetics similar to those in published reports 28 and had normal intrinsic properties (Supplementary Fig. 3c–i ). 
Despite this improvement, photocurrents were insufficient to reliably spike pyramidal neurons (Fig. 1a ). We therefore optimized the pulse parameters of the stimulation laser. We tested the effect of peak power, pulse energy, and average power on photocurrents by systematically varying the laser repetition rate (2–40 MHz) and pulse dispersion. We could saturate photocurrents across a range of powers, but high-peak powers (i.e., low rep-rates) saturated more efficiently (Supplementary Fig. 4a–d ). Stimulation powers used in experiments did not damage cells 33 (Supplementary Fig. 4e,f ). For all subsequent experiments, we employed 250- to 300-femtosecond laser pulses at a repetition rate of 2 MHz with power varying from 0.1 to 0.4 mW/μm 2 . Even after optimization, average maximal photocurrents elicited by CGH stimulation (5 ms, 0.4 mW/um 2 ) of pyramidal neurons expressing ST-opsins remained relatively weak (ST-C1V1 T/T : 380 ± 80 pA, ST-ChrimsonR: 430 ± 60 pA, ST-Chronos: 530 ± 50 pA; Fig. 1c ). A simple integrate-and-fire model of typical L2/3 neurons suggested that these photocurrents were unlikely to generate spikes (Supplementary Fig. 5a ). Even when we used long light pulses and high light powers, only a minority of neurons expressing these opsins could be activated by CGH stimulation (Fig. 1d and Supplementary Fig. 5 ). ChroME allows high fidelity replay of complex activity patterns Since none of these opsins could reliably spike neurons in response to brief holographic stimulation, we engineered a stronger opsin with the goal of holographically stimulating large ensembles of neurons. We focused on mutating ST-Chronos, aiming to develop a variant that would preserve its fast kinetics but would generate sufficiently large photocurrents with brief light pulses. Guided by homology modeling to the crystal structure of C1C2 34 (Supplementary Fig. 
6a ), we mutated the pore region of ST-Chronos, identifying a neutral putative pore residue (M140) in Chronos that is negatively charged in other opsins (Supplementary Fig. 6b ). We reasoned that mutating this methionine to a negatively charged residue might increase the flux of positive ions through the pore and therefore increase current amplitudes. We tested several mutations via one-photon stimulation in Chinese hamster ovary (CHO) cells against a panel of ST-opsins and identified several mutants with larger photocurrent amplitudes than any other opsin that we tested (Supplementary Fig. 6c,d ). One of these mutants, ST-Chronos-M140E, or ‘ChroME’, exhibited rapid decay kinetics while exhibiting photocurrents more than 10 × larger than those of ChR2 (Supplementary Fig. 6c–e ). Neurons electroporated with ST-ChroME exhibited photocurrent amplitudes 3–5 × larger than ST-C1V1 T/T or ST-Chronos in response to CGH stimulation (5 ms at 0.4 mW/µm 2 evoked 1.8 ± 0.2 nA; Fig. 1c ). ST-ChroME retained the excitation spectrum and rapid rise time of ST-Chronos (Fig. 1e and Supplementary Fig. 3c ), but its decay time constant (3.0 ± 0.4 ms) was slightly slower than ST-Chronos (1.7 ± 0.6 ms; Supplementary Fig. 3d ). In contrast to other ST-opsins (and as predicted by modeling; Supplementary Fig. 5a ), 96% of ST-ChroME + neurons were activated by CGH stimulation (Fig. 1d ), requiring lower laser powers and shorter light pulses to evoke spikes than the other opsins (Supplementary Fig. 5b ). This was true whether opsins were delivered via in utero electroporation or by viral infection (Supplementary Fig. 6f–k ). We next examined the temporal precision of action potentials evoked from ST-ChroME + neurons and the minority of neurons expressing other ST-opsins that could be activated. 
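The integrate-and-fire estimate referenced earlier (Supplementary Fig. 5a) amounts to asking whether a brief photocurrent pulse can charge the membrane past spike threshold before the pulse ends. A minimal leaky integrate-and-fire sketch, where membrane tau, input resistance, and threshold are generic textbook values for an L2/3 pyramidal cell rather than the paper's fitted parameters:

```python
def lif_spikes(i_pa, pulse_ms=5.0, tau_ms=20.0, r_mohm=150.0,
               v_rest_mv=-70.0, v_thresh_mv=-40.0, dt_ms=0.01):
    """Return True if a square current pulse of i_pa picoamps drives a
    leaky integrate-and-fire neuron past threshold within the pulse."""
    v = v_rest_mv
    drive_mv = i_pa * 1e-12 * r_mohm * 1e6 * 1e3  # steady-state I*R, in mV
    for _ in range(int(pulse_ms / dt_ms)):       # forward-Euler integration
        v += (-(v - v_rest_mv) + drive_mv) / tau_ms * dt_ms
        if v >= v_thresh_mv:
            return True
    return False

# A ~530 pA pulse (mean ST-Chronos photocurrent) falls short of threshold
# in 5 ms, whereas ~1.8 nA (mean ST-ChroME photocurrent) crosses it
weak, strong = lif_spikes(530), lif_spikes(1800)
```

The intuition is that within a 5-ms pulse the membrane only reaches about 22% of its steady-state deflection (1 − e^(−5/20)), so several-fold larger photocurrents are needed than a rheobase measured with sustained injection would suggest.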
At 1 Hz, light-evoked spikes from neurons expressing ST-ChroME or ST-Chronos occurred with short latency and low jitter, whereas the timing of spikes from ST-C1V1 T/T or ST-ChrimsonR + neurons was more variable (Fig. 1f–h ). To test temporal precision while eliciting naturalistic sequences of action potentials, we stimulated neurons with Poisson trains of holographic light pulses. Neurons expressing ST-ChroME and ST-Chronos followed these patterns with high fidelity, exhibiting high spike probability and low jitter across a wide range of stimulation frequencies throughout the stimulus train (fidelity index score: ST-ChroME, 0.87 ± 0.03; ST-Chronos, 0.90 ± 0.02; see Methods ). However, neurons expressing ST-ChrimsonR or ST-C1V1 T/T could not follow complex stimulus patterns (fidelity index score: ST-ChrimsonR, 0.48 ± 0.05; ST-C1V1 T/T , 0.25 ± 0.04; Fig. 1i,j and Supplementary Fig. 5c–g ). Since ST-ChroME allowed fast, reliable responses with brief stimulation, we reasoned that we could employ high-speed SLMs to spike different sets of neurons at high rates. To test the speed at which we could generate spike patterns in two different neurons, we recorded two ChroME + neurons and used a fast SLM (Supplementary Fig. 7 and Supplementary Video 1 ) to interleave holographic stimulation of each cell at the maximum SLM rate. We generated a Poisson train of light pulses on each trial and delivered the same sequence to both neurons, separated by 3 ms. This experiment showed that we could generate naturalistic spike trains in multiple neurons offset by brief periods (Fig. 1k ). To test whether ST-ChroME drives reliable spiking under more relevant in vivo conditions, we performed 2P guided loose-patch recordings in anesthetized animals. While only 31% of ST-Chronos + neurons could be made to spike with 5-ms CGH pulses, over 89% of ST-ChroME + neurons could be activated in vivo (Fig. 1l ).
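Poisson pulse trains like those used above are straightforward to generate. A sketch in which the 5-ms minimum inter-pulse interval and the 10-Hz rate are illustrative assumptions, not the paper's exact stimulus parameters:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def poisson_pulse_train(rate_hz, duration_s, min_interval_s=0.005):
    """Draw light-pulse onset times from a Poisson process, enforcing a
    minimum inter-pulse interval (e.g. pulse width plus SLM settling)."""
    times, t = [], 0.0
    while True:
        t += max(rng.exponential(1.0 / rate_hz), min_interval_s)
        if t >= duration_s:
            return np.asarray(times)
        times.append(t)

train_a = poisson_pulse_train(rate_hz=10, duration_s=2.0)
train_b = train_a + 0.003  # same sequence, offset by 3 ms for a second neuron
```

Enforcing a minimum interval slightly distorts the Poisson statistics at high rates, but keeps every pulse resolvable by the opsin and the SLM refresh.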
Together, these data demonstrate that ST-ChroME can reliably generate the rapid, large photocurrents necessary to drive the temporally precise, short-latency spikes needed to replicate naturalistic neural activity patterns. Anion opsins permit rapid and potent silencing of neural activity We next asked whether we could identify or engineer an optogenetic silencer to suppress neural activity with high efficacy and temporal precision. We synthesized and tested a suite of ST-inhibitory opsins with ER export motifs (‘e’) 35 including pumps (eNpHR3 and eArch3) 36 , 37 and anion channels (GtACR1, psuACR 29 , 38 , and iC++ 39 ). ST-eGtACR1 generated the largest outward photocurrents while retaining moderately fast kinetics (rise time, 1.5 ± 0.7 ms; decay time, 12.5 ± 0.7 ms; Fig. 2a,b and Supplementary Fig. 8a,b ). GtACR1 photocurrents were near saturation in normal conditions and not improved by the ‘e’ signal (Supplementary Fig. 8c–g ). Furthermore, ST-eGtACR1 was more sensitive to 1,040-nm light than to 930-nm light (Supplementary Fig. 8h ). Fig. 2: Fast, potent holographic suppression of neural activity. a , Example average traces of whole-cell photocurrents elicited by 500-ms (100 mW, 0.2 mW/µm 2 ) CGH stimulation (pink bar) from CHO cells held at 0 mV expressing inhibitory ST-opsins; color-coded as in b . b , Mean photocurrent elicited during a 500-ms stimulation, as in a , plotted on a log scale ( n = 5 cells expressing ST-eNpHR3, n = 8 ST-eArch3, n = 9 ST-ePsuACR, n = 8 ST-eiC++, n = 5 ST-eGtACR1, n = 10 IRES-ST-eGtACR1). c , In vivo firing activity that persists during optogenetic suppression of L2/3 neurons expressing ST-opsins. Each dot represents mean activity for a single neuron (5–60 sweeps per cell; n = 7 no-opsin controls, n = 6 ST-ePsuACR, n = 8 ST-eArch3, n = 10 ST-eGtACR1). d , Example confocal images from juvenile (14–15 d old) and adult (35+ d old) mice expressing ST-eGtACR1-mRuby2 or H2b-mRuby3 IRES-ST-eGTACR1. 
Imaging conditions are matched within an opsin. Representative image from 3 mice each condition. e , Example whole-cell voltage-clamp recording of a L2/3 neuron expressing IRES-ST-eGtACR1 held at 0 mV and stimulated for 500 ms (pink bar) with varying illumination powers. f , In vivo activity that persists during optogenetic suppression with IRES-ST-eGtACR1 using 3D-SHOT (0.32 mW/µm 2 , 100 mW), presented as in c ( n = 9 IRES-ST-eGtACR1 cells). g , Overlay of 30 current-clamp traces from a L2/3 pyramidal neuron expressing IRES-ST-eGtACR1 during current injection, aligned to the onset of a 50-ms stimulation at three different power levels. h , Top: as in g , the time for suppression to take effect calculated for a 50-ms light stimulation. Reported as the tau (τ) of a fit to the observed number of action potentials after light onset in 1-ms bins, assuming a Poisson noise model ( n = 6 neurons). Bottom: membrane potential of each neuron during the last 10 ms of a 50-ms stimulus as a function of stimulus intensity ( n = 6 neurons). i , Duration of suppression, defined as the mean time until the next action potential as a function of stimulus intensity. Grey lines indicate individual replicates, black shows mean and s.e.m. ( n = 6 neurons). j , Left: overlay of 30 whole-cell current-clamp traces during light stimulation of different durations. Bottom: schematic of current injection protocol, in which onset of current injection was varied with respect to the light stimulus. Right: quantification of the duration of suppression as a function of stimulus duration. Grey lines indicate individual replicates, black shows mean ( n = 6 neurons). All data represent ± s.e.m. Full size image Since these silencers function through different biophysical mechanisms, it was possible that the opsin with the largest photocurrent might not be the most effective suppressor of endogenous neural activity. 
We therefore tested 2P holographic suppression in vivo by performing targeted loose-patch recordings from cells expressing inhibitory opsins. Of the opsins that we tested, ST-eGtACR1 was the most efficient silencer, reducing activity to 8.4 ± 3% of normal firing rate with 0.2 mW/µm 2 of 2P stimulation. In contrast, at the same laser power, ST-eArch3 only reduced activity to 37 ± 8%, whereas ST-ePsuACR or light alone did not significantly alter firing rates (82 ± 11% and 90 ± 9%, P = 0.31 and P = 0.47, respectively, Wilcoxon signed-rank test vs. no change; Fig. 2c and Supplementary Fig. 8i ). However, unlike all other opsins we expressed in vivo, we had difficulty identifying neurons positive for ST-eGtACR1-mRuby2. This seemed to be a problem only in adult animals, but was partially mitigated by the ‘e’ signal (Fig. 2d and Supplementary Fig. 9a ). We suspected this might be related to aggregation of GtACR1 protein when highly expressed, possibly leading to degradation or toxicity. To address this problem, we generated a bicistronic construct with a nuclear-localized fluorophore (H2B-mRuby3-IRES-ST-eGtACR1) that lowers expression levels of the opsin and spares the fluorophore from degradation. IRES-ST-eGtACR1 exhibited large photocurrents in CHO cells and neurons (460 ± 200 pA in CHO cells, 920 ± 140 pA in neurons; Fig. 2b,e ). Antibody staining to a FLAG epitope on the GtACR1 protein confirmed that it remained soma-targeted (Supplementary Fig. 9b ). Notably, unlike ST-eGtACR1, cells expressing IRES-ST-eGtACR1 were easily identified into adulthood (Fig. 2d and Supplementary Fig. 9a ). Neurons expressing IRES-ST-eGtACR1 had normal intrinsic properties (Supplementary Fig. 3e–i ) and spontaneous in vivo firing rates (Supplementary Fig. 9c ) even in older mice.
Targeted in vivo loose-patch recording revealed that IRES-ST-eGtACR1 + neurons reduced their firing to 6.8 ± 5% of nominal rate in response to CGH stimulation (0.3 mW/um 2 ), suggesting that lowering expression levels did not affect the efficacy of silencing (Fig. 2f ). To measure the timing of suppression, we induced spiking in brain slices through current injection in cells electroporated with IRES-ST-eGtACR1. We varied the onset time of holographic suppression so that spike timing was randomized trial-to-trial, and we varied the stimulation intensity and duration in separate experiments (Fig. 2g–j ). We found that onset of suppression was rapid, with spiking eliminated within 1.5 ± 0.3 ms after light onset. Similarly to photocurrent response onsets, the onset time of suppression was power-dependent (Fig. 2g,h and Supplementary Fig. 8e ). Despite current injection, cells hyperpolarized to near the reversal potential of GtACR1 when stimulated with <0.1 mW/µm 2 , indicating potent suppression (–54 ± 3 mV at 0.08 mW/µm 2 stimulation; Fig. 2h ). Although the onset of suppression was rapid, suppression of neural activity persisted for 50–250 ms after the cessation of photostimulation, due to the decay kinetics of the GtACR1 channel. This suppression was dependent on both the intensity and duration of the light stimulus (Fig. 2h,i ). Together, these data validate IRES-ST-eGtACR1 as a tool for stable, rapid suppression of neural activity using 2P optogenetics. Creating and editing spatiotemporal sequences of neural activity in vivo Next, we employed ST-ChroME and IRES-ST-eGtACR1 in the intact brain to create and edit spatiotemporal patterns of neural activity. For this, we employed 3D-SHOT 27 (Fig. 3a and Supplementary Figs. 2 and 11 ) to enable 3D holographic stimulation with high axial resolution in vivo. 
To validate spatial resolution, we recorded the physiological point-spread function (PPSF) using targeted loose patch recordings from ST-ChroME + neurons in anesthetized mice at multiple focal planes (spot diameter: 20 µm; radial full-width at half-max: 11 ± 3 µm, axial full-width at half-max: 28 ± 4 µm; Fig. 3b,c ). Fig. 3: Creating and editing spatiotemporal neural activity in vivo. a , Simplified schematic of light path allowing simultaneous 2P imaging and 3D-SHOT photostimulation (PMT, photomultiplier tube; Lc, phase-patterning lens; Obj, objective; λ/2, half-wave plate). b , PPSF of 3D-SHOT stimulation of neurons measured by in vivo loose-patch recording. Spike probabilities for radial ( XY ) axis, left, and for axial ( Z ) axis, right ( n = 3 neurons). c , In vivo recording of 3D-SHOT’s axial PPSF as a function of distance from the system’s zero order. Inset: PPSFs were measured as a function of depth by testing the spiking response to digital defocusing of the hologram while mechanically offsetting the objective varying distances from the focal plane ( n = 3 neurons). d , Spike probability as a function of stimulation power in vivo for 1-Hz stimulation in L2/3 pyramidal neurons expressing ST-ChroME-mRuby2 via in utero electroporation (IUE; n = 10 neurons). e , Representative experiment showing in vivo Poisson stimulation of a L2/3 neuron expressing ST-ChroME. f , Jitter, spike probability, and fidelity index score for Poisson stimulation of L2/3 neurons expressing ST-ChroME ( n = 7 neurons). g , Firing rate of neurons during stimulation normalized to prestimulation rate and measured through in vivo loose-patch recordings from cells expressing IRES-ST-eGtACR1 ( n = 9 neurons). h , Representative raster plot from a neuron suppressed with 500-ms stimulation (red bar). i , Representative histogram of firing rate during IRES-ST-eGtACR1 suppression at several stimulation powers for the same neuron shown in h . 
Each line is the mean of 25+ stimulations binned at 100 ms. All data indicate mean ± s.e.m. Full size image The majority of ST-ChroME + neurons fired reliable, temporally precise action potentials in response to brief 3D-SHOT stimulation using powers less than 0.2 mW/µm 2 . This was true when electroporated with ST-ChroME-mRuby2 or virally transduced with AAV DIO-ST-ChroME-P2A-H2B-mRuby3 (Fig. 3d and Supplementary Fig. 12 ). We then stimulated them with naturalistic Poisson patterns, varying the pattern on each trial to generate unique sequences of evoked activity (Fig. 3e ). Quantifying these experiments revealed that ST-ChroME + neurons reliably spiked with sub-millisecond jitter, allowing the production of spatiotemporal activity patterns with high fidelity (Fig. 3f and Supplementary Fig. 13a ). Conversely, to remove spikes from endogenous neural activity, we recorded from IRES-ST-eGtACR1 + neurons. 3D-SHOT stimulation at 0.32 mW/µm 2 produced at least a 95% reduction in firing in >75% of IRES-ST-eGtACR1 + cells (Fig. 3g ). The efficacy of holographic suppression increased with stimulation power, allowing us to either completely silence the activity of a neuron during a defined time-window at high power or titrate a neuron’s average firing rate with lower powers (Fig. 3h,i and Supplementary Fig. 10a,b ). Suppression appeared constant over the entire stimulation period, consistent with the observation that GtACR1-evoked photocurrents did not substantially desensitize (Fig. 3h,i ). Suppression was repeatable over many trials without loss of efficacy or any apparent change in spontaneous firing rates of stimulated neurons (Fig. 3h and Supplementary Fig. 10c ). This demonstration of single-neuron suppression using 3D-SHOT represents the second element in a bidirectional toolbox to control spatiotemporal patterns of neural activity. 
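The PPSF widths reported above are full-widths at half-maximum of the spike-probability profile. Assuming that profile is roughly Gaussian, the FWHM can be extracted from a curve fit; the sketch below uses synthetic noiseless data with a 28-µm axial width standing in for the measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma):
    return amp * np.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2))

FWHM_PER_SIGMA = 2.0 * np.sqrt(2.0 * np.log(2.0))  # ~2.355

# Synthetic axial spike-probability profile with a 28-um FWHM
z_um = np.linspace(-60.0, 60.0, 25)
p_spike = gaussian(z_um, 1.0, 0.0, 28.0 / FWHM_PER_SIGMA)

popt, _ = curve_fit(gaussian, z_um, p_spike, p0=[0.5, 5.0, 10.0])
fwhm_um = FWHM_PER_SIGMA * abs(popt[2])
```

With real loose-patch data the probabilities are noisy binomial estimates, so the fitted width carries an uncertainty that the noiseless sketch omits.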
Holographic spatiotemporal control of cortical inhibitory neurons Whereas L2/3 neurons typically fire sparsely, cortical inhibitory neurons are heterogeneous and many fire at much higher frequencies 40 . We therefore combined spatial and genetic selectivity by stimulating specific subsets of GABAergic neurons (PV, SOM, or VIP) 41 , 42 expressing Cre recombinase transgenically and infected with AAV-DIO-ST-Chronos-mRuby2. Inhibitory neurons are typically more excitable than pyramidal neurons, and ST-Chronos was sufficient to generate reliable action potentials in these cells 24 (Fig. 4a,b ). We identified power levels needed to elicit reliable spiking at 1 Hz (<0.3 mW/µm 2 ; Fig. 4c ) and performed Poisson 3D-SHOT stimulation (Fig. 4d–f ). Stimulation of each GABAergic cell type drove reliable, short-latency spikes with sub-millisecond jitter across many stimulation frequencies, allowing these neurons to follow stimulus trains with high fidelity (Fig. 4g–j and Supplementary Fig. 13b–d ). Unlike L2/3 pyramidal neurons and VIP neurons, PV and SOM cells were able to follow stimuli with instantaneous frequencies up to 100 Hz (Fig. 4j ). Additionally, we replayed several unique patterns of action potentials with identical mean rates, demonstrating our ability to reliably generate precise activity patterns over many trials (Supplementary Fig. 13e–g ). Fig. 4: Spatiotemporal activation of cortical inhibition. a , Example image of in vivo loose patch recording of a PV-neuron expressing AAV-ST-Chronos-mRuby2 (scale bar, 10 µm). b , Fractions of PV, SOM, and VIP neurons expressing AAV-DIO-ST-Chronos-mRuby2 that could be induced to fire action potentials in vivo using 3D-SHOT. c , Line plots showing mean spike probability at 1 Hz as a function of stimulation power for inhibitory neurons expressing ST-Chronos (PV: n = 10, SOM: n = 7, VIP: n = 14 cells). 
d–f, Representative experiments showing in vivo 3D-SHOT Poisson stimulation of PV (30 Hz), SOM (10 Hz), or VIP (10 Hz) neurons expressing ST-Chronos. g–i, Bar graphs indicating the jitter, spike probability, and fidelity index score for 3D-SHOT Poisson stimulation of PV, SOM, or VIP neurons expressing ST-Chronos (PV: n = 7, SOM: n = 7, VIP: n = 9 cells). j, Spike probability as a function of instantaneous stimulation frequency for PV (brown), SOM (blue), or VIP (purple) neurons undergoing Poisson stimulation in vivo (PV: n = 7, SOM: n = 7, VIP: n = 9 cells). All data represent mean ± s.e.m.

Addressing multiple forms of optical crosstalk

To edit spatiotemporal activity patterns while simultaneously reading out network activity, we sought to combine our approach with 2P calcium imaging. To accomplish this, we addressed two forms of optical crosstalk that we encountered. First, the photostimulation laser can directly excite GCaMP6, adding severe artifacts to the imaging data. To overcome this problem, we synchronized the stimulation laser pulse gating with the resonant galvo mirrors using a custom thresholded resistor–capacitor circuit (Supplementary Fig. 14a–d). This resulted in a stimulation gating rate of ~16 kHz, providing stimulation on either side of each imaging line. Because this circuit is tunable, it provides a customizable tool to trade average stimulation power for effective field of view along the x axis. Typically, we sacrifice ~50% of our stimulation power, resulting in light artifacts in <240 µm of imaging area along the x axis (imaging window normally: 550 × 550 µm; with gate synchronization: 310 × 550 µm free of stimulation artifacts; Supplementary Fig. 14d). However, because this gating is extremely fast compared to the kinetics of the opsins, the loss of stimulation power results in only a 10–20% reduction in photocurrents when using the laser gate (Supplementary Fig. 14e,f).
All reported estimates of illumination densities account for losses from use of this circuit. Second, the imaging laser can excite opsins and thereby modulate neural activity independently of the stimulation laser. To characterize the effect of the imaging laser on photocurrents, we patched opsin-expressing neurons in brain slices and imaged them at different powers, window sizes, and volumes (i.e., frame rates). During 2P imaging of GCaMP6, the imaging laser induced brief photocurrents as the laser contacted the cell. These currents decayed between frames and were substantially smaller than holographic currents (Supplementary Fig. 15a,d ). Imaging volumetrically reduced the effective frame rate, decreasing the imaging-induced photocurrents. When reducing the size of the imaging window, thus increasing the dwell time of the imaging laser on the opsin-expressing neuron, photocurrents increased (Supplementary Fig. 15a–f ). Together, these data indicate that photocurrents caused by the imaging laser under standard widefield volumetric imaging conditions are unlikely to influence firing rates. Nevertheless, to directly test 2P imaging-induced crosstalk in vivo, we performed loose-patch recordings from all four classes of cortical neurons. Pyramidal neurons were electroporated with ST-ChroME or IRES-ST-eGtACR1 or virally transduced with AAV DIO-ST-ChroME, while PV, SOM, and VIP neurons were virally transduced with AAV DIO-ST-Chronos. We did not observe detectable modulation of firing rates by the imaging laser when scanning with 50 mW at 30 Hz over a 400 × 400-μm window, at depths between 100 and 270 μm (Supplementary Fig. 15g,h ). However, ChroME + neurons increased their firing rates when the size of the imaging window was below 400 × 400 μm (Supplementary Fig. 15i ). These data indicate that widefield volumetric or video-rate 2P imaging is compatible with these optogenetic tools, if care is taken to minimize crosstalk. 
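The trade-off the gate circuit exposes, average stimulation power versus artifact-free imaging width, can be estimated with a highly idealized model of the sinusoidal resonant scan. This sketch is our own simplification, not the authors' circuit: it assumes the stimulation laser is gated ON exactly while the imaging spot is outside a central strip, which only roughly approximates a thresholded RC circuit.

```python
import math

def retained_power_fraction(artifact_free_um, scan_width_um):
    """Idealized line-synchronized gate: for a sinusoidal resonant scan of
    full width `scan_width_um`, the stimulation laser is ON only while the
    imaging spot lies outside the central `artifact_free_um` strip. Returns
    the fraction of stimulation time (~average power) retained."""
    ratio = artifact_free_um / scan_width_um
    return 1.0 - (2.0 / math.pi) * math.asin(ratio)

# For the reported 310-µm artifact-free strip in a 550-µm window, this
# idealization retains ~60% of the power, the same ballpark as the ~50%
# quoted in the text once real circuit thresholds are accounted for.
kept = retained_power_fraction(310, 550)
```

The arcsine arises because a sinusoidal scanner spends disproportionately more time near the turnarounds at the edges of each line, which is exactly where stimulation is allowed.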
Next, to enable 3D all-optical read–write experiments (Fig. 5a), we created a custom suite of software to co-align 3D-SHOT and 3D calcium imaging (Supplementary Fig. 16). This alignment was facilitated by an improved version of 3D-SHOT that employs a rotating diffuser instead of a lens to shape the phase of the temporally focused disc. Using this approach, phase is randomly encoded spatially and temporally rather than shaped into a static spherical pattern. This increases available power through the objective and eliminates the secondary geometric focus27, further enhancing axial confinement (Supplementary Fig. 11).

Fig. 5: All-optical read–write with high spatiotemporal fidelity. a, Schematic illustrating single-cell 3D all-optical read–write experiments performed in headfixed mice freely running on a circular treadmill. b, Example optical rheobase experiment (ten 5-ms pulses at 30 Hz) with varying light intensity using 3D-SHOT. Top: example ΔF/F calcium trace (black) or deconvolution (red). Bottom: raster plots of z-scored deconvolved calcium traces. c, Left: average deconvolved calcium traces from an optical rheobase experiment (shading indicates 95% confidence intervals (CI); scale bars represent 10% max response and 1 s). Right: the example neurons' all-optical response function. d, Power intensity needed to approach saturation of the all-optical response function (80% of maximum, n = 96 neurons, representative experiment from n = 3 mice; red bars indicate mean ± s.e.m.). e, Five consecutive trials of sequential stimulation of n = 134 neurons from a representative experiment. Each panel corresponds to one trial (separated by dashed lines), and each line shows the trial-wise z-scored deconvolved calcium response for each neuron (see color bar on f). Neurons were stimulated at 2 Hz with ten 5-ms pulses at 30 Hz. f, Mean z-scored deconvolved ΔF/F for each neuron in response to 3D-SHOT stimulation of each holographically targeted neuron.
A neuron's response to its own stimulation is plotted on the diagonal. Data represent the mean z-scored deconvolved calcium response from 12 trials from a representative experiment (n = 3 mice). g, Each point represents the mean change in z-scored calcium response of a stimulated neuron upon stimulation (red) or the mean change in response to stimulation of other cells (gray). (Mouse 1: n = 255 neurons, P = 3.76 × 10−51; Mouse 2: n = 115 neurons, P < 4.3 × 10−17; Mouse 3: n = 106 neurons, P < 1.95 × 10−18, two-sided paired t test; red bars indicate mean ± s.e.m.) h, Mean fluorescence of all stimulated neurons, aligned so the targeted neuron is centered (two poststimulation frames per neuron). Image is the mean response of 134 targets. Dashed black lines show the size of the stimulation area, r = 10 µm. Data are from a representative experiment (n = 3 mice). All data represent mean ± s.e.m. unless otherwise noted.

All-optical control of neural activity with high spatiotemporal fidelity

Using these technical advances, we tested our ability to perform all-optical read–write using 3D-SHOT stimulation to generate spikes with high fidelity, sub-millisecond temporal precision, and cellular resolution in full 3D. Experiments were performed in primary somatosensory cortex (S1) of awake, headfixed mice on a treadmill. Mice expressed both GCaMP6s43 and ST-ChroME in excitatory neurons (see Methods). To avoid failures or extra spikes, we determined the minimum laser power needed for each cell to reliably drive spiking with short pulses (Fig. 5b,c). The all-optical data matched in vivo physiology measurements, as neurons' optical response function reached 80% of saturation with 0.16 ± 0.02 mW/µm² (Fig. 5d).
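The two analysis steps in the optical rheobase experiment, deconvolving the calcium trace into spike estimates and finding the power at which the response reaches 80% of saturation, can be sketched as follows. This is a simplified illustration: a plain AR(1) kernel inversion stands in for the OASIS deconvolution actually used, and all function names and toy numbers are ours.

```python
import numpy as np

def deconvolve_ar1(trace, tau_s, fs_hz):
    """Invert a first-order exponential calcium kernel: if
    f[t] = g*f[t-1] + s[t], then s[t] = f[t] - g*f[t-1] (clipped at zero)."""
    g = np.exp(-1.0 / (tau_s * fs_hz))
    s = np.empty_like(trace, dtype=float)
    s[0] = trace[0]
    s[1:] = trace[1:] - g * trace[:-1]
    return np.clip(s, 0.0, None)

def power_at_fraction(powers, responses, frac=0.8):
    """Interpolate the lowest stimulation power at which the mean response
    reaches `frac` of its maximum (the '80% of saturation' criterion)."""
    powers = np.asarray(powers, float)
    responses = np.asarray(responses, float)
    target = frac * responses.max()
    i = np.nonzero(responses >= target)[0][0]
    if i == 0:
        return powers[0]
    p0, p1 = powers[i - 1], powers[i]
    r0, r1 = responses[i - 1], responses[i]
    return p0 + (target - r0) * (p1 - p0) / (r1 - r0)

# Round trip: forward-simulate a trace with spikes at frames 5 and 20,
# then recover them by deconvolution
fs, tau = 30.0, 0.5
g = np.exp(-1.0 / (tau * fs))
spk = np.zeros(40)
spk[[5, 20]] = 1.0
f = np.zeros(40)
for t in range(1, 40):
    f[t] = g * f[t - 1] + spk[t]
est = deconvolve_ar1(f, tau, fs)

# Toy saturating response curve (powers in mW/µm², arbitrary response units)
p = [0.0, 0.05, 0.10, 0.15, 0.20, 0.25]
r = [0.0, 0.20, 0.60, 0.90, 1.00, 1.00]
rheo = power_at_fraction(p, r)
```

On the toy curve the 80% threshold falls between 0.10 and 0.15 mW/µm², and linear interpolation places the rheobase near 0.13 mW/µm², the same order as the in vivo value reported above.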
We then rapidly activated neurons located throughout a 550-µm × 550-µm × 100-µm volume (three imaging planes spaced 50 µm apart using an electrotunable lens44, as used throughout the study), well within the accessible volume of our stimulation (Supplementary Fig. 11h,i). Neurons were stimulated one by one with a series of ten light pulses (5 ms, 30 Hz), and we read out the effects via GCaMP fluorescence. Generation of action potentials in this manner elicited large increases in GCaMP6s fluorescence. Deconvolution of the calcium signal (see Methods) revealed that the temporal sequence of activation was reliable across many trials and repeatable in multiple animals (Fig. 5e–g). On average, spatial resolution remained high even in awake in vivo conditions (Fig. 5h), but failures and off-target activation could occur during rare episodes of brain motion (Supplementary Fig. 17a). Such motion was easily identified post hoc, and trials in which motion coincided with photostimulation were excluded from analysis (Supplementary Fig. 17b). Holographic stimulation did not affect the animals' running behavior (Supplementary Fig. 17c).

All-optical suppression of neurons

Next, we performed all-optical suppression of activity in awake mice. As L2/3 pyramidal neurons fire sparsely, we focused on PV+ interneurons, which have high tonic firing rates. To accomplish this, we created a Cre-dependent viral version of IRES-ST-eGtACR1 (AAV9 DIO-NLS-mRuby3-IRES-ST-eGtACR1) and confirmed its efficacy in vitro via whole-cell recordings and in PV cells in vivo via cell-attached recordings (Supplementary Fig. 18a–c). We co-infected PV-Cre mice with viral DIO-IRES-ST-eGtACR1 and DIO-GCaMP6f. As in Fig. 5, we imaged calcium activity while the animal was awake and headfixed on a treadmill (Fig. 6a). We sequentially suppressed individual PV cells (1-s illumination, 0.16 mW/µm²; Fig. 6b and Supplementary Fig.
18d,e); most (90.6%) cells exhibited reduced fluorescence when targeted but showed no consistent change when other neurons were targeted (Fig. 6b–d). We observed no correlation between the direction or magnitude of a response and its distance from the targeted cell (Supplementary Fig. 18f).

Fig. 6: All-optical suppression. a, Top: schematic of experimental design, as in Fig. 5. Mice expressed GCaMP6f and IRES-ST-eGtACR1 in PV interneurons via viral infection of PV-Cre mice. Individual neurons were suppressed sequentially at 0.5 Hz, with 1 s of illumination. Bottom: representative image of three-plane field of view (550 × 550 × 100 µm). Inset: enlargement of a PV cell expressing both GtACR1 and GCaMP6f (representative image of 16 recordings, 3 mice). b, Top: mean and 95% CI (shaded) of all neurons from this recording (n = 45; 24 trials each). Red bar indicates period of stimulation. Targeted cells that were obscured by the optical artifact were excluded from analysis. Bottom: trial-averaged z-scored fluorescence response of each targeted neuron during suppression of that neuron (red) and during stimulation of a different neuron in the field of view (gray). c, Averaged fluorescence response of each targeted neuron to stimulation of each targeted neuron during the reporting window (0.5–1.5 s after light onset). The trial immediately after self-targeting was ignored and blanked. Each box is the average of all 24 trials from this experiment. d, Summary data from 3 mice (n = 78, 32, and 28 cells stimulated and recorded from each mouse, respectively). Each dot is the mean z-scored fluorescence of a cell not located in the optical artifact in response to self-targeting (red) or in response to all other stimulations (gray). Bars indicate mean ± s.e.m. of the population response (Mouse 1: P < 2.6 × 10−19; Mouse 2: P < 2.3 × 10−7; Mouse 3: P < 6.3 × 10−5, paired two-sided t test; ***P < 0.001; red bars indicate mean ± s.e.m.).
e, Schematic of experimental design, as in a, but depicting simultaneous suppression of multiple neurons in consecutive ensembles (n = 4 neurons per ensemble; 0.5-Hz, 1-s stimulation). f, Responses of two representative ensembles (trial-averaged z-scores of 4 cells), ensemble 1 (top) and ensemble 2 (bottom), to suppression of ensemble 1 (purple) or of ensemble 2 (green). Shaded region is the trial-wise 95% CI. g, Mean z-score image of the entire field of view during ensemble 1 (E1) stimulation, with cells from ensemble 1 and ensemble 2 (E2) enlarged (insets). The area of the optical artifact was blanked. Mean of 24 trials. h, Summary data from 2 mice (n = 31 and 18 ensembles of 4 cells per mouse, repeated twice in different areas). Each dot represents the mean z-scored fluorescence of an ensemble of 4 cells in response to suppression of that ensemble (red) or all other ensembles (gray). Red bars indicate mean ± s.e.m. of the population response. Both mice showed increased suppression when recording from a targeted ensemble compared to when recording from a nontargeted ensemble (Mouse 1, P < 2.0 × 10−5; Mouse 2, P < 5.2 × 10−8, paired two-sided t test; ***P < 0.001).

Next, we suppressed groups of four randomly selected PV cells while simultaneously imaging (6–12 groups per experiment, 1-s illumination at 0.08 mW/µm²; Fig. 6e–h). Suppression of ensembles was also selective, as the laser caused suppression in only the cells targeted by the holographic pattern (Fig. 6h). These data demonstrate all-optical suppression of neural activity in multiple neurons across a large working volume.

All-optical spatiotemporal control of neural ensembles

We next employed ST-ChroME to manipulate larger ensembles.
When testing the spatial resolution of ensemble stimulation in brain slices, we found that use of the ST-opsin, which increases stimulation resolution for one target31, was essential when stimulating groups of neurons with many holograms (Supplementary Fig. 19a–f), something not employed in previous manipulations of neural ensembles14,16,45. We tested multispot spatial resolution in vivo with cell-attached recordings of ST-ChroME+ cells. The spiking PPSF was measured for each cell with holograms targeting 1–50 spots simultaneously throughout a large volume (400 × 400 × 200 µm). These experiments showed that 3D-SHOT stimulation in vivo remained spatially precise when targeting up to 50 locations simultaneously (Supplementary Fig. 19g–i). To manipulate large groups of cells all-optically, we prepared mice as in Fig. 5 and selected 150 ST-ChroME+ neurons across three planes (Fig. 7a). We randomly assigned them to unique neural ensembles containing overlapping sets of 10, 25, or 50 neurons and stimulated them with 10 pulses each at 10, 20, or 30 Hz (Fig. 7b, Supplementary Fig. 20a, and Supplementary Video 2). We did not stimulate more than 50 neurons simultaneously due to limitations in available laser power (4.1 W available from the objective, resulting in approximately 0.13 mW/µm² or 40 mW per target, accounting for losses from the imaging gate but not for the decreased diffraction efficiency of phase masks encoding 50 spots across the accessible volume). For control trials, we either did not photostimulate the brain at all or directed the laser to 50 random spots (Supplementary Fig. 20a). Neurons responded reliably to stimulation when targeted as a member of an ensemble, regardless of the identity of the other ensemble members. These neurons retained normal calcium dynamics when not being stimulated (Fig. 7b and Supplementary Fig. 20b).
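The power arithmetic behind the 50-target limit can be checked directly. This is a back-of-the-envelope sketch using the values quoted in the text (4.1 W from the objective, ~50% imaging-gate loss, 10-µm spot radius); the function names are ours.

```python
import math

def per_target_power_mw(total_mw, n_targets, gate_fraction=0.5):
    """Average power per holographic target after imaging-gate losses,
    ignoring SLM diffraction-efficiency losses (as in the text)."""
    return total_mw * gate_fraction / n_targets

def power_density_mw_per_um2(per_target_mw, spot_radius_um=10.0):
    """Illumination density over a temporally focused disc of given radius."""
    return per_target_mw / (math.pi * spot_radius_um ** 2)

mw = per_target_power_mw(4100, 50)    # ~41 mW per target
dens = power_density_mw_per_um2(mw)   # ~0.13 mW/µm², matching the text
```

Dividing the gated power budget over 50 targets and a ~314-µm² disc reproduces the quoted ~40 mW and ~0.13 mW/µm² per target.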
Stimulation of ensembles was selective and successful when targeting ensembles of different sizes, at different frequencies, and either within or across axial planes (Fig. 7c–h and Supplementary Fig. 20a–c).

Fig. 7: Manipulating neural ensembles with high temporal and spatial precision. a, Top: schematic of all-optical ensemble experiments, as in Fig. 5. We stimulated 33 ensembles of 10, 25, or 50 neurons with 10 pulses at 10–30 Hz. Bottom: representative images of three-plane field of view (550 × 550 × 100 µm), with depth from pial surface noted. Inset: enlargement showing example calcium source expressing ST-ChroME. b, A representative neuron stimulated as part of five different ensembles composed of varying numbers of cells. Top: normalized mean ΔF/F (black) or OASIS deconvolution (red) during stimulation. Bottom: raster plots showing z-scored deconvolved calcium activity from 10 stimulation trials (top) or control trials (bottom). c, Summary data from experiments in three mice. Each point represents the mean change in z-scored calcium response of all ensemble members in response to stimulation of the target ensemble (red) or the mean response to stimulation of other ensembles (gray). Ensembles significantly increased their fluorescence only when they were stimulated (***P < 0.001; Mouse 1, n = 33 ensembles, P = 3.5 × 10−10; Mouse 2, n = 22 ensembles, P = 2.8 × 10−4; Mouse 3, n = 24 ensembles, P = 5.5 × 10−4, paired two-sided t test). d, Normalized z-scored calcium response of the neurons that compose each stimulated ensemble upon stimulation of each ensemble. Color codes show the size of the ensembles (green, 10; brown, 25; blue, 50 neurons). e–g, Responses of each neuron in each ensemble to each ensemble stimulation, grouped by ensemble identity and separated by size. Data represent the mean z-scored deconvolved calcium response for each neuron.
h, Maps showing the mean response of all calcium sources to stimulation of four unique ensembles composed of 50 cells across three depths. Green asterisks indicate neurons that were targeted for stimulation. Note: ensembles can be distributed in 3D (ensemble 1) or confined to one depth (ensembles 2–4). Data calculated from 0–300 ms after stimulation.

During ensemble stimulation, we observed infrequent activation of neurons that were not holographically targeted. Cells located within the PPSF (0–11 µm from the nearest target neuron) showed evidence of the expected facilitation from the stimulation laser (z-score, 0.87 ± 1.3 (mean and s.d.); Supplementary Fig. 21a–c). However, neurons just outside the PPSF were not modulated by the optogenetic stimulus (11.5–25 µm away from the nearest target: z-score, 0.07 ± 0.86, mean and s.d.). On average, neurons distal to the nearest target exhibited a small but significant suppression (P = 1.94 × 10−35; Supplementary Fig. 21a–d), suggesting that optogenetic stimulation engaged cortical circuits for lateral inhibition46 (>30 µm distal to the nearest target: z-score, –0.081 ± 0.69, mean and s.d.). Similar effects were not observed in control trials (Supplementary Fig. 21d). Nontarget neurons exhibited high variability during both control and stimulation trials, consistent with expected spontaneous activity in the awake, running mouse, whereas the variability of target neurons was strongly reduced by holographic stimulation (Supplementary Fig. 21e–h). Analysis of targeted neurons during interleaved control trials showed no evidence that repeated stimulation caused toxicity (Supplementary Fig. 22a–c). To directly measure temperature changes induced by stimulation, we implanted a thermal probe into the cortex of anesthetized mice and repeated the stimulation protocol exactly as used above. This resulted in brain heating of 1.7–2.2 °C over the course of 1 h (Supplementary Fig. 22d).
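The distance-binned off-target analysis described above, grouping each non-target cell's response by its distance to the nearest holographic target, can be sketched as follows. The bin edges follow the distances quoted in the text; the function, variable names, and toy numbers are ours.

```python
import numpy as np

def offtarget_by_distance(cell_xyz, target_xyz, responses,
                          edges=(0.0, 11.5, 25.0, 30.0, np.inf)):
    """Mean z-scored response of cells, binned by 3D distance (µm) to the
    nearest target. `cell_xyz`: (n_cells, 3); `target_xyz`: (n_targets, 3);
    `responses`: (n_cells,)."""
    d = np.linalg.norm(cell_xyz[:, None, :] - target_xyz[None, :, :],
                       axis=2).min(axis=1)
    idx = np.digitize(d, edges[1:-1])  # bin index per cell
    return np.array([responses[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(len(edges) - 1)])

# Toy example mirroring the reported pattern: facilitation inside the PPSF,
# near-zero response just outside it, weak suppression far away
targets = np.zeros((1, 3))
cells = np.array([[5.0, 0, 0], [20.0, 0, 0], [100.0, 0, 0]])
resp = np.array([0.87, 0.07, -0.08])
means = offtarget_by_distance(cells, targets, resp)
```

Bins with no cells are reported as NaN rather than zero, so an empty distance band is not mistaken for an absent effect.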
We next addressed whether heat would constrain future all-optical experiments using full laser power at maximum rates. We found that 1 s of stimulation targeting 50 spots at full laser power (4.1 W, 83% duty cycle) increased brain temperature by ~2 °C (Supplementary Fig. 22e). Continuous stimulation resulted in substantial brain heating after 1 min (6–8 °C; Supplementary Fig. 22f). These data define bounds for potentially acceptable levels of laser illumination. The ability to control the firing patterns of arbitrary neural ensembles raises the possibility of achieving optogenetic control over population activity47. Since sensory stimuli often decorrelate population activity48, we tested whether holographic ensemble stimulation could drive a change in population activity that mimicked a sensory stimulus. Toward this end, we computed the pairwise correlations during spontaneous activity of all neurons during control trials or during trials in which random ensembles were photostimulated. Ensemble photostimulation resulted in striking changes in the structure of population correlations (Fig. 8a,b) and significantly decorrelated untargeted neurons during stimulation trials (Fig. 8c). Conversely, targeted neurons exhibited the opposite effect, increasing their pairwise correlations on trials in which they were stimulated (Fig. 8d). These results show that high-fidelity, temporally precise holographic activation of neural ensembles can provide the scale of experimental control needed to directly manipulate previously inaccessible properties of neural networks, such as correlational structure and shared variability, with cellular resolution.

Fig. 8: Altering population correlational structure with 2P ensemble stimulation. a, Left: pairwise Pearson's correlations for nontargeted neurons, calculated based on firing during control trials (n = 365 neurons). Right: pairwise correlations of target neurons during control trials (n = 150 neurons).
b, Pairwise correlations between nontarget neurons (left) or target neurons (right) during trials in which ensembles were stimulated at 30 Hz (ten 5-ms pulses; see color bar on right). c, Cumulative distributions of all pairwise correlations between nontarget neurons during control trials (black) or during trials in which ensemble stimulation occurred at 10–30 Hz (red). All stimulation conditions decorrelated population activity relative to control trials (P < 0.01) but were not significantly different from each other (P > 0.425, Friedman test with Tukey–Kramer correction for multiple comparisons). Cumulative distributions are from a representative experiment (n = 3 mice; stimulation vs. control trials, nontarget cells: Mouse 1, P = 0.007; Mouse 2, P = 1.3 × 10−6; Mouse 3, P = 0.004, Friedman test with Tukey–Kramer correction for multiple comparisons). d, Cumulative distributions of all pairwise correlations between target neurons during control trials (black) or during trials in which ensembles were stimulated at 10–30 Hz (red). All stimulation conditions increased correlations between target neurons relative to control trials (P < 0.01) but were not significantly different from each other (P > 0.186, Friedman test with Tukey–Kramer correction for multiple comparisons). Cumulative distributions are from a representative experiment (n = 3 mice; stimulation vs. control trials, target cells: Mouse 1, P = 0.003; Mouse 2, P = 5.4 × 10−3; Mouse 3, P = 0.02, Friedman test with Tukey–Kramer correction for multiple comparisons).

Discussion

We developed an integrated experimental approach for multimodal control of neural ensemble activity with cellular resolution and millisecond precision in vivo. This system achieves the simultaneous temporal precision, spatial resolution, reliability, and scale needed to generate or edit custom spatiotemporal activity patterns.
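The population-correlation comparison of Fig. 8 reduces to computing mean pairwise Pearson correlations separately for target and non-target neurons under each condition. A self-contained toy version (the shared-drive simulation is entirely ours, purely illustrative of why co-stimulated targets become correlated):

```python
import numpy as np

def mean_pairwise_corr(activity):
    """Mean of the upper-triangle pairwise Pearson correlations across
    neurons; `activity` has shape (n_neurons, n_timepoints)."""
    c = np.corrcoef(activity)
    iu = np.triu_indices_from(c, k=1)
    return c[iu].mean()

# Toy simulation: "target" neurons share a common stimulation drive and
# therefore correlate; "non-target" neurons are independent noise
rng = np.random.default_rng(0)
drive = np.repeat(rng.random(50) > 0.7, 4).astype(float)  # shared pulse train
targets = drive + 0.3 * rng.standard_normal((5, 200))
nontargets = rng.standard_normal((5, 200))
higher = mean_pairwise_corr(targets) > mean_pairwise_corr(nontargets)
```

With a shared drive of variance ~0.2 against private noise of variance ~0.1, the expected pairwise correlation among targets is large, while independent non-targets hover near zero, qualitatively matching the stimulated-versus-control contrast in Fig. 8c,d.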
It builds on previous 2P optogenetics manipulations of neural activity14,15,16 but offers the critical advances needed to achieve the faithful reproduction of naturalistic or artificial sequences of neural activity that could help parse temporal and spatial information in neural codes. The generation of ST-ChroME and IRES-ST-eGtACR1, the application and improvement of 3D-SHOT, and the integration and optimization of these systems with fast volumetric calcium imaging provide the increase in performance needed to address many fundamental yet unanswered questions in neuroscience. Several in vivo mouse studies have previously employed 2P optogenetics with calcium imaging and/or electrophysiology to activate identified neurons13,14,15,16. Since these studies used C1V1T/T, a slow opsin exhibiting 2P photocurrents <500 pA12,13, they had comparatively poor control over the onset, timing, and, perhaps most importantly, the absolute number (and pattern) of evoked spikes over any given time window. Thus, spiral scanning of C1V1T/T+ neurons allows all-optical manipulation of neural ensembles (most recently in 3D45) but not the generation of spatiotemporally precise patterns of neural activity. However, recent work suggests that spiral scanning may be more power-efficient than scanless holography45, which should be considered when seeking to maximize the number of neurons simultaneously addressed if precise control of the underlying spike train is not required. Although scanless 2P optogenetics holds the promise of eliciting short-latency, low-jitter action potentials, it was initially limited by the strength of available opsins (photocurrents <500 pA)19,49. However, two recent papers demonstrated sub-millisecond temporal resolution using scanless holographic optogenetics in brain slices24,25. One employed the ultrafast opsin Chronos fused with GFP to demonstrate sub-millisecond control of firing at high rates in inhibitory interneurons24.
This study reported average photocurrents of ~400 pA, agreeing with our observations that Chronos is not powerful enough to reliably elicit spiking in most L2/3 pyramidal neurons, which require larger photocurrents due to their low intrinsic excitability. Here we extend these results by using ST-Chronos-mRuby2 to drive naturalistic spatiotemporal activity patterns in three genetically defined inhibitory neuron subtypes in vivo (Fig. 4). Another study presented soCoChR25, a combination of a novel soma-restriction sequence ('so' versus ST31,32) and CoChR, a previously characterized potent opsin with slow-off kinetics28 (Supplementary Fig. 6). Direct comparison of the 'so' tag and the Kv2.1 tag in brain slices and in vivo is needed to address which tag is preferable. ST-ChroME exhibited larger 2P photocurrent amplitudes than those exhibited by soCoChR, suggesting that ChroME may be preferable for many applications. Recently, multiple groups have reported that opsins with slow kinetics can generate a temporally precise spike upon 2P stimulation25,45. We elicited spikes with sub-millisecond jitter in 1 of 11 ST-C1V1+ and 4 of 14 ST-ChrimsonR+ neurons, though the population averages (8.6 ± 2 and 12 ± 5 ms, respectively; Fig. 1h) agreed with previous reports14. Our data indicate that opsins with slow decay constants are at an inherent disadvantage in reproducing precise spike trains at physiological spike rates30. If more than one action potential per trial is required, faster opsins have a clear advantage. Almost every neuron expressing ST-ChroME exhibited sub-millisecond jitter, even in response to naturalistic Poisson stimulation in vivo. We further optimized the extremely potent GtACR129 specifically for the purpose of multiphoton suppression, achieving fast, reliable, and potent silencing of neurons.
Using IRES-ST-eGtACR1, we observed a substantially higher photocurrent than a previous report of 2P suppression13, which employed a different stimulation method and different targeting sequences. Direct comparison of IRES-ST-eGtACR1 to ST-eArch3 yielded an 80-fold increase in photocurrent. Therefore, the approach presented here represents an important advance in optogenetic technology for the editing and synthesis of neural activity patterns that can be used to probe the fundamental logic of sensation, cognition, and behavior at the cellular scale. Limiting optical crosstalk between the read and write channels is a critical aspect of any all-optical approach. Our data show that we can minimize undesired activation of opsin-expressing neurons with the imaging laser. Since even red-shifted opsins absorb at the blue end of the 2P spectrum, an alternative approach is to employ a blue-shifted opsin and a red-shifted calcium indicator20. However, the lower efficacy of these red indicators compared to GCaMPs (at least at present) and the lack of high-pulse-energy lasers at 920 nm restrict the scale of this approach. Nevertheless, for applications seeking to minimize optical crosstalk, this color-flipped scheme may be preferable. The utility of multiphoton optogenetics for biological applications depends on the number of neurons that can be photostimulated simultaneously and per unit time. The number of simultaneous targets is constrained by the available average power from the stimulation laser and the diffraction efficiency of the SLM phase mask. To estimate the maximum number of light-evoked spikes under various conditions, we built a model based on our empirical data and our hardware specifications. The model indicates that our system could produce thousands of light-evoked spikes in 1 s (Supplementary Fig. 23 and Methods), though detecting these responses using calcium imaging would be technically challenging.
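A crude version of such a model is just power-budget arithmetic: the number of simultaneously addressable targets set by the power per target, multiplied by pulses per second. This sketch is ours and far simpler than the authors' model in Supplementary Fig. 23; it ignores diffraction-efficiency losses and heating limits.

```python
def max_evoked_spikes_per_s(total_mw, per_target_mw, gate_fraction=0.5,
                            pulse_rate_hz=30.0, spikes_per_pulse=1.0):
    """Upper-bound estimate: targets addressable within the gated power
    budget, times stimulation pulses per second, times spikes per pulse."""
    n_targets = int(total_mw * gate_fraction / per_target_mw)
    return n_targets * pulse_rate_hz * spikes_per_pulse

# 4.1 W from the objective, ~40 mW per target, 30-Hz pulse trains:
# ~50 simultaneous targets and on the order of 1,500 spikes per second
rate = max_evoked_spikes_per_s(4100, 40)
```

Even this simplified estimate lands in the thousands-of-spikes-per-second regime when the pulse rate or spikes per pulse is increased, consistent with the claim in the text.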
We note that laser-induced brain damage or heating will place an upper bound on the maximum duration and power of light that can be directed into brain tissue33 and may substantially constrain the maximum number of neurons that can be stimulated in practice. This combination of temporally focused 3D holography with calcium imaging and fast, potent actuators and suppressors allows new experimental applications. This approach should enable experiments where specific statistical features of neural activity can be varied optogenetically to probe functional connectivity, perception, or behavior. For example, controlling the number, rate, and timing of action potentials written to specific ensembles will allow neuroscientists to test models of cortical dynamics by probing the boundary conditions for the initiation of 'winner-take-all' ensemble activity. Perhaps most notably, our approach provides the tight stimulus control needed to rigorously test how parameterized manipulation of specific neurons can create or alter behavior. Various reports suggest that animals can engage behavioral responses based on the activity of a number of neurons compatible with our approach50. By recording neural activity and simultaneously writing or suppressing custom spatiotemporal sequences of neural activity, investigators can use this new technology to probe how specific, unique patterns of neural activity influence neural circuits and behavior.

Methods

Ethics

All experiments were performed in accordance with the guidelines and regulations of the ACUC of the University of California, Berkeley (protocol #AUP-2014010-6832). Every experiment was conducted in at least three mice unless otherwise stated.
Transgenic mice

The following mouse lines were used for this study: ICR (CRL:CD1), the PV-IRES-Cre line (B6;129P2-Pvalbtm1(cre)Arbr/J; JAX stock #008069), the SOM-IRES-Cre line (JAX stock 013044), the VIP-IRES-Cre line (JAX stock 010908), the Emx-IRES-Cre line (JAX stock 005628), the Drd3-Cre line (JAX stock 002958), the teto-GCaMP6s line (JAX stock 024742), and the CaMKII-tTA line (JAX stock 003010). Mice were housed in cohorts of five or fewer with a light:dark cycle of 12:12 h and were used for experimentation during their subjective night.

Plasmid construction and mutagenesis

Chronos, eArch3.0, and iC++ were generated by gene synthesis (Chronos and eArch3.0, Genewiz, South Plainfield, NJ; iC++, Integrated DNA Technologies, Coralville, IA), and the anion opsins PsuACR and GtACR1 were provided by J. Spudich (University of Texas Health Science Center, Houston, TX). C1V1T/T, ChR2, ChrimsonR, and eNpHR3.0 were obtained from Addgene. All opsins were fused to mRuby2 at their C terminus, either at a NotI site (Chronos and eArch3.0) or at an AgeI site (GtACR1, iC++, PsuACR, and eNpHR3.0), and subcloned into the pCAGGS expression vector between KpnI and XhoI restriction sites by In-Fusion Cloning (Clontech, Mountain View, CA). To target the opsins to the soma and proximal dendrites of neurons, the sequence encoding the proximal restriction and clustering domain of the Kv2.1 voltage-gated potassium channel, consisting of amino acids 536–600 (soma-targeting, ST)31,32,51, was codon optimized, synthesized (Integrated DNA Technologies, Coralville, IA), and inserted at the C terminus of mRuby2 between BsrGI and XhoI restriction sites by In-Fusion Cloning. To further enhance membrane trafficking, the endoplasmic reticulum (ER) export signal from the vertebrate inward-rectifier potassium channel Kir and a neurite trafficking signal were appended to iC++, and the ER-export signal alone was appended to GtACR1 and PsuACR.
For generation of a bicistronic pCAGGS vector encoding mRuby3 fused to histone H2B to promote its nuclear localization, followed by GtACR1 under the control of an internal ribosome entry sequence (IRES), GtACR1 fused to the soma-targeting domain was subcloned downstream of an IRES. For GtACR1 AAV preparations, the H2B was replaced with a nuclear localization sequence (NLS) due to space constraints. Mutations in Chronos were introduced by overlap extension PCR and verified by DNA sequencing. 2P holographic microscope setup Our experimental setup (Supplementary Fig. 2 ) is a movable objective microscope (MOM, Sutter Instruments), modified with commercially available parts (listed in Supplementary Table 2 ). The microscope objective is mounted on a 3D mechanical stage (MP-285 controller, Sutter), enabling controlled mechanical displacement of the objective in 3D. (We represent this mechanical displacement by the ‘true coordinates’: ( x , y , z ), expressed in microns). A polarizing beam-splitter (BS) merges the photostimulation and imaging laser paths with minimal losses. Volumetric photostimulation with 2P holography For photostimulation, we relied on femtosecond lasers (Coherent Monaco: 1,040 nm, 2 MHz, 40 W; or Amplitude Satsuma: 1,040 nm, 2 MHz, 20 W, on a separate identical setup). For conventional 3D computer generated holography (CGH), the beam is expanded into a wider collimated Gaussian beam with a pair of lenses, L 10 , and L 7 , placed in a 4 f configuration, and illuminates the spatial light modulator (SLM). A telescope lens, L 6 , transforms the phase-patterning in the pupil plane into a hologram with an accessible volume centered on the image plane. In the absence of a phase patterning, the SLM focuses all the incoming light into the zero-order, which is absorbed by a reverse pinhole filter. 
During operation, the 3D volume image is replicated by a set of relay lenses, L 4 and L 5 , and aligned to maximally overlap with the accessible volume for 2P imaging. All lenses in the optical path are separated by two focal distances on either side, so that the image plane (dashed green) and pupil planes (dashed red) alternate between lenses from the initial collimated laser beam to the focal plane under the microscope objective. A half-wave plate adjusts the polarization of the laser to match the orientation of the liquid crystal on the SLM. 3D-SHOT implementation with a rotating diffuser for partially coherent 3D holography with temporal focusing After alignment of the photostimulation path for conventional CGH is complete, we place two mirrors mounted on a pair of sliding stages to divert the beam on a separate path on either side of lens L 10 . The beam is diverted toward a blazed holographic diffraction grating. The incidence angle of the illumination and the orientation of the grating are adjusted so that the first diffracted order is reflected orthogonally to the grating surface and back into the optical path along the same optical axis. The outgoing beam is then demagnified 5 × with a pair of lenses, L 9 , and L 8 . In the focal plane of the demagnified image (dashed green line), spectrally separated components interfere constructively after propagating along separate paths and reconstruct a custom temporally focused pattern (CTFP) matched to the dimensions of a neuron soma (here a disc of radius 10 µm). Let Δλ be the spectral bandwidth of the femtosecond laser and 1/ a the spatial frequency of the grating. The spectral separation d (on the SLM; Supplementary Fig. 2b ), is given by $$d=\frac{{f}_{7}}{{f}_{8}}\frac{{f}_{9}}{a} {\Delta } {\lambda }$$ The fundamental principle of 3D-SHOT is to replicate identical copies of the CTFP anywhere in 3D with the SLM. 
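The spectral-separation relation above is a direct product of the telescope magnification ratios, the grating spatial frequency, and the laser bandwidth. A minimal Python sketch (the authors' control software was written in Matlab; the function name and all numeric values below are ours and purely illustrative):

```python
def spectral_separation(f7, f8, f9, a, delta_lambda):
    """Spectral separation d on the SLM: d = (f7 / f8) * (f9 / a) * delta_lambda.

    f7, f8, f9   -- focal lengths of lenses L7, L8, and L9 (one common length unit)
    a            -- grating period (1/a is the grating's spatial frequency)
    delta_lambda -- spectral bandwidth of the femtosecond laser (same length unit)
    """
    return (f7 / f8) * (f9 / a) * delta_lambda

# Hypothetical values in mm (a 600 lines/mm grating, 10 nm bandwidth):
d = spectral_separation(f7=200.0, f8=40.0, f9=100.0, a=1 / 600, delta_lambda=1e-5)
```

As the formula implies, d scales linearly with the laser bandwidth and with the grating spatial frequency 1/a.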
Temporal focusing and 3D holography are made compatible by engineering the phase of the CTFP to simultaneously broaden the spatial footprint of all spectral components and ensure high diffraction efficiency by the SLM. Static phase masks introduce geometric secondary foci above or below the temporally focused plane 27 , so we rely on a rotating diffuser placed in the first virtual image plane to apply a rapidly varying randomized phase perturbation to the CTFP. The characteristic angle of the Gaussian diffuser (θ = 0.5°, #47-989, Edmund Optics) is chosen to simultaneously broaden the spatial footprint of all spectral components. The diffusion length, r , given by $$r={\theta } {{f}_{7}}$$ matches the dimensions of the SLM active area. The diffuser is glued on a brushless motor (from a modified 30-mm diameter computer hardware ventilation fan), and continuously rotates during operation. Hologram computation and power balancing Variability in opsin expression, as well as spatially dependent diffraction efficiency, requires the ability to control the power distribution precisely in each target. To compensate for spatially dependent diffraction efficiency throughout the optical system, we proceed to a power calibration, as shown in Supplementary Fig. 16g–i . We computed holograms, each targeting one spot in a 3D grid pattern, and we used our substage camera system (Supplementary Fig. 11a ) to quantify the amount of 2P absorption, I 2 , achieved by each hologram while supplying the same amount of laser power, I 0 , to the SLM. Experimental measurements of 2P absorption, \({I}^{2}\left(x{{\prime}_{i}},y{{\prime}_{i}},z{{\prime}_{i}}\right)\) (Supplementary Fig. 16g ), show how intensity degrades when targeting far away from the zero-order. To digitally compensate for this unwanted effect, we estimate losses with a 3D polynomial interpolation of the power calibration data (Supplementary Fig. 16h ). Interpolation error measurements (Supplementary Fig. 
16i ) show how our model fits experimental measurements within the operating volume, with the known exception of the blocked zero-order. Several methods have been developed for 3D hologram computation. Here, for precise control of the intensity distribution, we used either an iterative method, global Gerchberg–Saxton 52 , or 2P optimized NOVO-CGH 53 with a Euclidean cost function to maximize contrast in holograms where precise control of the power distribution is required simultaneously in many targets. Relevant algorithms for 3D-alignment of 3D-SHOT with 3D imaging, and for digital power compensation with 3D polynomial interpolation, are provided on our repository ( ). Synchronization of stimulation window to imaging frames (for example, laser gate) Since 2P excitation by the photostimulation laser yields stimulation artifacts, we developed an electronic circuit to restrict the photostimulation laser to engage only as the resonant galvo mirror reverses course, on either side of the imaging window. The electronic circuit is shown in Supplementary Fig. 14a, with additional details on our repository ( ). Stroboscopic imaging for SLM high speed performance testing To characterize the illumination pattern during high-speed phase transitions (using the Meadowlark 512 L), we considered a test case with four holograms, each targeting several randomized points distributed throughout the volume. We employed stroboscopic imaging to measure the intensity during the phase transition at very high speeds by illuminating a fluorescent calibration slide at specific times during the cycle and with time-averaged imaging with a substage camera (Supplementary Table 2 ). A repeating sequence was played at 300 Hz using the SLM built-in trigger. 
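The digital power compensation described above, measuring two-photon absorption for a grid of single-target holograms and interpolating the losses, can be sketched in simplified form. The authors fit a 3D polynomial to the calibration data in Matlab; purely for illustration, the sketch below fits a 1D quadratic efficiency model versus distance from the zero-order by least squares and scales the requested power by the inverse of the interpolated efficiency (all sample values are hypothetical):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_quadratic(rs, effs):
    """Least-squares fit eff(r) ~ c0 + c1*r + c2*r^2 via the normal equations."""
    n = len(rs)
    s = lambda k: sum(r ** k for r in rs)
    t = lambda k: sum(e * r ** k for r, e in zip(rs, effs))
    A = [[n, s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    return solve3(A, [t(0), t(1), t(2)])

def compensated_power(requested, r, coeffs):
    """Scale the requested power by the inverse of the modeled efficiency at r."""
    c0, c1, c2 = coeffs
    return requested / (c0 + c1 * r + c2 * r * r)
```

With efficiency falling off away from the zero-order, a target at the edge of the accessible volume receives proportionally more laser power, so the delivered two-photon excitation is equalized across targets.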
To observe holograms at any point of this cycle during SLM frame-to-frame transitions, the SLM trigger clock was synchronized with the laser controller to restrict illumination to a 1-ms pulse placed anywhere within the 13.3 ms duration of the cyclic sequence. CHO cell recording Chinese hamster ovary (CHO) cells were cultured and recorded as described 27 . One-photon photostimulation of cells was performed at 550 nm for C1V1 T/T , eNpHR3.0, eArch3.0, and PsuACR; at 470 nm for Chronos, ChroME, iC++, and GtACR1; and at 630 nm for ChrimsonR, at a power of 0.55 mW using a Lumencor light engine (Lumencor). For activating opsins, currents were measured at a holding potential of –40 mV; for suppressing opsins, currents were measured at 0 mV. For 2P spectra measurements in CHO cells, currents were evoked by rapidly raster-scanning a diffraction limited spot from a Chameleon Ti-Sapphire Laser (Coherent) and average power was normalized across wavelengths by attenuating the laser beam using a Pockels cell (Conoptics). In utero electroporations Electroporations were performed on pregnant CD1 (ICR) mice (E15, Charles River ca. SC:022). For each surgery, the mouse was initially anesthetized with 5% isoflurane and maintained with 2.5% isoflurane. The surgery was conducted on a heating pad, and warm, sterile phosphate-buffered saline (PBS) was intermittently perfused over the pups throughout the procedure. A micropipette was used to inject ~2 µL of recombinant DNA at a concentration of 2 µg/µL into the left ventricle of each neonate’s brain (typically, DNA-encoding opsins were doped with plasmids expressing GFP or mRuby3 at a concentration of 1:20 to facilitate screening for expression). Fast-green (Sigma-Aldrich) was used to visualize a successful injection. Following successful injection, platinum-plated 5-mm Tweezertrodes (BTX Harvard Apparatus ca. 
45-0489) were positioned along the frontal axis across the head of the neonate with the positive electrode of the tweezers positioned against the left side of the head. An Electro Square Porator (BTX Harvard Apparatus ca. 45-0052) was used to administer a train of five 40-V pulses with a 1-s delay. After the procedure, the mouse was allowed to recover and come to term, and the delivered pups were allowed to develop normally. Brain slice recording Acute coronal slices were prepared and recorded from mice (ages P14–P29) as described 27. For measuring opsin kinetics, the photocurrent elicited by 0.5- or 1-s CGH stimulation (0.04 mW/µm², 20 mW, disc r = 12.5 µm) was measured in voltage-clamp mode. The time to peak current was measured from average currents, and decay time constants were measured by fitting the traces from stimulus offset to 80% of baseline to a single exponential. In some neurons expressing ST-Chronos and ST-ChroME, the decay kinetics were better fit with a two-term exponential decay function of the form I = ae^(bt) + ce^(dt). The size of the primary decay tau (b) was unaffected by the two-term fit. The secondary decay tau (d) averaged ~50 ms for Chronos and ~200 ms for ChroME, and the scale constant (c) for the secondary tau was maximally 0.3. For current injection experiments in Fig. 1 and Supplementary Fig. 1, random white noise (mean 0 pA, range ±60 pA) was generated on each sweep, and rheobase was determined by increasing current injections in a stepwise fashion (25 pA/step) until action potentials were recorded. This procedure was repeated for each stimulus duration tested. In Supplementary Fig. 1, current injections were performed at the rheobase for 5 ms or 1 s. The white noise stimulus without additional current injection never resulted in action potentials. For optogenetics experiments, mice were screened for expression with a handheld 300-mW 594-nm laser and filtered goggles, after decapitation and before slicing.
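The single-exponential fit of the decay phase can be illustrated with a log-linear least-squares estimate of the time constant. This is a simplification: the authors fit raw traces (and, in some cells, a two-term exponential), whereas this noise-free sketch assumes strictly positive currents:

```python
import math

def decay_tau(times, currents):
    """Estimate tau for I(t) = I0 * exp(-t / tau) by linear regression of
    log(current) against time; returns tau in the units of `times`."""
    ys = [math.log(c) for c in currents]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope
```

On noisy data a nonlinear fit to the raw trace (as the authors performed) is more robust, since the log transform amplifies noise near zero current.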
After slicing, recordings were made from the slices with the strongest expression, from the densest area as judged by handheld laser and filter goggles. To be included in subsequent datasets, neurons were required to pass an expression test: experiments were continued if the recorded neuron spiked in response to a brief one-photon stimulus (5 ms, 0.5 mW, 490 nm for Chronos and ChroME; and 5 ms, 0.5 mW, 510 nm for C1V1 T/T and ChrimsonR; for voltage-clamp experiments, this one-photon test was performed in cell-attached mode before break-in). As some suppressing opsins had much smaller currents, cells were only excluded if no visible current was detected with a 250-ms, 10-mW, 490-nm one-photon pulse. To determine a neuron's optical rheobase, stimulus duration and power levels were increased in a stepwise fashion while recording in cell-attached or current-clamp configuration. We first increased laser power for a 5-ms stimulus by steps of ~25 mW average power to 0.4 mW/µm² (200 mW, CGH disc radius = 12.5 µm), and then the stimulus duration was increased by 5-ms steps to 25 ms. If neurons still did not spike, power was increased in a stepwise fashion to a maximum of 0.8 mW/µm² (400 mW, CGH disc radius = 12.5 µm). We defined a neuron's optical rheobase as the first observed stimulus combination (duration and power) that elicited 100% spike probability while stimulating at 1–5 Hz. CGH stimulation at 1 Hz to measure latency and jitter was performed at the optical rheobase. Neurons that did not spike with laser power of 0.8 mW/µm² (400 mW, disc r = 12.5 µm) at stimulus durations <30 ms were considered not spikeable using 2P holography. In practice, neurons that were not activated with 0.4 mW/µm²/10-ms pulses were rarely activated at higher powers. Latency to spike was quantified as the time from the initiation of holographic stimulation until the membrane potential crossed 0 mV. Jitter is defined as the s.d. of the spike latencies corresponding to a particular stimulus.
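The latency and jitter definitions in this paragraph map directly onto a few lines of analysis code; a stdlib Python sketch (function names are ours, not from the authors' Matlab software):

```python
import statistics

def spike_latency(times_ms, vm_mv, stim_onset_ms):
    """Latency: time from stimulation onset until the membrane potential
    first crosses 0 mV; returns None if no spike is detected."""
    for t, v in zip(times_ms, vm_mv):
        if t >= stim_onset_ms and v >= 0.0:
            return t - stim_onset_ms
    return None

def jitter(latencies_ms):
    """Jitter: sample s.d. of the spike latencies for a given stimulus."""
    return statistics.stdev(latencies_ms)
```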
For Poisson stimulation in pyramidal neurons, stimulation was conducted with a mean frequency of 5 Hz and was performed at power levels approximately 25% higher than the optical rheobase. For Poisson stimulation, jitter was calculated as the s.d. of all spike latencies across all instantaneous rates. To calculate the fidelity index, for each sweep, the pulsatile time indices of stimulation times and the spike times were each convolved with a Gaussian kernel with σ = 10 ms, and the fidelity index for that sweep was defined as the max normalized cross-correlation at lag 0–10 ms. Fidelity index reported for each cell is the mean score computed for each sweep. For dual-patch experiments using the fast SLM, pyramidal neurons expressing ST-ChroME virally (neonatally injected into Emx-Cre mice) were stimulated with a mean rate of 10 Hz. On each trial a Poisson train was generated for cell A and replicated with an offset of 3 ms with a flip of the SLM in between (exposure time: 2.5 ms with 0.5 ms to allow the SLM to flip frames). The time between spikes was calculated as the difference in spike times of cells A and B given a spike in cell A within the preceding 10 ms. Cells were excluded if a 2.5-ms stimulation time was unable to generate action potentials. To determine the onset time of holographic suppression from current clamp recordings, the time of each action potential was binned into 1-ms bins. The observed bin counts were fit to an exponential decay from the mean firing rate to 0 spikes per bin, starting at the time of light onset and assuming a Poisson noise distribution. The duration of suppression was defined as the mean time from suppression onset to the next detected spike after suppression ceased. For 2P imaging crosstalk experiments, neurons were placed in the center of the field of view in the focal plane of the natural 2P focus for volumetric imaging. 
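The fidelity index computation described above, convolving the stimulus-time and spike-time vectors with a σ = 10 ms Gaussian and taking the maximum normalized cross-correlation over lags of 0–10 ms, might look as follows (a pure-Python stand-in for the authors' Matlab analysis; a 1-ms bin size is our assumption):

```python
import math

def gauss_smooth(bins, sigma):
    """Convolve a binned event train with a Gaussian kernel (truncated at 3 sigma)."""
    half = int(3 * sigma)
    kern = [math.exp(-(k * k) / (2.0 * sigma * sigma)) for k in range(-half, half + 1)]
    out = [0.0] * len(bins)
    for i, v in enumerate(bins):
        if v:
            for j, kv in enumerate(kern):
                idx = i + j - half
                if 0 <= idx < len(bins):
                    out[idx] += v * kv
    return out

def fidelity_index(stim_bins, spike_bins, sigma=10, max_lag=10):
    """Max normalized cross-correlation (lags 0..max_lag bins) between the
    smoothed stimulus train and the smoothed spike train."""
    a = gauss_smooth(stim_bins, sigma)
    b = gauss_smooth(spike_bins, sigma)
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    best = 0.0
    for lag in range(max_lag + 1):
        num = sum(a[i] * b[i + lag] for i in range(len(a) - lag))
        best = max(best, num / (na * nb))
    return best
```

Identical trains yield an index of 1, and spikes that consistently lag the stimuli by a few milliseconds still score near 1 because of the lag search.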
2P imaging was performed at the specified window size, speed, and power for 1-s sweeps after a 1-s baseline period. Shaping 2P stimulation pulse and survival curve For laser pulse shaping experiments, whole-cell recordings were obtained and pulse features of CGH stimulation (radius = 12.5 µm) were varied online while holding the cell. The Satsuma HP 1,040-nm stimulation laser is switchable between 2 and 40 MHz, allowing online testing of the response to multiple repetition rates. Peak power was controlled using the EOM to vary average power, and the relationship between peak power and pulse energy was probed using the Satsuma HP laser’s onboard dispersion compensation, which allowed us to control pulse-width online to test pulses of identical pulse energy with variable peak power. The stage positions necessary to chirp the laser pulses were determined before the experiment began using a Mini TPA Compact, Tuning-free Autocorrelator (Angewandte Physik & Elektronik GmbH). To establish the maximum safe power levels useable before acute cellular damage, GCaMP6 + cells (without opsin) were stimulated with increasing power densities until calcium responses ceased (indicating cell death) or cavitation of the tissue was observed. We do not know the specific mechanisms of cellular damage, but only scored damage when a laser pulse resulted in an acute change in cellular fluorescence. Teto-GCaMP mice were headplated and windowed as if for in vivo imaging (see below) and deeply anesthetized as in cell-attached recordings (see below). Up to 800 mW per target (3D-SHOT disc radius r = 10 µm) were used. In vivo patch recording 2P guided-patch recordings were performed from adult mice (35 d old or older) as described 27 . For suppression experiments, whisker stimulation was achieved with an air puffer (PicoSpritzer II, General Value) directed toward the contralateral whisker pad. 
Six 50-ms puffs were applied before, during, and after each optogenetic stimulation; in general, L2/3 neurons did not respond in a time-locked manner but increased their overall firing rates during stimulation. Glass pipettes (2- to 5-MΩ resistance) were filled with HEPES ACSF and Alexa Fluor-488 dye (100 µM) for visualization. Cells were identified by the presence of mRuby2 or mRuby3 fluorescence imaged at 50–100 mW at 930–1,000 nm. Data were acquired using a Multiclamp 700B amplifier (Axon Instruments) and digitized at 20 kHz (National Instruments). Data were digitally bandpass-filtered at 0.5–2.2 kHz for identification of spikes. All data were acquired using custom-written Matlab (MathWorks) software. Cells were included for analysis if they were spontaneously active and their spikes were sufficiently larger than the noise (>5 s.d. of the noise). Furthermore, cells had to respond to one-photon LED stimulation (Sola SE, Lumencor): fire action potential(s) if they expressed an excitatory opsin, or temporarily cease firing if they expressed a suppressing opsin (5 ms, 0.5 mW, 490 nm for Chronos and ChroME; 250 ms, 10 mW, 490 nm for GtACR). To determine the fraction of spikeable neurons, cells that exhibited spontaneous activity and passed the one-photon test were stimulated repetitively (1–5 Hz) with 5-ms light pulses of increasing laser power until they spiked to each laser pulse. Neurons that did not spike reliably with 5-ms, 0.32-mW/µm² (100 mW) stimuli (3D-SHOT disc r = 10 µm) were considered not spikeable. In practice, neurons that responded to holographic stimulation were activated with much less than maximum power. These laser powers and conditions were used for Poisson stimulus trains and tests of spatial resolution. Poisson stimuli were analyzed as in brain slices. Neurons were typically recorded at a depth of 75–250 µm below the pial surface.
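The spike-inclusion criterion above (spikes exceeding 5 s.d. of the noise in the bandpass-filtered trace) can be illustrated with a threshold-crossing detector. This is a deliberately naive sketch: estimating the s.d. from the entire trace, as done here, is inflated by the spikes themselves, and the authors' Matlab software may estimate the noise differently:

```python
import statistics

def detect_spikes(filtered_trace, k=5.0):
    """Return indices where the trace first crosses k s.d. above the mean
    (rising edges only), as a naive stand-in for spike detection."""
    mu = statistics.mean(filtered_trace)
    sd = statistics.pstdev(filtered_trace)
    thresh = mu + k * sd
    crossings = []
    for i in range(1, len(filtered_trace)):
        if filtered_trace[i] >= thresh and filtered_trace[i - 1] < thresh:
            crossings.append(i)
    return crossings
```

A more robust noise estimate (e.g. a median-based one computed from spike-free segments) would lower the threshold toward the true noise floor.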
For 2P imaging crosstalk experiments, cells were centered in the field of view in the focal plane and 2P imaging was conducted at the specified window size, speed, and laser power for 5 s after 5 s of nonimaging. All experiments used 512 × 512 pixels for the imaging window. In vivo temperature measurements To measure brain temperature during holographic photostimulation and imaging, animals were prepared and deeply anesthetized as above for in vivo cell-attached patch recording, except that they were additionally administered 2 mg/kg of dexamethasone as an anti-inflammatory agent. Once the craniotomy was completed, the dura was removed and an 800-μm thick thermocouple coated in DiI was slowly lowered at a 45° angle using a micromanipulator (Sutter MP285) until it was 300–500 μm beneath the pial surface. It was then secured in place with Ortho-Jet while the open craniotomy was protected by Gelfoam and ACSF. The thermocouple was attached to a TC-324C temperature controller (Warner Instruments). The temperature-to-voltage conversion was calibrated in a series of water baths. The thermocouple was located under 2P illumination based on DiI signal, and the objective was placed over the thermocouple for the duration of stimulation. A single phase mask that targeted 50 spots was held static for the duration of the experiment, and duty cycling of the stimulation laser was performed using the EOM. Histology Mice were deeply anesthetized with ketamine/xylazine and transcardially perfused with phosphate-buffered saline (PBS) and 4% paraformaldehyde. Brains were postfixed for at least 2 h. Brains were embedded in 30% sucrose solution overnight, then frozen and 40-µm sections made on a microtome (American Optical Society). 
If the sections required immunohistochemistry, they were blocked for 1 h at 4 °C in blocking solution (0.6% Triton X-100, 0.2% Tween-20, 3% normal goat serum, and 3% BSA in PBS, all from Fisher Scientific), and then incubated in α-FLAG antibody at 1:1,000 dilution (Sigma clone M2). Then sections were washed with PBS and 0.25% Triton X-100 before being incubated in secondary antibody Alexa Fluor 488 goat anti-mouse (1:1,000). All sections were mounted on slides and sealed with Vectashield with DAPI (Vector Laboratories). Confocal images were acquired using an Olympus Fluoview system (Fv1000 Olympus Microscope) running the Fluoview software (Olympus), with 488 and 543 nm lasers. Viral infection For viral injection, animals were anesthetized using 2% isoflurane on a heating pad and headfixed in a stereotactic apparatus (Kopf). After sterilizing the incision site, the skin was opened and a small burr hole was drilled over S1 using a 0.24-mm drill bit (Busch; (3.5 mm lateral, 1.4 mm posterior to bregma). We injected 200–600 nL of virus using a microsyringe pump (Micro4) and a wiretrol II glass pipette (Drummond) at a rate of 25–50 nL/s at a depth of 150–300 µm below the pia surface. After the injection was complete, we waited 5 min before retracting the needle and closing the scalp with sutures. On some surgeries, the virus was injected directly after installation of a headplate (see below). For opsin injections, mice were used 2–6 weeks postinjection. For GCaMP6 injections, mice were used for experimentation 1–2 weeks postinjection. Viruses used were: AAV-syn-DIO-ST-Chronos-mRuby2 (UC Berkeley Vector core, titer: 4.8 × 10 14 ), AAV-CAG-DIO-ST-ChroME-P2A-H2B-mRuby3 (Custom Order Penn Vector core, titer: 3.76–7.53 × 10 12 ), AAV-CAG-DIO-IRES-eGtACR1-NLS-mRuby3 (Custom Order Penn Vector Core, titer: 6.82 × 10 14 ), AAV-syn-DIO-GCaMP6f (Penn Vector Core titer: 6.56 × 10 12 ), and AAV-Syn-Cre (titer: 1.4 × 10 12 ). 
For some whole-cell slice experiments using virus to test ST-Chronos, ST-ChroME, or IRES-ST-eGtACR1, viruses were introduced via neonatal injection as described 27 , 54 into P2–P3 littermate Emx-Cre or Drd3-Cre animals (P3: three injection sites, two injections per site at 100 and 250 µm below the pial surface, 18.5 nL/injection). In vivo 2P imaging and photostimulation Three different combinations of opsin and GCaMP6s were used in this study for all-optical experiments. First, mice expressing GCaMP6s in excitatory neurons using CaMKII-tTA crossed to teto-GCaMP6s 43 were co-injected with AAV-syn-Cre and AAV-CAG-DIO-ST-ChroME-P2A-H2B-mRuby3. Alternatively, triple-transgenic animals expressing CaMKII-tTA, teto-GCaMP6s, and Emx1-Cre were injected with AAV-CAG-DIO-ST-ChroME-P2A-H2B-mRuby3. For all-optical suppression experiments, mice expressing PV-Cre were co-injected with AAV-DIO-Syn-GCaMP6f and AAV-CAG-DIO-NLS-mRuby3-IRES-eGtACR1. Mice were fitted with a custom stainless steel headplate as previously described 54 . Mice were anesthetized with isoflurane (2%) and administered 2 mg/kg of dexamethasone as an anti-inflammatory and 0.05 mg/kg buprenorphine as an analgesic. The scalp was removed, the fascia retracted, and the skull lightly etched. Following application of Vetbond (3 M) to the skull surface, a custom stainless steel headplate was fixed to the skull with two dental cements: Metabond (C&B) followed by Ortho Jet (Lang). After the dental cement dried, a 3-mm diameter craniotomy over the left primary somatosensory cortex was drilled, and residual bleeding stopped with repeated wet–dry cycles using sterile artificial cerebral spinal fluid, gauze, and Gelfoam (Pfizer). A window plug consisting of two 3-mm diameter coverslips glued to the bottom of a single 5-mm diameter coverslip (using Norland Optical Adhesive #71) was placed over the craniotomy and sealed permanently using Ortho Jet (Lang). 
Animals were allowed to recover in a heated recovery cage before being returned to their home cage. Two days after surgery, animals were habituated to head-fixation on a freely moving circular treadmill for 2–7 d. For calcium imaging, mice were headfixed on a freely spinning running wheel under a Nikon 20× magnification water-immersion objective and imaged with a Sutter MOM 2P resonant scanning microscope within a darkened box (see above for description of imaging setup). Volume acquisition occurred at 5.8–6.6 Hz for 550 × 550-µm fields of view with three Z-planes each separated by 50 µm. Imaging planes were 100–350 µm below the pial surface. 2P imaging was conducted at a wavelength of 930 nm with an average power of 50 mW. Neurons that co-expressed GCaMP6 and opsin tagged with a red fluorophore were identified based on average videos taken at 1,000–1,040 nm; using custom Matlab software, regions of interest were circled and their centroids were used to compute holographic targets that were preloaded in sequence on the SLM. Custom Matlab digital acquisition software controlled the experiment by triggering ScanImage5 to acquire frames, the SLM to change the hologram, and the EOM to control laser power. 2P imaging data analysis Motion correction, calcium source extraction, and deconvolution were performed using Suite2P as described 55. Briefly, raw calcium videos were motion-corrected using Suite2P with subpixel alignment = 2, and calcium sources were extracted with key parameters diameter = 12–16 and signal extraction = ‘raw’. Calcium sources were then manually examined and accepted or rejected based on their overlap with morphologically identifiable neurons. Neuropil-subtracted fluorescence vectors (F) or the OASIS deconvolution (S) were used for downstream analysis.
Calcium signals were acquired continuously, and each cell's fluorescence was z-scored, or its ΔF/F0 was calculated with F0 set equal to the tenth percentile of fluorescence observed over the entire experiment. If a trial had motion over threshold (5 µm) during half or more of the stimulation period, the entire trial was excluded, since the results of the experiment may not be interpretable; if motion occurred when not stimulating, it was corrected post hoc and the data were included. Holographic targets were aligned to calcium sources by calculating the Euclidean distance between the centroids of all holographic targets and all calcium sources and finding the minimum. Very rarely, targets were assigned to calcium sources with distance >15 µm. These targets were assumed to have failed and were excluded from subsequent analysis. We found that the OASIS deconvolution signals provided a good estimate of the first derivative of the calcium signal, since local peaks in the S vector aligned with the frame on which stimulation occurred better than ΔF/F0 did. Since deconvolved calcium signals decay much faster than fluorescent signals, this ameliorates analysis problems where slowly decaying fluorescence from a recently stimulated neuron may be attributed to holographic stimulation of the next cell in a sequence. Therefore, most subsequent analysis for ST-ChroME-expressing neurons was performed based on the S vector. In GtACR1 experiments with GCaMP6f in PV neurons, reduction of calcium activity was most apparent when analyzing z-scored fluorescence responses, likely because the relatively high firing rates of PV neurons make true baseline F0 values hard to determine. Each cell's fluorescence was z-scored on data from the entire experiment, and then had its baseline (determined during a nonstimulation period) subtracted to remove state-dependent variability.
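The ΔF/F0 normalization above, with F0 equal to the tenth percentile of the whole-experiment trace, can be written compactly. The linear-interpolation percentile convention below is our assumption; the authors' Matlab implementation may differ:

```python
def percentile(values, q):
    """q-th percentile (0-100) with linear interpolation between ranks."""
    s = sorted(values)
    k = (len(s) - 1) * q / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (k - lo) * (s[hi] - s[lo])

def dff(trace):
    """Delta-F over F0, with F0 = 10th percentile of the full trace."""
    f0 = percentile(trace, 10)
    return [(f - f0) / f0 for f in trace]
```

Using a low percentile rather than the mean makes F0 insensitive to stimulation-evoked transients, so the baseline reflects quiescent fluorescence.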
In single-neuron suppression experiments, neurons were stimulated one by one at a rate of 0.5 Hz for 1 s at 0.16 mW/µm² (50 mW per neuron). As the fluorescence response to stimulation lasted more than 2 s, the trial immediately following stimulation was excluded from analysis. Similarly, small-ensemble stimulation was performed at 0.5 Hz for 1 s, but at 0.08 mW/µm² (25 mW per neuron) for four neurons simultaneously, or 100 mW in the field of view. In both cases, cells that fell within the region of optical artifact were excluded from subsequent analysis, but were not considered a failed stimulation. The laser gate was used in every all-optical experiment. Small-ensemble responses were calculated as the average of every recorded member of the four stimulated cells, so if a member of the ensemble was in the optical artifact, that cell's response would be ignored and the remaining cell responses would comprise the ensemble response. The neurons that made up each ensemble were selected randomly using Matlab ‘randperm’ from a list of identified neurons. For analysis of single-neuron stimulation experiments, the S vector was z-scored on a trial-wise basis. For optical rheobase experiments, neurons were stimulated one by one at a rate of 2 Hz with ten 5-ms pulses at 30 Hz with varying laser intensities (corrected for the varying diffraction efficiency of holograms targeting areas across the accessible volume). To determine the power needed to activate neurons, the relationship between stimulation power and the mean z-scored S vectors was fit with a smoothing interpolant and the power to reach 80% of saturation was reported. To determine the single-neuron response matrix, neurons were sequentially stimulated with ten 5-ms pulses at 30 Hz, and the mean z-scored S vector for each neuron was averaged for two frames (~300 ms) after stimulation of each target.
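The optical-rheobase readout above, the power at which the fitted power-response curve reaches 80% of its saturating value, can be approximated with piecewise-linear interpolation (the authors used a smoothing interpolant in Matlab; linear interpolation between measured points is a simplification):

```python
def power_at_fraction(powers, responses, frac=0.8):
    """Lowest power at which the interpolated response first reaches
    frac * max(responses). Assumes `powers` is sorted ascending."""
    target = frac * max(responses)
    for i in range(1, len(powers)):
        r0, r1 = responses[i - 1], responses[i]
        if r0 < target <= r1:
            # linear interpolation within the crossing segment
            return powers[i - 1] + (target - r0) / (r1 - r0) * (powers[i] - powers[i - 1])
    return None
```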
For ensemble stimulation experiments, the S vector was z -scored on a trial-wise basis as before, but with a small modification. Since stimulated neurons were not stimulated once per trial, but instead were repetitively stimulated over the course of every trial as part of many different ensembles, we evaluated the response of each neuron to each unique stimulation by discarding frames in which it was directly stimulated as part of a different ensemble. This analysis essentially treated each 73-s sweep as a series of concatenated 2-s single trials, but preserved the baseline information obtained during nonstimulation periods of the long sweep. Responses to ensemble stimulation of ten 5-ms pulses at 10, 20, or 30 Hz are shown as the mean z -scored S vector for each stimulus. Maps showing the response of all neurons to ensemble stimuli show the mean z -scored S vector for all neurons for two frames (~300 ms) after the marked ensemble was stimulated. For correlational analysis, for each pair of neurons, Pearson’s ρ was calculated for each trial type from the raw S vector corresponding to those trials (i.e., one correlation coefficient was calculated from the concatenation of all trials of a given type, corresponding to 10–13 trials of length 73 s, or ~12–16 min of imaging data). Differences in the distributions of correlation coefficients across trial types were assessed for significance by the Friedman test with the Tukey–Kramer correction for multiple comparisons. Modeling the speed and scale of photoactivation A description of the model used to calculate the maximum number of light-evoked spikes per s (Supplementary Fig. 23) is available online in our repository ( ). Statistics All analyses were performed using Matlab (MathWorks). The analyses performed were: paired t test, Mann–Whitney U test, Wilcoxon signed-rank test, Fisher’s exact test, Friedman’s test, and Kruskal–Wallis test. All tests were two-sided unless otherwise noted. 
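The pairwise correlation analysis described above reduces to Pearson's ρ computed on the concatenated raw S-vector segments for each trial type; a stdlib sketch (the Friedman test itself is not reproduced here, and the slicing scheme is our illustration):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def trialtype_correlation(s_a, s_b, trial_slices):
    """Concatenate the S-vector segments for all trials of one type,
    then correlate the two cells' concatenated traces."""
    xa = [v for sl in trial_slices for v in s_a[sl]]
    xb = [v for sl in trial_slices for v in s_b[sl]]
    return pearson(xa, xb)
```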
For parametric tests, data distribution was assumed to be normal, but this was not formally tested. No statistical methods were used to predetermine sample sizes, but we collected sample sizes that were similar to or exceeded those reported in previous publications 13 , 14 , 25 . Data collection and analysis were not performed blind to the conditions of the experiments. Randomization was used in all applicable experiments (for experiments with multiple trial types, the order of trials was randomized). Animals and data points were only excluded based on criteria described above. Accession codes Addgene: pCAG-ChroME-mRuby2-ST, 108902 ; pAAV-CAG-DIO-ChroME-P2A-H2B-mRuby3, 108912 ; pCAG-H2B-mRuby3-IRES-eGtACR1-ST, 108960 ; pAAV-CAG-DIO-NLS-H2B-mRuby3-IRES-eGtACR1-ST, 109048 . Reporting Summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Code availability statement Code for alignment of 3D holography with 3D imaging, holographic control, hologram computation, and analysis will be hosted online upon publication. Data availability statement The datasets generated and analyzed in this study are available from the corresponding author on reasonable request. Additionally, the sequences for constructs created in this study will be made publicly available on Addgene. | What if we could edit the sensations we feel; paste in our brain pictures that we never saw, cut out unwanted pain or insert non-existent scents into memory? University of California, Berkeley neuroscientists are building the equipment to do just that, using holographic projection into the brain to activate or suppress dozens and ultimately thousands of neurons at once, hundreds of times each second, copying real patterns of brain activity to fool the brain into thinking it has felt, seen or sensed something. 
The goal is to read neural activity constantly and decide, based on the activity, which sets of neurons to activate to simulate the pattern and rhythm of an actual brain response, so as to replace lost sensations after peripheral nerve damage, for example, or control a prosthetic limb. "This has great potential for neural prostheses, since it has the precision needed for the brain to interpret the pattern of activation. If you can read and write the language of the brain, you can speak to it in its own language and it can interpret the message much better," said Alan Mardinly, a postdoctoral fellow in the UC Berkeley lab of Hillel Adesnik, an assistant professor of molecular and cell biology. "This is one of the first steps in a long road to develop a technology that could be a virtual brain implant with additional senses or enhanced senses." Mardinly is one of three first authors of a paper appearing online April 30 in advance of publication in the journal Nature Neuroscience that describes the holographic brain modulator, which can activate up to 50 neurons at once in a three-dimensional chunk of brain containing several thousand neurons, and repeat that up to 300 times a second with different sets of 50 neurons. "The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury," said Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute, who was not involved in the research project. "By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch." Holographic projection Each of the 2,000 to 3,000 neurons in the chunk of brain was outfitted with a protein that, when hit by a flash of light, turns the cell on to create a brief spike of activity. One of the key breakthroughs was finding a way to target each cell individually without hitting all at once. 
To focus the light onto just the cell body - a target smaller than the width of a human hair - of nearly all cells in a chunk of brain, they turned to computer-generated holography, a method of bending and focusing light to form a three-dimensional spatial pattern. The effect is as if a 3D image were floating in space. In this case, the holographic image was projected into a thin layer of brain tissue at the surface of the cortex, about a tenth of a millimeter thick, through a clear window into the brain. "The major advance is the ability to control neurons precisely in space and time," said postdoc Nicolas Pégard, another first author who works both in Adesnik's lab and the lab of co-author Laura Waller, an associate professor of electrical engineering and computer sciences. "In other words, to shoot the very specific sets of neurons you want to activate and do it at the characteristic scale and the speed at which they normally work." The researchers have already tested the prototype in the touch, vision and motor areas of the brains of mice as they walk on a treadmill with their heads immobilized. While they have not noted any behavior changes in the mice when their brains are stimulated, Mardinly said that their brain activity - which is measured in real time with two-photon imaging of calcium levels in the neurons - shows patterns similar to a response to a sensory stimulus. They're now training mice so they can detect behavior changes after stimulation. Prosthetics and brain implants The area of the brain covered - now a slice one-half millimeter square and one-tenth of a millimeter thick - can be scaled up to read from and write to more neurons in the brain's outer layer, or cortex, Pégard said. And the laser holography setup could eventually be miniaturized to fit in a backpack a person could haul around.
Mardinly, Pégard and the other first author, postdoc Ian Oldenburg, constructed the holographic brain modulator by making technological advances in a number of areas. Mardinly and Oldenburg, together with Savitha Sridharan, a research associate in the lab, developed better optogenetic switches to insert into cells to turn them on and off. The switches - light activated ion channels on the cell surface that open briefly when triggered - turn on strongly and then quickly shut off, all in about 3 milliseconds, so they're ready to be re-stimulated up to 50 or more times per second, consistent with normal firing rates in the cortex. Pégard developed the holographic projection system using a liquid crystal screen that acts like a holographic negative to sculpt the light from 40W lasers into the desired 3D pattern. The lasers are pulsed in 300 femtosecond-long bursts every microsecond. He, Mardinly, Oldenburg and their colleagues published a paper last year describing the device, which they call 3D-SHOT, for three-dimensional scanless holographic optogenetics with temporal focusing. "This is the culmination of technologies that researchers have been working on for a while, but have been impossible to put together," Mardinly said. "We solved numerous technical problems at the same time to bring it all together and finally realize the potential of this technology." As they improve their technology, they plan to start capturing real patterns of activity in the cortex in order to learn how to reproduce sensations and perceptions to play back through their holographic system. | 10.1038/s41593-018-0139-8 |
Physics | Destroying the superconductivity in a kagome metal | Guolin Zheng et al, Electrically controlled superconductor-to-failed insulator transition and giant anomalous Hall effect in kagome metal CsV3Sb5 nanoflakes, Nature Communications (2023). DOI: 10.1038/s41467-023-36208-6 Journal information: Nature Communications | https://dx.doi.org/10.1038/s41467-023-36208-6 | https://phys.org/news/2023-03-destroying-superconductivity-kagome-metal.html | Abstract The electronic correlations (e.g. unconventional superconductivity (SC), chiral charge order and nematic order) and giant anomalous Hall effect (AHE) in topological kagome metals AV 3 Sb 5 (A = K, Rb, and Cs) have attracted great interest. Electrical control of those correlated electronic states and AHE allows us to resolve their own nature and origin and to discover new quantum phenomena. Here, we show that electrically controlled proton intercalation has significant impacts on striking quantum phenomena in CsV 3 Sb 5 nanodevices mainly through inducing disorders in thinner nanoflakes and carrier density modulation in thicker ones. Specifically, in disordered thin nanoflakes (below 25 nm), we achieve a quantum phase transition from a superconductor to a “failed insulator” with a large saturated sheet resistance for T → 0 K. Meanwhile, the carrier density modulation in thicker nanoflakes shifts the Fermi level across the charge density wave (CDW) gap and gives rise to an extrinsic-intrinsic transition of AHE. With the first-principles calculations, the extrinsic skew scattering of holes in the nearly flat bands with finite Berry curvature by multiple impurities would account for the giant AHE. Our work uncovers a distinct disorder-driven bosonic superconductor-insulator transition (SIT), outlines a global picture of the giant AHE and reveals its correlation with the unconventional CDW in the AV 3 Sb 5 family. 
Introduction The layered kagome metals AV3Sb5 (A = K, Rb and Cs), which possess topological electron bands and geometrical frustration of their vanadium lattices, are of great interest 1 , 2 , 3 . This is in no small part due to the many quantum phenomena that they support, including unconventional SC 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , novel nematic order 12 , chiral charge density order 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , giant anomalous Hall effect 23 , 24 , 25 , as well as the interplay between two-gap SC and CDW in CsV3Sb5 26 . The unique coexistence of electronic correlations and band topology in AV3Sb5 allows for investigating intriguing transitions of these correlated states, such as the superconductor-insulator transition (SIT), a prototypical quantum phase transition (QPT) that is usually tuned by disorder, magnetic fields and electric gating 27 , 28 . Moreover, the origin of the giant AHE in AV3Sb5 and its correlation with the chiral CDW remain elusive 29 , 30 , despite several recently proposed mechanisms, including extrinsic skew scattering of Dirac quasiparticles by the frustrated magnetic sublattice 23 , the orbital currents of a novel chiral charge order 13 , and the chiral flux phase in the CDW phase 31 . Thus, the ability to tune the carrier density and the corresponding Fermi surfaces plays a vital role in understanding and manipulating these novel quantum states and in realizing exotic QPTs. In this work, we find that electrically controlled proton intercalation 32 , 33 has crucial impacts on the superconducting state, the CDW state and the associated AHE in CsV3Sb5 nanoflakes via disorder and carrier density modulation. In thinner nanoflakes (below 25 nm) with large gate voltages (e.g.
with an amplitude above 15 V), the enhanced disorder from intercalated protons suppresses both the CDW and the superconducting phase coherence, giving rise to a SIT associated with localized Cooper pairs and featuring a saturated sheet resistance of up to 10^6 Ω for T → 0, dubbed a “failed insulator”. In thicker CsV3Sb5 nanoflakes with much lower gate voltages (within 7 V), by contrast, the superconducting transition is retained with a nearly unchanged normal-state sheet resistance at 5 K, indicating a very limited impact of disorder. However, Hall measurements demonstrate a large modulation of the carrier density (up to 10^22 cm^-3), with the relevant Fermi surface topology changing from a hole pocket to an electron pocket. Consequently, we find that the giant anomalous Hall conductivity (AHC), with a maximal amplitude exceeding 10^4 Ω^-1 cm^-1, is mainly confined to a narrow hole-carrier-density window around \(p = (2.5 \pm 1.2) \times 10^{22}\ \mathrm{cm^{-3}}\) at low temperatures. Meanwhile, the AHE exhibits a clear extrinsic-intrinsic transition as the Fermi level shifts across the CDW gap near the saddle point. The observed giant AHE can be ascribed to the extrinsic skew scattering of the holes in the flat bands with nonzero Berry curvature by V vacancies and magnetic-field-tilted paramagnetic (PM) impurities. Results The layered material CsV3Sb5 has a hexagonal crystal structure with space group P6/mmm (No. 191). As shown in the upper panel of Fig. 1a , the V3Sb layers are sandwiched between antimonene layers and Cs layers. X-ray diffraction (XRD) (see Supplementary Fig. 1 ) reveals a sharp (001) diffraction peak, indicating a single crystal with a (001) preferred orientation, in line with previous work 4 . The striking feature of CsV3Sb5 is that the V atoms form a 2D kagome network. This frustrated magnetic sublattice of V was expected to induce novel correlation effects, such as spin-liquid states 34 , 35 .
The lower panel of Fig. 1a illustrates a schematic of the gating device. A CsV3Sb5 nanoflake is mounted on the solid proton conductor with an underlying Pt electrode to form a solid proton field-effect transistor (SP-FET). Next we demonstrate how the proton intercalation significantly affects the SC state, the CDW state and the giant AHE in CsV3Sb5 nanoflakes of distinct thicknesses. Fig. 1: Temperature-dependent longitudinal resistance curves under various gating voltages in device #5 (21 nm). a Schematic of proton gating on CsV3Sb5 nanoflakes (upper) and Hall-bar device (lower). b Temperature dependence of the sheet resistance at various gate voltages in device #5. c Sheet resistance as a function of charge density n_e near the SIT. The critical resistance R_c ~ 316 Ω is obtained at a carrier density \(n_c \sim 7.91 \times 10^{17}\ \mathrm{m^{-2}}\) . d Sheet resistance as a function of 1/T under the same voltages. It exhibits a “failed insulator” with a large saturated resistance for T → 0 K on the insulating side. e Multiple sets of R_s(T, B) curves can collapse onto a single function, akin to a 2D SIT. f Derivatives of the resistance curves R_xx(T) under different voltages. As the voltage changes from 0 V to −6.5 V, the CDW transition temperature T_cdw gradually decreases from 85 K to 73 K. The CDW is largely suppressed when the gate voltage exceeds −7.0 V. Protonic gate on thinner CsV3Sb5 nanoflakes We first investigate the impacts of proton intercalation on the correlated electron states, including SC and CDW, in thinner CsV3Sb5 flakes below 25 nm. Figure 1b shows the temperature-dependent sheet resistance of device #5, with a thickness of 21 nm, at various gate voltages. A clear SC phase appears with the offset transition temperature \(T_c^{\mathrm{offset}}\) around 3 K in the absence of a protonic gate.
Besides, a resistance anomaly characteristic of the CDW can be identified near 80 K on the R_s−T curve (around 90 K in bulk; see also Supplementary Fig. 2 ). Upon applying a protonic gate, SC is clearly suppressed, disappearing when \(V_g \le -2\ \mathrm{V}\), and R_s flattens at around \(V_g = -11\ \mathrm{V}\). At \(V_g < -11\ \mathrm{V}\), the temperature-dependent R_s gradually exhibits an upturn in the low-temperature region and eventually reaches above 10^6 Ω at \(V_g = -25\ \mathrm{V}\), indicating a quantum phase transition from a superconductor to an insulator. We plot the sheet resistance as a function of carrier density near the SIT at temperatures between 2 K and 50 K and obtain a critical resistance R_c ≈ 316 Ω at a critical carrier density \(n_c \approx 7.91 \times 10^{17}\ \mathrm{m^{-2}}\), as shown in Fig. 1c . Converting this critical resistance R_c to the sheet resistance per layer, \(R_{c/\mathrm{layer}} = \rho_c / l\) with l being the thickness of a single layer, we get \(R_{c/\mathrm{layer}} = 7268\ \Omega\), very close to the quantum resistance of a Cooper pair, \(R_Q \sim 6450\ \Omega\). Despite the large R_s, reaching above 10^6 Ω on the insulating side, the resistance tends to saturate for T → 0 K, as shown in Fig. 1d . Note that this insulating state with a saturated resistance for T → 0 K is not a typical insulator but a “failed insulator”, probably due to the incoherent tunneling between localized Cooper pairs 36 , 37 . This type of superconductor to “failed insulator” transition has also been observed in another sample, #8, in Supplementary Fig.
5 and mainly results from the enhanced effective disorder due to the intercalated protons in the thinner nanoflakes at higher gate voltages 38 . A SIT can usually be characterized by two distinct scenarios (bosonic and fermionic) according to the nature of the insulating phase, while finite-size scaling analysis can yield its critical exponents and further reveal the universality class of the QPT 39 , 40 , 41 , 42 . To get the critical exponents of the SIT, we plot more than twenty sets of \(R_s(T, B)\) curves and find that they can collapse onto a single function, as predicted for a 2D SIT. The appearance of a flattened resistance near R_c suggests the bosonic nature of the SIT, in which the coherent Cooper pairs in the SC phase are localized by disorder, with loss of macroscopic phase coherence in the insulating phase 27 , 28 . The finite-size scaling dependence of R_s on T and a tuning parameter has the form \(\rho(T, n_s) = \rho[T / T_0(n_s)]\) with \(T_0 \propto |n_s - n_c|^{\nu z}\), where \(n_s\) is the charge density, \(n_c\) is the critical carrier density with the value \(n_c \approx 7.91 \times 10^{17}\ \mathrm{m^{-2}}\) (or \(1.3 \times 10^{18}\ \mathrm{m^{-2}}\) in device #8) and \(T_0\) is the scaling parameter, which approaches zero at \(n_s = n_c\). ν is the correlation-length exponent and z is the temporal critical exponent 42 . By extracting the exponent product νz from a plot of \(\ln T_0\) versus \(\ln|n_s - n_c|\), we obtain νz = 1.85 (or 1.68 in device #8) with an uncertainty of ±0.14 (see Supplementary Fig. 3 ).
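The νz extraction just described amounts to a straight-line fit of lnT0 against ln|n_s − n_c|; a minimal sketch on synthetic data with a known exponent (the values below are illustrative, not the measured T0 data):

```python
import numpy as np

def exponent_product(n_s, T0, n_c):
    """Estimate nu*z from T0 ∝ |n_s - n_c|^{nu z} via a linear fit of
    ln(T0) against ln|n_s - n_c|; the slope is the exponent product."""
    slope, _ = np.polyfit(np.log(np.abs(n_s - n_c)), np.log(T0), 1)
    return slope

# Synthetic scaling parameters with a known exponent product of 1.85
# (the value reported for device #5) and the paper's critical density.
n_c = 7.91e17                                  # m^-2
n_s = n_c + np.linspace(0.2e17, 2.0e17, 8)     # densities on the insulating side
T0 = 3.0 * np.abs(n_s - n_c) ** 1.85
nu_z = exponent_product(n_s, T0, n_c)
```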
This estimated exponent product is close to that of the magnetic-field-tuned SIT in the hybrid system of SC indium islands deposited on a 2D indium oxide thin film 43 , which was also attributed to the localization of persisting Cooper pairs. Note that the critical exponent here is distinct from those of conventional 2D models for the SIT 28 , such as the classical percolation model (νz = 4/3) and the quantum percolation model (νz = 7/3), which probably results from the complexity of the SC gap and multiple impurities 44 . Thus, CsV3Sb5 provides us with a unique example system to explore rich QPTs involving intrinsic superconductors with topological energy bands and a frustrated kagome lattice. Figure 1f shows the derivatives of the RT curves at various V_g. Interestingly, the CDW transition temperature \(T_{\mathrm{CDW}} = 85\ \mathrm{K}\) at \(V_g = 0\ \mathrm{V}\) gradually decreases to 73 K at \(V_g = -6.5\ \mathrm{V}\), where SC has been suppressed. More importantly, at \(V_g \le -7\ \mathrm{V}\), we found that this resistance anomaly totally disappeared from the RT curve, indicating the disappearance of the CDW. The non-synchronous disappearance of SC and CDW reveals that SC is more sensitive to disorder. The suppression of the CDW is also consistent with recent works on CsV3Sb5 under high pressures, possibly due to band reconstructions or a Fermi level shift 4 , 5 , 6 , 7 , 8 . It is clear that the protonic gate significantly modifies the CDW and SC phases, facilitating further investigation of the intertwinement among these novel electronic correlations in AV3Sb5. Protonic gate on thicker CsV3Sb5 nanoflakes For a given gate voltage, thick samples diminish the impact of disorder from the intercalated protons, leaving mainly a large tuning of the carrier density. Let us concentrate on the significance of the carrier density modulation for the AHE in the CDW phase.
We choose thicker CsV3Sb5 nanoflakes with much lower gate voltages, within 7 V, and find that the proton intercalation mainly changes the carrier density in those thicker ones, leading to only a slight modulation of the SC transition temperature (Supplementary Fig. 6 ), which is consistent with recent reports 45 , 46 , 47 . Figure 2a shows the Hall traces of device #4 (around 80 nm) at various temperatures and selected gate voltages. At low magnetic fields, the Hall resistance R_yx at \(V_g = 6.4\ \mathrm{V}\) exhibited a nonlinear behavior at 5 K. This antisymmetric “S”-shaped R_yx was attributed to field-induced AHE in KV3Sb5 23 and CsV3Sb5 24 . At high fields, R_yx exhibits an approximately linear field dependence associated with the ordinary Hall effect induced by the Lorentz force. Note that AV3Sb5 is a multi-band kagome metal with its transport properties mainly determined by the hole pocket near the M points 48 . Thus we can use a simple band model to fit this linear Hall resistivity in the high-field region and extract the approximate hole carrier density at the M points. When \(6.4\ \mathrm{V} \ge V_g \ge -2.7\ \mathrm{V}\), the Hall traces in device #4 exhibit two distinct features. For each gating voltage, the temperature-dependent Hall effect demonstrates a sign reversal at the critical temperature T*, probably due to temperature-induced band renormalization 48 . In addition, the Hall slope decreases gradually as the voltage is swept towards −2.7 V, indicating a gradual increase of the hole carrier density. At \(V_g = -4.6\ \mathrm{V}\), however, the Fermi surface topology suddenly changes from a hole pocket to an electron pocket with a negative Hall slope. This doping-induced sign reversal of the Hall resistance has also been observed in other samples (Supplementary Fig. 9 ).
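The single-band extraction of the carrier density from the high-field Hall slope, ρ_yx ≈ B/(ne), can be sketched as follows. The curve below is synthetic, and fitting only the positive high-field side (a choice assumed here, not stated in the text) leaves any saturated anomalous contribution in the intercept rather than biasing the slope:

```python
import numpy as np

E = 1.602176634e-19  # elementary charge, C

def hall_density(B, rho_yx, B_min=5.0):
    """Single-band carrier density from the high-field linear Hall resistivity,
    rho_yx ≈ B/(n e) + const. Fitting only B >= B_min keeps a saturated
    anomalous offset in the intercept; positive n means hole-like carriers."""
    mask = B >= B_min
    slope, offset = np.polyfit(B[mask], rho_yx[mask], 1)  # Ohm m / T, Ohm m
    return 1.0 / (slope * E), offset

# Synthetic hole band, n = 2.5e28 m^-3 (2.5e22 cm^-3), plus an S-shaped
# anomalous contribution that saturates well below the fitting window.
B = np.linspace(-9.0, 9.0, 361)
n_true = 2.5e28
rho_yx = B / (n_true * E) + 4.0e-10 * np.tanh(B / 0.5)
n_est, rho_A = hall_density(B, rho_yx)
```

With these synthetic parameters the recovered density matches the input to better than 0.1%, and the intercept returns the saturated anomalous resistivity.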
In contrast to the hole pockets, the Hall traces in the electron pockets exhibit no sign reversal as the temperature is increased, as shown in the bottom right panel of Fig. 2a at \(V_g = -6.1\ \mathrm{V}\), indicating a dramatic suppression of T* in the electron pocket. We further plot the gate-dependent carrier density at 5 K in device #4 in Fig. 2b . At \(V_g = -4.6\ \mathrm{V}\), the Fermi level is shifted across the electron-hole crossover point. We note that the discontinuity of the carrier density under gate voltage probably stems from the complex evolution of the density of states (DOS) during the proton intercalation (see theoretical calculations below). Figure 2c shows the carrier-density-dependent T* (obtained at 5 K) for bulk crystals and four nanodevices. In the hole pockets, a higher hole density may lead to a smaller T*. However, T* approaches 0 K for the electron pockets, due to the sudden change of the Fermi surface topology. Fig. 2: Gate-tuned Hall resistance and carrier-density-dependent band topology. a Temperature-dependent Hall effect in device #4 under different gating voltages. b Gate-dependent carrier density in device #4 at 5 K. Sweeping the gate voltage from 6.4 V to −6.1 V, the band structure evolves from a hole band to an electron band in the low-temperature region. c Carrier-density-dependent T* in different samples. We now discuss the gate-dependent AHE in CsV3Sb5. The total Hall resistivity ρ_yx consists of two components 49 : \(\rho_{yx} = \rho_{yx}^N + \rho_{yx}^A\), with \(\rho_{yx}^N\) the normal Hall resistivity and \(\rho_{yx}^A\) the anomalous Hall resistivity.
In order to extract the AHE component, the Hall resistivity was linearly fitted at high field to subtract \(\rho_{yx}^N\). Figure 3a shows the gate-dependent anomalous Hall resistivity \(\rho_{yx}^A\) of device #4 at 5 K under various gate voltages. The maximum \(\rho_{yx}^A\) occurs at V_g = 4.5 V with an amplitude of 0.041 μΩ∙cm, approximately eight-fold larger than the minimum \(\rho_{yx}^A\) (0.0048 μΩ∙cm) measured at −4.6 V. Interestingly, the AHE also exhibits a sign reversal at V_g = −4.6 V, which is probably due to the sign change of the Berry curvature in different energy bands, as shown in Fig. 3a . To get the AHC \(\sigma_{xy}^A\), we first convert the Hall resistivity into the Hall conductivity \(\sigma_{xy} = \rho_{yx} / (\rho_{yx}^2 + \rho_{xx}^2)\), then linearly fit the conductivity at high field and subtract the normal Hall conductivity \(\sigma_{xy}^N\). Figure 3b displays the non-monotonic variation of both the AHC and the anomalous Hall angle (AHA) \(\theta = |\sigma_{xy}^A / \sigma_{xx}|\). The maximal AHC reaches \(1.24 \times 10^4\ \Omega^{-1}\,\mathrm{cm}^{-1}\) with an AHA of 2.2% at 4.5 V. Moreover, the AHC (AHA) can be modulated by more than ten times in device #4, revealing the high tunability of the AHE in CsV3Sb5. Figure 3c shows two carrier density regions that exhibit a large AHE for different devices.
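The two-step AHC extraction described above — tensor inversion to σ_xy, then subtraction of the linear-in-B ordinary part fitted at high field — can be sketched on a synthetic curve (the numbers are illustrative, not measured data):

```python
import numpy as np

def hall_conductivity(rho_yx, rho_xx):
    """Invert the resistivity tensor: sigma_xy = rho_yx / (rho_yx^2 + rho_xx^2)."""
    return rho_yx / (rho_yx ** 2 + rho_xx ** 2)

def anomalous_conductivity(B, sigma_xy, B_min=5.0):
    """Fit sigma_xy linearly at high positive field and subtract the ordinary
    (linear-in-B) part; the intercept estimates the saturated anomalous value."""
    slope, sigma_A = np.polyfit(B[B >= B_min], sigma_xy[B >= B_min], 1)
    return sigma_xy - slope * B, sigma_A

# Synthetic Hall conductivity: ordinary linear term plus an anomalous
# S-shaped term saturating at 1.2e6 Ohm^-1 m^-1 (i.e., 1.2e4 Ohm^-1 cm^-1).
B = np.linspace(-9.0, 9.0, 361)
sigma_xy = 2.0e4 * B + 1.2e6 * np.tanh(B / 0.5)
sigma_xy_A, sigma_A0 = anomalous_conductivity(B, sigma_xy)
```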
The first region is mainly confined to the hole pocket around \(p = (2.5 \pm 1.2) \times 10^{22}\ \mathrm{cm^{-3}}\), with the maximum AHC exceeding 10^4 Ω^-1 cm^-1. Remarkably, another large AHE appears in the electron pocket around \(n = (3 \pm 0.6) \times 10^{22}\ \mathrm{cm^{-3}}\), with an AHC around 5 × 10^3 Ω^-1 cm^-1. Shifting away from these two regions, however, the AHC either keeps a finite value or approaches zero for devices #1 and #2 (Supplementary Fig. 9 ). Fig. 3: Gate-tuned giant anomalous Hall effects in device #4. a Gate-dependent anomalous Hall effect at 5 K after subtraction of the linear Hall background in the high-field regions (ordinary Hall part). b Gate-dependent AHC and anomalous Hall angles (AHA). c Carrier-density-dependent AHC in different devices #1, #2, #3 and #4. The maximum AHC occurred at a hole carrier density of ~2 × 10^22 cm^-3. The scaling law between the AHC and σ_xx may assist in identifying the underlying mechanism of the AHE 49 , 50 . Figure 4a displays the scaling relations \(\sigma_{xy}^A\) vs σ_xx at various gate voltages and temperatures. In the high-conductivity region (σ_xx exceeding \(5 \times 10^5\ \Omega^{-1}\,\mathrm{cm}^{-1}\)), the maximal AHC obtained in device #4 (at 4.5 V) and device #7 (at 3.8 V) can be well captured by a linear scaling relation \(\sigma_{xy}^A \propto 0.14\,\sigma_{xx}\), revealing that the skew-scattering mechanism may dominate the AHE 49 , 50 , 51 , 52 , 53 . The possible side-jump contribution is also discussed in Supplementary section 9 . However, at V_g = −4.6 V, a finite AHC around \(10^3\ \Omega^{-1}\,\mathrm{cm}^{-1}\) is approximately independent of the longitudinal conductivity σ_xx, implying that the intrinsic AHE from the Berry curvature becomes dominant. For other gating voltages, the AHE likely lies in the mixed region.
The gate-induced crossover between the extrinsic (at V_g = 4.5 V) and intrinsic (at V_g = −4.6 V) regimes reveals a strong dependence of the AHE on the Fermi energy of CsV3Sb5. Fig. 4: Scaling relation, band structure and the density of states in gated CsV3Sb5. a The scaling relation of the AHC against the longitudinal conductivity in devices #4 and #7 (with thickness around 50 nm). Near the high-conductivity region (above 5 × 10^5 Ω^-1 cm^-1), the giant AHE is dominated by skew scattering (#4 at 4.5 V and #7 at 3.8 V). At −4.6 V (#4, electron band), the AHE is dominated by the intrinsic Berry curvature. b Band structures of the paramagnetic phase with different doping levels in CsV3Sb5. Ne refers to the charge number in each primitive cell. c Illustration of the evolution of the Fermi energy under different gate voltages. The red dashed arrow shows the probable Fermi level in the bulk crystal (slightly above VHS1). Cs vacancies in CsV3Sb5 nanoflakes will significantly lower the Fermi level (black arrow). Applying a negative (positive) voltage will shift the Fermi level downward (upward). The giant AHE occurs when the Fermi level approaches the upper sub-band with relatively large DOS. To gain more insight into the AHE, we performed first-principles calculations of the band structure, the DOS and the intrinsic AHC (Supplementary Fig. 12 ). The calculated AHC due to the field-induced magnetization of the spin of V atoms over a broad energy region exhibits a maximum (~\(1500\ \Omega^{-1}\,\mathrm{cm}^{-1}\)) in the hole band, one order of magnitude smaller than the maximum experimental value. This suggests that the intrinsic contribution from the Berry curvature of single-particle energy bands should not dominate the giant AHE in experiments. Note that, because of the tiny observed magnetic moments of the V atoms 45 , the realistic intrinsic AHC from field-induced magnetization should be much smaller than the observed intrinsic AHC.
It is known that the extrinsic skew-scattering contribution to the AHE essentially originates from the asymmetric scattering of carriers by nonmagnetic/magnetic impurities. Usually, there are three distinct scenarios that produce extrinsic skew scattering and the resultant AHE 49 . By careful examination of each scenario, we could exclude Kondo scattering and resonant skew scattering (Supplementary section 9 ). We found that the scenario associated with a finite Berry curvature of the energy bands and scattering by nonmagnetic/magnetic impurities primarily accounts for the extrinsic AHE in AV3Sb5 49 , 54 . Discussion We further investigate the impact of charge doping on the band structure and the AHE. Since the charge doping in CsV3Sb5 is orbitally selective, hole (electron) doping can significantly shift the van Hove singularity (VHS1) upward (downward) with respect to the Fermi level within the rigid-band approximation, as shown in Fig. 4b . In our pristine CsV3Sb5 single crystal, the Fermi level lies slightly above VHS1 near the M point (Supplementary section 2 ), with some nearly flat bands consisting of \(d_{xz,yz}\) and \(d_{xy,x^2-y^2}\) orbitals of the V atoms (Supplementary Fig. 13 ) 55 , 56 , 57 , 58 . When T < T_cdw, a CDW gap opens near VHS1, splits the bands at VHS1 into two sub-bands and suppresses the DOS near the Fermi level, as shown in Fig. 4c . Accordingly, the Fermi level in bulk CsV3Sb5 lies in the CDW gap 11 , 55 near the M point (red dashed arrow), exhibiting a large AHE. In exfoliated CsV3Sb5 nanoflakes, the Fermi level approaches the lower sub-band due to the increasing Cs vacancies, and the AHC at V_g = 0 V reduces to about one third of the maximal value, i.e., 4500 Ω^-1 cm^-1.
Applying a negative voltage will accordingly lower the Fermi level (details in Supplementary section 14 ) and generate a relatively large AHE region (with AHC around 5000 Ω⁻¹ cm⁻¹) in the electron pockets. At Vg > 0 V, however, the Fermi level will be shifted upward and back to the upper sub-band again (dashed red arrow), and the giant AHE reappears at 4.5 V in device #4. This giant AHE primarily comes from the skew scattering of holes in the nearly flat bands at VHS1 with nonzero Berry curvature by the V vacancies and/or PM impurities. The large discrepancy of the AHE in the electron and hole pockets is consistent with the asymmetric distribution of the DOS in the CDW bands near the M points 11 . Note that the intrinsic AHC at Vg = −4.6 V near the electron-hole crossover point is mainly ascribed to the large suppression of the DOS at the middle of the CDW gap. After evaluating the possible AHE from the electrons near the Γ point and the Dirac bands, we find that this intrinsic AHE mainly originates from the recently proposed chiral charge order forming from the electronic states near the saddle point 59 , 60 , 61 . In summary, we revealed two major impacts of proton intercalation on CsV3Sb5: inducing disorder in thinner nanoflakes and modulating the carrier density in thicker ones. In thin nanoflakes below 25 nm with |Vg| ≥ 15 V, we first observed a distinct superconductor-to-“failed insulator” transition associated with localized Cooper pairs. In thicker nanoflakes, a moderate gate voltage can lead to a large modification of the carrier density and induce a clear extrinsic-intrinsic transition of the AHE. The giant AHE in AV3Sb5 can be attributed to the intense extrinsic skew scattering of holes in the nearly flat bands, with a finite intrinsic AHE in the CDW phase at the saddle points, by multiple impurities.
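A common way to separate the two contributions discussed above is to fit the anomalous Hall conductivity as σ_xy^A = a·σ_xx + b, where the slope a captures the skew-scattering part (linear in σ_xx) and the intercept b the intrinsic Berry-curvature term. A sketch of that decomposition on synthetic data (not the paper's measurements; a_true and b_true are made-up values):

```python
# Decompose sigma_xy^A into skew-scattering (slope) and intrinsic
# (intercept) parts via linear least squares, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
sigma_xx = np.linspace(2e5, 8e5, 20)      # longitudinal conductivity, S/cm
a_true, b_true = 2.0e-2, 1.5e3            # dimensionless slope, intrinsic AHC (S/cm)
sigma_ahe = a_true * sigma_xx + b_true + rng.normal(0, 50, sigma_xx.size)

# Design matrix [sigma_xx, 1]; solve for [a, b] in least-squares sense.
A = np.column_stack([sigma_xx, np.ones_like(sigma_xx)])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, sigma_ahe, rcond=None)
print(f"skew slope a ~ {a_fit:.3g}, intrinsic AHC b ~ {b_fit:.3g} S/cm")
```

On real data the fit would be restricted to the high-conductivity window where the linear-in-σ_xx skew term is expected to hold.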
This significant and electrically controlled SIT and AHE in CsV3Sb5 should inspire more investigations of the relevant intriguing physics and promising energy-saving nanoelectronic devices. Methods Single crystal growth Single crystals of CsV3Sb5 were synthesized via the Sb flux method. Elemental Cs, V and Sb were mixed at a molar ratio of 1:3:20 and loaded into an MgO crucible. This process was performed in a glove box under an Ar atmosphere. The crucible was then sealed in an evacuated quartz tube. The ampoule was slowly heated to 1000 °C and kept there for 20 h. After cooling at a rate of 2 °C/min, the extra flux was removed by fast centrifuging at 640 °C. (Also in Supplementary Section 1 .) Device fabrication and transport measurements The solid protonic electrolyte was prepared by a sol-gel process, mixing tetraethyl orthosilicate (from Alfa Aesar), ethanol, deionized water and phosphoric acid (from Alfa Aesar, 85 wt%) at a typical molar ratio of 1:18:6:0.03. The mixed solution was then stirred for 2 h and annealed for another 2 h at 50 °C in a sealed bottle to form polymerized Si–O–Si chains. Finally, the substrate with bottom gate electrodes was spin-coated with the prepared protonic solution and baked at 150 °C for 25 min. Transport measurements were performed in a commercial Physical Property Measurement System (PPMS) with magnetic fields up to 9 T and a commercial Magnetic Property Measurement System (MPMS) with a magnetic field of 7 T. Data availability The data used in Figs. 1 – 4 of the main text are provided in the Source Data. Additional data related to this study are available from the corresponding authors upon reasonable request. Source data are provided with this paper. | A new RMIT-led international collaboration published in February has uncovered, for the first time, a distinct disorder-driven bosonic superconductor-insulator transition.
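As an aside on the electrolyte recipe in the Methods above: the 1:18:6:0.03 molar ratio can be converted into weighable batch quantities. A minimal sketch; the molar masses are standard values and the 10 mmol batch size is an arbitrary assumption, not from the paper:

```python
# Convert the sol-gel molar ratio (TEOS : ethanol : water : H3PO4 =
# 1 : 18 : 6 : 0.03) into masses for a chosen batch size.
MOLAR_MASS = {  # g/mol, standard values
    "TEOS": 208.33,
    "ethanol": 46.07,
    "water": 18.02,
    "H3PO4": 98.00,
}
RATIO = {"TEOS": 1.0, "ethanol": 18.0, "water": 6.0, "H3PO4": 0.03}

def batch_masses(teos_mol):
    """Mass (g) of each reagent for `teos_mol` moles of TEOS.
    H3PO4 is on a pure-acid basis; divide by 0.85 to weigh the
    85 wt% solution specified in the Methods."""
    return {k: teos_mol * RATIO[k] * MOLAR_MASS[k] for k in RATIO}

for name, grams in batch_masses(0.010).items():   # 10 mmol TEOS batch
    print(f"{name:8s} {grams:7.3f} g")
```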
The discovery outlines a global picture of the giant anomalous Hall effect and reveals its correlation with the unconventional charge density wave in the AV3Sb5 kagome metal family, with potential applications in future ultra-low-energy electronics. Superconductors, which can transmit electricity without energy dissipation, hold great promise for the development of future low-energy electronics technologies, and are already applied in diverse fields such as hover trains and high-strength magnets (such as medical MRIs). However, precisely how superconductivity forms and works in many materials remains an unsolved issue and limits its applications. Recently, a new kagome superconductor family, AV3Sb5, has attracted intensive interest for its novel properties. "Kagome" materials feature an unusual lattice named for a Japanese basket-weave pattern with corner-sharing triangles. The AV3Sb5 materials (where A refers to cesium, rubidium, or potassium) provide ideal platforms for physics studies such as topology and strong correlations, but despite many recent investigations, the origins of the material's giant anomalous Hall effect and superconductivity remain in debate. The FLEET-led collaboration of researchers at RMIT University (Australia) and partner organization the High Magnetic Field Laboratory (China) confirms for the first time the electric control of superconductivity and AHE in the van der Waals kagome metal CsV3Sb5. Manipulating giant anomalous Hall effect via reversible proton intercalation Possessing topological electron bands and geometrical frustration of vanadium lattices, the layered kagome metals AV3Sb5 have attracted great interest in condensed matter physics due to the many quantum phenomena that they support, including: unconventional, novel nematic order; chiral charge density order; giant anomalous Hall effect (AHE); and the interplay between two-gap superconductivity and charge density wave (CDW) in AV3Sb5.
Moreover, the origin of the giant AHE in AV3Sb5 and its correlation with the chiral CDW remain elusive, in spite of several recently proposed mechanisms, including the extrinsic skew scattering of Dirac quasiparticles with a frustrated magnetic sublattice, the orbital currents of a novel chiral charge order, and the chiral flux phase in the CDW phase. "Up to now, we had obtained many intriguing results with the proton gate technique in vdW spintronic devices. Since this technique can effectively modulate the carrier density up to 10²¹ cm⁻³, we would like to apply it to AV3Sb5, which harbors a similar carrier density level," says the new study's first author, FLEET Research Fellow Dr. Guolin Zheng (RMIT). "The ability to tune the carrier density and the corresponding Fermi surfaces would play a vital role in understanding and manipulating these novel quantum states and would potentially realize some exotic quantum phase transitions." The team chose to test this theory on CsV3Sb5, which potentially has the largest spare atom space for proton intercalation. The devices were easily designed and fabricated based on the team's rich experience in this field. Their subsequent results with CsV3Sb5 depended strongly on material thickness. "It was very difficult to effectively modulate the 'thicker' nanoflakes (more than 100 nm)," says co-first author, FLEET Research Fellow Dr. Cheng Tan (RMIT). "But when the thickness went down to around 40 nm, the injection of the proton became quite easy," says Cheng. "We even found that the injection is highly reversible. Indeed, we have seldom met such a proton-friendly material." Interestingly, with the evolving proton intercalation, the carrier type (or the "sign" of the Hall effect) could be modulated to either hole or electron type, and the amplitude of the AHE achieved was effectively tuned as well.
Further experimental and theoretical investigations indicate that this dramatic modulation of the giant AHE originates from the Fermi-level shift in the reconstructed band structures. "The results of the gated AHE also revealed that the most probable origin of the AHE is skew scattering, and this further improves our understanding of the kagome metal," explains Guolin. "But we have not yet observed a superconductor-insulator transition in 40 nm nanoflakes." "We must try thinner CsV3Sb5 nanoflakes to explore this." Proton intercalation induced superconductor-to-'failed insulator' transition The unique coexistence of electronic correlations and band topology in AV3Sb5 allows for investigating intriguing transitions of these correlated states, such as the superconductor-insulator transition, a quantum phase transition usually tuned by disorder, magnetic fields and electric gating. By decreasing the number of atomic layers, the team took further steps to explore the potential quantum phase transitions in CsV3Sb5. "At first I directly tried some <10 nm ultrathin nanoflakes," says Cheng. "I did observe that the critical temperature of the superconducting phase decreased with increasing proton intercalation, but I could not definitively confirm that the superconductivity disappeared, as it might still exist at millikelvin temperatures, which we cannot reach. Also, the devices were very fragile when I tried to further increase the proton intercalation." So Cheng changed strategy and dealt with thicker 10–20 nm nanoflakes, as well as trying different electrode materials to seek better electrical contact. This strategy met with success. The team, surprisingly, observed that the critical temperature of the CDW phase decreased and the temperature-dependent resistance curves exhibited a clear superconductor-to-insulator transition under increasing proton injection.
"The proton intercalation introduced disorder and suppressed both the CDW and superconducting phase coherence," says contributing author A/Prof Lan Wang (also at RMIT). "And this gave rise to a superconductor-insulator transition associated with localized Cooper pairs, featuring a saturated sheet resistance reaching up to 10⁶ Ω as temperature approaches zero, dubbed a 'failed insulator.'" "Our work uncovers a distinct disorder-driven bosonic superconductor-insulator transition, outlines a global picture of the giant AHE and reveals its correlation with the unconventional CDW in the AV3Sb5 family." "This significant and electrically controlled superconductor-insulator transition and anomalous Hall effect in kagome metals should inspire more investigations of the relevant intriguing physics, with promise for energy-saving nanoelectronic devices." "Electrically controlled superconductor-to-failed insulator transition and giant anomalous Hall effect in kagome metal CsV3Sb5 nanoflakes" was published in Nature Communications in February 2023. | 10.1038/s41467-023-36208-6
Earth | American cities are many times brighter than German counterparts, study shows | Kyba, C.C.M., Garz, S., Kuechly, H., Sánchez de Miguel, A., Zamorano, J., Hölker, F., (2015) "High-resolution imagery of Earth at Night: new sources, opportunities, and challenges." Remote Sensing. 2015, 7(1), 1-23; DOI: 10.3390/rs70100001 | http://dx.doi.org/10.3390/rs70100001 | https://phys.org/news/2014-12-american-cities-brighter-german-counterparts.html | Abstract
Missions and Applications Systems and Technologies for Remote Sensing Application through Unmanned Aerial Systems (STRATUS) Teaching and Learning in Remote Sensing Technological Developments of Unmanned Aerial Systems (UAS) for Remote Sensing Applications Ten Years of Remote Sensing at Barcelona Expert Center Ten Years of TerraSAR-X—Scientific Results Terrestrial Laser Scanning The Application of Thermal Urban Remote Sensing to Understand and Monitor Urban Climates The Development and Validation of Remote Sensing Products for Terrestrial, Hydrological, and Ecological Applications at the Regional Scale The Environmental Mapping and Analysis Program (EnMAP) Mission: Preparing for Its Scientific Exploitation The Internet of Things (IoT) in Remote Sensing: Opportunities and Challenges The Kyoto and Carbon Initiative—Environmental Applications by ALOS-2 PALSAR-2 The Role of Remote Sensing in Sustainable Renewable Energy Exploitation The Use of Earth Observations for Exposure Assessment in Epidemiological Studies The Use of UAV Platforms for Cultural Heritage Monitoring and Surveying Themed Issue: Practical or Economic Applications of Optical/Thermal Remote Sensing: A Themed Issue in Honor of Professor Toby N. 
Carlson Thermal Remote Sensing Applications for the Atmospheric Surface Layer Thermal Remote Sensing Applications: Present Status and Future Possibilities Time Series Analysis in Remote Sensing: Algorithm Development and Applications Time-of-Flight Range-Imaging Cameras Towards Practical Application of Artificial Intelligence in Remote Sensing Towards Remote Long-Term Monitoring of Wetland Landscapes Trends in UAV Remote Sensing Applications Trends in UAV Remote Sensing Applications: Part II Two Decades of MODIS Data for Land Surface Monitoring: Exploitation of Existing Products and Development of New Algorithms UAV and IoT-based Measurement of Vegetation Indices for Environmental Monitoring UAV Imagery for Precision Agriculture UAV Photogrammetry and Remote Sensing UAV-Based Remote Sensing Methods for Modeling, Mapping, and Monitoring Vegetation and Agricultural Crops Uncertainties in Remote Sensing Understanding Animal-Ecosystem Interactions with Remote Sensing Underwater 3D Recording & Modelling Underwater Acoustic Remote Sensing Unmanned Aerial Vehicle Applications in Cryospheric Sciences Unmanned Aerial Vehicles (UAVs) based Remote Sensing Urban Remote Sensing Use of LiDAR and 3D point clouds in Geohazards Validation and Inter-Comparison of Land Cover and Land Use Data Validation on Global Land Cover Datasets Visible Infrared Imaging Radiometers and Applications Volcanic Processes Monitoring and Hazard Assessment Using Integration of Remote Sensing and Ground-Based Techniques Volcano Remote Sensing Water Optics and Water Colour Remote Sensing What can Remote Sensing Do for the Conservation of Wetlands? All Special Issues Volume Issue Number Page Logical Operator Operator AND OR Search Text Search Type All fields Title Abstract Keywords Authors Affiliations Doi Full Text | German cities emit several times less light per capita than comparably sized American cities, according to a recent publication in the journal Remote Sensing. 
The size of the gap grew with city size, as light per capita increased with city size in the USA but decreased with city size in Germany. The study also examined regional differences, and surprisingly found that light emission per capita was higher in cities in the former East of Germany than in those in the former West. The lead author, Dr. Christopher Kyba, studies visible light at night as a member of the Remote Sensing section of the German Research Center for Geosciences (GFZ). "The size of the difference in light emission is surprisingly large. This work will allow us to identify comparable cities in order to uncover the reasons behind the differences." These could include differences in the type of lamps, but also architectural factors like the width of the streets and the number of trees. The LED lamps currently being installed in many cities are expected to greatly change the nighttime environment, for example by reducing the amount of light that shines upwards. A main point of the study is to emphasize the great improvement in the quality of nighttime imagery of Earth since 2012. The European Space Agency's NightPod instrument has allowed astronauts to take high-resolution images of individual cities. In addition, the entire world is now imaged nightly at 750 meter resolution by the Visible Infrared Imaging Radiometer Suite Day-Night Band onboard the Suomi National Polar-Orbiting Program weather satellite. This new imagery has made it possible to identify and measure the output of individual bright sources of light pollution for the first time. The study found that in megacities in developing countries, the brightest light sources were typically airports or harbors. In contrast, the brightest areas in the capital cities of Europe are often associated with leisure, for example stadiums and city centers.
While artificial light at night is a problem for astronomers and nocturnal animals, it has the potential to be an important tool in understanding human activity. In order to make the most use out of it, the researchers say they will need to study urban light emissions in detail, including their spectrum, the directions in which light is emitted, and changes in light use and lit area over time. The study demonstrated one practical use of the new data: since maps of nighttime light emission highlight the areas where light pollution is especially prevalent, they provide information about which areas can best be targeted for energy savings. Coauthor Dr. Franz Hölker from the Leibniz Institute for Freshwater Ecology and Inland Fisheries (IGB) explains, "artificial light is responsible for a sizable portion of all nighttime electricity consumption. Identifying areas where light could be more efficiently used will make it possible to save energy, reduce costs, and reduce the impact of artificial light on the nighttime environment." | 10.3390/rs70100001 |
Physics | 'Quantum Internet': Towards realization of solid-state quantum network | dx.doi.org/10.1038/nature12016 Journal information: Nature | http://dx.doi.org/10.1038/nature12016 | https://phys.org/news/2013-04-quantum-internet-solid-state-network.html | Abstract Quantum entanglement between spatially separated objects is one of the most intriguing phenomena in physics. The outcomes of independent measurements on entangled objects show correlations that cannot be explained by classical physics. As well as being of fundamental interest, entanglement is a unique resource for quantum information processing and communication. Entangled quantum bits (qubits) can be used to share private information or implement quantum logical gates 1 , 2 . Such capabilities are particularly useful when the entangled qubits are spatially separated 3 , 4 , 5 , providing the opportunity to create highly connected quantum networks 6 or extend quantum cryptography to long distances 7 , 8 . Here we report entanglement of two electron spin qubits in diamond with a spatial separation of three metres. We establish this entanglement using a robust protocol based on creation of spin–photon entanglement at each location and a subsequent joint measurement of the photons. Detection of the photons heralds the projection of the spin qubits onto an entangled state. We verify the resulting non-local quantum correlations by performing single-shot readout 9 on the qubits in different bases. The long-distance entanglement reported here can be combined with recently achieved initialization, readout and entanglement operations 9 , 10 , 11 , 12 , 13 on local long-lived nuclear spin registers, paving the way for deterministic long-distance teleportation, quantum repeaters and extended quantum networks. Main A quantum network can be constructed by using entanglement to connect local processing nodes, each containing a register of well-controlled and long-lived qubits 6 . 
Solids are an attractive platform for such registers, as the use of nanofabrication and material design may enable well-controlled and scalable qubit systems 14 . The potential impact of quantum networks on science and technology has recently spurred research efforts towards generating entangled states of distant solid-state qubits 15 , 16 , 17 , 18 , 19 , 20 , 21 . A prime candidate for a solid-state quantum register is the nitrogen–vacancy (NV) defect centre in diamond. The NV centre combines a long-lived electronic spin ( S = 1) with a robust optical interface, enabling measurement and high-fidelity control of the spin qubit 15 , 22 , 23 , 24 . Furthermore, the NV electron spin can be used to access and manipulate nearby nuclear spins 9 , 10 , 11 , 12 , 13 , 25 , thereby forming a multi-qubit register. To use such registers in a quantum network requires a mechanism to coherently connect remote NV centres. Here we demonstrate the generation of entanglement between NV centre spin qubits in distant set-ups. We achieve this by combining recently established spin initialization and single-shot readout techniques 9 with efficient resonant optical detection and feedback-based control over the optical transitions, all in a single experiment and executed with high fidelity. These results put solid-state qubits on a par with trapped atomic qubits 3 , 4 , 5 as highly promising candidates for implementing quantum networks. Our experiment makes use of two NV spin qubits located in independent low-temperature set-ups separated by 3 m ( Fig. 1a ). We encode the qubit basis states |↑〉 and |↓〉 in the NV spin sublevels m S = 0 and m S = −1, respectively. Each qubit can be independently read out by detecting spin-dependent fluorescence in the NV phonon sideband (non-resonant detection) 9 . The qubits are individually controlled with microwave pulses applied to on-chip striplines 23 . 
Quantum states encoded in the qubits are extremely long-lived: using dynamical decoupling techniques 23 , we obtain a coherence time exceeding 10 ms ( Fig. 1b ), which is the longest coherence time measured so far for a single electron spin in a solid. Figure 1: Experimental set-up and protocol for generating long-distance entanglement between two solid-state spin qubits. a , Experimental set-up. Each nitrogen–vacancy (NV) centre resides in a synthetic ultrapure diamond oriented in the <111> direction. The two diamonds are located in two independent low-temperature confocal microscope set-ups separated by 3 m. The NV centres can be individually excited resonantly by red lasers and off-resonantly by a green laser. The emission (dashed arrows) is spectrally separated into an off-resonant part (phonon sideband, PSB) and a resonant part (zero-phonon line, ZPL). The PSB emission is used for independent single-shot readout of the spin qubits 9 . The ZPL photons from the two NV centres are overlapped on a fibre-coupled beamsplitter. Microwave pulses for spin control are applied via on-chip microwave striplines. An applied magnetic field of 17.5 G splits the m S = ±1 levels in energy. The optical frequencies of NV B are tuned by a d.c. electric field applied to the gate electrodes (inset, scanning electron microscope image of a similar device). To enhance the collection efficiency, solid immersion lenses have been milled around the two NV centres 9 . b , The coherence of the NV B spin qubit as a function of total free evolution time t FE during an N -pulse dynamical decoupling sequence 23 . Curves are fitted to A exp[−( t FE / T coh ) 3 ] + 0.5. For N = 64 we find T coh = 14.3 ± 0.4 ms. Error bars are 2 s.e. c , Entanglement protocol (details in main text), illustrating the pulse sequence applied simultaneously to both NV centres. Both NV centres are initially prepared in a superposition 1/√2(|↑〉+|↓〉). 
A short 2 ns spin-selective resonant laser pulse creates spin–photon entanglement 1/√2(|↑1〉 + |↓0〉). The photons are overlapped on the beamsplitter and detected in the two output ports. Both spins are then flipped, and the NV centres are excited a second time. The detection of one photon in each excitation round heralds the entanglement and triggers individual spin readout. We generate and herald entanglement between these distant qubits by detecting the resonance fluorescence of the NV centres. The specific entanglement protocol we use is based on the proposal of ref. 26, and is schematically drawn in Fig. 1c. Both centres NV A and NV B are initially prepared in a superposition 1/√2(|↑〉 + |↓〉). Next, each NV centre is excited by a short laser pulse that is resonant with the |↑〉 to |e〉 transition, where |e〉 is an optically excited state with the same spin projection as |↑〉. Spontaneous emission locally entangles the qubit and photon number, leaving each set-up in the state 1/√2(|↑1〉 + |↓0〉), where 1 (0) denotes the presence (absence) of an emitted photon; the joint qubit–photon state of both set-ups is then described by 1/2(|↑_A↑_B〉|1_A1_B〉 + |↓_A↓_B〉|0_A0_B〉 + |↑_A↓_B〉|1_A0_B〉 + |↓_A↑_B〉|0_A1_B〉). The two photon modes, A and B, are directed to the input ports of a beamsplitter (see Fig. 1a), so that fluorescence observed in an output port could have originated from either NV centre. If the photons emitted by the two NV centres are indistinguishable, detection of precisely one photon on an output port would correspond to measuring the photon state 1/√2(|1_A0_B〉 ± e^(−iϕ)|0_A1_B〉) (where ϕ is a phase that depends on the optical path length). Such a detection event would thereby project the qubits onto the maximally entangled state |ψ〉 = 1/√2(|↑_A↓_B〉 ± e^(−iϕ)|↓_A↑_B〉).
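The projection step described above can be checked numerically. The following is a minimal NumPy sketch (my own illustration, not the authors' code; the basis ordering and the choice ϕ = 0 are assumptions) that builds the joint qubit–photon state and contracts it with the detected single-photon state:

```python
import numpy as np

# Spin basis |up>, |down> and photon number basis |0>, |1> (assumed ordering).
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v0, v1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# Joint state, ordered (spin A, spin B, photon A, photon B):
# 1/2 (|uu>|11> + |ud>|10> + |du>|01> + |dd>|00>)
psi = 0.5 * (kron(up, up, v1, v1) + kron(up, dn, v1, v0)
             + kron(dn, up, v0, v1) + kron(dn, dn, v0, v0))

# Single-photon state heralded behind the beamsplitter, taking phi = 0:
# 1/sqrt(2) (|1_A 0_B> + |0_A 1_B>)
phot = (kron(v1, v0) + kron(v0, v1)) / np.sqrt(2)

# Contract the photon indices with the detected photon state.
spin = psi.reshape(4, 4) @ phot        # amplitudes over (spin A, spin B)
p_click = np.vdot(spin, spin).real     # probability of this outcome, ~0.25
spin = spin / np.sqrt(p_click)         # post-measurement spin state

# The spins end up in the maximally entangled state (|ud> + |du>)/sqrt(2).
bell = (kron(up, dn) + kron(dn, up)) / np.sqrt(2)
overlap = abs(np.vdot(bell, spin))     # ~1.0
```

Using the orthogonal photon state (|1_A0_B〉 − |0_A1_B〉)/√2 in the same contraction yields the other Bell state, matching the ± sign in the text.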
Any realistic experiment, however, suffers from photon loss and imperfect detector efficiency; detection of a single photon is thus also consistent with creation of the state |↑↑〉. To eliminate this possibility, both qubits are flipped and optically excited for a second time. Because |↑↑〉 is flipped to |↓↓〉, no photons are emitted in the second round for this state. In contrast, the states |ψ〉 will again yield a single photon. Detection of a photon in both rounds thus heralds the generation of an entangled state. The second round not only renders the protocol robust against photon loss, but it also changes ϕ into a global phase, making the protocol insensitive to the optical path length difference 26 (see Supplementary Information ). Furthermore, flipping the qubits provides a refocusing mechanism that counteracts spin dephasing during entanglement generation. The final state is one of two Bell states |Ψ±〉 = 1/√2(|↑_A↓_B〉 ± |↓_A↑_B〉), with the sign depending on whether the same detector (+) or different detectors (−) clicked in the two rounds. A key challenge for generating remote entanglement with solid-state qubits is obtaining a large flux of indistinguishable photons, in part because local strain in the host lattice can induce large variations in photon frequency. The optical excitation spectra of the NV centres ( Fig. 2a ) display sharp spin-selective transitions. Here we use the E y transition (spin projection m S = 0) in the entangling protocol and for qubit readout; we use the A 1 transition for fast optical pumping into |↑〉 (ref. 9 ). Owing to different strain in the two diamonds, the frequencies of the E y transitions differ by 3.5 GHz, more than 100 linewidths. By applying a voltage to an on-chip electrode ( Fig. 1a inset), we tune the optical transition frequencies of one centre (NV B) through the d.c. Stark effect 18 , 27 and bring the E y transitions of the two NV centres into resonance ( Fig. 2a bottom).
Figure 2: Generating and detecting indistinguishable photons. a , Photoluminescence excitation spectra of NV A and NV B; frequency is given relative to 470.4515 THz. Transitions are labelled according to the symmetry of their excited state. The A 1 transition is used to initialize the NV centre into the |↑〉 state ( m S = 0) and the E y transition is used for entanglement creation and single-shot readout. By applying a voltage to the gate electrodes of NV B, the E y transitions are tuned into resonance (dashed line). b , Dynamical preparation of charge and optical resonance. Top, preparation protocol. A 10 μs green laser pulse (green line) pumps the NV centre into the desired negative charge state 9 . Next, the optical transition frequencies are probed by simultaneously exciting the E y and A 1 transitions for 60 μs while counting the number of detected photons. Conditional on passing a certain threshold the experimental sequence is started (preparation successful) or else the protocol is repeated (preparation failed). APD, avalanche photodiode. Bottom, line-narrowing effect of the preparation protocol exemplified by the dependence of the decay time of optical Rabi oscillations on preparation threshold. Dashed line indicates lifetime-limited damping 28 . For the entanglement experiment, we choose a threshold of 45 (20) photons for NV A (NV B). c , Resonant optical excitation and detection. The polarization axis of the detection path is aligned perpendicular to the excitation axis. The dipole axis of the E y transition is oriented in between these two axes (inset). Remaining laser light reflection is time-filtered by defining a photon detection window that starts after the laser pulse. Data are recorded with 256 ps time bins. P det , detection probability. d , Two-photon quantum interference using resonant excitation and detection. 
The g (2) correlation function is obtained from all coincidence detection events of APD 1 and APD 2 during the entanglement experiment (see Supplementary Information ). The sidepeaks are fitted to an exponential decay; from the fit values, we obtain the expected central peak shape g (2) ⊥ (red line) for non-interfering photons. The visibility of the interference is given by ( g (2) ⊥ − g (2) )/ g (2) ⊥ . PowerPoint slide Full size image Charge fluctuations near the NV centre also affect the optical frequencies. To counteract photo-ionization, we need to regularly apply a green laser pulse to repump the NV centre into the desired charge state. This repump pulse changes the local electrostatic environment, leading to jumps of several linewidths in the optical transition frequencies 28 . To overcome these effects, we only initiate an experiment if the number of photons collected during a two-laser probe stage ( Fig. 2b ) exceeds a threshold, thereby ensuring that the NV centre’s optical transitions are on resonance with the lasers. The preparation procedure markedly improves the observed optical coherence: as the probe threshold is increased, optical Rabi oscillations persist for longer times (see Fig. 2b ). For high thresholds, the optical damping time saturates around the value expected for a lifetime-limited linewidth 28 , indicating that the effect of spectral jumps induced by the repump laser is strongly mitigated. Besides photon indistinguishability, successful execution of the protocol also requires that the detection probability of resonantly emitted photons exceed that of scattered laser photons and of detector dark counts. This is particularly demanding for NV centres, because only about 3% of their emission is in the zero-phonon line and useful for the protocol. To minimize detection of laser photons, we use both a cross-polarized excitation–detection scheme ( Fig. 
2c inset) and a detection time filter that exploits the difference between the length of the laser pulse (2 ns) and the NV centre’s excited-state lifetime (12 ns; Fig. 2c ). For a typical detection window used, this reduces the contribution of scattered laser photons to about 1%. Combined with microfabricated solid-immersion lenses for enhanced collection efficiency ( Fig. 1a inset) and spectral filtering for suppressing non-resonant NV emission, we obtain a detection probability of a resonant NV photon of about 4 × 10 −4 per pulse—about 70 times higher than the sum of background contributions. The degree of photon indistinguishability and background suppression can be obtained directly from the second-order autocorrelation function g (2) , which we extract from our entanglement experiment (see Supplementary Information ). For fully distinguishable photons, the value of g (2) would reach 0.5 at zero arrival time difference. A strong deviation from this behaviour is observed ( Fig. 2d ) due to two-photon quantum interference 29 that, for perfectly indistinguishable photons, would make the central peak fully vanish. The remaining coincidences are likely to be caused by (temperature-dependent) phonon-induced transitions between optically excited states 30 in NV A (these transitions are less relevant for NV B because it is at a lower temperature). The visibility of the two-photon interference observed here—(80 ± 5)% for |δτ| < 2.56 ns—is a significant improvement over previously measured values 18 , 19 and central to the success of the entangling scheme. To generate and detect remote entanglement experimentally, we run the following sequence: first, both NV centres are independently prepared into the correct charge state and brought into optical resonance according to the scheme in Fig. 2b . Then we apply the entangling protocol shown in Fig. 1c using a 600 ns delay between the two optical excitation rounds.
We repeat the protocol 300 times before we return to the resonance preparation step; this number is a compromise between maximizing the attempt rate and minimizing the probability of NV centre ionization. A fast logic circuit monitors the photon counts in real time and triggers single-shot qubit readout on each set-up whenever entanglement is heralded, that is, whenever a single photon is detected in each round of the protocol. The readout projects each qubit onto the {|↑〉, |↓〉} states ( Z -basis), or onto the {|↑〉+|↓〉, |↑〉−|↓〉} states ( X or − X basis). The latter two are achieved by first rotating the qubit by π/2 using a microwave pulse before readout. By correlating the resulting single-qubit readout outcomes, we can verify the generation of the desired entangled states. To obtain reliable estimates of the two-qubit state probabilities, we correct the raw data with a maximum-likelihood method for local readout errors. These readout errors are known accurately from regular calibrations performed during the experiment (see Supplementary Information ). Figure 3 shows the obtained correlations. When both qubits are measured along Z (readout basis { Z , Z }), the states Ψ + and Ψ − (as identified by their different photon signatures) display strongly anti-correlated readout results (odd parity). The coherence of the joint qubit state is revealed by measurements performed in rotated bases ({ X , X }, {− X , X }), which also exhibit significant correlations. Furthermore, these measurements allow us to distinguish between states Ψ + and Ψ − . For Ψ + the { X , X } ({− X , X }) outcomes exhibit even (odd) parity, whereas the Ψ − state displays the opposite behaviour, as expected. The observed parities demonstrate that the experiment yields the two desired entangled states. Figure 3: Verification of entanglement using spin–spin correlations. Each time that entanglement is heralded the spin qubits are individually read out and their results correlated.
The readout bases for NV A and NV B can be rotated by individual microwave control (see text). The state probabilities are obtained by a maximum-likelihood estimation on the raw readout results (see Supplementary Information ). Error bars depict 68% confidence intervals; dashed lines indicate expected results for perfect state fidelity. Data are obtained from 739 heralding events. For Ψ − , the detection window in each round is set to 38.4 ns, and the maximum absolute detection time difference |δ τ | between the two photons relative to their laser pulses is restricted to 25.6 ns. δ τ = τ 2 − τ 1 , where τ 1 is the arrival time of the first photon relative to the first laser pulse and τ 2 the arrival time of the second photon relative to the second laser pulse. For Ψ + the second detection window is set to 19.2 ns with |δ τ | < 12.8 ns, in order to reduce the effect of photo-detector afterpulsing. PowerPoint slide Full size image We calculate a strict lower bound on the state fidelity by combining the measurement results from different bases (see Supplementary Information ): where P ij is the probability for the measurement outcome ij in the { Z , Z } basis (that is, the diagonal elements of the density matrix ρ ) and C is the contrast between odd and even outcomes in the rotated bases. We find a lower bound of (69 ± 5)% for Ψ − and (58 ± 6)% for Ψ + , and probabilities of 99.98% and 91.8%, respectively, that the state fidelity is above the classical limit of 0.5. These values firmly establish that we have created remote entanglement, and are the main result of this Letter. The lower bound on the state fidelity given above takes into account the possible presence of coherence within the even-parity subspace {|↑↑〉, |↓↓〉}. However, the protocol selects out states with odd parity and therefore this coherence is expected to be absent (see Supplementary Information ). 
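The rotated readout bases used in these correlation measurements (a π/2 microwave pulse applied before the Z-basis readout, as described earlier) can be verified with a few lines of linear algebra. This sketch is my own illustration, not the authors' code; the rotation axis (y) and sign conventions are assumptions, since the pulse phases are not specified in the text:

```python
import numpy as np

def ry(theta):
    # Rotation of a single qubit by angle theta about the y axis.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Measuring Z after a rotation R is equivalent to measuring R_dagger Z R,
# so a pi/2 pulse maps the fixed Z readout onto the X or -X basis,
# depending on the sense of the rotation.
m_plus = ry(-np.pi / 2).T @ Z @ ry(-np.pi / 2)   # equals +X
m_minus = ry(np.pi / 2).T @ Z @ ry(np.pi / 2)    # equals -X
```

With either sign convention, correlating the post-rotation Z outcomes of the two qubits reproduces the { X , X } and {− X , X } parities discussed in the text.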
To compare the results to the expected value and to account for sources of error, we set the related (square-root) term in equation (1) to zero and obtain for the data in Fig. 3 as best estimate F = (73 ± 4)% for Ψ − and F = (64 ± 5)% for Ψ + . Several known error sources contribute to the observed fidelity. Most importantly, imperfect photon indistinguishability reduces the coherence of the state. In Fig. 4a we plot the maximum state fidelity expected from photon interference data ( Fig. 2d ) together with the measured state fidelities, as a function of the maximum allowed difference in detection time of the two photons relative to their respective laser pulses. We find that the fidelity can be slightly increased by restricting the data to smaller time differences, albeit at the cost of a lower success rate ( Fig. 4b ). Figure 4: Dependence of the fidelity and the number of entanglement events on the detection time difference of the photons. a , Upper bound on the state fidelity from photon interference data (see Supplementary Information ) and best estimate of the state fidelity from the correlation data as a function of the maximum allowed photon detection time difference (|δ τ | < δ τ max ). Detection time windows are chosen as in Fig. 3 . Shaded regions indicate 68% confidence intervals. b , Number of entanglement events obtained during 158 h as a function of the maximum allowed photon detection time difference, δ τ max . PowerPoint slide Full size image The fidelity is further decreased by errors in the microwave pulses (estimated at 3.5%), spin initialization (2%), spin decoherence (<1%) and spin flips during the optical excitation (1%) (see Supplementary Information ). Moreover, Ψ + is affected by afterpulsing, whereby detection of a photon in the first round triggers a fake detector click in the second round. Such afterpulsing leads to a distortion of the correlations (see, for example, the increased probability for |↓↓〉 in Fig. 
3 ) and thereby a reduction in fidelity for Ψ + (see Supplementary Information ). Besides these errors that reduce the actual state fidelity, the measured value is also slightly lowered by a conservative estimation for readout errors and by errors in the final microwave π/2 pulse used for reading out in a rotated basis. The fidelity of the remote entanglement could be significantly increased in future experiments by further improving photon indistinguishability. This may be achieved by more stringent frequency selection in the resonance initialization step and by working at lower temperatures, which will reduce phonon-mediated excited-state mixing 30 . Also, the microwave errors can be much reduced; for instance, by using isotopically purified diamonds 12 and polarizing the host nitrogen nuclear spin 9 . The success probability of the protocol is given by P Ψ = 1/2 η A η B . Here η i is the overall detection efficiency of resonant photons from NV i and the factor 1/2 takes into account cases where the two spins are projected into |↓↓〉 or |↑↑〉, which are filtered out by their different photon signature. In the current experiment, we estimate P Ψ ≈ 10 −7 from the data in Fig. 2c . The entanglement attempt rate is about 20 kHz, yielding one entanglement event per 10 min. This is in good agreement with the 739 entanglement events obtained over a time of 158 h. The use of optical cavities would greatly enhance both the collection efficiency and emission in the zero-phonon line 31 and increase the success rate by several orders of magnitude. Creation of entanglement between distant spin qubits in diamond, as reported here, opens the door to extending the remarkable properties of NV-based quantum registers towards applications in quantum information science. By transferring entanglement to nuclear spins near each NV centre, a non-local state might be preserved for seconds or longer 12 , facilitating the construction of cluster states 2 or quantum repeaters 8 . 
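As a sanity check, the quoted success rate follows directly from the numbers given above. This is a back-of-the-envelope sketch using the rounded values from the text, not the authors' analysis:

```python
# Per-pulse detection efficiency of a resonant NV photon (~4e-4 per centre).
eta_a = eta_b = 4e-4
# Success probability per attempt: the factor 1/2 discards the even-parity
# (|uu>, |dd>) branches, which have a different photon signature.
p_success = 0.5 * eta_a * eta_b                      # ~8e-8, of order 1e-7
attempt_rate = 20e3                                  # ~20 kHz attempt rate
events_per_second = p_success * attempt_rate
minutes_per_event = 1.0 / events_per_second / 60.0   # ~10 minutes per event
expected_events = events_per_second * 158 * 3600     # ~900 over the 158 h run
```

The estimate of roughly 900 events over 158 h is the same order of magnitude as the 739 events reported, consistent with the paper's statement of good agreement.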
At the same time, the auxiliary nuclear spin qubits also provide an excellent resource for processing and error correction. When combined with future advances in nanofabricated integrated optics and electronics, the use of electrons and photons as quantum links and nuclear spins for quantum processing and memory offers a compelling route towards realization of solid-state quantum networks. | (Phys.org) —Researchers at TU Delft in the Netherlands have managed to bring two electrons, three meters from each other, into a quantum- entangled state. This result marks a major step towards realizing a quantum network that can be used to connect future quantum computers and to send information in a completely secure way by means of 'teleportation'. The results have been published online on April 24 in Nature. Entanglement is arguably the most intriguing consequence of the laws of quantum mechanics. When two particles become entangled, their identities merge: their collective state is precisely determined but the individual identity of each of the particles has disappeared. The entangled particles behave as one, even when separated by a large distance. Einstein doubted this prediction, which he called 'spooky action at a distance', but experiments have proven its existance. Entangled states are interesting for computers as they allow a huge number of calculations to be carried out simultaneously. A quantum computer with 400 basic units ('quantum bits') could, for example, already process more bits of information simultaneously than there are atoms in the universe. In recent years, scientists have succeeded in entangling quantum bits within a single chip. Now, for the first time, this has been successfully achieved with quantum bits on different chips. Prof. Ronald Hanson's research group at TU Delft's Kavli Institute of Nanoscience makes quantum bits from electrons in diamonds. 
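The success-rate bookkeeping quoted above (an attempt rate of about 20 kHz, an estimated per-attempt success probability P_Ψ of about 10^-7, and 739 heralded events in 158 h) can be sanity-checked in a few lines. This is an illustrative sketch, not code from the paper; all figures are taken from the text:

```python
# Sketch (not code from the paper): consistency check of the success-rate
# figures quoted in the text, i.e. a ~20 kHz attempt rate, an estimated
# per-attempt success probability P ~ 1e-7, and 739 events in 158 h.
attempt_rate_hz = 20e3    # entanglement attempt rate
p_success = 1e-7          # estimated per-attempt success probability P_Psi
events = 739              # heralded entanglement events observed
hours = 158.0             # total measurement time

minutes_per_event = 1.0 / (attempt_rate_hz * p_success) / 60.0   # expected wait per event
implied_p = events / (attempt_rate_hz * hours * 3600.0)          # probability implied by the data

print(f"expected wait: ~{minutes_per_event:.1f} min per event")
print(f"implied per-attempt success probability: {implied_p:.1e}")
```

The expected wait of roughly 8 min per event and the implied per-attempt probability of roughly 6.5 × 10^-8 agree at the order-of-magnitude level with the quoted one event per 10 min and P_Ψ ≈ 10^-7, consistent with the "good agreement" noted in the text.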
'We use diamonds because they form 'mini-prisons' for electrons if there is a nitrogen atom in the position of one of the carbon atoms. As we can examine these mini-prisons individually, we can study and monitor an individual electron and even a single atomic nucleus. We can prepare the spin (direction of rotation) of these particles in a previously determined state, control the spin and subsequently read it out. We do all this using a material from which chips can be made. That is important because it is believed that only chip-based systems can be scaled up to a practical technology', Hanson explains. The two electrons are stimulated by a laser pulse to emit a photon (particle of light). This photon carries with it information on the state of the electron. Both photons pass at the same time through a semi-transparent mirror. Following the laws of quantum physics, a photon detected behind the mirror came from both electrons at the same time. This way, detection of a photon entangles the two electrons. Measurement of the state (‘spin’) of one of the electrons then instantaneously determines the state of the other electron. Compare this to tossing a coin. Tossing a single coin yields a random outcome. But for entangled coins, whenever one of the coins shows “heads”, the other one always yields “tails”, and vice versa. Entanglement over a distance of three metres Co-financed by the FOM Foundation and in cooperation with the British firm Element Six, Hanson and his colleagues succeeded in bringing two electrons in different diamonds, situated at several metres' distance from each other, into an entangled state. As the two electrons do not feel each other at this large distance, the researchers used light particles to mediate the required interaction.
To prove the resulting entanglement, the spin orientation of both electrons was read out and compared. Although the spin orientation of each electron individually was completely random, exactly as predicted by quantum mechanics, the researchers found that the two orientations were always exactly opposite to each other. This proves that the two electrons are entangled and behave as a single entity. 'Incidentally, the three-metre distance between the electrons was chosen quite arbitrarily. We could conduct this experiment over much larger distances', Hanson adds. Teleportation The next step for the research is the teleportation of electrons. Hanson: 'In theory it is possible to 'teleport' the state of a particle over a large distance to another particle by making smart use of entanglement. Quantum teleportation does not relocate the material itself, but only the state of that material. But given the fact that all elementary particles are identical, quantum teleportation of one electron to another has the same effect as relocating the electron.' According to Hanson, in addition to new fundamental insights, there are two further reasons why the publication in Nature is likely to be an important impulse for the development of new technologies. 'Firstly, because this is an important step towards creating a quantum network for communication between future quantum computers – a quantum internet. We are already working on expanding the experiments using more quantum bits per chip. Entanglement could be used to link such a network of quantum computers.' 'Secondly, teleportation offers the possibility of sending information in a completely secure way. With teleportation, the information does not travel through the intermediate space and therefore cannot be intercepted.' | dx.doi.org/10.1038/nature12016 |
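The behaviour described above (each spin individually random, the pair always exactly opposite) is precisely the z-basis measurement statistics of the singlet-like state |Ψ−⟩ = (|↑↓⟩ − |↓↑⟩)/√2. A minimal sketch, illustrative only and not the experimental protocol:

```python
import numpy as np

# Illustrative sketch (not the experimental protocol): z-basis measurement
# statistics of the two-spin singlet state |Psi-> = (|up,down> - |down,up>)/sqrt(2).
# Amplitudes are ordered as |up,up>, |up,down>, |down,up>, |down,down>.
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

probs = np.abs(psi_minus) ** 2        # Born-rule outcome probabilities
p_same = probs[0] + probs[3]          # both spins give the same result
p_first_up = probs[0] + probs[1]      # marginal: the first spin alone is random

print(f"P(same outcome) = {p_same}, P(first spin up) = {p_first_up}")
```

The same-outcome probability is exactly zero while each single-spin marginal is 1/2, matching the observation that each electron's readout is random yet the pair is perfectly anticorrelated.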
Earth | Deep-sea plastic accumulations by turbidity currents: NW South China sea | Guangfa Zhong et al. Transport and accumulation of plastic litter in submarine canyons—The role of gravity flows, Geology (2021). DOI: 10.1130/G48536.1 Journal information: Geology | http://dx.doi.org/10.1130/G48536.1 | https://phys.org/news/2021-01-deep-sea-plastic-accumulations-turbidity-currents.html | Abstract Manned submersible dives discovered plastic litter accumulations in a submarine canyon located in the northwestern South China Sea, ∼150 km from the nearest coast. These plastic-dominated litter accumulations were mostly concentrated in two large scours in the steeper middle reach of the canyon. Plastic particles and fragments generally occurred on the upstream-facing sides of large boulders and other topographic obstacles, indicating obstruction during down-valley transportation. Most of the litter accumulations were distributed in the up-valley dipping slopes downstream of the scour centers. This pattern is tentatively linked to turbidity currents, which accelerated down the steep upstream slopes of the scours and underwent a hydraulic jump toward the scour centers before decelerating on the upstream-facing flank. Associated seabed sediment consisted of clayey and sandy silts, with unimodal or bimodal grain-size distributions, which are typical for turbidites. The focused distribution of the litter accumulations is therefore linked to turbidity currents that episodically flush the canyon. Our findings provide evidence that litter dispersion in the deep sea may initially be governed by gravity flows, and that turbidity currents efficiently transfer plastic litter to the deeper ocean floor. INTRODUCTION Marine plastic pollution is a pressing global challenge because of its deleterious impact on the environment and its ecosystems, on human health, and on social economy ( U.N. Environment, 2019 ). 
It has been estimated that ∼4.8–12.7 million tons of plastics enter the oceans each year ( Jambeck et al., 2015 ), and 70% of this amount would reach the seafloor ( Pham et al., 2014 ). Compared to waste floating on the sea surface, benthic litter has received considerably less attention, primarily due to the difficulty of access. Limited studies have nevertheless shown that benthic litter is ubiquitous in the ocean ( Tekman et al., 2017 ; Chiba et al., 2018 ). Submarine canyons not only serve as conduits delivering vast amounts of sediments, nutrients, and pollutants into the deep sea, but also as sinks (permanent and/or transient) of large volumes of such materials ( Fildani, 2017 ). Litter density in submarine canyons has been reported to be 2–3 times higher than that on adjacent open shelves and slopes ( Pham et al., 2014 ; Cau et al., 2017 ; Kane et al., 2020 ). The mechanisms by which benthic litter is dispersed in submarine canyons are, however, largely unknown. It has been suggested that certain underflows, including internal tides ( Schlining et al., 2013 ; van den Beld et al., 2017 ) and various types of gravity flows ( Wei et al., 2012 ; Tubau et al., 2015 ; Daly et al., 2018 ; Kane and Clare, 2019 ; Pierdomenico et al., 2019 ; Pohl et al., 2020 ), might be responsible. Here, we report the results of manned submersible dives that found focused accumulations of benthic litter in a submarine canyon of the South China Sea (SCS). We investigated the transport and depositional processes that formed these accumulations by documenting litter distribution together with grain-size and morphodynamic analyses. BACKGROUND AND METHODS Located in the northwestern SCS, the studied submarine canyon (18.10–18.64°N, 111.86–112.10°E) is headed at 350 m water depth on the outer shelf, ∼150 km from the nearest coast, and it terminates at ∼2350 m depth on the continental slope. At its distal end, it merges with the Xisha Trough, a large submarine canyon in the SCS ( Fig. 
1A ). The shelf-indented canyon consists of a 20-km-long, relatively gentle upper reach, followed by an 8-km-long, steep and rugged middle reach, and a 32-km-long flat lower reach. The middle reach of the canyon, the focus of this study, consists of a long stepped chute leading into two successive large scours at the chute toe ( Fig. 1B ; Fig. S1 in the Supplemental Material 1 ). Circulation on the northern SCS margin is characterized by three layers of western boundary currents (WBCs), i.e., the upper layer cyclonic (<500–1000 m water depth), the intermediate anticyclonic, and the deep cyclonic (>2000 m water depth) WBCs ( Wang et al., 2011 ; Zhou et al., 2017 ; Zhu et al., 2019 ). Shelf-parallel currents also occur, including the Guangdong Coastal Current in the nearshore area, and the SCS Warm Current (SCSWC) mainly between the 200 and 400 m isobaths in the outer shelf to shelf-break zone ( Fig. 1A ; Guan and Fang, 2006 ). In addition, the margin is subject to frequent typhoons and strong internal waves ( Alford et al., 2015 ). We conducted seven manned submersible dives in May 2018 (dive 78), July 2019 (dives 159 and 172–173), and March 2020 (dives 227–229; Figs. 1 – 3 ). The initial purpose of these dives was to investigate the step-like morphology in the canyon’s middle reach. However, we encountered unexpectedly large litter accumulations in our first dive ( Fig. 2 ; Fig. S1; Peng et al., 2019 ). These accumulations became the focus of the present study. We used visual observations and video footage collected during the dives to investigate the occurrence of the benthic litter, locate the litter piles, and measure their sizes. The video footage was clear enough to allow us to identify litter items larger than 10 cm in size. The scattered litter items were individually counted along the dive tracks on each photographic snapshot, with density denoted by number of litter items per snapshot (nlps, Fig. 3 ). 
Areas of high density (≥4 nlps) in litter items were delineated ( Figs. 2A , 2B , and 3 ). The length, width, height, and orientation of the litter piles were measured (Table S1). These estimates may have an error of up to 20%–30% as evaluated by repeat observations of individual litter piles. One 27-cm-long piston core was sampled at 1 cm intervals for grain-size analysis using a laser particle-size analyzer to understand the nature of the bottom sediment ( Fig. 1C ). Details of the background and methods are provided in the Supplemental Material. RESULTS Benthic litter in the canyon occurred either as scattered items or in piles ( Figs. 2 and 3 ). Statistics from video footage shot along the canyon thalweg revealed that the scattered litter items were dominantly plastics (∼89%), and up to 88% of these plastics were distributed within the scours (Fig. S1). Litter density varied from 0 to >50 nlps ( Fig. 3 ; Fig. S1). Notably, the highest litter densities were not found in the deepest portions, or centers, of the scours; instead, they were found on the west and south (downstream) sides of the scours ( Figs. 2A , 2B , and 3 ). The west and south boundaries of the high-density litter areas were topographically higher than their east and north counterparts by 1.8–16.0 m and 6.4–12.6 m, respectively, although there was one exception at transverse profile T4 ( Fig. 3 ). In general, litter density decreased laterally away from the scour centers along the transverse and longitudinal profiles ( Fig. 3 ). Litter items near large boulders and topographic obstacles were noted to occur generally on their upstream sides ( Figs. 2C and 2I–2K ). We delineated 71 litter piles in this study, all of which were distributed in the scours ( Figs. 2A and 2B ). The litter piles averaged 2–61 m in length, 0.5–8 m in width, and 0.1–1.2 m in height, with a maximum width and height of up to 12–15 m and 5–6 m, respectively (Table S1). 
The largest litter pile, consisting of three segments, measured 61 m in length and ∼241 m 3 in volume ( Figs. 2G–2H and 2L–2M ). The litter piles were mostly northwest- to northeast-oriented, parallel or at an acute angle to the adjacent canyon thalweg. The remaining smaller litter piles (<5 m 3 in volume), and those occurring upstream of obstacles, were roughly east-west–oriented, transverse to the valley ( Figs. 2A–2B and 2I–2K ). The majority of the litter piles were distributed in clusters on the up-valley dipping slopes downstream of the scour centers ( Figs. 2A and 2B ). Comparison between repeated dive observations showed that considerable changes in the configuration and components of litter piles had taken place ( Figs. 2I and 2J ). Grain-size analysis showed that the bottom sediment consisted of clayey silt with sandy silt interbeds. The clayey silt was characterized by unimodal grain-size distributions with mode at 6–7Φ (15.6–7.8 μm), while a secondary mode at 2–4Φ (250–62.5 μm) was recorded for the sandy silt interbeds ( Fig. 1C ). DISCUSSION The heterogeneous but focused distribution of the litter indicates that it has experienced reworking in the canyon. Litter items that accumulated on the upstream side of obstacles suggest that they were transported by down-valley gravity-driven flows. The occurrence of poorly sorted sediment piles, including large blocks sometimes mixed with plastic items ( Figs. 2E and 2F ), may signal the occurrence of more cohesive flows (such as debris flows). The plastic particles and fragments appeared to be incompletely mixed with seabed sediment ( Figs. 2 and 3 ), which may be a result of kinetic sieving ( Middleton, 1970 ). The latter could potentially bring up relatively light and large plastic fragments on the top layer of the flow deposits. 
The unimodal grain-size distribution of the silt-dominated sediment is typical for turbidites ( Visher, 1969 ) and different from hemipelagic deposits, which are often polymodal, being composed of a variety of sources ( McCave et al., 1995 ; Holz et al., 2004 ). The bimodality of sandy interbeds is characteristic of many basal turbidite layers ( Reynolds, 1987 ). Therefore, the transport and deposition of sediment and litter in the canyon are probably linked to turbidity currents, with lesser influence by submarine landslides. The litter accumulations occurred largely at the immediate downstream end of the scour centers, which may be associated with changes in the morphodynamic conditions. Currents traversing the steep upstream slopes of the scours accelerate toward the central depression, most likely undergoing a hydraulic jump and decelerating against the upstream-facing slopes. The two scours at the chute toe are interpreted as two cyclic-step bed forms formed by alternating Froude supercritical and subcritical flows ( Figs. 4A and 4B ; Fildani et al., 2006 ; Cartigny et al., 2011 ). The decelerated post-jump subcritical flows may favor the deposition of litter items on the up-valley dipping slopes downstream of the scour centers ( Fig. 4 ; Carvajal et al., 2017 ). Depending on the velocity of the decelerated currents, litter items could be either deposited further downstream by more powerful currents or laid down directly at scour centers by weaker currents ( Figs. 2A and 2B ). Large boulders and topographic features act as obstacles to litter items carried in turbidity currents, akin to log jams in fluvial systems. Preferential distribution of the litter items to the west side of the scours ( Figs. 2A , 2B , and 3 ) might be associated with the Coriolis effect, which would deflect the bulk of the currents to the right (looking downstream; Komar, 1969 ). 
The possible involvement of the intermediate WBC, active in the same depth range, is excluded considering that it flows eastward, contrary to the deviation ( Fig. 1A ). Similar to plant materials in floods, the low-density, platy plastic waste tends to be transported in the upper layer of a gravity flow ( Kane and Clare, 2019 ). Nevertheless, we cannot rule out the possibility that large platy plastic items with a particle size of 50–100 cm or more might be transported as traction loads in the decelerated post-jump currents, as indicated by the imbricated plastic items observed in some larger litter piles ( Fig. 2D ). This phenomenon may be explained by the settling velocity: Compared to the silt-dominated sediment in the canyon, plastics may have a much larger size and possibly an increased density due to the densification processes (e.g., mineralization, biofilms, and aggregates of sediments; Kane and Clare, 2019 ); larger plastic items might therefore have hydraulic equivalence with relatively finer mineral grains and be carried in the same parts of flows and be prone to tractional transport ( Fig. 4 ). We rule out the possibility of hyperpycnal flows in view of the headless nature of the canyon. Submarine landslides may constitute an important trigger of gravity flows as discussed above ( Fig. 4A ). Internal wave–induced return flows may be important in reorganizing the litter, as the SCS has the world’s largest known internal waves ( Pomar et al., 2012 ; Alford et al., 2015 ). Being indented into the shelf, the canyon may obtain at its head a stable supply of litter and sediment from the shelf-parallel SCSWC and various shelf currents, especially the return density flows associated with typhoons ( Fig. 4A ). Litter and sediment stored in the canyon head may episodically fail and cascade down slope, resulting in the formation of gravity flows ( Pohl et al., 2020 ). 
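The hydraulic-equivalence argument above can be illustrated with Stokes' settling law, w = (ρs − ρf)g d²/(18μ), which strictly applies only to fine, slowly settling grains; all numerical values below are assumptions for illustration, not measurements from the paper:

```python
# Illustrative sketch of hydraulic equivalence under Stokes' law,
# w = (rho_s - rho_f) * g * d^2 / (18 * mu). All values below are assumed
# for illustration (Stokes' law holds only for fine, low-Reynolds-number grains).
g = 9.81                 # gravitational acceleration, m/s^2
rho_water = 1000.0       # ambient fluid density, kg/m^3
rho_quartz = 2650.0      # quartz grain density, kg/m^3
rho_plastic = 1050.0     # assumed density of biofouled/densified plastic, kg/m^3
mu = 1.0e-3              # dynamic viscosity of water, Pa*s

def stokes_velocity(rho_s, d_m):
    """Stokes settling velocity (m/s) of a sphere of density rho_s and diameter d_m."""
    return (rho_s - rho_water) * g * d_m ** 2 / (18.0 * mu)

d_quartz = 20e-6                                  # a 20 um silt grain
w_quartz = stokes_velocity(rho_quartz, d_quartz)

# Plastic grain diameter with the same settling velocity (hydraulic equivalence):
d_plastic = d_quartz * ((rho_quartz - rho_water) / (rho_plastic - rho_water)) ** 0.5
w_plastic = stokes_velocity(rho_plastic, d_plastic)

print(f"a {d_plastic * 1e6:.0f} um plastic grain settles like a {d_quartz * 1e6:.0f} um quartz grain")
```

With these assumed densities the plastic's excess density is some thirty times smaller than that of quartz, so a plastic grain must be roughly √33 ≈ 5.7 times larger in diameter to settle at the same rate. This is why relatively coarse plastic fragments can be carried in the same parts of a flow as much finer mineral silt.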
These flows should occur frequently, as inferred from the changing configuration and litter components of the same litter piles by time-lapse dive observations. Nevertheless, the possibility cannot be ruled out that litter items were delivered and deposited by other processes, including cross-shelf transport, up-valley internal tidal flows, slope-parallel WBCs, and direct settling from the water column, although most of litter items deposited by these processes may be finally reworked by gravity flows, as is suggested by the seabed sediment consisting primarily of turbidites. Finally, the litter accumulations in the scours would either be successively buried by newly deposited sediments or be further transported to the deep sea by large gravity flows. IMPLICATIONS AND CONCLUSIONS While the larger-scale dispersal and accumulation patterns of deep-sea plastic waste remain unclear, our study suggests that the benthic litter accumulated in the scours of the submarine canyon studied here was most likely transported and deposited by gravity flows. Although the hypothesis of gravity flow–controlled litter dispersion has been raised by several authors, concrete evidence has been lacking; here, we present supporting evidence from mapping the distributions of both the scattered litter items and the litter piles, combined with grain-size and morphodynamic analyses. Our findings suggest that the headless canyon is subject to frequent turbidity currents and receives litter and sediment delivered by along-shelf currents. These plastic particles and fragments of different sizes are transported down the canyon by turbidity currents, as demonstrated by the grain-size distributions of associated sediment and the preferential distribution of litter piles. This plastic may be reworked on the bed into accumulations that preferentially occur upstream of obstacles and on the upstream-facing flanks of scours. 
Macroplastic litter, as a specific type of anthropogenic sediment, may be used to reconstruct modern underflows due to its distinct characteristics (color, shape, density) that natural sediments do not possess. Finally, canyons may be a staging point for plastics that will ultimately be delivered to the deep ocean basins, where they will interact with delicate ecosystems, highlighting the need for mitigation of plastic waste dispersal into the natural environment. ACKNOWLEDGMENTS We thank Pinxian Wang, Kang Ding, and Zhimin Jian for leading the expeditions, and the crew of the R/V Tansuoyihao, the pilot team of the manned submersible Shenhaiyongshi, and the onboard diving scientists for their technical support during the expeditions. We thank Andrea Fildani, Ian Kane, and an anonymous reviewer for their constructive comments, and Ian Kane for language polishing. This work was funded by the National Natural Science Foundation of China (grants 91028003, 41676029, and 41876049) and the National Key Research and Development Program of China (grant 2016YFC030490). 1 Supplemental Material. Study background and methods, and Figure S1 and Table S1. Please visit to access the supplemental material, and contact [email protected] with any questions. © 2021 The Authors Gold Open Access: This paper is published under the terms of the CC-BY license.
|
The focused and patterned distribution of benthic plastics in the canyon, which can be reasonably explained by morphodynamic interactions, sheds light on monitoring or even removal of deep-sea macro-plastic pollutants. Photo showing plastic particles and fragments on the upstream-facing side of a large boulder; upstream is on the right side. Credit: Guangfa Zhong and Xiaotong Peng Bathymetric contour maps in the upper (A) and lower (B) scours at the downstream end of the steeper middle reach of the canyon, showing distribution of the plastic litter piles in the up-valley (northward) dipping slopes downstream of local scour centers. Color dotted lines indicate litter piles, with dots denoting locations and lines indicating orientations. Color dashed lines show dive tracks. Thin solid black lines represent isobaths. Credit: Guangfa Zhong and Xiaotong Peng | 10.1130/G48536.1 |
Nano | New nanowire transistors may help keep Moore's Law alive | Vertical nanowire array-based field effect transistors for ultimate scaling, Nanoscale, 2013,5, 2437-2441. DOI: 10.1039/C3NR33738C Abstract Nanowire-based field-effect transistors are among the most promising means of overcoming the limits of today's planar silicon electronic devices, in part because of their suitability for gate-all-around architectures, which provide perfect electrostatic control and facilitate further reductions in "ultimate" transistor size while maintaining low leakage currents. However, an architecture combining a scalable and reproducible structure with good electrical performance has yet to be demonstrated. Here, we report a high performance field-effect transistor implemented on massively parallel dense vertical nanowire arrays with silicided source/drain contacts and scaled metallic gate length fabricated using a simple process. The proposed architecture offers several advantages including better immunity to short channel effects, reduction of device-to-device variability, and nanometer gate length patterning without the need for high-resolution lithography. These benefits are important in the large-scale manufacture of low-power transistors and memory devices. via IEEE Spectrum Journal information: Nanoscale | http://dx.doi.org/10.1039/C3NR33738C | https://phys.org/news/2013-05-nanowire-transistors-law-alive.html | Abstract Nanowire -based field-effect transistors are among the most promising means of overcoming the limits of today's planar silicon electronic devices, in part because of their suitability for gate-all-around architectures, which provide perfect electrostatic control and facilitate further reductions in “ultimate” transistor size while maintaining low leakage currents. However, an architecture combining a scalable and reproducible structure with good electrical performance has yet to be demonstrated. 
Here, we report a high performance field-effect transistor implemented on massively parallel dense vertical nanowire arrays with silicided source/drain contacts and scaled metallic gate length fabricated using a simple process. The proposed architecture offers several advantages including better immunity to short channel effects, reduction of device-to-device variability, and nanometer gate length patterning without the need for high-resolution lithography . These benefits are important in the large-scale manufacture of low-power transistors and memory devices. Introduction The development of electronics and the continuous improvement in circuit performance have historically been driven by the “simple” downscaling of the basic circuit building block: the MOS transistor. Today, the physical limitations of nanoscale transistor operation (in particular the increasing power consumption per chip) have led to the development of innovative MOS architectures, such as multi-gate devices (FinFET and tri-gate approaches) to improve electrostatic control of the channel. The natural evolution of these architectures is the gate-all-around (GAA) transistor 1,2 fabricated on a semiconductor nanowire (NW). These devices represent an ideal design for electrostatic control of charge inversion and permit further reductions in transistor size. However, the current flowing through such devices in the “on” state (current drive) remains low due to the small cross-sectional area of the NW. It is therefore essential to implement these transistors on nanowire arrays rather than on single NWs in order to combine excellent electrostatic control with increased current capability and an improved signal/noise ratio. The numerous methods for fabricating NWs may be grouped into bottom-up (B-U) and top-down (T-D) approaches. 
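The array rationale above (excellent per-wire electrostatics but low per-wire current) reduces to simple scaling: an N-wire array multiplies the drive current by N, while uncorrelated per-wire noise adds in quadrature, so the signal-to-noise ratio improves as √N. A minimal sketch; the per-wire figures are invented, not measurements from the paper.

```python
import math

# Illustrative scaling argument for nanowire arrays: total drive current
# grows linearly with the number of wires N, while uncorrelated per-wire
# current noise adds in quadrature, so signal-to-noise improves as sqrt(N).
# The per-wire figures below are invented, not measurements from the paper.

def array_figures(n_wires, i_on_per_wire_uA=1.0, noise_per_wire_uA=0.05):
    """Return (total on-current in uA, signal-to-noise ratio) for an array."""
    i_on = n_wires * i_on_per_wire_uA
    noise = math.sqrt(n_wires) * noise_per_wire_uA
    return i_on, i_on / noise

single = array_figures(1)    # one wire: low drive current
array = array_figures(225)   # a 225-wire array: 225x the drive, 15x the SNR
```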
In B-U methods, the NWs are grown on a substrate using chemical deposition techniques, whereas nanostructure formation in T-D fabrication is based on the selective etching of a patterned planar material. Each method has its own advantages and drawbacks: the B-U route enables NWs to be grown using a large variety of materials, while the T-D approach may be quickly integrated into standard CMOS processes with a very good reproducibility and NW control (position, diameter, and pitch). From an integration point of view, NWs may be processed horizontally (planar) or vertically. Horizontal NW approaches are limited in terms of NW density. NWs fabricated using etching processes (T-D) have, in the best case, a density slightly higher than those fabricated using conventional planar technology, 3 while high-density arrays are very difficult to achieve using horizontal growth techniques (B-U). In order to obtain a planar orientation, horizontal growth must be guided in cavities 4 or the NWs must be grown separate from the substrate and then relocated 5,6 in a complex, contamination-prone, and poorly reproducible process. Multi-layer horizontally stacked NWs 7 require extremely complex processes, and achieving the desired device miniaturization is fraught with difficulty. On the other hand, vertical integration is a particularly attractive approach because its 3-D character is directly compatible with both T-D and B-U growth methods and the process results in extremely high integration densities 8 (70% surface shrink compared to planar architecture 3 ), opening the way to new integration approaches. Despite this promising potential, a vertical approach has not yet been used to demonstrate an outstanding scaled architecture because of challenges such as contact formation at the bottom of the wires, 9 and perfect control of the thickness and flatness of spacer layers. 
10–13 In this work, we realize a FET device implemented on a vertical NW array with a sub-14 nm gate length ( L g ) that demonstrates excellent electrostatic behavior and promising scaling properties in view of reaching sub-5 nm architecture. An easy-to-manufacture nano-transistor The proposed architecture was implemented on dense Si NW arrays obtained using a top-down approach that couples e-beam lithography with plasma etching and sacrificial self-limited oxidation. 14 The process is based on the stress retarded oxidation phenomenon 15 and enables fabrication of very reproducible NWs with perfectly controlled diameter and position and much lower dimensional variability than other top-down approaches. Free standing NWs with diameters as small as 16 nm 14 have been patterned starting from a (100) bulk wafer with a p-type doping level of 8 × 10 18 at. cm −3 . The diameter dimension is taken at the middle part of the nanowire. Fig. 1a is a schematic view of the device in which terminals embedded in a dielectric matrix are positioned at the bottom, center, and top of the NW for use as the source, gate and drain contacts. Fabrication begins with formation of a thin gate oxide (5 nm) using dry oxidation followed by an anisotropic plasma etch to remove the oxide layer at each nanowire termination. A 15 nm platinum layer is anisotropically deposited using e-beam evaporation, and temperature activation (RTP N 2 H 2 , 500 °C/3 min) is used to create silicided contacts at the bottom and the top of each NW (the source and drain contacts). The greatest technological challenge in fabricating competitive 3D vertical devices is obtaining nanometer-scale control of the layer engineering using conventional methods rather than high-resolution lithography. A key step is the construction of the insulating layer between the contact electrodes (source, gate, and drain, Fig. 1a ) in order to achieve symmetrical ultra-small scale devices. 
Previous attempts 11–13 exhibited sloping layers (up- or down-slope depending on the process conditions). The corresponding gate layers, which necessarily reproduce the shape of the underlying insulator, therefore induced high parasitic capacitances and prevented successful downscaling. By employing a spin-on glass filling step followed by a chemical etch-back, we achieved perfect control of the insulator layer planarization to obtain a flat topology as well as nanometer-scale thickness control. An inorganic flowable resist with a chemical structure (hydrogen silsesquioxane) similar to silicon dioxide was spin-coated on the nanowire array to embed the entire network in a dielectric matrix. The material's flowability provided excellent layer flatness, particularly over the wire networks, where differences in topology were less than one nanometer as measured using atomic force microscopy (S1, ESI † ). Using a highly dilute solution of hydrofluoric acid in deionized water (1 : 1000), we obtained very precise control of the etching rate (approximately 1 nm s −1 ). This in turn provided excellent control of the surface position and resulted in very low surface roughness (similar to the roughness of the dielectric surface before etching). The dielectric structure of the spacer is less dense than the thermal SiO 2 gate dielectric, leading to a large etching selectivity (>1 : 5000) and preserving the gate insulator during wet etching. Planarization was used to position the top surface of the dielectric spacer at the desired height, and an anisotropic metal deposition ( e.g. Cr with a work function of 4.5 eV) was subsequently performed to define gate structures surrounding the NWs. This is particularly interesting for nano-transistor realization because the gate length is simply defined by the thickness of the deposited gate material. As indicated in red in the schematic provided in Fig. 1a and the tilted SEM image in Fig. 
1b , the metal layer includes an extension to receive the connection to the gate contact. This extension is rotated by 90° compared to the source extension (yellow) to minimize gate–source overlap and associated parasitic capacitances. A second planarization is used to create the gate–drain spacer and define the top contact (drain). The fabrication ends with a conventional back-end process including vias and metallization. Fig. 2 is a TEM cross-section of the final device where the possibility of integrating massively parallel dense NW arrays with symmetrically silicided S/D and scaled metallic gate-all-around ( L g ∼ 14 nm) architecture is demonstrated. Vertical NW array-based transistors are much easier to manufacture than conventional FET architectures (as well as being lower in cost) because the gate length is defined without high-resolution lithography. Furthermore, no highly doped S/D junctions with high thermal budgets and complex processing are needed. In this work, the structures were implemented on dense vertical NW arrays obtained using top-down technology, but the design is also suitable for bottom-up approaches where, for example, well-ordered vertical InAs 16 NW arrays have been reported. Fig. 1 Representation of a vertical nanowire array-based field effect transistor. (a) An artist's view and (b) a SEM aerial view of a vertical FET device implemented on a dense nanowire array with extrinsic access connecting different levels (top, bottom and gate contacts). In the inset of (a), a zoom of a nanowire with a cross-section view in the top part shows the gate stack composed of the gate oxide and the metal gate surrounding the NW. Each termination is silicided symmetrically with respect to the gate. Fig. 2 Transmission electron micrographs of the vertical nanowire array transistor. 
(a) TEM cross-section in tilted view with false color of the device with a gate surrounding each nanowire, symmetrical silicided S/D (PtSi) contacts and 60 nm low κ (2.7) dielectric spacers separating the S/D contacts from the gate, (b) a zoom of the TEM cross-section showing the good planarity of the stacked layers and (c) a zoom of the GAA region with the 5 nm SiO 2 gate oxide and the 14 nm gate length. Competitive electrical performance Fig. 3a and b are examples of the transfer and output characteristics of such a device ( L g = 14 nm, NW diameter = 30 nm, t ox = 5 nm, and 225 NWs in parallel) in which behavior remarkably close to ideal is observed. The off current is 5.2 × 10 −10 A and the I on / I off ratio is greater than five decades at a power supply voltage V DD = V G = V D = −0.8 V. This device also exhibits very good immunity to short channel effects (SCEs) considering the scaled gate length achievable thanks to the gate-all-around configuration, which offers excellent electrostatic performance. The sub-threshold slope is below 100 mV dec −1 at 300 K and the drain induced barrier lowering (DIBL) is 7 mV V −1 . SCE parameters generally tend to worsen when the gate length is scaled and therefore affect transistor performance (switching speed) as the supply voltage is reduced. The output characteristics saturate very well at low V D (−0.4 V) and the bowing trend of the linear regime indicates that the transport is not limited by the access resistance. However, the current drive is limited by a relatively low hole mobility value in the (100) direction, which collapses in a very short gate length device. 17 In modern Si transistors, the current drive increase is no longer obtained by the conventional scaling of the transistor but largely by the introduction of process boosters such as strain-induced mobility enhancement. 
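The two short-channel figures of merit quoted for this device (sub-threshold swing below 100 mV dec −1 , DIBL of 7 mV V −1 ) are extracted from pairs of I D – V G sweeps. A minimal sketch of those extractions on synthetic data; all bias points and currents below are invented placeholders, not the measured curves.

```python
import math

# Illustrative extraction of sub-threshold swing (SS, mV/dec) and drain
# induced barrier lowering (DIBL, mV/V). All numbers are synthetic
# placeholders, not the measured device data.

def subthreshold_swing(vg_pair, id_pair):
    """SS in mV/dec from two gate biases in the exponential sub-threshold region."""
    decades = math.log10(id_pair[1] / id_pair[0])
    return abs(vg_pair[1] - vg_pair[0]) * 1000 / decades

def dibl(vt_low_vd, vt_high_vd, vd_low, vd_high):
    """DIBL in mV/V from the threshold-voltage shift between two drain biases."""
    return abs(vt_high_vd - vt_low_vd) / abs(vd_high - vd_low) * 1000

# Synthetic p-FET-like numbers: one decade of current per 90 mV of gate drive
ss = subthreshold_swing([-0.30, -0.39], [1e-9, 1e-8])   # 90 mV/dec
dibl_value = dibl(-0.3500, -0.3458, -0.1, -0.7)         # 7 mV/V
```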
18 High mobility channels can be introduced in our vertical NW array architecture by using a process induced stress 19 (oxidation, silicidation) or by implementing NWs with high intrinsic mobility (III–V, Ge). Fig. 3 Static current–voltage characteristics. Static characteristics of a pFET with a gate length of 14 nm on a 225 NW array with 30 nm diameter. (a) I D – V G curves for V D from −0.1 V to −0.7 V with −0.15 V steps, that demonstrate a very good immunity against short channel effects with sub-threshold swing below 100 mV dec −1 , DIBL of 7 mV V −1 , off current in the pA range and the on/off current ratio for voltages between V G = 0 V/ V G = −0.8 V and V D = −0.8 V larger than 5 decades and (b) I D – V D curves for V G from −0.1 V to −1 V with −0.1 V steps. It is important to stress that the proposed architecture is different from the conventional MOSFET configuration based on highly doped S/D regions, which are difficult to fabricate at the nanoscale, especially in 3D configurations. It consists of a metallic S/D MOSFET approach, 20 (PtSi silicide contacts) that are connected to uniformly doped NWs (8 × 10 18 at. cm −3 ). The carrier injection properties such as thermionic injection are improved due to the high doping level of the semiconductor, which greatly enhances the tunneling contribution. 21 A platinum silicide-based contact is known to offer an intrinsic low Schottky barrier height for holes, which is particularly interesting for achieving low contact resistances in p-type applications. In addition, it also provides low resistivity n-type contacts 22 when a substantial concentration of a donor-like dopant is present at the interface. This leads to a drastic process simplification while preserving a low metal/Si contact resistivity. Finally, the gate-all-around configuration can deplete the small volume of semiconductor when the device is turned off, in a manner similar to junctionless devices. 23 In Fig. 
4a and b the immunity against short channel effect (DIBL and sub-threshold swing) and the evolution of off-state current and the I on / I off ratio are presented as functions of the NW diameter. For large nanowires ( Φ > 40 nm) gate control over the channel is weak, resulting in high leakage current and static characteristic degradation (increase in short channel effect parameters). Devices constructed using nanowires smaller than 40 nm in diameter maintain good electrostatic integrity and efficiency in suppressing SCEs. This offers a substantial relaxation of the requirements for nanowire diameter, which is beneficial for gate length scaling. Fig. 4 Immunity against short channel effects and threshold voltage variability of the vertical nanowire array transistor. Based on 14 nm GAA architecture, the evolution of (a) sub-threshold swing and drain induced barrier lowering as a function of nanowire diameter and (b) the on/off current ratio and off current as a function of NW diameter. All four parameters follow the same trend, with good behavior obtained for NW diameters below 40 nm. Larger diameters induce a loss of gate control over the channel, which quickly degrades the electrical performance. (c) Threshold voltage variability for a 30 nm diameter NW transistor for several numbers of NWs in each device. The V T extraction method used is the extrapolation in the linear region method (ELR). 27 In the 3 graphs, the solid lines are given to guide the eye. Ultimately scaled, SCE immunity, and variability The observed good immunity against short channel effect with such NW diameters can at first sight appear surprising. Indeed, according to the scaling theory 24 for cylindrical GAA MOSFETs ( i.e . 
nanowire MOSFETs) efficient control of SCE for a given transistor can be achieved only if the ratio L eff /2 λ is greater than 2, where L eff is the effective channel length and λ is the natural length of the transistors defined as: 24 λ = √[(2 ε nw Φ nw ² ln(1 + 2 t ox /Φ nw ) + ε ox Φ nw ²)/(16 ε ox )] (1) with ε nw and ε ox being the dielectric constants of silicon and silicon oxide, Φ nw the diameter of the nanowire, and t ox the thickness of the gate dielectric. Considering a conventional GAA architecture with doped p–n junctions, a diameter of approximately 10 nm is predicted by eqn (1) to ensure SCE control with a 14 nm gate length device. In contrast, the absence of highly doped junctions in our configuration results in an increase in the effective channel length induced by fringing capacitances at the edge of the physical metallic gate, 25 whereas the conventional highly doped S/D regions geometrically demarcate the electrical channel length without any degree of freedom. On the other hand, because of the accumulation mode regime of highly doped NWs the increase in electrical channel length is unaffected when the device is turned on. Another concern in scaling is variability, which is a major challenge for the design of nanoscale MOSFETs. Imperfections in the Si/SiO 2 interface or the silicide/semiconductor contacts, irregularity in location, and dopant concentration have become very important in nanometer-scale devices. 26 The particular configuration of nanowires with large surface/volume ratios exacerbates device-to-device fluctuations in the current–voltage characteristics, particularly the threshold voltage. Fig. 4c illustrates the variation of threshold voltage as a function of the number of nanowires addressed in one device. 
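A minimal numeric sketch of this scaling criterion, assuming the widely used Auth–Plummer form of the cylindrical gate-all-around natural length for eqn (1) (an assumption, since the paper's equation is cited rather than restated here). For the 30 nm wires of the measured device, λ comes out near 12 nm, so the bare criterion is not met at L g = 14 nm; this is consistent with the text's point that fringing fields lengthen the effective channel beyond the metallurgical gate.

```python
import math

EPS_SI, EPS_OX = 11.7, 3.9   # relative permittivities of Si and SiO2

def natural_length(phi_nw_nm, t_ox_nm, eps_nw=EPS_SI, eps_ox=EPS_OX):
    """Natural length lambda (nm), assumed Auth-Plummer cylindrical-GAA form."""
    phi, tox = phi_nw_nm, t_ox_nm
    num = 2 * eps_nw * phi**2 * math.log(1 + 2 * tox / phi) + eps_ox * phi**2
    return math.sqrt(num / (16 * eps_ox))

def sce_controlled(l_eff_nm, lam_nm):
    """Scaling criterion quoted in the text: L_eff / (2 * lambda) > 2."""
    return l_eff_nm / (2 * lam_nm) > 2

lam = natural_length(30, 5)   # 30 nm wire, 5 nm oxide, as in the device above
```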
The interest in addressing a large assembly of nanostructures in parallel stems from the fact that it mitigates the effects of the previously described fluctuations and imperfections, which have a strong impact on the behavior of a single nanostructure if addressed separately but are averaged when a large number of nanowires are considered together. 28 Conclusion We demonstrated the possibility of integrating massively parallel dense NW arrays with silicided S/D contacts and scaled metallic gate length in a process readily amenable to manufacturing. The proposed architecture is efficient for ultimate gate length transistor scaling because of its very good immunity to short channel effects even when the semiconductor body is not as thin as required in conventional structures and its potential for minimizing device-to-device electrical variability. The structure opens the way for integration of new materials such as high mobility III–V channels and new device concepts such as the band-to-band tunneling FET, 29 which will greatly benefit vertical NW architecture. Acknowledgements The authors wish to thank F. Cristiano and D. Troadec for TEM analysis. This work was supported by the European Commission through the NANOSIL Network of Excellence (FP7-IST-216171) and through the NANO-TEC (FP7-257694) and by the French RENATECH network (French national nanofabrication platform). | (Phys.org) —Two French researchers, Guilhem Larrieu and Xiang‑Lei Han, may have succeeded in setting back the date at which Moore's Law will no longer apply by creating a new kind of nanowire Field-Effect Transistor (FET). In their paper published in the journal Nanoscale, the two describe how they built a "gate-all-around" transistor made of 225 nanowires, each with its own 14nm-thick chromium layer that serves as a gate. In their search for new ways to cram more electronics onto the same size chips, researchers have turned to FETs. 
Transistors on chips are the parts that control the flow of electricity—figuring out a way to make them smaller is a vital part of keeping Moore's Law alive. One way to do this is to do away with wires and instead use nanowires. However, because of their small size, nanowires aren't capable of carrying enough current to do the work necessary on a chip. To get around that, researchers have tried creating bundles of nanowires; but thus far, the gates to control them have been too unwieldy. In this new effort, the researchers tried a different approach. First they created a forest of 225 nanowires by etching a slab of silicon—the bottom half of each nanowire is submersed in a material that serves as a source. Just up from that base, the researchers applied a chromium layer wrapped all the way around the nanowire to serve as the gate. Above that is another layer of material that serves as the drain. This simple design allows each nanowire to be controlled by its individual gate, and the researchers report the thickness of the gate is what makes it all work. At 14nm, the gate can be made short enough to continue to allow for the control of the current. The result is a transistor that thus far appears to be a workable way for increasingly more circuitry to be added to a computer chip. Top surface topography of the low-k dielectric layer that covers the Si NW array using Atomic Force Microscopy (AFM). Credit: Nanoscale, 2013,5, 2437-2441. Should the new design pan out, it won't keep Moore's Law alive forever, of course. One day, researchers will reach a point where it's no longer possible—due to the laws of physics—to add more processing power to a computer chip. As such, new research will necessarily be focused on ways to build smarter computers using different ideas, rather than new materials. | 10.1039/C3NR33738C |
Medicine | Unexplored genomic control regions yield the key to finding causes of rare disease | Non-coding variants disrupting a tissue-specific regulatory element in HK1 cause congenital hyperinsulinism, Nature Genetics (2022). DOI: 10.1038/s41588-022-01204-x , www.nature.com/articles/s41588-022-01204-x Journal information: Nature Genetics | https://dx.doi.org/10.1038/s41588-022-01204-x | https://medicalxpress.com/news/2022-11-unexplored-genomic-regions-yield-key.html | Abstract Gene expression is tightly regulated, with many genes exhibiting cell-specific silencing when their protein product would disrupt normal cellular function 1 . This silencing is largely controlled by non-coding elements, and their disruption might cause human disease 2 . We performed gene-agnostic screening of the non-coding regions to discover new molecular causes of congenital hyperinsulinism. This identified 14 non-coding de novo variants affecting a 42-bp conserved region encompassed by a regulatory element in intron 2 of the hexokinase 1 gene ( HK1 ). HK1 is widely expressed across all tissues except in the liver and pancreatic beta cells and is thus termed a ‘disallowed gene’ in these specific tissues. We demonstrated that the variants result in a loss of repression of HK1 in pancreatic beta cells, thereby causing insulin secretion and congenital hyperinsulinism. Using epigenomic data accessed from public repositories, we demonstrated that these variants reside within a regulatory region that we determine to be critical for cell-specific silencing. Importantly, this has revealed a disease mechanism for non-coding variants that cause inappropriate expression of a disallowed gene. Main Genetic discovery in Mendelian disease has focused on identifying highly penetrant variants affecting the function of genes expressed in clinically affected tissue(s). 
While this approach has proven successful, the underlying etiology of over 3,000 presumed monogenic diseases remains undefined 2 , 3 , 4 , 5 , of which many have notable clinical and genetic heterogeneity 6 , 7 . Congenital hyperinsulinism (CHI) is characterized by inappropriate insulin secretion during hypoglycemia. It is a clinically and genetically heterogeneous disease for which, despite extensive sequencing efforts, the underlying etiology is not known in up to 50% of individuals 8 , 9 . To identify new etiologies, we performed whole-genome sequencing of 135 individuals with biochemically confirmed, persistent CHI without an identified disease-causing variant in a known gene (Supplementary Table 1 ). We initially searched for genes containing new coding de novo or biallelic variants in two or more individuals. When no such genes were found, we turned to the non-coding genome and searched for de novo copy number variants. This identified a single locus within intron 2 of HK1 containing two heterozygous ~4.5-kb deletions with a 2,787-bp overlap in two unrelated individuals (Extended Data Fig. 1 ). Digital droplet PCR confirmed the deletions in both probands and an affected twin. We next searched for de novo single-nucleotide variants and indels within the minimal deleted region in our whole-genome-sequencing discovery cohort and in a replication cohort of 27 individuals with CHI who had undergone pancreatectomy as part of routine clinical care (Supplementary Table 1 ). This identified seven different de novo variants in 12 probands. In two cases, the variants had been inherited by similarly affected offspring (Fig. 1 ). All variants were new 10 , 11 and within a 42-bp conserved region constrained against variation in the Genome Aggregation Database (gnomAD) 12 . Fig. 1: Schematic representation of the HK1 gene and the identified variants. a , Schematic representation of the HK1 gene (GRCh37/hg19 chromosome 10, 71,029,756–71,161,637). 
The full-length testis-specific isoform ( ENST00000448642 ) and the shorter ubiquitously expressed isoform ( ENST00000359426 ) are depicted. The positions of the two ~4.5-kb deletions within intron 2 of the shorter isoform and the seven different heterozygous single-nucleotide variants and indels within the minimal deleted region identified in 14 probands are shown. The number of probands with each variant is provided. An asterisk (*) denotes a single proband with two de novo variants (Extended Data Fig. 8 ). The variant positions are given according to the GRCh37/hg19 genomic coordinates. b , Partial pedigrees showing inherited HK1 variants in three families. Filled symbols represent individuals with CHI. ‡ Haplotype analysis confirmed that the HK1 variant had arisen de novo in patient 3.2. M, HK1 variant; N, no variant; NA, DNA not available. Pedigrees for the 11 probands with de novo HK1 variants are not shown. Full size image Overall, the finding of nine different de novo variants in 14 probands, co-segregating with disease in three additional family members, provides overwhelming evidence of disease causality. All 17 individuals with a HK1 non-coding variant had severe early-onset CHI (median age at diagnosis: birth (interquartile range (IQR), 0–14 d)). These variants were not associated with macrosomia at birth (median birthweight z score, 0.61 (IQR, 0.10–1.84)). This suggests that insulin secretion was not markedly increased in utero, given its role as a potent growth factor during fetal development. In all cases, the hyperinsulinism persisted; the eldest individual still known to be affected was 18 years of age. These findings suggest that the variants act to disrupt glucose-induced insulin secretion throughout post-natal life and into adulthood but have less impact during fetal development. The HK1 variants caused a beta cell-specific defect with no common additional features between patients. 
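The "minimal deleted region" used in the variant search above is an interval intersection of the two ~4.5-kb de novo deletions. A sketch of that logic; the breakpoints below are hypothetical placeholders chosen only so the overlap matches the reported 2,787 bp, since the patients' actual coordinates are not restated here.

```python
# Interval-intersection sketch of a "minimal deleted region". The breakpoints
# below are hypothetical placeholders chosen only so the overlap matches the
# reported 2,787 bp; they are not the patients' actual coordinates.

def overlap(a, b):
    """Intersection of two (start, end) intervals, or None if disjoint."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

del_1 = (71_106_000, 71_110_500)   # ~4.5 kb deletion (hypothetical)
del_2 = (71_107_713, 71_112_200)   # ~4.5 kb deletion (hypothetical)
minimal = overlap(del_1, del_2)
size_bp = minimal[1] - minimal[0]  # 2787, matching the reported overlap
```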
In five individuals, medical management was ineffective, leading to pancreatic resection. Of these, the only case that had resolution of CHI developed insulin-dependent diabetes following near-total pancreatectomy. Histopathological analysis of resected pancreatic tissue ( n = 2) demonstrated a discrete pathology, distinguishable from diffuse disease resulting from ABCC8 variants, the most common cause of CHI (Extended Data Fig. 2 ). An overview of the clinical features is provided in Supplementary Tables 2 and 3 . HK1 encodes a glycolytic enzyme that is silenced in the pancreas and liver but expressed in all other mature tissues (Extended Data Fig. 3a–c ), where it supports glucose metabolism, ensuring cell survival 13 . Within the pancreatic beta cell, glucose is phosphorylated to glucose 6-phosphate by glucokinase (GCK; hexokinase 4), which, due to a low binding affinity ( K m ~8 mM), acts as the pancreatic glucose sensor coupling insulin release to the prevailing glucose concentration 14 . Hexokinase 1 has a markedly higher affinity for glucose ( K m < 50 µM) 14 . Silencing of HK1 in favor of GCK in the beta cell therefore ensures appropriate glucose sensing, minimizing insulin release at low glucose levels. While biallelic loss-of-function HK1 variants have been reported to cause non-spherocytic hemolytic anemia 15 , dominant or recessive coding variants have not been described in individuals with defects in glucose homeostasis, in keeping with the absence of HK1 protein in beta cells. In resected pancreatic tissue of individuals with non-coding variants, we found that HK1 was expressed and colocalized with insulin in the islets of affected tissue but not in controls (Fig. 2 ). HK1 did not colocalize with glucagon (secreted by pancreatic alpha cells), suggesting that the impact of the variant was beta cell specific (Extended Data Fig. 4 ). 
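The kinetic argument above can be made concrete with the Michaelis–Menten saturation fraction v/V max = S/( K m + S), using the K m values quoted in the text; the 2.5 mM glucose level is an illustrative hypoglycemic value, not a figure from the paper. At that glucose concentration, GCK is largely switched off while HK1 still runs near its maximum, which is why ectopic HK1 expression drives insulin secretion during hypoglycemia.

```python
# Michaelis-Menten saturation fraction v/Vmax = S / (Km + S) at the Km values
# quoted above (GCK ~8 mM; HK1 < 50 uM, taken here as 0.05 mM). The 2.5 mM
# glucose level is an illustrative hypoglycemic value, not from the paper.

def saturation(glucose_mM, km_mM):
    """Fraction of maximal hexokinase activity at a given glucose level."""
    return glucose_mM / (km_mM + glucose_mM)

gck_activity = saturation(2.5, 8.0)    # ~0.24: GCK largely switched off
hk1_activity = saturation(2.5, 0.05)   # ~0.98: HK1 still near-saturated
```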
These results confirmed that the variants cause HK1 to be inappropriately expressed in the pancreatic beta cell and explain why there is increased insulin secretion in these individuals during hypoglycemia. Fig. 2: HK1 expression is present in the beta cells of donors with HK1 -variant CHI but not in non-hyperinsulinism controls. a , Staining of HK1 (red), insulin (cyan) and 4,6-diamidino-2-phenylindole (DAPI) (dark blue) in pancreatic tissue resected from patient 6 (Supplementary Table 2 ) with a HK1 variant. Confocal imaging of sections demonstrates that HK1 colocalizes with insulin, whereas no HK1 expression is observed in the non-hyperinsulinism (HI) control donor. Scale bar, 100 µm. b , HK1 expression (median fluorescence intensity (MFI)) is significantly increased in the donors with HK1 hyperinsulinism (patients 1 and 6; n = 2 donors; 17,015 beta cells) when compared to that in non-hyperinsulinism control donors ( n = 2 donors; 21,408 beta cells). A two-tailed Mann–Whitney ( U = 590,830; **** P < 0.0001) test was performed. All cell data within a violin plot with the median (line) and IQR (dashed lines) are presented. Source data are provided. Source data Full size image We identified a putative islet cis -regulatory domain encompassing our critical region 16 bound by a broad set of islet transcription factors, with NKX2-2 and FOXA2 most prominent (Fig. 3 ). Single-nucleus assay for transposase-accessible chromatin with sequencing (snATAC-seq) data in human islets 17 revealed a peak of open chromatin in beta cells and showed that the relevant HK1 promoter remains accessible (Fig. 3b ). Human islet high-throughput chromatin conformation capture (Hi-C) data 18 confirmed that both our region and the relevant HK1 promoter are contained within a well-insulated domain (Extended Data Fig. 5 ), without evidence of contact to distal loci (Fig. 3d and Extended Data Fig. 5b ), further supporting direct regulation by the critical region on the promoter. Fig. 
3: HK1 variants disrupt transcription factor binding sites within a cis -regulatory domain. a , Expression of HK1 and GCK for comparison over the course of pancreatic differentiation from embryonic stem cells to maturing beta cells, expression data over a beta cell differentiation pseudotime 19 , Gaussian process (GP) regression median (line) and 95% confidence interval (shaded regions) are shown. DE, definitive endoderm; GT, gut tube; FG, foregut; PE, pancreatic endoderm; EP, endocrine precursors; IB, immature beta cells; AU, arbitrary units. See Extended Data Fig. 3 for more HK1 expression data. b , Transcription factor binding in human islets 16 and chromatin accessibility from snATAC-seq in the cluster of cells assigned to beta cells 17 over the HK1 locus; the region containing the variants is marked as a gray box. chr, chromosome; RPM, reads per million. Chromatin-accessibility data in other endocrine cell types is provided in Extended Data Fig. 6 . c , Chromatin accessibility during pancreatic differentiation over the critical region. Stages (ES, embryonic stem cell; early or late PP, pancreatic progenitor) describe ATAC-seq data from in vitro differentiation 21 . Beta cells give accessibility in snATAC-seq data 17 (as shown in b ). d , Hi-C data in human islets 18 reveal that HK1 is contained within a well-insulated domain; heatmap gives log-scale contact frequencies; the white triangle and circles mark chromatin loops called in the same study that span HK1 ; additional loops are marked in Extended Data Fig. 5 . The region ±850 kb of HK1 is shown. ROI, region of interest. e , Transcription factor motif families disrupted by the variants; shown are those motif families maximally disrupted by each variant (red font). Notable secondary motif families are given in Extended Data Fig. 8 and Supplementary Table 6 . 
Left, scatterplot of normalized motif scores for the reference versus mutated score (motif score normalized by maximal motif score); the black line gives a 1:1 line of equal motif score. The number gives the final two digits of the chromosomal position, for example, 45 (71,108,645). Right, sequence logo for the three maximally disrupted families. Members of each family are expressed in beta cells (Extended Data Fig. 7 ). Full size image During development, HK1 is highly expressed in embryonic stem cells but is progressively downregulated during pancreatic cell type differentiation, becoming extinguished during beta cell maturation 19 , 20 (Fig. 3a and Extended Data Fig. 3d ). GCK shows increased expression during differentiation as HK1 is downregulated (Fig. 3a ). Despite being absent from beta cells, HK1 is expressed in mature stellate and duct cells, suggesting differential regulation across pancreatic cell types (Extended Data Fig. 3e ). Chromatin-accessibility data show that our critical region remains closed over pancreatic cell differentiation until pancreatic progenitor stages, when it becomes accessible and bound by a complement of factors in the pancreatic progenitor regulatory network 21 , 22 (Fig. 3c and Extended Data Fig. 6b,c ). This suggests that the region is necessary for HK1 repression in late pancreatic development and mediates beta cell-specific control in adult cells but that it does not have a role in the reduction of HK1 expression between embryonic stem cells and early pancreatic progenitors (Fig. 3a ). Analysis of histone modifications during pancreatic differentiation in human islets 21 , 23 , cell-sorted human pancreatic alpha, beta and exocrine cells 24 and the EndoC-βH1 beta cell line 25 revealed that repression is actively maintained. 
The critical region is marked by a bivalent state encompassed by focal peaks of the active enhancer marks histone 3 lysine 4 monomethylation (H3K4me1) and histone 3 lysine 27 acetylation (H3K27ac) and a broader and dominant polycomb H3K27 trimethylation (H3K27me3) repressive domain (Extended Data Fig. 6d ). This domain is diminished in exocrine cells, supporting the idea that this mode of repression is specific to endocrine cell types (Extended Data Fig. 6d ). Analysis of transcription factor motif families revealed that the variants disrupt transcription factor-binding sites for FOX, NKX2 and nuclear factor of activated T cells (NFAT) families (Supplementary Table 5 and Fig. 3e ), which each have family members expressed in beta cells (Extended Data Fig. 7 ). This is supported by chromatin immunoprecipitation (ChIP)–seq data revealing that NKX2-2 and FOXA2 were prominently bound at this region in islets (Fig. 3b ). FOXA2 loss of function causes CHI 26 , and NKX2-2 is able to recruit a large repressive complex that regulates beta cell specification through DNA methylation 27 and may therefore have a role in maintaining the repressive epigenetic state at the HK1 locus 21 , 23 , 24 , 25 , 28 . Multiple members of the NFAT family are expressed in beta cells (Extended Data Fig. 7 ), with NFATC2 inducing beta cell proliferation in human islets 29 . Analysis of snATAC-seq data from human islets revealed endocrine cells present in hormone-high and hormone-low states 17 , with the former associated with increased promoter accessibility over secreted endocrine genes and the latter over cell cycle genes. Our critical region is accessible in the hormone-high state of alpha, beta and delta cells, and, when compared between hormone-low state cells, only remains accessible in beta cells (Extended Data Fig. 6a ). 
This suggests that HK1 repression is actively maintained during beta cell proliferation and not during alpha or delta cell proliferation, which, given the capacity of NFATC2 to induce beta cell proliferation, supports this factor in maintaining HK1 repression. Discovering non-coding regulatory variants controlling HK1 and access to a broad range of publicly available epigenomic data 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 30 , 31 enabled us to identify a regulatory element critical for the selective silencing of an otherwise ubiquitously expressed gene within a single cell type. It is interesting to note that interruption of transcription factor binding is implicated in each of the nine distinct variants and the locus is decorated by both active and repressive epigenetic marks, suggesting that continual transcription factor binding is necessary for maintenance of repression in beta cells. Future work should prioritize regulatory regions surrounding disallowed genes with similar epigenetic marks and exploration of whether different regulatory mechanisms exist to silence disallowed genes within the beta cell. Over 60 beta cell disallowed genes have been described, although the mechanism(s) controlling cell-specific silencing in humans have not been fully determined 1 , 32 . Linkage analysis of HK1 to CHI 33 and HK1 expression in beta cells 34 have been reported in several patients, but the genetic etiology was not established. Promoter variants in the beta cell disallowed gene SLC16A1 have also been reported in two families with exercise-induced hyperinsulinism. However, these variants increased transcription across cell types, leading to the hypothesis that differences in the post-transcriptional regulation of mRNA across tissues could explain the beta cell-specific phenotype 35 . 
The identification of non-coding HK1 variants in individuals with CHI represents a rare example of regulatory non-coding variants affecting a gene in which coding variation does not cause the same phenotype 36 . These findings highlight a role for undiscovered regulatory variants causing disease through inappropriate expression of a normally functioning protein in a specific cell type. Non-coding variants affecting the regulation of HK1 cause CHI through its aberrant expression in beta cells. These findings are important for future efforts to discover non-coding regulatory variants as they establish a critical role for disallowed genes in Mendelian disease. Methods Participants CHI was defined as an inappropriately high level of plasma insulin at the time of hypoglycemia associated with inappropriately suppressed ketones and free fatty acids presenting within the first 12 months of life. The definition of hypoglycemia was based on the recommendations of the Pediatric Endocrine Society (blood glucose <2.8 mmol L −1 in the presence of detectable insulin) 37 . Clinical details of the participants are provided in Supplementary Table 1 . Individuals with CHI were recruited by their clinicians for molecular genetic analysis to the Exeter Genomics Laboratory. Disease-causing variants in the known CHI genes had been excluded by targeted next-generation sequencing in all cases 38 . Ethical considerations This study was approved by the North Wales Research Ethics Committee (17/WA/0327, IRAS project ID 231760) and was conducted in accordance with the Declaration of Helsinki, with all participants or their parents providing informed consent for genetic testing. All tissue samples were studied with full ethics approval (West of Scotland Research Ethics Committee, reference 20/WS/0074, IRAS project ID 283620 or nPOD) 39 . No participant received compensation for entering this study. 
Sample collection and DNA extraction Peripheral whole-blood samples (1–5 ml) were taken in EDTA blood-collection tubes. Automated DNA extraction was undertaken on a chemagic STAR (Hamilton Bonaduz) using the chemagic STAR DNA blood extraction kit (CMG-1756 (PerkinElmer)). Whole-genome sequencing Whole-genome sequencing of DNA extracted from peripheral blood leukocytes was completed on 135 probands with CHI ( n = 3 with Illumina HiSeq 2500, n = 69 with Illumina HiSeq X10, n = 63 with BGISEQ-500). The mean read depth across the whole genome was 36.9 (s.d. = 4.9). An additional 191 family members were also sequenced. The sequence data were aligned using BWA-MEM version 0.7.15 and processed using a pipeline based on the GATK best practices (Picard version 2.7.1, GATK version 3.7). Variants were annotated using Alamut Batch Standalone version 1.11. CNVs were called across the genome by SavvyCNV 40 using a bin size of 2 kbp. We also developed a new tool (FindLargeInsertSizes, available at ) to detect small (≥1-kbp) deletions, insertions, inversions and translocations using read-pair information. We used this tool to screen all whole-genome-sequenced samples for structural variants within HK1 intron 2. When a variant was identified, Sanger sequencing (for single-nucleotide variants or indels) or digital droplet PCR (for deletions) was performed on samples from each available family member to confirm genome-sequencing results. Digital droplet PCR was performed on leukocyte DNA from patients 1 and 2.1 and their unaffected parents to confirm the results of the whole-genome-sequencing deletion analysis. Digital droplet PCR was also performed on the affected twin brother of patient 2.1 and the 27 individuals with CHI of unknown cause who had undergone pancreatectomy. Relevant primers (Supplementary Table 4 ) were used in reactions for droplet generation, PCR and detection using the Bio‐Rad QX200 Digital Droplet PCR EvaGreen system to search for copy number changes. 
Reactions were performed according to the manufacturer’s recommendations with an annealing or extension temperature during PCR of 59 °C. All data were analyzed using Bio‐Rad QuantaSoft software version 1.6.6.0320. Sanger sequencing of the HK1 regulatory element Sanger sequencing was performed on DNA extracted from peripheral blood leukocytes from 27 individuals with CHI of unknown cause who had undergone pancreatectomy. Briefly, a 397-bp genomic region (GRCh37/hg19, chromosome 10, 71,108,536–71,108,932) within intron 2 of HK1 ( NM_033497 ) was amplified by PCR (primer sequences are listed in Supplementary Table 4 ). PCR products were sequenced on an ABI3730 capillary machine (Applied Biosystems) and analyzed using Mutation Surveyor version 3.24 software (SoftGenetics). When a variant was identified, samples from family members were tested to investigate co-segregation, and microsatellite analysis using the PowerPlex kit (Promega) was performed to confirm family relationships. Histopathology Formalin-fixed paraffin-embedded pancreatic tissue was available for immediate analysis from two individuals with HK1 variants (patients 1 and 6, Supplementary Table 2 ), age-matched non-hyperinsulinism controls ( n = 2) and age-matched tissues from individuals with hyperinsulinism due to an ABCC8 variant (hyperinsulinism controls, n = 2). Immunohistochemistry was performed on 5-µm-thick sections of tissue 41 . For high-content quantification of tissue, sections were first digitized using the 3DHistech Pannoramic 250 Flash II slide scanner (3DHISTECH) and quantified using QuPath (available at ) 42 . Immunofluorescence After dewaxing and rehydration, pancreatic samples from patients 1 and 6 with HK1 variants (Supplementary Table 2 ) and four age-matched control tissues (hyperinsulinism controls (individuals with ABCC8 variants ( n = 2); non-hyperinsulinism donors ( n = 2)) were subjected to heat-induced epitope retrieval in 10 mM citrate (pH 6) buffer for 20 min. 
After blocking, the sections were probed in a sequential manner with rabbit monoclonal anti-hexokinase 1 (Abcam, ab150423; 1/100, overnight), followed by detection with goat-anti-rabbit Alexa Fluor 488 antibody (Invitrogen). Following washes, sections were then probed with mouse monoclonal anti-glucagon antibody (Abcam, ab10988, 1/2,000 for 1 h), followed by detection with goat-anti-mouse Alexa Fluor 555 antibody. Finally, sections were probed with guinea pig anti-insulin (Agilent, IR002, 1/5 for 1 h), followed by detection with goat-anti-guinea pig Alexa Fluor 647 antibody in the presence of DAPI (1 µg ml −1 ) to identify cell nuclei. Sections were not stripped between successive antibody incubations. After mounting, the sections were imaged with a Leica DMi8 confocal microscope (Leica Microsystems) (using a focal plane of 0.4 µm), and the distribution of HK1, insulin and glucagon was examined in multiple islets. In addition, sections were scanned at 40× magnification using an Akoya Biosciences Vectra Polaris Automated Quantitative Pathology Imaging System for quantification analyses. The quantification of HK1 expression in islets was determined using the Random Forest Classifier Module (version 3.2.1851.354), DenseNet AI version 2 and HighPlex FL version 4.04 modules included in the Indica Labs HALO Image Analysis Platform (version 3.2.1851.354). The Random Forest Classifier was used to identify islets, and then the DenseNet AI version 2 module was used to identify beta and alpha cells within the islets. This enabled exclusion of HK1-positive non-islet cells such as red blood cells, stellate cells and endothelial cells, which are frequently found in proximity to the islet. The median fluorescence intensity of HK1 expression in beta and alpha cells was then calculated using the HighPlex FL version 4.04 module. This pipeline was applied to all corresponding tissue sections. 
GraphPad Prism version 9.2.0 (332) was used to demonstrate the median and IQR of beta and alpha cell median fluorescence intensity in each of the donors assessed. Epigenomic analysis All published ChIP–seq, ATAC-seq, RNA-seq and single-cell RNA-seq (RNA-seq) datasets used were obtained from accessions provided in Supplementary Table 5 . All code and data required for all analyses, including to determine disrupted motifs and generate figures, are provided at and . Public bulk ChIP–seq and ATAC-seq ChIP–seq and ATAC-seq reads were aligned with Bowtie 2 (ref. 43 ) to the GRCh37/hg19 genome with Bowtie 2 version 2.3.5.1 with default parameters for single-end reads and with additional options ‘-I 0 -X 1000–no-discordant–no-mixed’ for paired-end reads. Alignments were filtered for those with mapping quality >30, and then reads with identical aligning coordinates were treated as duplicates and collapsed to a single alignment. For islet H3K27me3 data from the Roadmap Epigenomics Project, aligned reads were downloaded in BED/TagAlign format 23 . All data are visualized as reads per million using . For human islet single-nucleus ATAC-seq data 17 ( GSE160472 ), reads from the authors’ combinatorial barcode approach were aligned as described above and separated into cell types using author-provided cluster labels: . Human islet Hi-C data were obtained from experiment accession TSTSR043623 and file accession DFF064KIG (.hic file) and TSTFF938730 (bedpe file) 18 . The .hic file from the experiment was downloaded and converted to .cool format using hic2cool ( ) with default parameters. Contact matrices were obtained using Cooler 44 at 5-kb resolution using KR balance and log transformed for visualization. To quantify the histogram of contact strengths for all islet loops, loop bases were extended by two 5-kb Hi-C bins upstream and downstream, and the mean of the KR balance-normalized matrix of this region was quantified per loop. 
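The per-loop quantification described above (extend each loop anchor by two 5-kb bins up- and downstream, then average the KR balance-normalized matrix over the resulting window) reduces to a small NumPy operation. The sketch below assumes the contact matrix has already been loaded and balanced (e.g. via Cooler), and uses a toy matrix rather than real Hi-C data:

```python
import numpy as np

def loop_strength(balanced, anchor_i, anchor_j, pad=2):
    """Mean balanced contact frequency in a window around a loop's two
    anchor bins, each extended by `pad` bins up- and downstream
    (pad=2 at 5-kb resolution matches the two-bin extension in the text)."""
    n = balanced.shape[0]
    i0, i1 = max(anchor_i - pad, 0), min(anchor_i + pad + 1, n)
    j0, j1 = max(anchor_j - pad, 0), min(anchor_j + pad + 1, n)
    window = balanced[i0:i1, j0:j1]
    return np.nanmean(window)  # balanced matrices carry NaN for masked bins

# toy example: uniform background with one enriched pixel at bins (40, 80)
mat = np.full((200, 200), 1.0)
mat[40, 80] = 26.0
print(loop_strength(mat, 40, 80))  # (24*1 + 26) / 25 = 2.0
```

Converting genomic coordinates from the .bedpe loop calls to bin indices (coordinate // 5000) is omitted here for brevity.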
Single-cell RNA-seq expression datasets scRNA-seq data collected over a time course of pancreatic differentiation 19 projected onto a differentiation pseudotime were obtained from the reference’s supplementary table. We identify consistent temporal trends using GP regression, following the approach that we have previously applied 45 . Briefly, to stabilize variance, we transform data using log (α y + β), where y is the gene expression for a given gene, α = 100 and β = 1, and then perform GP regression using a Matern52 kernel and invert the transformation to report GP median and 95% confidence intervals. All GP regression was performed with GaussianProcesses.jl ( ; ). For scRNA-seq data in human islets 30 , accession GSE101207 , we used gene counts per cell for the six healthy donors and normalized by depth per cell. In Extended Data Figs. 3 , 7 and 8 , we show box plots of normalized counts for each gene over healthy donors. Finally, we obtained data on gene expression during beta cell maturation from the Broad Single Cell Portal 20 . In Extended Data Figs. 3 , 7 and 8 , we show mean normalized counts for each gene over annotated cell types. Motif analysis To assess motifs maximally disrupted by the variants, we took all motifs in the non-redundant JASPAR database 46 and found the maximal scoring match that spanned the position of each variant for both reference and mutated sequences using . Motif scores are likelihood-ratio scores of motif PWM over a background of the A, C, G and T frequencies in the hg19 genome. We normalized all motif scores by the maximal score for each PWM and report both primary and secondary candidate motifs that may be disrupted by the variants. We considered two tiers: tier 1 having normalized motif score ≥0.6 and tier 2 having 0.45 ≤ normalized motif score < 0.6. For each tier, we calculated a disruption score by subtracting the mutated sequence score from the reference sequence score and ranked these across all motifs. 
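The scoring scheme just described (best log-likelihood-ratio PWM match spanning the variant, normalized by the PWM's maximal achievable score, two tiers, and disruption = reference score minus mutated score) can be sketched as follows. The toy two-position PWM and uniform background are illustrative assumptions only; the study used JASPAR PWMs and hg19 base frequencies:

```python
import math

# uniform background is an assumption; the study used hg19 A/C/G/T frequencies
BG = {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25}

def normalized_best_score(seq, pwm, bg=BG):
    """Best log-likelihood-ratio PWM match over all offsets in seq,
    normalized by the maximal achievable score of the PWM."""
    w = len(pwm)
    def window_score(i):
        return sum(math.log2(pwm[k][seq[i + k]] / bg[seq[i + k]]) for k in range(w))
    best = max(window_score(i) for i in range(len(seq) - w + 1))
    max_score = sum(max(math.log2(col[b] / bg[b]) for b in "ACGT") for col in pwm)
    return best / max_score

def tier(norm_score):
    """Tier 1: normalized score >= 0.6; tier 2: 0.45 <= score < 0.6."""
    if norm_score >= 0.6:
        return 1
    if norm_score >= 0.45:
        return 2
    return None

def disruption(ref_seq, mut_seq, pwm):
    """Disruption score: normalized reference score minus normalized mutated score."""
    return normalized_best_score(ref_seq, pwm) - normalized_best_score(mut_seq, pwm)

# toy two-position 'AC' motif hit destroyed by a C>G variant
pwm = [{'A': 0.97, 'C': 0.01, 'G': 0.01, 'T': 0.01},
       {'A': 0.01, 'C': 0.97, 'G': 0.01, 'T': 0.01}]
print(tier(normalized_best_score("GGACGG", pwm)),      # intact motif: tier 1
      round(disruption("GGACGG", "GGAGGG", pwm), 2))   # large positive disruption
```

A large positive disruption score means the variant removes a strong binding site present in the reference sequence.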
To report a single disrupted motif family, we grouped multiple motifs with overlapping alignments by family by removing any trailing numbers from the gene symbol (for example, NKX2-2 ⟶ NKX2, HIC2 ⟶ HIC), and we grouped all FOX factors in the FOX family. Tier 1 disrupted motif families include NFAT, NKX2 and FOX (Fig. 3e ). Patient 11 has two de novo variants: the leftmost is shared with other patients and is a candidate for NFAT disruption, and the rightmost interrupts a HIC family motif (Extended Data Fig. 8a ). HIC family members include HIC1 and HIC2, with HIC2 being the most strongly expressed in beta cells (Extended Data Fig. 8b ). Interestingly, HIC2 is a transactivator of SIRT1 (ref. 47 ) and the loss of SIRT1 impairs glucose sensing in beta cells in mice 48 . Tier 2 disrupted motif families include TEAD and SMAD (Extended Data Fig. 8c ). The TEAD family motif shares a TTCA consensus with the NFAT family and is an alternative candidate to NFAT. TEAD1 is expressed in beta cells (Extended Data Fig. 8d ) and plays a critical role in pancreatic progenitors 22 ; however, it should be noted that TEAD1 does not bind the critical region in pancreatic progenitors when the region is bound by FOXA2 (Fig. 3b ). Multiple members of the SMAD family are expressed in beta cells (Extended Data Fig. 8e ); SMAD factors are signal transducers of TGF-β signaling and play an important role in beta cell development, function and proliferation 49 . Statistics and reproducibility No statistical method was used to predetermine sample size as this study involved genetic testing of individuals with a rare monogenic disease. Immunofluorescence studies were performed on pancreatic samples from six donors. This included two individuals with HK1 hyperinsulinism (patients 1 and 6, Supplementary Table 2 ), two ABCC8 -hyperinsulinism controls and two non-hyperinsulinism controls. Each experiment was performed once due to limited sample availability. 
Representative data from these imaging studies are shown for one of each donor type in Fig. 2a and Extended Data Fig. 4a,b . No data were excluded from any of the experiments described. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All non-clinical data analyzed during this study are included in this published article (and its Supplementary Information ). Clinical and genotype data can be used to identify individuals and are therefore available only through collaboration to experienced teams working on approved studies examining the mechanisms, cause, diagnosis and treatment of diabetes and other beta cell disorders. Requests for collaboration will be considered by a steering committee following an application to the Genetic Beta Cell Research Bank ( ). Contact by email should be directed to S. Flanagan ([email protected]). All requests for access to data will be responded to within 14 d. Accession codes and DOI numbers for all ChIP–seq, ATAC-seq, RNA-seq and scRNA-seq datasets are provided in Supplementary Table 5 . We used the Genome Reference Consortium Human Build 37 (GRCh37) to annotate genetic data (accession number GCF_000001405.13 ). Details of this assembly are provided at . For Fig. 3 and Extended Data Figs. 3 and 5 – 8 , further data can be found at and . Source data are provided with this paper. Code availability The code used in this study can be freely downloaded from , , , , , , and . | Scientists have discovered the cause of a rare condition within a part of the genome that has been largely unexplored in medical genetics. A team at the University of Exeter has found genetic changes in a region that controls the activity of the genome, turning on or off genes, and in doing so they have found a key that could unlock other causes of rare conditions. 
The finding, published in Nature Genetics, is a very rare case of a cause of disease that only results from changes outside the exome, the region of the genome that codes for genes. It is also the first time that changes have been shown to affect a gene—known as HK1—that does not normally have a role in the relevant body tissue—in this case, the pancreas. Until now, scientists have typically sequenced the part of the genome that describes the genetic code of all genes in individuals with a rare disease. They do this by looking for variants in the DNA that affect a protein known to have an important role in the disease-relevant organ. A good example is observed in neonatal diabetes, where genetic variants disrupt the function of the pancreatic protein insulin, causing high blood sugar levels. The University of Exeter team's search for a genetic cause of Congenital Hyperinsulinism took a more complex path. In contrast to diabetes, this condition causes babies to secrete too much insulin from their pancreas. That means babies can be born very large, and suffer from problems associated with low blood sugar. If the condition is not treated appropriately, the brain can be starved of vital fuels, which can cause learning difficulties, or even death. Until now, scientists have been unable to find the genetic cause of the condition in up to half of babies with Congenital Hyperinsulinism—one reason why treatments are scarce. The limited medications available often fail to work, sometimes meaning the patient has to endure their pancreas being removed. This often fails to cure the disease or in some cases can cause diabetes. Now, a team led by Dr. Sarah Flanagan, at the University of Exeter, has broken new ground—providing answers for families, and unlocking a new way of investigating the causes of many elusive rare diseases. Dr. 
Flanagan explained: "We've really struggled to work out what's going on in these 50 percent of babies with no known genetic cause of Congenital Hyperinsulinism. We've been looking for defects in genes for years, but it remained frustratingly elusive." State-of-the-art technology enabled the team to sequence the genomes of 17 individuals with Congenital Hyperinsulinism with an unexplained cause, revealing that the genetic variants causing the disease did not occur within a protein-coding sequence but within a 'regulatory switch', which is important for turning on and off a protein in the pancreas. The impact of the genetic variants was that HK1 was turned on in the pancreases of patients with Congenital Hyperinsulinism. The gene, which leads to insulin being produced even when blood sugar levels are low, is usually turned off in the pancreas. But the team found it was active—meaning it was working to lower blood sugar to dangerous levels. Studying a unique collection of pancreatic tissue confirmed this hypothesis. "It's incredibly important to be able to provide answers to parents who have been desperate to know the cause of their child's condition," said Dr. Flanagan. "Now that the HK1 variants have been discovered, routine genome sequencing in sick children would be the perfect method to detect them at clinical diagnosis, allowing for improved outcomes. These findings also pave the way for improved treatment of this condition with the development of drugs that inhibit HK1, and consequently insulin production, being a real possibility." "Even more exciting is the potential for this approach to unlock causes of other genetic conditions. We now know that we need to look across the whole genome to find genetic changes that may affect regulatory switches. We need to specifically turn our attention to the proteins that are turned off in the disease-relevant organ tissue and study how and why they are turned off. 
That approach could rapidly advance genetics and provide answers, and better treatments." The paper is entitled "Non-coding variants disrupting a tissue-specific regulatory element in HK1 cause congenital hyperinsulinism," published in Nature Genetics. | 10.1038/s41588-022-01204-x |
Earth | Which forces control the elevation of mountains? | Sebastian G. Wolf et al, Topography of mountain belts controlled by rheology and surface processes, Nature (2022). DOI: 10.1038/s41586-022-04700-6 Journal information: Nature | https://dx.doi.org/10.1038/s41586-022-04700-6 | https://phys.org/news/2022-06-elevation-mountains.html | Abstract It is widely recognized that collisional mountain belt topography is generated by crustal thickening and lowered by river bedrock erosion, linking climate and tectonics 1 , 2 , 3 , 4 . However, whether surface processes or lithospheric strength control mountain belt height, shape and longevity remains uncertain. Additionally, how to reconcile high erosion rates in some active orogens with long-term survival of mountain belts for hundreds of millions of years remains enigmatic. Here we investigate mountain belt growth and decay using a new coupled surface process 5 , 6 and mantle-scale tectonic model 7 . End-member models and the new non-dimensional Beaumont number, Bm, quantify how surface processes and tectonics control the topographic evolution of mountain belts, and enable the definition of three end-member types of growing orogens: type 1, non-steady state, strength controlled (Bm > 0.5); type 2, flux steady state 8 , strength controlled (Bm ≈ 0.4−0.5); and type 3, flux steady state, erosion controlled (Bm < 0.4). Our results indicate that tectonics dominate in Himalaya–Tibet and the Central Andes (both type 1), efficient surface processes balance high convergence rates in Taiwan (probably type 2) and surface processes dominate in the Southern Alps of New Zealand (type 3). Orogenic decay is determined by erosional efficiency and can be subdivided into two phases with variable isostatic rebound characteristics and associated timescales. The results presented here provide a unified framework explaining how surface processes and lithospheric strength control the height, shape, and longevity of mountain belts. 
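The Beaumont-number thresholds quoted in the abstract map onto the three orogen types as a trivial classifier. The boundary handling at exactly Bm = 0.4 and Bm = 0.5 is an assumption here, since the paper gives Bm ≈ 0.4−0.5 as an approximate range, and the example Bm values below are illustrative only, chosen to fall within each stated range (the paper assigns the types, not these numbers):

```python
def orogen_type(bm):
    """Classify a growing orogen by Beaumont number Bm
    (thresholds from the abstract; exact boundary handling assumed)."""
    if bm > 0.5:
        return 1  # non-steady state, strength controlled
    if bm >= 0.4:
        return 2  # flux steady state, strength controlled
    return 3      # flux steady state, erosion controlled

for name, bm in [("Himalaya-Tibet", 0.8), ("Central Andes", 0.7),
                 ("Taiwan", 0.45), ("Southern Alps NZ", 0.2)]:
    print(f"{name}: type {orogen_type(bm)}")
```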
Main Mountain belt evolution in collisional settings comprises crustal thickening and surface uplift, followed by tectonic quiescence and isostatic rebound that may include extensional collapse. Surface processes shape mountain belt surface morphology by counteracting tectonic growth and by causing topographic decay. End-member collisional mountain belt types 9 include (1) active, narrow orogens with high rock uplift and erosion rates, such as those in Taiwan and the Southern Alps of New Zealand (SANZ), (2) active, wide orogens with orogenic plateaus and overall low erosion rates, such as in Himalaya–Tibet and the Andes, and (3) inactive orogens with slowly decaying topography surviving tens to several hundreds of millions of years, such as the Urals or the Appalachians. High erosion rates in small orogens, co-existence on Earth of large and small orogens of variable height, and the long-term survival of orogenic topography raise fundamental questions about the factors controlling the width, height and longevity of mountain belts during their growth and decay phases. In non-glaciated mountain belts, rainfall and river incision control the erosional efficiency, denudation rate and sediment yield 2 , 10 , 11 , implying that climate may set the width, height and relief in growing orogens 2 , 3 , 12 , 13 , 14 , 15 , 16 , 17 , 18 . Erosional efficiency is also thought to control the longevity of mountainous relief 19 , 20 . However, others have shown 21 , 22 , 23 , 24 that finite crustal strength may be the main factor limiting the maximum elevation of orogens under some circumstances, demonstrating the need for a proper representation of tectonic deformation to study the effect of the erosional efficiency on mountain growth and decay, which includes isostasy, mantle lithosphere subduction, discrete faulting, and proper earth-like rheologies. 
Coupled tectonic–surface process model We use the thermo-mechanical tectonic model FANTOM 7 , 25 coupled to the landscape evolution model FastScape 5 , 6 , resolving the interaction between upper mantle scale tectonic deformation and surface processes at high resolution. FANTOM computes deformation of earth-like materials with frictional-plastic and non-linear thermally activated viscous flow, whereas FastScape solves for river erosion, hillslope processes, sediment transport and deposition (Methods; Extended Data Fig. 1 ). The erosional efficiency depends mostly on fluvial erodibility K f , which spans several orders of magnitude owing to its dependence on rainfall, rainfall variability, lithology, fracturation, vegetation cover, and abrasive agents 14 , 20 , 26 , 27 , 28 , 29 . We present three end-member models with low (model 1), high (model 2), and very high (model 3) fluvial erodibility (Figs. 1 and 2 ). Fig. 1: Key stages for models 1–3. One model per column (model 1, a , d , g ; model 2, b , e , h ; model 3, c , f , i ). a–c , Model snapshots at the end of shortening. From bottom to top: enlarged view of the model domain showing material distribution (sed, sediments; uc, upper crust; mc, middle crust; lc, lower crust; lm, lithospheric mantle) and temperature contours from the thermo-mechanical tectonic model, map-view landscape, swath elevation profile of the landscape, and profiles of uplift ( U ) and average erosion rate ( ė ). t is the model time, Δ x is the amount of convergence, and v c is the convergence rate (1 cm yr –1 ). d – i , Swath profile and corresponding uplift and average erosion rates during the orogenic decay phase I ( d – f ) and phase II ( g – i ). Full size image Fig. 2: Evolution of mountain belt topography. a , Evolution of the mean elevation at the main drainage divide (solid line) with the corresponding shaded swath profile. The mean elevation of the main drainage divide coincides with the maximum mean elevation of the orogens. 
In model 1 these measures do not coincide around 20–30 Myr, and the maximum mean elevation is additionally shown as a stippled line. The grey shaded area frames the growth phase; the decay phase has a white background. Capped black bars show typical swath profiles of several orogens on Earth; the dot indicates mean elevation. Swath profiles of growing orogens (Andes (And), Himalaya (Him), Taiwan (Tw), SANZ, see Fig. 3 ) are placed with ascending height; Pyrenees and West Alps (W Alps) are placed according to their orogenic decay time 17 , 51 . Note that the swath profiles of natural systems can be explained by a combination of surface process efficiency and crustal rheology (see discussion below and in the Supplementary Information). b , Plot showing the fraction of erosion compensated by uplift. During orogenic growth, models 2 and 3 reach a factor close to one, showing flux steady state. In contrast, model 1 never reaches flux steady state and has scattered values dominated by tectonic uplift and subsidence. During orogenic decay, a short phase with values greater than 1 is followed by a long phase with average values of 0.86. The latter value is close to the ratio of the densities of continental crust ( ρ crust ) and lithospheric mantle ( ρ mantle ).

End-member model results

In all models, shortening is accommodated by one-sided subduction of the strong lower crust and lithospheric mantle, and mountain building through crustal thickening (Figs. 1 and 2 , and Supplementary Videos ). During the first 10–12 Myr, all modelled orogens first grow in height and then reach a stable maximum elevation (Fig. 2a ). Models 1 and 2 reach a height of 6 km and 5 km, respectively, whereas model 3 is significantly lower at 1.5 km (Fig. 2a ). During convergence, the low-erodibility model 1 continuously widens through formation of thick-skinned thrust sheets, which control mountain topography and strongly affect drainage patterns.
Erosion and rock uplift rates are very low compared to the convergence rate ( v c = 1 cm yr –1 ) and balanced in the orogen centre, but slightly higher and unbalanced at the active orogen flanks. The resulting non-steady-state mountain belt is characterized by rivers flowing in prominent thrust-controlled longitudinal valleys (Fig. 1a ). In contrast, high erodibility model 2 exhibits flux steady state 8 between tectonic influx and erosional outflux (uplift/erosion ratio of approximately 1; Fig. 2b ) and constant orogen width (of approximately 110 km) and height, with discrete faulting continuously perturbing the orogen. The resulting mountain belt is characterized by matching high uplift and erosion rates, consists of more than one thrust sheet, and is cut by sublinear transverse valleys with steady-state river profiles (Fig. 1b ). The very high fluvial erodibility model 3 reaches flux steady state earlier than in model 2, does not produce thrust sheets, and forms a crustal monocline that is exhumed at the retro shear zone of the orogen (Figs. 1 c and 2b ). The maximum elevation of models 1 and 2 is limited by the crustal strength and the crustal buoyancy force is equal to the combination of the integrated strength and overpressure in the orogen foreland, leading to the formation of new thrust sheets (Extended Data Fig. 2 ). In contrast, erosion limits the orogen height in model 3, and the buoyancy force does not reach the magnitude required to form new thrust sheets. End-member models 1–3 represent three types of growing orogens with defining characteristics (Fig. 4 ): type 1, non steady state, strength limited; type 2, flux steady state, strength limited; type 3, flux steady state, erosion limited. 
The sensitivity to variation in the fluvial erodibility, crustal strength, and decoupling between thin- and thick-skinned deformation during orogenic growth shows that: (1) increasing fluvial erodibility in type 1 orogens decreases the widening rate but not the orogen height (Fig. 4b and Extended Data Figs. 3 a–c and 4 ); (2) type 2 orogens have a similar height to type 1 and their width depends on fluvial erodibility (Fig. 4c and Extended Data Figs. 3 d–f and 4 ); (3) type 3 orogens have a similar narrow width, denoted W min , and their height depends on the fluvial erodibility (Fig. 4d and Extended Data Figs. 3 g–i and 4 ); (4) a lower crustal strength promotes lower height and greater width in strength-limited type 1 and 2 orogens, but does not significantly affect erosion-limited type 3 orogens (Extended Data Fig. 5 ); (5) decoupling between thin- and thick-skinned deformation leads to similar evolution of the topography but significantly different structural style in type 1 orogens (Extended Data Fig. 6 ). The decay of orogenic topography follows two phases with contrasting characteristics (see the Supplementary Information for derivation of scaling laws). Short decay phase 1 quickly removes the short-wavelength fault-controlled topography of model 1, and the narrow orogenic topography of models 2 and 3 (Fig. 1d–f ), and is associated with regional isostatic rebound. Decay phase 2 is characterized by the removal of the long-wavelength topography, and uplift and erosion rates (Fig. 1g–i ) controlled by local isostatic rebound with a ratio of the average crustal and lithospheric densities of 0.86 (Fig. 2b ). The orogenic decay timescale of phase 2 is controlled by the erosional efficiency, the ratio of crust and lithospheric mantle density, and the width and initial height of the orogen (Extended Data Fig. 7 ). 
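The arithmetic behind this decay phase can be illustrated with a toy calculation: if local isostatic rebound restores a fraction ρ′ ≈ 0.86 of every eroded increment, only about 14% of the erosion translates into net surface lowering. The sketch below uses an illustrative erosion rate and relief that are assumptions rather than model values, and holds the erosion rate constant.

```python
# Back-of-the-envelope decay-phase estimate. During decay, local
# isostatic rebound replaces about rho_crust/rho_mantle = 0.86 of the
# eroded thickness (value from the text), so only ~14% of erosion lowers
# the surface. The relief and erosion rate below are illustrative
# assumptions, not values from the models.

RHO_RATIO = 0.86  # average crust / lithospheric-mantle density ratio

def net_lowering_rate(erosion_rate):
    """Net surface lowering rate: erosion minus isostatic rebound uplift."""
    return (1.0 - RHO_RATIO) * erosion_rate

def decay_time(relief_m, erosion_rate_m_per_yr):
    """Years needed to erase a given relief at a constant erosion rate."""
    return relief_m / net_lowering_rate(erosion_rate_m_per_yr)

# Example: 2 km of long-wavelength relief eroding at 0.1 mm/yr
t = decay_time(2000.0, 1e-4)
print(f"{t / 1e6:.0f} Myr")  # -> 143 Myr
```

Because real erosion rates themselves decline as relief decays, a constant-rate estimate like this is only a lower bound on the decay time; the scaling law in the Supplementary Information treats the full time dependence.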
A simple scaling law explains why model 1 retains a maximum topography of around 1,500 m even after 150 Myr of decay, whereas models 2 and 3 lose all orogenic topography after a few tens of Myr of decay (Fig. 2 and Extended Data Fig. 7 ). Note that the loss of topography through extensional collapse corresponds to a shift in time on the decay curve (Extended Data Fig. 7h ).

Quantifying orogenesis using the Beaumont number

Our model results show that crustal strength and surface process efficiency determine the topographic evolution of growing type 1–3 orogens. We define the new non-dimensional Beaumont number, Bm, as the ratio of two well-established non-dimensional numbers relating tectonics and surface processes (Extended Data Fig. 8 ). For growing collisional orogens, Bm is given as the ratio between: (1) the Argand number 30 , Ar, relating the buoyancy force associated with orogenic crustal thickening to the resistance of orogenic foreland crust to deformation and orogen widening; and (2) the surface processes Damköhler number (Da SP ) determining the surface process efficiency (Extended Data Fig. 8 ). Fluvial erosion is dominant in non-glaciated mountain belts, and Da SP IV, corresponding to the uplift-erosion number N e (refs. 2 , 10 ), determines the relative importance of erosional power and surface uplift, so that

$$\mathrm{Bm}=\frac{\mathrm{Ar}}{N_{\mathrm{e}}}\propto \frac{v_{\mathrm{c}}}{F_{\mathrm{int}}\,K_{\mathrm{f}}}.$$ (1)

Bm provides a simple and unique description of the factors controlling orogenic growth, relating convergence velocity ( v c ), crustal strength ( F int ), surface process efficiency ( K f ), and other more easily measurable parameters (for example, crustal thickness; see the derivation in the Supplementary Information ). Models with variable fluvial erodibility, plate velocity, and crustal strength show systematic variation of Bm, with all models fitting on a single curve (Fig.
3a ): strength-limited non-steady-state type 1 orogens (Fig. 4b ) have Bm > 0.5, strength-limited flux steady-state type 2 orogens (Fig. 4c ) have Bm ≈ 0.4−0.5, whereas erosion-limited flux steady-state type 3 orogens (Fig. 4d ) have Bm < 0.4. In turn, knowing Bm of active orogens enables the approximation of the crustal strength and average fluvial erodibility, two quantities that are otherwise poorly constrained.

Fig. 3: Beaumont number of the models and mountain belts on Earth. a , Semilog plot of the maximum mean mountain height H normalized by the mean rheologically controlled height h R (unfilled markers, left y axis), and the widening rate \(\dot{W}\) normalized by the widening rate without surface processes \({\dot{W}}_{0}\) (filled markers, right y axis), against Beaumont number. Squares, dots and triangles represent different model sets with variable fluvial erodibility, crustal strength, and plate velocities (model values are given in Supplementary Table 3 ). Large crosses show approximate positions of the SANZ, Taiwan, Himalaya–Tibet, and the Central Andes. b–e , DEMs and swath profiles of four orogens (SANZ (b), Taiwan (c), Himalaya–Tibet (d) and Andes (e)) with computed average integrated crustal strength ( F int , in units of 1 × 10 12 N m –1 ), orogen-average fluvial erodibility ( K f , in units of 1 × 10 −5 m 0.2 yr –1 ), average convergence velocity ( v c in units of cm yr –1 ; v c = v trench − v South America in the Central Andes), and longitudinality index (LI).

Fig. 4: Characteristic evolution of mountain belts. a , Simplified elevation–time diagram depicting the topographic evolution of mountain belts and its controlling factors. h R is the topographic limit, which depends on crustal strength.
b–d , Block diagrams indicating key features of type 1 (non steady state, strength limited, Bm > 0.5 ( b )), type 2 (flux steady state, strength limited, Bm ≈ 0.4–0.5 ( c )), and type 3 (flux steady state, erosion limited, Bm < 0.4 ( d )) growing orogens. U is uplift rate, v c is convergence velocity, and ė is erosion rate; W min is the width of one crustal-scale thrust sheet. e , Block diagram indicating key features of the decaying orogens. The diagram is most representative for low K f settings. The uplift rate is dependent on erosion rate, ratio of crust and lithospheric mantle density, and the degree of regional isostatic compensation, captured in the isostatic compensation factor ρ ′. Short-wavelength topography is quickly removed in phase III, before long-wavelength topography is removed slowly in phase IV.

The Beaumont number of active orogens

The Southern Alps of New Zealand, Taiwan, Himalaya–Tibet and the Central Andes (see the Supplementary Information for an extensive comparison) represent the full range of orogen types presented here. Computation of Bm and the orogen type classification only requires knowledge of orogen height, width, first-order shortening distribution, and crustal shortening rate, and an estimate of the fraction of eroded crustal material. We also define the longitudinality index (LI; Methods), which is a simple metric of river diversion into longitudinal valleys and fluvial drainage network topology, complementing Bm. The modelled type 1 orogens show non-steady-state river networks dominated by longitudinal valleys with LI ≈ [1.5−4.5], whereas type 2 and 3 orogens have steady-state river networks dominated by transverse valleys with LI ≈ [1.0−1.3] (Extended Data Fig. 9 ). The SANZ is a type example of an erosion-limited type 3 orogen 2 , 12 and has Bm = [0.15−0.48] (Fig. 3a, b and Extended Data Fig. 10a ).
Its central part forms by collision of the Pacific and Australian plates at v c ≈ 1 cm yr –1 , is about 80 km wide, exhibits extremely high exhumation rates and is shaped primarily by transverse valleys with median LI = 1.4 (refs. 31 , 32 , 33 , 34 ). It resembles a crustal monocline with crustal shortening largely accommodated by thrusting on the Alpine fault and minor thrust motion in its hanging wall 32 , 35 . The fluvial erodibility is high and the crustal strength can be confined to a range typically observed on Earth (Fig. 3b ). We propose here that Central Taiwan matches the primary characteristics of a flux steady state but strength-limited type 2 orogen with Bm ≈ 0.4 (Fig. 3a, c and Extended Data Fig. 10b ): (1) it has a constant orogen width of 80–100 km in the centre, (2) it has a non-monoclinal orogen structure with active thrusting on both orogen flanks, (3) it exhibits very high shortening and erosion rates 11 , and (4) it has primarily transverse valleys (LI = 1.6) 36 , 37 , 38 , 39 , 40 . Its maximum mean height of 2.8 km implies a low crustal strength of F int = 1.1 × 10 12 N m –1 , consistent with the weak passive margin that constitutes the mountain belt foreland 37 , 38 , 40 . Flux steady state in Taiwan requires extremely high K f ≈ 47 × 10 −5 m 0.2 yr –1 , as plate convergence is very high (Fig. 3c and Extended Data Fig. 10b ). Himalaya–Tibet is a type example of a strength-limited type 1 orogen with Bm = 1.5 ± 0.5 (Fig. 3a, d and Extended Data Fig. 10c ) with consistent characteristics: (1) it is a wide growing orogen with a central plateau, (2) it has a maximum mean height of around 5 km, (3) it has a high average convergence rate ( v c ≈ 7 cm yr –1 ), and (4) it has a longitudinal fluvial topology with LI = 2.8 (refs. 41 , 42 , 43 ). The use of the Bm number enables the estimation of the average fluvial erodibility K f = [4.7−12.5] × 10 −5 m 0.2 yr –1 and integrated strength F int = 2.7 ± 0.4 × 10 12 N m –1 (Fig. 
3 ), agreeing with model values and independent estimates 44 . A high average fluvial erodibility implies extremely high erosion rates in the wet frontal Himalayas, several orders of magnitude higher than in the dry mountain belt centre, consistent with exhumation data and modelling studies 4 , 45 , 46 . The Central Andes 47 also have the characteristics of a strength-limited type 1 orogen with Bm = 3 ± 1 (Fig. 3a, e and Extended Data Fig. 10d ): (1) it is actively shortening and widening ( v c ≈ 1 cm yr –1 ) 48 , (2) it exhibits a common maximum mean elevation of around 4.4 km despite highly variable mountain width along strike, and (3) it has a prominent longitudinal fluvial network with median LI = 2.1. The similar height but variable width of the Andes along strike is consistent with initial growth in height and then in width, and can be explained either by an along-strike change in erodibility and climate, or by variations in the amount of crustal shortening 48 . We briefly summarize the primary controls determining the different orogenic systems. High convergence rate dominates over high fluvial erodibility in the non-steady-state type 1 Himalaya–Tibet orogen, whereas a similar fluvial erodibility but a lower convergence rate leads to a type 3 steady-state SANZ orogen. The type 2 Taiwan orogen is dominated by extremely high fluvial erodibility; given its weak foreland crust and high convergence rate, it would otherwise probably be a type 1 orogen. In contrast, low shortening rates in the Central Andes dominate over a very low fluvial erodibility. The Bm number for the various natural systems indicates that the crustal strength on Earth varies only by a factor of two to three, whereas plate convergence and fluvial erodibility each span a greater range and determine orogen type (Fig. 3 ). Channel steepness indices ( k sn ) for the modelled orogens are consistent with actively growing and decaying orogens on Earth 18 , 49 , 50 (Methods; Extended Data Fig. 9h ).
The modelled k sn values clearly show that rheological control on the maximum topographic relief explains the observed upper limit of steepness indices on Earth.

Characteristic mountain belt evolution

We propose here that collisional orogenic topographic evolution follows four characteristic phases (Fig. 4a ) that can be understood by considering the interplay of convergence velocity, crustal rheology, and surface process efficiency. Phase I, initial orogen growth, occurs until the maximum elevation ( h R ) that is supported by the crustal strength is reached (types 1 and 2) or until an erosion-limited steady state is reached (type 3). Subsequent phase II orogen growth occurs either at steady state with constant width for a high surface process efficiency (types 2 and 3) or at non steady state when erosion does not balance the tectonic flux, resulting in orogen widening at a constant maximum elevation (type 1). The Bm number provides a measure of the steady-state versus non-steady-state nature, and the underlying controlling factors. Decay of orogenic topography is controlled by the erosional efficiency and isostatic rebound in response to erosion. Phase III exhibits fast decay and removal of short-wavelength topography. Slow and long-term decay of topography in phase IV is controlled by surface process efficiency and local-isostatic rebound. We conclude that the topographic evolution of collisional orogens is determined by the combination of plate velocity, crustal rheology, and surface process efficiency. The new Beaumont number uniquely defines mountain belts as type 1, 2 or 3, explains their underlying controls, and enables approximation of the crustal strength and average fluvial erodibility. In strength-limited orogens (types 1 and 2), erosional efficiency may control mountain belt width but not height, and the mountain belt elevation puts constraints on the rheology of a growing orogen.
In contrast, high surface process efficiency in erosion-limited type 3 orogens determines orogen height and leads to a characteristic structural style with constant, low orogen width. Provided the erosional efficiency does not change, Bm also enables prediction of the timescale and rate of orogenic decay once convergence stops, as mountain belt longevity is mostly dictated by the erosional efficiency. The Central Andes, for instance, are likely to survive for a long time, possibly longer than Himalaya–Tibet, whereas Taiwan will probably be removed within a few Myr of tectonic inactivity. The results presented here provide a unified framework for the controls of collisional mountain belts subject to the interaction between surface processes and tectonics.

Methods

Thermo-mechanical landscape evolution model

We use the two-dimensional arbitrary Lagrangian–Eulerian (ALE), finite-element model FANTOM 7 , 25 , computing thermo-mechanically coupled, incompressible, plane-strain, viscous-plastic creeping flows to investigate mountain building during continent–continent collision ( Supplementary Information ). We couple FANTOM to the two-dimensional landscape evolution model FastScape 5 , 6 . FastScape directly interacts with the thermo-mechanical model, in that any deposition and erosion feeds back to the thermo-mechanical computation through its effect on gravitational stress redistribution and on rheology (see the Supplementary Information for a detailed coupling description).
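The coupling loop described above can be sketched schematically. All function names and the stub physics below are hypothetical placeholders, not the actual FANTOM/FastScape interfaces; the point is only the alternation of tectonic and landscape steps with a two-way exchange of uplift and erosion/deposition.

```python
# Schematic of a two-way tectonic / landscape coupling loop. Simple stubs
# stand in for the two solvers so the loop structure is runnable; the
# names and numbers are illustrative assumptions, not the real APIs.

def tectonic_step(h, dt):
    """Stub tectonic solver: returns an uplift-rate field (m/yr). In the
    real coupling this depends on the current surface load h."""
    return [1e-3 for _ in h]

def landscape_step(h, uplift_rate, dt):
    """Stub landscape solver: applies uplift and removes 20% of it as
    erosion; returns new topography and net surface change."""
    new_h = [hi + 0.8 * u * dt for hi, u in zip(h, uplift_rate)]
    return new_h, [n - o for n, o in zip(new_h, h)]

h = [0.0] * 10            # initial flat topography
dt = 1e4                  # coupling interval (yr)
for _ in range(100):      # 1 Myr of coupled evolution
    uplift = tectonic_step(h, dt)
    h, dh = landscape_step(h, uplift, dt)
    # dh would feed back into the tectonic solver's gravitational
    # stresses and rheology on the next step (see text).

print(f"mean elevation after 1 Myr: {sum(h) / len(h):.0f} m")  # -> 800 m
```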
FastScape solves for stream power river incision (the K f term in equation ( 2 )), hillslope diffusion (the K d term), sediment transport and deposition in rivers ( G term) and filling of local depressions, that is, lakes and mountain foreland basins: $$\frac{\partial h}{\partial t}=U-{K}_{{\rm{f}}}{A}^{m}{S}^{n}+{K}_{{\rm{d}}}{\nabla }^{2}h+\frac{G}{A}{\int }_{A}\left(U-\frac{\partial h}{\partial t}\right){\rm{d}}A,$$ (2) where h is the topographic elevation, t is the time, U is the uplift rate, K f is the fluvial erodibility, A is the catchment area upstream, S is the local slope, K d is the hillslope diffusion coefficient, G is a dimensionless deposition coefficient, and m and n are the stream power exponents. Denudational power is largely set by the efficiency of river erosion, which depends on the coefficients m , n , G, and K f . The values of m , n , and G are relatively well known with n ranging from 1 to 3 (ref. 52 ), m / n varying from 0.3 to 0.5 (ref. 53 ) and G being of the order of 1 (ref. 54 ). The fluvial erodibility K f is characterized by large uncertainty and spans a wide range, as it incorporates variations as a function of climate, rock type, vegetation, abrasive agents and channel geometry 53 . Typical values lie between 1 × 10 −6 m 0.2 yr –1 and 1 × 10 −4 m 0.2 yr –1 (ref. 53 ), assuming m = 0.4 and n = 1. We designed three end-member models with low K f = 0.5 × 10 −5 m 0.2 yr –1 (model 1), high K f = 5 × 10 −5 m 0.2 yr –1 (model 2), and very high K f = 20 × 10 −5 m 0.2 yr –1 (model 3), with m = 0.4, n = 1, and G = 1. The hillslope diffusion coefficient is constant with K d = 1 × 10 −2 m 2 yr –1 (refs. 55 , 56 ). The models are actively shortening for 25 Myr at a rate of 1 cm yr –1 , leading to 250 km of convergence, followed by orogenic decay without active shortening until the orogenic topography is removed by erosion. 
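To make the stream-power term of equation (2) concrete, the sketch below evolves a single one-dimensional river profile toward steady state using only dh/dt = U − K f A^m S^n, with model 2's parameters (K f = 5 × 10−5 m0.2 yr−1, m = 0.4, n = 1). The Hack's-law drainage area, domain length and uplift rate are illustrative assumptions, and the hillslope-diffusion and deposition terms are omitted.

```python
import numpy as np

# 1D detachment-limited river profile driven by the stream-power term of
# equation (2): dh/dt = U - Kf * A^m * S^n. Kf, m and n follow model 2;
# the Hack's-law drainage area, domain length and uplift rate are
# illustrative assumptions, not values from the paper.

L = 100e3                       # river length (m); divide at x = 0
nx = 200
dx = L / nx                     # 500 m node spacing
x = np.linspace(dx, L, nx)      # node positions; base level at x = L
A = 6.7 * x**1.67               # Hack's-law drainage area (assumption)
Kf, m, n = 5e-5, 0.4, 1.0       # model 2 stream-power parameters
U = 1e-3                        # uplift rate (m/yr)

h = np.zeros(nx)
dt = 500.0                      # yr; satisfies the explicit CFL limit here
for _ in range(4000):           # 2 Myr, enough to approach steady state
    S = np.zeros(nx)
    S[:-1] = np.maximum((h[:-1] - h[1:]) / dx, 0.0)  # downstream slope
    h[:-1] += dt * (U - Kf * A[:-1]**m * S[:-1]**n)
    h[-1] = 0.0                 # fixed base level

# At steady state, S = (U / (Kf A^m))^(1/n): channels steepen where
# drainage area is small, which is what the steepness index k_sn measures.
print(f"divide elevation ~ {h.max():.0f} m")
```

With these assumed numbers the divide reaches roughly a kilometre of elevation; raising U or lowering Kf steepens and raises the whole profile, mirroring the erosion-limited behaviour of model 3 versus model 2.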
The Beaumont number

We define the non-dimensional Beaumont number, Bm, that captures the interaction between surface processes and tectonics. We propose the name Beaumont number, as Beaumont et al. 1 , and Christopher Beaumont and his group during the following years, initially developed coupled tectonic–surface process models similar to ours, and used them to investigate the feedback between tectonics, surface processes, and ultimately climate. The Bm number is defined as the ratio between the essential non-dimensional number describing tectonics ( N Tec ) and the essential non-dimensional surface process Damköhler number (Da SP ) determining surface process efficiency (Extended Data Fig. 8 ):

$$\mathrm{Bm}=\frac{N_{\mathrm{Tec}}}{\mathrm{Da}_{\mathrm{SP}}}.$$ (3)

For mountain belts growing by crustal thickening, one can show that the Argand number, Ar, of the continental crust is the key non-dimensional number describing tectonics 30 , 57 , 58 (see the Supplementary Information for the derivation). Da SP IV is the non-dimensional number determining mass in-flux and out-flux in systems dominated by fluvial surface processes, so that

$$\mathrm{Bm}=\frac{\mathrm{Ar}}{\mathrm{Da}_{\mathrm{SP}}\,\mathrm{IV}}$$ (4)

is the Beaumont number that captures the interaction between surface processes and tectonics in actively growing collisional mountain belts. A large Bm implies inefficient surface processes, which results in type 1 orogens, and a low Bm means that surface processes are efficiently counteracting orogenic growth, which results in type 3 orogens. Specifically, in the case when surface processes are determined by the (extended) stream power law, as in our models, Da SP IV corresponds to the uplift-erosion number N e (refs. 2 , 10 ) and

$$\mathrm{Bm}=\frac{\mathrm{Ar}}{N_{\mathrm{e}}}.$$ (5)

See the Supplementary Information for a full derivation of Bm and the associated values.
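Given a Bm estimate, the orogen-type assignment described in the main text reduces to simple thresholds. A minimal sketch, using the ranges quoted there (Bm > 0.5 for type 1, ≈0.4–0.5 for type 2, <0.4 for type 3) and representative Bm values for the four orogens discussed:

```python
def classify_orogen(bm):
    """Orogen type from the Beaumont number, thresholds as in the text."""
    if bm > 0.5:
        return "type 1: non steady state, strength limited"
    if bm >= 0.4:
        return "type 2: flux steady state, strength limited"
    return "type 3: flux steady state, erosion limited"

# Representative Bm values quoted in the text (SANZ uses the mid-range
# of its stated interval [0.15-0.48]):
for name, bm in [("SANZ", 0.3), ("Taiwan", 0.4),
                 ("Himalaya-Tibet", 1.5), ("Central Andes", 3.0)]:
    print(f"{name}: Bm = {bm} -> {classify_orogen(bm)}")
```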
Longitudinality index

We define the longitudinality index (LI) as the quotient of actual river length and shortest distance between river source and orogen boundary. To compute LI we use the modelled landscapes (Extended Data Fig. 9 ), and the 90-m-resolution digital elevation model (DEM) CGIAR-SRTM v.4.1 of each investigated orogen (Extended Data Figs. 9c–f ). The DEMs are re-scaled to a 250 m (Alps, New Zealand, Taiwan) and a 1,000 m (Himalaya, Andes) resolution to ensure computational feasibility. Following Whipple et al. 3 , we assume that any point in the landscape with a critical drainage area A = max(min( A ), 1 × 10 5 m 2 ) is a source point of a river. The orogen boundaries are manually picked to align with the strike of the outermost significant orogenic topography; in our models the orogen boundaries are defined at an average model elevation of 350 m. Only rivers draining through the predefined boundaries are considered in the computations. River length and long profiles are computed using the FastScape steepest-descent algorithm, with hydraulic continuity ensured by lateral connection of local minima, for instance, related to artefacts in the DEM. We note that our calculations create only theoretical river networks that, for instance, also connect endorheic basins to the base level. We consider this first-order representation of the fluvial networks as sufficient for our data analysis. Rivers naturally build a dendritic network with laterally flowing tributaries. To separate tributaries from river flow diverted by tectonic activity, we impose a simple minimum distance between source point and orogen boundary of 15 km. The minimum distance is reduced to 12 km in the north-west flank of the SANZ, to ensure proper source point coverage in this area. We stress that points with high longitudinality indices and large distance to the orogen boundary indicate that the tectonic topography is persistently not removed by surface processes.
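The index itself is a simple ratio; a toy sketch on hand-made river polylines (the rivers and the straight boundary line are illustrations, not DEM-derived networks):

```python
import math

# Toy longitudinality-index (LI) calculation, following the definition
# above: LI = along-river path length divided by the shortest distance
# from the river source to the orogen boundary (here a line y = boundary_y).

def path_length(points):
    """Sum of straight segments along the river polyline."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def longitudinality_index(river, boundary_y):
    """LI for a river given as [(x, y), ...], source first."""
    source = river[0]
    shortest = abs(boundary_y - source[1])
    return path_length(river) / shortest

# Transverse river: crosses the orogen directly -> LI = 1
transverse = [(0, 0), (0, 10), (0, 20), (0, 30)]
# Longitudinal river: diverted along strike before exiting -> LI ≈ 2.3
longitudinal = [(0, 0), (20, 0), (40, 0), (40, 30)]

print(longitudinality_index(transverse, 30))    # 1.0
print(longitudinality_index(longitudinal, 30))  # 2.33...
```

The two toy rivers mirror the contrast reported above: transverse networks give LI near 1.0–1.3, while tectonically diverted longitudinal networks push LI toward 1.5–4.5.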
These source points are therefore more significant than source points close to the boundary, where newly forming thrusts only transiently divert river flow. For simplicity, we did not introduce complicating factors, such as, for instance, distance weighting, to our data points. Most notably the SANZ are characterized by many Quaternary glacial valleys. For our analysis we assume that glaciers follow the pre-existing river network, so that longitudinality indices are to first order also meaningful in these orogens.

Steepness index

We compute the steepness index in Fig. 3d with \(k_{\mathrm{sn}}=A^{\frac{m}{n}}S\) (see Wobus et al. 59 ). Yuan et al. 6 showed that including sediment deposition, as used in our model simulations (Fig. 2 ), increases the steepness index at steady state by a factor of 1 + G . However, to simplify comparison of our model landscapes with existing data sets 18 , we use the conventional \(k_{\mathrm{sn}}=A^{\frac{m}{n}}S\) . Extended Data Figure 9h shows the steepness index of the whole model domain as a swath profile, with the median of the values represented by a bold line. Swath boundaries represent whisker caps of a standard box-and-whisker plot of along-strike model data.

Data availability

All data supporting the findings of this study are contained within the article and Supplementary Information .

Code availability

Numerical models are computed with published methods and codes, described in the Methods and Supplementary Information . The code for longitudinality index calculations is available from the corresponding author on request.

Scientists have come up with a new classification scheme for mountain belts that uses just a single number to describe whether the elevation of the mountain belt is controlled mainly by weathering and erosion or by properties of the Earth's crust, i.e., the lithospheric strength: the "Beaumont number" (Bm).
It's named after Chris Beaumont, a scientist who, together with his team, developed coupled models of surface processes and tectonic forces. The scientists report their findings in the current issue of Nature. A Beaumont number between 0.4 and 0.5 means that the mountains are in a so-called flux steady state in which the controlling factors of mountain growth are tectonic forces and the lithospheric strength, balanced by weathering processes as, for example, in Taiwan. With a Bm value lower than 0.4, mountains are also in a flux steady state but with erosion as the controlling factor, as in the Southern Alps of New Zealand. A Beaumont number above 0.5 means that the mountains still grow (non-steady state) with lithospheric strength controlling the process. Examples of this type are the Himalaya-Tibet mountains and the Central Andes. This classification resolves a long-standing question: whether tectonic forces and the strength of the Earth's crust are the controlling factors of mountain elevation, or whether weathering processes are in control. The new study says it can be one or the other, depending on geographic location, climate and underground properties. The team of scientists led by Sebastian G. Wolf of Bergen University in Norway used a new coupled surface process and mantle-scale tectonic model for their study, combining the thermomechanical tectonic model FANTOM with the landscape evolution model FastScape. Thus, they were able to reconcile high erosion rates in some active orogens with the long-term survival of mountain belts for hundreds of millions of years. Jean Braun of the GFZ German Research Centre for Geosciences, who co-authored the paper, says that "with our Beaumont number we can determine to which proportion tectonics, climate, and crustal strength control the height of mountain belts.
And, for most mountain belts, this can be done without complex measurements or assumptions; all that is needed is a knowledge of the rate of convergence obtained from present-day plate velocities or plate reconstructions, the height of the mountain obtained from a topographic map and the widening rate obtained from the geological record. In a nutshell: Whether a mountain is short or tall is the product of slow or fast convergence, wet or dry climate, or strong or weak crust." The Beaumont number shows which of these three factors is dominating.

DOI: 10.1038/s41586-022-04700-6