{"text": "We honor Theo Hellbr\u00fcgge's acclaimed endeavors in the rehabilitation, or rather the prehabilitation of handicapped children. So far, he has focused on obvious handicaps, and we trust that he will include concern for everybody's silent handicaps in the future by screening for abnormal variability inside the physiological range. Therein, we introduce cis- and trans-years, components of transdisciplinary spectra that are novel for biology and also in part for physics. These components have periods, respectively, shorter and longer than the calendar year, with a counterpart in magnetoperiodism. Transyears characterize indices of geomagnetic activity and the solar wind's speed and proton density. They are detected, alone or together with circannuals, in physiology as well as in pathology, as illustrated for sudden cardiac death and myocardial infarction, a finding calling for similar studies in sudden infant death syndrome (SIDS). As transyears can beat with circannuals, and depend on local factors, their systematic mapping in space and time by transdisciplinary chronomics may serve a better understanding of their putative influence upon the circadian system. Longitudinal monitoring of blood pressure and heart rate detects chronome alterations underlying cardiovascular disease risk, such as that of myocardial infarction and sudden cardiac death. The challenge is to intervene in a timely fashion, preferably at birth, an opportunity for pediatricians in Theo Hellbr\u00fcgge's footsteps. The discovery in biology of far-transyears, 15\u201320 months in length , is in kFigure For discussion by transdisciplinary nomenclature committees, terms in English are emphasized. With advice by Prof. Robert Sonkowsky, proposed Latin equivalents are added for vanishing classicists. 
Essentially, \"ad-transannual\" means \"a little longer than a year\"; \"ad-cisannual\" means \"a little shorter than a year\"; \"transior-annual\" means \"much longer than a year\"; and \"citerior-annual\" means \"much shorter than a year\". Some specific limits that seem reasonable in the light of available physical and biological evidence are given in the scheme. The single syllable 'ad' is preferred to the 2-syllable 'prope', 'juxta', 'propter', 'minus' (paired with 'plus') or the 3- or 4-syllable 'proprior', 'proximus', 'vicinus', or propinquus'. While to a purist among grammarians the coinages adtransannual and adcisannual may seem preposterous (a word constituting itself an illustration of cumulative prefixes) precisely because of the piling on of prefixes, there are also other precedents in Late Latin such as exinventio (\"discovery\") and perappositus (\"very suitable/apposite\"). Normal assimilation of 'd' to 't' and 'c', respectively, may then result in the spellings and pronunciations \"attransannual\" [at-trans-annual] and \"accisannual\" [ak-sis-annual] acceptable as English pronunciation, notably by speakers with native romance languages, who may face difficulty with the near and far as added prefixes.Difficulties may stem from the fact that analyses usually provide estimates in frequency (not period) terms, and from the criterion of 95% CIs that may not be available. We need to allow for situations when, because of too-wide (or unavailable) CIs, we can diagnose only a candidate trans- or cis-annual component, when 95% CIs of \u03c4 overlap the limit distant from the year. By the same token, we may not be able to specify near or far, e.g., because of the brevity of the series. 
In other words, we cannot say whether we have a near- or far-trans-year, or a near- or far-cis-year, when the 95% CIs overlap the corresponding finer limits shown in the scheme (Figure). For the case of "circannual", we again go by 95% CIs rather than by the point estimate. In the circannual case, the 95% CI overlaps the 1-year estimate under usual conditions. Under unusual, e.g., constant conditions, circannuals are also amenable to free-running, in which case the 95% CI may no longer cover 1 year; it will then have to be tested further for non-overlap with the pertinent environmental cycle in the case of a biologic cycle, and vice versa for non-overlap of a natural environmental cycle with an anthropogenic cycle. In the trans- or cis-annual case, the 95% CI does not cover the 1-year period under usual conditions, i.e., cis- or trans-annuals can be asynchronized rather than desynchronized. Strictly speaking, circannual cannot be an overall term, but almost certainly, whatever committees may decide, it will be (mis-)used as such. "Far-" and "near-", "cis-" and "trans-", and "citerior-" and "transior-" annual are hyphenated here only to indicate their derivation and need not be written with hyphens. We propose using circannual, transannual or cisannual and their refinements only operationally, as a function of periods and their 95% CIs. Matters of synchronization, desynchronization or asynchronization may then possibly emerge from the context of a given situation and from further testing. Trans- and cis-years lead to a novel chrono-helio-geobiology, awaiting application of the tools of transdisciplinary chronomics.
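The operational rule, classifying a component by whether the 95% CI of its period covers one year, can be sketched in code. The function names and the frequency-to-period conversion below are illustrative assumptions, not part of the original analyses.

```python
def classify_annual(ci_low, ci_high):
    """Operational classification of a spectral component from the 95%
    confidence interval (CI) of its period, both bounds given in years."""
    if ci_low <= 1.0 <= ci_high:
        return "circannual"   # the CI covers the calendar year
    return "transannual" if ci_low > 1.0 else "cisannual"

def period_ci_from_frequency(f, f_low, f_high):
    """Analyses often report frequency (cycles/year); converting to
    period terms inverts the values and swaps the CI endpoints."""
    return 1.0 / f, 1.0 / f_high, 1.0 / f_low

# A ~1.3-year component whose CI excludes 1 year is a transyear candidate:
print(classify_annual(1.08, 1.55))  # transannual
# If the CI still covers 1 year, only "circannual" can be claimed:
print(classify_annual(0.92, 1.21))  # circannual
```

Refinement into near- and far- subcategories would proceed the same way, by testing the CI against the finer limits of the scheme, which are therefore not hard-coded here.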
It has been a challenge to look at circadians for the past half-century, but knowledge concerning them will not be completely useful before we answer another set of questions based on the evidence in the Table. Of interest are great geographic/geomagnetic differences: no transyears, only calendar-yearly components, were detected in 3 locations, while in 3 other locations transyears were present, in two of these coexisting with a calendar-yearly component of nearly equal prominence, while in Minnesota only a transyear was thus far detected. A clarification of the roles played by local as well as global influences could also be based on transyear vs. calendar-yearly amplitude ratios when both components are present, which, however, is not the case in 4 of 6 locations. There is the challenge of developing eventual countermeasures. But first, we seek clues to several questions. Why, for SCD in Minnesota, does the prominence of the transyear far exceed any seasonal influence (thus far undetected) of the harsh environmental temperature change of its mid-continental climate in the summary of 5 consecutive years? Why, in Arkansas and the Czech Republic, is the transyear's prominence about the same as that of the seasons, and why does it seem to be absent in 3 other locations? And why, for MI, is the prominence (gauged by the amplitude) of the calendar year so much greater than that of the transyear, by contrast to the case of SCD? Systematically collected data from different areas of the world will open a new chapter in transdisciplinary science, with particular pertinence at the extremes of extrauterine life, in natality as well as in mortality. Optimization of the about-yearly spectral region may also be considered, along with Hufeland's consideration of the daily routine, in studies aimed at prolonging high-quality life.
Notably, beyond 85 years of age, Theodor Hellbrügge, chronopediatrician par excellence and professor emeritus of social pediatrics at the University of Munich, continues actively as a mentor of the specialty he founded. Theo himself turned in the interim to the care of children with obvious disabilities. He continues with concerns about them, to detect early alterations for timely remedies, a preventive task par excellence, which could benefit from chronomics, the resolution of time-structural (chronome) alterations in the physiological range. Accordingly, chronobiologists honored Theo at a meeting on "Time structures – chronomes – in child development", leading to a proceedings volume of 256 pages. Theo Hellbrügge's contributions illustrate a solidly founded and now widely distributed conceptual structure, resting on a productive life's work that is available again in his own words.
With one of his colleagues, we can summarize the themes of his work:

• concepts from ethology as a method to account for the development of children,
• mother-infant interactions as a decisive requisite of social development, the topic of the last symposium he sponsored in October 2004,
• preverbal communication as a condition for early speech promotion, especially for infants with impaired hearing,
• the plasticity of the infant's brain as a neurobiological basis for early health promotion,
• enriching integration of infant and child as part of a socially intact community,
• preventive medical check-ups aiming at an early diagnosis of abnormality,
• earliest diagnosis of risks as a condition of PREhabilitation (which he called rehabilitation, to gain a financial niche for his actions in existing laws).

Hellbrügge's conference on chronomes showed a"} {"text": "The majority of residues in protein structures are involved in the formation of α-helices and β-strands. These distinctive secondary structure patterns can be used to represent a protein for visual inspection and in vector-based protein structure comparison. Success of such structural comparison methods depends crucially on the accurate identification and delineation of secondary structure elements. We have developed a method, PALSSE (Predictive Assignment of Linear Secondary Structure Elements), that delineates secondary structure elements (SSEs) from protein Cα coordinates and specifically addresses the requirements of vector-based protein similarity searches. Our program identifies two types of secondary structures: helix and β-strand, typically those that can be well approximated by vectors. In contrast to traditional secondary structure algorithms, which identify a secondary structure state for every residue in a protein chain, our program attributes residues to linear SSEs.
Consecutive elements may overlap, thus allowing residues located at the overlapping region to have more than one secondary structure type. PALSSE is predictive in nature and can assign about 80% of the protein chain to SSEs, as compared to 53% by DSSP and 57% by P-SEA. Such a generous assignment ensures that almost every residue is part of an element and is used in structural comparisons. Our results are in agreement with human judgment and DSSP. The method is robust to coordinate errors and can be used to define SSEs even in poorly refined and low-resolution structures. The program and results are available online.

Secondary structure types such as the α-helix, 3₁₀-helix, π-helix, β-turns, γ-turns, and β-bridges have been described. The first algorithm for the automatic delineation of secondary structure was proposed by Levitt and Greer. They defined secondary structure based on Cα distances and i, i+1, i+2, i+3 Cα torsion angles. A more comprehensive algorithm, DSSP, was subsequently developed and is based on a careful analysis of backbone-backbone hydrogen bond energies and geometrical features of the polypeptide chain. Cα geometry allows locating breakpoints in both α-helices and β-strands, and allows generation of residue-based pairing information helpful in determining the edges of β-sheets.

Secondary structures in experimentally determined protein coordinate data often deviate from the ideal geometry, and thus methods of secondary structure assignment that use different logic and cutoffs can vary significantly in their assignments. Maximum variation is seen near the edges of SSEs, and consensus secondary structures defined by different algorithms have been proposed in order to define SSEs accurately. Attempts have also been made to assign secondary structure from Cα distances and torsion angles, i.e., from only the Cα coordinates of the protein structure.
However, most of these programs assign secondary structure properties to individual residues of a protein chain. For the purpose of vector-based structural similarity searches, a secondary structure definition of the linear segments (elements) that can be used to approximate the protein structure in a simplified form, as a set of interacting SSEs, is required. A strong correlation exists between hydrogen-bonding patterns and Cα distances and torsion angles. Algorithms such as DEFINE_S, VOTAP, and P-SEA assign secondary structure using such Cα-based criteria; our analysis uses DSSP assignments as a reference.

Here we describe a method, "Predictive Assignment of Linear Secondary Structure Elements (PALSSE)", to identify SSEs from three-dimensional protein coordinates. The method is intended as a reliable predictive linear secondary structure definition algorithm that could provide an element-based representation of a protein molecule. Our algorithm is predictive in that it attempts to overlook isolated errors in residue coordinates, and it is geared towards defining SSEs of proteins relevant to vector-based protein structure comparison. For the purpose of similarity searches, use of just the major SSEs, namely the α-helix and the β-strand that can be approximated by vectors, will suffice, as they typically incorporate the majority of the residues in a protein. α-Helices and β-strands are the predominant and most distinct types of secondary structures observed in proteins [9,19].

Our method was developed for the predictive assignment of linear SSEs. Preliminary helix and β-strand categories are assigned to residues based on the i, i+3 Cα distance and the i, i+1, i+2, i+3 Cα torsion angle (step 1 in methods). Next, probable helix and β-strand elements are generated by selecting consecutive residues that belong to the same category. Quadruplets of residues, formed by two pairs of hydrogen-bonded consecutive residues that satisfy criteria of distances and angles (steps 3 and 4), are constructed from residues that do not meet the strict criteria for helix definition in step 1.
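The two step-1 measurements can be computed directly from Cα coordinates. The sketch below uses the standard dihedral-angle formula; the helper names and the idealized test helix (about 100° of turn and 1.5 Å rise per residue at a 2.3 Å radius, textbook values) are our own illustrative assumptions, not part of the program.

```python
import math

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def ca_distance(p, q):
    """i, i+3 Calpha-Calpha distance in Angstrom."""
    return math.dist(p, q)

def ca_torsion(p0, p1, p2, p3):
    """Pseudo-torsion angle (degrees) over four consecutive Calpha atoms."""
    b1, b2, b3 = sub(p1, p0), sub(p2, p1), sub(p3, p2)
    n1, n2 = cross(b1, b2), cross(b2, b3)
    m = math.sqrt(dot(b2, b2))
    u = (b2[0]/m, b2[1]/m, b2[2]/m)
    return math.degrees(math.atan2(dot(cross(n1, n2), u), dot(n1, n2)))

# Four residues of an idealized right-handed alpha-helix:
helix = [(2.3 * math.cos(math.radians(100 * i)),
          2.3 * math.sin(math.radians(100 * i)), 1.5 * i) for i in range(4)]
print(round(ca_distance(helix[0], helix[3]), 1))  # 5.1
print(round(ca_torsion(*helix)))                  # 50
```

The values recovered for ideal helical geometry (roughly 5 Å and 50°) are consistent with the helix cutoffs quoted in the methods.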
A quadruplet is the smallest unit for defining potential β-sheets and is formed from a set of four Cα atoms that are linked with two covalent bonds and two pseudo-hydrogen bonds (see step 3 in methods). The quadruplets are joined together, end-to-end in the direction of the covalent bonds, to form ladders of consecutive pairs of residues. The ladders of paired residues are joined to form paired β-strands. Helices defined previously are split, using the root mean square deviation (RMSD) of constituent residues about the helix axis, so that they can be represented as linear elements (step 8). β-strands are split using various geometrical criteria and the pairing of neighboring residues (step 9). The program's main output is in the PDB file format.

Strict cutoffs of the Cα-Cα distance and torsion angle are used to select core regions of secondary structures. The parameters are then made less restrictive to identify and assign residues that do not follow the idealized pattern of α-helices and β-strands. Residues that individually might fail the test for a secondary structure state, due to either hydrogen-bonding criteria or φ and ψ angles, or both, may be placed in an element if the geometric parameters and pairing conditions of the neighboring residues support the inclusion. This makes the algorithm predictive in nature; helices and β-strands might therefore be defined in regions that show a helical tendency or have neighboring β-strands, respectively, even if the polypeptide model at that region is erroneous. We have found the criteria of a minimum of 3 residues per β-strand, with at least 2 residues pairing with a neighboring β-strand, to perform well. Purely Cα-based criteria can otherwise fail to distinguish between tight turns and helices.
Our method attempts to provide a definition of SSEs that can be approximated using vectors. It is possible for edges of elements to overlap, leading to more than one secondary structure state for a particular residue. If required, these elements are broken using geometric criteria and directionality changes over a stretch of residues, including pairing for β-strands, to obtain linear elements. Specialized algorithms to detect curved, kinked and linear helices based on local helical twist, rise and virtual torsion angle are known. We have instead relied on Cα distances and torsion angles. Bent helices thus remain separate even though their edges might overlap. Gently curved helices are not broken unless the angle between vectors representing the broken sections is greater than 20°. The broken sections are chosen using an approach (described in methods) that minimizes the number of acceptably linear elements while retaining the maximum number of residues from the original helix. A similar breaking angle of 25° has been observed by Richards and Kundrot.

The reliability of secondary structure definition by our algorithm was checked by plotting the main-chain torsion angles φ and ψ of individual residues (figure). Most of the φ and ψ angles are found in the allowed regions of the Ramachandran plot, including those of 3₁₀-helices. Nearly all φ and ψ angles for β-strand residues over-predicted by our method (figure) fall in the allowed β-strand region.

Residues that fail strict secondary structure assignment when explicit hydrogen-bonding criteria are considered cannot be used to properly form SSEs. Therefore, predictive assignment with our algorithm is preferred. The following reasons may account for the necessity of predictive assignments. Protein structures are intrinsically flexible. Hydrogen bonds that are present in some family members might be absent in other homologues. Domain interactions, loops, insertions and deletions can all influence the secondary structure around them.
Crystal packing and solvent interaction can also account for changes observed in residue coordinates. Further, models based on X-ray data are not always as accurate as they are believed to be.

Our algorithm shows a higher degree of robustness than either DSSP or P-SEA. Keeping the above results in perspective, the amount of secondary structure missed by the two other programs with respect to each of DSSP, P-SEA and our method was studied (figure). Definitions from our program were compared with assignments by DSSP, P-SEA, DSSPcont, STRIDE and SSTRUC. In this article, we show two examples of our study (figure). Results from SSTRUC are clearly different, and its residue coverage is poor when compared with other programs for the average NMR structure "1ahk"; residue coverage and element identification for SSTRUC are similar to those by DSSP and STRIDE for "1fjg". Residue coverage for helix and β-strand definition by PROSS is low. Our algorithm shows a marked difference as compared to other programs when low-resolution and NMR structure coordinates are processed: residue coverage is greater for our definition than for DSSP and P-SEA.

The algorithm developed by us can assign linear helix and β-strand SSEs from only Cα atoms. Our method is predictive in nature, and SSEs defined by us can include residues that do not form ideal α-helices and β-strands. Assignments are similar to helices and β-strands defined by a residue-based approach, like DSSP, for high-resolution structures. This method has been developed for simplified representation of protein structures for similarity searches with other proteins. It should not be used if an accurate residue-level definition is necessary. Compared to other programs, we have found our algorithm to perform well in terms of defining linear elements reliably for both helices and β-strands while still yielding a high residue coverage.
Visual judgment of results supports our definitions.

The algorithm has been implemented as a computer program, "PALSSE" (Predictive Assigner of Linear Secondary Structure Elements). It is written in Python and C and has been tested on the GNU/Linux platform on the i386 processor architecture. The software is available online. Sequences were taken from the SEQRES records of PDB files.

Algorithm development was monitored by manual inspection of the results produced by the implemented code. For this, a dataset of 295 domains (checkset), consisting of randomly chosen representative structures for every fold in the SCOP database (version 1.63), was used. A set of high-resolution structures (culled PDB set) was used for comparing the final program output with that from other programs, with respect to reliability and robustness towards coordinate errors. For this, a list of the 100 longest non-fragmented PDB chains having resolution better than 1.6 Å and sequence similarity less than 20% was obtained from the culled PDB database. We used the Python programming language (v2.3) to implement the algorithm.

Our method has five major steps that sequentially process the PDB coordinates. 1: Assignment of residue propensities based on the i, i+3 Cα distance and the i, i+1, i+2, i+3 Cα torsion angle. 2: Delineation of probable core regions of helix and β-strand elements from residues that pass strict criteria. 3: Formation of quadruplets of residues connected by two covalent and two hydrogen bonds as seeding units for β-sheets. 4: Initiation and extension of paired β-strands using quadruplets of paired residues. 5: Breaking consecutive non-single helices and β-strands, taking into consideration residue pairing and neighboring elements. The steps of our algorithm need to be run sequentially for the results to be meaningful.

In step 1, the Cα coordinates of every residue of the molecule are processed from the N- to the C-terminal end, and simple Cα-Cα distances and torsion angles are computed.
This is the only step in which our algorithm deals with secondary structure as a property of the individual residue and not as that of an element. Cα-Cα distances (figure) and Cα torsion angles (i to i+3; figure) are used, as has been done extensively before [8,15,21]. The propensity ρ of a residue is assigned from the i, i+3 Cα-Cα distance δ and the i, i+1, i+2, i+3 Cα torsion angle τ:

δ < 8.1 Å, -35° ≤ τ ≤ 115° ⇒ ρ = loose-helix    (1)

δ > 8.1 Å, -180° ≤ τ < -35° or 115° < τ ≤ 180° ⇒ ρ = loose-strand    (2)

δ ≤ 6.4 Å, τ within (50.1° ± 2σ) ⇒ ρ = strict-helix    (3)

where δ is the i, i+3 Cα-Cα distance, τ is the Cα torsion angle, σ is the standard deviation of the torsion angle (= 8.6°), and ρ is the propensity. We always process PDB files from the N- to the C-terminal end.

We use a group of at least two consecutive loose-helix residues to generate a loose-helix seed-SSE. It is possible to get a single loose-helix residue in β-hairpins, whereas a set of consecutive loose-helix residues is more likely to be part of a helix, as it signifies that four residues out of a five-residue group (i to i+4) have passed the cutoffs for the i, i+3 Cα distance and the i, i+1, i+2, i+3 Cα torsion angle. Loose-helix SSE templates (SSE template henceforth referred to as SSET), consisting of at least five residues, are formed from every set of consecutive loose-helix residues and the three residues immediately succeeding them. This process ensures that a helical element is generated from only a single continuous region of loose-helix residues. At this stage, it is possible that the third residue of a five-residue loose-helix SSET has not passed the cutoffs for distance and torsion angle with any other residue. A strict-helix seed-SSE is defined by a group of three consecutive strict-helix residues. Strict-helix SSETs are formed from a strict-helix seed-SSE and the three residues immediately after it. This implies that every residue in a strict-helix SSET passes the strict cutoffs of distance and torsion angle with at least one other residue, making the minimum length of a strict-helix SSET six residues. All loose-helix SSETs that do not contain at least one strict-helix residue, other than in the last three residues, are discarded.
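Reading the cutoffs as three compatible rules (loose-helix for δ < 8.1 Å with τ between -35° and 115°; loose-strand for δ > 8.1 Å with τ outside that range; strict-helix for δ ≤ 6.4 Å with τ within 50.1° ± 2σ, σ = 8.6°), a minimal sketch of the propensity assignment might look as follows. The exact pairing of conditions is our reading of the scheme, not a verbatim transcription.

```python
SIGMA = 8.6        # standard deviation of the helix torsion angle (degrees)
HELIX_TAU = 50.1   # mean helix Calpha torsion angle (degrees)

def propensity(delta, tau):
    """Residue propensity from the i,i+3 Calpha distance delta (Angstrom)
    and the i..i+3 Calpha torsion angle tau (degrees). A residue may be
    both loose-helix and strict-helix at the same time."""
    rho = set()
    if delta < 8.1 and -35.0 <= tau <= 115.0:
        rho.add("loose-helix")
        if delta <= 6.4 and abs(tau - HELIX_TAU) <= 2 * SIGMA:
            rho.add("strict-helix")
    elif delta > 8.1 and (tau < -35.0 or tau > 115.0):
        rho.add("loose-strand")
    return rho

print(sorted(propensity(5.1, 50.0)))    # ['loose-helix', 'strict-helix']
print(sorted(propensity(9.8, -170.0)))  # ['loose-strand']
```

Residues outside all three regions receive no propensity and are left for the later, more permissive element-level steps.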
The remaining loose-helix SSETs denote possible helix templates.

A loose-strand seed-SSE is defined by a group of loose-strand residues, with at least one residue in the group. Extending every loose-strand seed-SSE to the i+3 position at the C-terminal end gives rise to loose-strand SSETs. Overlaps between elements are formed during extension of the C-terminal end of the seed-SSE to the i+3 residue, leading to a maximum overlap of two residues between any two SSETs.

β-sheets are networks of paired stretches of consecutive residues. We start by defining and identifying the smallest unit of such a network, namely a quadruplet, which is formed by four Cα residues linked by a pair of covalent bonds and a pair of pseudo-hydrogen bonds (figure). Since the covalent bonds that link a quadruplet of residues are easy to define confidently from their sequence and coordinates, while their hydrogen bonds are not always clear, we decided to use parameters that depend on covalently linked rather than hydrogen-bonded residues neighboring the quadruplet (figure).

The first parameter is the Cα-Cα distance between paired residues; the second is an angle, and the third is the Cα torsion angle. Each parameter is converted to a Z-score, and the probability of obtaining a particular Z-score is used for scoring quadruplets. Equal weights are used for the three parameters (4 scores for the Cα-Cα distance, half of these for the second parameter, the angle, and 2 scores for the Cα torsion angle). Technical limitations of the computer's ability to work with numbers close to zero were carefully avoided by rejecting probabilities close to zero.
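The scoring idea (Z-score each parameter, convert to a probability, and sum negative logarithms while rejecting probabilities close to zero) can be sketched as below. The two-tailed normal probability via erfc, the probability floor, and the example means and sigmas are illustrative assumptions rather than the program's actual tables.

```python
import math

P_FLOOR = 1e-300  # reject probabilities close to zero (avoids -log(0))

def neg_log_prob(value, mean, sigma):
    """Z-score the parameter, then return the negative logarithm of the
    two-tailed probability of a deviation at least that large."""
    z = abs(value - mean) / sigma
    p = max(math.erfc(z / math.sqrt(2.0)), P_FLOOR)
    return -math.log(p)

def quadruplet_score(params):
    """Sum the per-parameter scores; lower totals mean better quadruplets.
    `params` is a list of (observed, mean, sigma) triples."""
    return sum(neg_log_prob(v, m, s) for v, m, s in params)

# A quadruplet whose distance/angle/torsion sit near the expected means
# scores better (lower) than one with a 3-sigma distance outlier:
good = quadruplet_score([(5.0, 5.0, 0.3), (25.0, 24.0, 4.0), (195.0, 200.0, 20.0)])
bad  = quadruplet_score([(5.9, 5.0, 0.3), (25.0, 24.0, 4.0), (195.0, 200.0, 20.0)])
print(good < bad)  # True
```

Because each score is a negative logarithm, summing them corresponds to multiplying independent probabilities, while staying in a numerically safe range.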
As the probability values are very small and could be subject to floating-point errors over multiple operations, a negative logarithm was used to convert them to positive numbers. Thus, lower numbers represent better scores. The total score of the quadruplet is obtained by adding the individual parameter scores. A table of times-sigma values and the negative logarithms of the corresponding probabilities was kept for lookup during the scoring of actual quadruplets. β-strands are considered part of the same β-sheet if at least three residues are paired.

Helix SSETs representing α-, π- and 3₁₀-helices, defined in the above steps, are based on relaxed criteria to avoid missing residues that could potentially be part of a helical element. Although a rudimentary form of element edge delineation is obtained by the use of Cα distance and torsion angles during delineation of probable helical elements in step 2, these methods are capable of detecting only drastic changes in the helix axis. Our relaxed criteria of helix definition, designed to include π and 3₁₀ helices, also allow curved, kinked and bent helices to be included.

Manual judgment of the results at this point indicated that our program's delineation of helices was acceptable in terms of residue coverage. The presence of bent helices was noticed in the checkset (described previously), and we decided to split them into linear elements without loss of constituent residues. We used helices defined by our own program for the calculation of helix-breaking parameters, since DSSP does not treat π and 3₁₀ helices as distinct elements and misses short (<8 residue) α-helices.

The helix-breaking method relies on an analysis of the RMSD of helix residues around the helix axis. Two different methods were used to generate the helix axis; only one was finally adopted (figure). The first method calculates the principal moment of the helix residues and uses the eigenvector corresponding to the largest eigenvalue as the helix axis. This method thus depends only on the spread of the residues in space and does not take into account the linear connectivity of the helix residues.
Errors in helix axis assignment were observed for π-helices and short α-helices: because the spread of residues is greater in the diametrical plane of these helices, the axis found using the eigenvector method lay closer to the plane of the diameter instead of being normal to it. This method gave good results for longer helices; however, as we considered short α-, π- and other opened-up helices for breaking and final definition, it was not used. Based on these observations, a rotational fit method was adopted to determine the helix axis [45] (figure).

Our program was run on the statset, described previously, and all helices were extracted for study. The RMSD of helix residues from the helix axis was calculated and analyzed (figure). The average RMSD and average standard deviation calculated above were used to obtain broken helices (as described below) that were analyzed manually. Our algorithm does not break a helix showing a slight curvature in its structure, or containing a few distorted residues. For bent helices, we decided to use two cutoffs to determine breakpoints. Flexibility is represented by a Z-score expressing the allowed deviation as multiples of the standard deviation of helix residues. A sharp bend in the helix axis is measured by the angle between the axes of two neighboring helices as calculated by the rotational fit method (described above). The broken helices observed were manually inspected to determine the optimum Z-score and break angle (Cα distance and i, i+1, i+2, i+3 Cα torsion cutoffs; figure). A breaking method for helices was developed in order to determine the correct cutoffs for the Z-score and the angle of break.
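The quantities involved here (a helix axis, the RMSD of residues about it, and the angle between the axes of neighboring pieces) can be sketched with the simpler eigenvector axis method described above, implemented below by power iteration on the 3x3 scatter matrix; as noted in the text, this variant suits longer helices, and the adopted rotational fit method is more involved. The test geometry is an idealized assumption.

```python
import math

def helix_axis(coords, iters=200):
    """Principal-moment axis: dominant eigenvector of the 3x3 scatter
    matrix of centered Calpha coordinates, found by power iteration."""
    n = len(coords)
    c = [sum(p[k] for p in coords) / n for k in range(3)]
    pts = [[p[0] - c[0], p[1] - c[1], p[2] - c[2]] for p in coords]
    s = [[sum(p[i] * p[j] for p in pts) for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(s[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def rmsd_about_axis(coords, axis):
    """RMSD of residues about the axis line through the centroid."""
    n = len(coords)
    c = [sum(p[k] for p in coords) / n for k in range(3)]
    total = 0.0
    for p in coords:
        d = [p[k] - c[k] for k in range(3)]
        proj = sum(d[k] * axis[k] for k in range(3))
        total += sum((d[k] - proj * axis[k]) ** 2 for k in range(3))
    return math.sqrt(total / n)

def break_angle(a, b):
    """Angle (degrees) between two axis vectors (axes are sign-ambiguous)."""
    d = abs(sum(a[k] * b[k] for k in range(3)))
    return math.degrees(math.acos(min(d, 1.0)))

# Idealized 20-residue helix along z (2.3 A radius, 1.5 A rise/residue):
helix = [(2.3 * math.cos(math.radians(100 * i)),
          2.3 * math.sin(math.radians(100 * i)), 1.5 * i) for i in range(20)]
axis = helix_axis(helix)
print(abs(axis[2]) > 0.99)                       # True: essentially the z axis
print(2.0 < rmsd_about_axis(helix, axis) < 2.6)  # True: roughly the helix radius
```

In this framing, a candidate breakpoint splits a helix into two pieces whose axes are compared with `break_angle` against the 20° cutoff, while the per-piece RMSD is checked against the Z-score cutoff.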
The method considers every possible breakpoint in a helix and attempts to choose the optimal result. We define broken helices as single elements only if the piece with the highest RMSD is still acceptable (figure). This step of the calculation is computationally intensive due to the large number of possibilities that are considered. To prevent the algorithm from taking abnormally long to complete in special cases, we avoid checking for breaks in single helices that are longer than 50 residues.

Bent β-strands are split to obtain linear elements using both geometric criteria (Cα distance and angle) and the neighbor pairings of constituent residues. As β-strands have a natural tendency to curve, we take care not to break short, gently curved β-strands, nor to break them at bulges. Sharp bends in β-strands, gently curved but long β-strands that cannot be optimally represented by a single linear vector, and β-strands that do not have at least two residues shared by two different β-sheets are considered for breaks. β-strands are broken using geometric criteria after checking for the possibility of a bulge being located near the potential breakpoint. We prefer to retain large regions of connected β-strands rather than split them into isolated pieces, and we therefore place more importance on residue pairings than on the geometric criteria of individual β-strands. Restricting the minimum length of a β-strand to three residues (as described above) makes β-strand breaking a sensitive operation, as it is possible to lose small β-strands completely if breaks are located either within the small β-strands themselves or on the β-strand pairing with them. Thus, we try to retain residues in short β-strands if they are linked to part of a larger β-sheet. However, it is more likely for short β-strands to arise from chance proximity of extended regions. These short β-strands may hinder correct representation of the entire β-sheet.
Although our program does not aim to perfectly describe β-sheets, we do try to detect and remove short β-strands that are either not well connected to a larger β-sheet or located close to breaks in neighboring β-strands of the β-sheet. The β-strand breaking methods are designed not to depend on the length of the β-strands being broken; however, they do depend on the order in which they are applied to the original β-strand and its paired neighbors. β-strands are broken in four steps, where each step works on the complete set of β-strands obtained after applying the previous breaking method. Bulges are taken into consideration at every step.

When ladders were generated and joined, it is possible for two ladder arms to be formed from consecutive residues, or with a maximum of only one residue common between them, while the residues on the other arms have no connectivity (figure).

In previous steps, residues not part of a strict-helix SSET were considered for the generation of ladders of residue pairs and β-strands. Some of these residues do not finally participate in any β-strand formation. These residues, not part of α-helices or β-strands, can potentially contribute to the formation of π and 3₁₀ helices, or may be distorted while still showing helical tendency. Overlap with previously defined α-helices and β-sheets is considered for this step.

All loose-helix SSETs having no residue in a previously defined α-helix or β-strand are considered at this step. The criteria are loosened to allow over- and under-wound helices to be detected; however, any presence of a strict helix-forming residue, other than in the last three residues of the previously defined element, leads to rejection of the template from consideration, as these have already been considered for helices in the above steps.
The templates are checked for a maximum overlap of two residues with β-strand elements and shortened at the edges if required. The templates are also checked so that no residue overlapping with a β-strand at the edge pairs with more than one β-strand residue. Finally, helix templates occurring at β-hairpins are removed. Any helix template that overlaps with β-strands on both edges and has fewer than five non-overlapping helix residues is rejected. All remaining loose-helix SSETs are included in our final helix definition. As described previously, we consider a minimum of two pairs of residues to determine linked β-strands, which are themselves at least three residues long. Consecutive residues of β-hairpins are considered linked for this purpose, even though they do not actually form a hydrogen bond. Sheets are assigned for each group of β-strands that can be traversed by consecutive pairs of linked residues. Breaks in β-strands located in previous steps are taken into consideration at this step. Breaks caused by changes in pairing (described previously) are treated as permanent, and the β-strands on either side are kept in separate β-sheets. Breaks caused by geometric evaluation of the local region are treated as potential breaks. A geometric break is ignored unless it affects all paired residues on either side of it. Thus, geometric breaks are used only if multiple β-strands need to be broken to split the β-sheet. Keeping a rigid criterion for the use of geometric breaks in individual β-strands ensures flexibility for the entire β-sheet, as larger β-sheets tend to show a gradual bending. A more drastic bending shows up as a sequence of geometric abnormalities, allowing proper use of the potential geometric breaks detected previously. Assignment of β-strands to β-sheets is for ease of motif searches only. 
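The traversal that groups linked β-strands into sheets can be sketched as a connected-components search. This is a simplified stand-in for the program's procedure, with hypothetical names: permanent pairing breaks are modeled simply by omitting the corresponding link, and the conditional handling of potential geometric breaks is not reproduced.

```python
from collections import defaultdict

def assign_sheets(strands, links):
    """Group beta-strands into sheets: every strand reachable through a
    chain of residue-pair links belongs to the same sheet (depth-first
    traversal of the link graph).  `links` is an iterable of
    (strand_a, strand_b) pairs."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    sheet_of, sheets = {}, []
    for s in strands:
        if s in sheet_of:
            continue                     # already placed in a sheet
        sheet_id = len(sheets)
        stack, members = [s], []
        while stack:
            cur = stack.pop()
            if cur in sheet_of:
                continue
            sheet_of[cur] = sheet_id
            members.append(cur)
            stack.extend(graph[cur] - sheet_of.keys())
        sheets.append(sorted(members))
    return sheets
```

For example, strands linked as s1–s2–s3 and s4–s5 yield two sheets, regardless of where each strand appears in the input order.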
Our program does not define β-sheets for the purpose of domain definition. As such, ambiguity regarding whether two β-sheets should be linked or separate might arise in cases where there is a gradual bending of the sheet or where two or more β-strands link them at the edges. It is also possible for two different β-sheets to be linked together if a few β-strands in each sheet are distorted and in close proximity to each other. In the majority of cases, however, our program defines β-sheet boundaries correctly. Robustness towards coordinate errors was estimated by checking the consistency of definitions using randomly shifted PDB coordinates.

IM designed and implemented the algorithms, tested program performance, analyzed the results, and drafted the manuscript. SSK contributed to algorithm development and provided expert judgment of program output. NVG conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.

The identification of local similarities between two protein structures can provide clues to a common function. Many different methods exist for searching for similar subsets of residues in proteins of known structure. However, the lack of functional and structural information on single residues, together with the low level of integration of this information in comparison methods, prevents these methods from being fully exploited in high-throughput analyses. Here we describe Query3d, a program that is both a structural DBMS (Database Management System) and a local comparison method. The method conserves a copy of all the residues of the Protein Data Bank annotated with a variety of functional and structural information. New annotations can be easily added from a variety of methods and known databases. 
The algorithm makes it possible to create complex queries based on the residues' function and then to compare only subsets of the selected residues. Functional information is also essential for speeding up the comparison and the analysis of the results. With Query3d, users can easily obtain statistics on how many and which residues share certain properties across all proteins of known structure. At the same time, the method also finds their structural neighbours in the whole PDB. Programs and data can be accessed through the PdbFun web interface.

A great deal of information on the relationship between structure and function lies hidden in the large number of known protein structures. Protein local structure comparison methods are powerful instruments for helping elucidate the mechanisms that connect protein structural features to the protein's function. Comparison methods can highlight correlations between the spatial positioning of single aminoacids and their interactions with the surrounding environment. In the last ten years many new and highly effective comparison methods have been developed (for a review see ). However, the ability to provide and embed biological information in the comparison algorithm should be considered even more important than speed. To accomplish this, a high degree of integration of databases and functional annotation programs is needed. Many comparison methods do not address integration aspects and, by concentrating their efforts on the comparison algorithm, consider aminoacids independently of their biological context. 
Often protein residues are described as sets of points associated with physico-chemical characteristics, with no additional information on their real or supposed functions. The structural biologist who uses local comparison methods to find similarities between a specific protein of interest and a database of structures needs to access residues' biological functions and properties in two different phases: when choosing the structural pattern of the query protein, and when analyzing the comparison results in search of a biological rationale for the structural similarities. The biological information shared by comparison methods is so poor that users need to do a lot of manual browsing among different databases both before and after the structural comparison. If the comparison is not between two single proteins or motifs but between a structure and a set of structures, or between a set of motifs and a database of structures, the manual work needed for the analysis of the results increases rapidly and becomes unaffordable in the case of high-throughput analyses. Some of the existing methods provide the user, in the pre-run phase, with a number of selected sets of structural motifs to search with. The most frequent case is the one where the user is given a single list of structural motifs automatically extracted from a single database; PDBSITESCAN and WEBFEATURE are examples. None of the cited methods makes it possible to combine or integrate information coming from different lists of motifs or databases by allowing users to search for sets of residues characterized by properties of different types (e.g., the solvent-exposed residues of a PROSITE motif). A comparison method along these lines is ASSAM. Here we describe Query3d, a new method that integrates many existing databases and programs for 3D functional annotation together with a fast structural comparison algorithm. 
Nine data sources have been interconnected, ranging from solvent exposure to ligand binding ability, location in a protein cavity, secondary structure, functional pattern, protein domain and catalytic activity. All this functional information is bound to the single residue and not to the structure as a whole, allowing the user to perform detailed queries on the features of single residue sets. All the structural and functional data are stored locally and managed by a fast and powerful database management system, which is also able to perform fast, high-throughput local structural comparisons. Query3d is both a database management system (DBMS) oriented to protein structural analysis and a structural comparison algorithm. These two features can be used individually or combined, giving rise to three types of analysis, as described below. The first option is the use of Query3d as a local structural comparison program. Regions of local similarity can be searched for between any pair of protein structures, between a protein chain and the whole PDB, or between any two arbitrarily chosen subsets of aminoacids in a structure. The second possibility is the use of Query3d as a DBMS devoted to the functional analysis of protein structures. The program provides access to a rich database of functional and structural information on all PDB residues. Users can create arbitrarily complex queries on all known structures. For instance, users can ask about the number and identity of the residues sharing a chosen set of properties. A typical query subset can consist of all residues that are able to bind ATP or ADP, are not hydrophobic and belong to a loop. The program returns the total number of such residues per chain in the whole PDB and selects these residues for further analysis. However, the most interesting application of our method is obtained from the combination of the DBMS and the comparison algorithm. 
By using these two features at the same time, users can create automated and customized selections of functional residues to be searched for structural similarity against the whole PDB or against other residue selections. For instance, the previously described binding sites can be compared with all residues lying in the major cavities on the surface of a set of catalytic domains. Structural and biological data for comparison and functional querying were derived using the PdbScan package (manuscript in prep.). PdbScan is a set of programs created to build a common interface and access method to the PDB structure database and to the major existing databases and methods of protein functional/structural annotation. PdbScan's output is a residue-oriented relational database in which all protein residues, with their main characteristics extracted from the PDB, are stored together with information mapped from other data sources. To generate these data, PdbScan locally runs a variety of annotation methods or imports functional information on protein structures from different existing databases. Each different source of information is called a feature. Examples of residue features present in PdbScan are: secondary structure, solvent accessibility, conservation, interaction with a ligand, and position in a protein domain or in an enzyme active site. For each feature present in the database, a single value is assigned to each PDB residue. For example, a residue can bind ATP, can be solvent-accessible, can be present in an SH3 structural domain and can have a certain conservation value in a multiple alignment of homologous proteins. The program can run two types of user queries: simple and complex. Simple queries involve only a single feature. Using this type of query, users can select all residues sharing a single common annotation, or any number of annotations belonging to the same feature. 
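Simple and complex queries of this kind reduce naturally to set operations over per-feature annotation tables. The sketch below illustrates the idea with invented feature contents and residue identifiers; it is not Query3d's internal representation.

```python
# Hypothetical feature tables: each feature maps an annotation value to the
# set of residue identifiers carrying it (all ids below are made up).
binding_sites = {
    "ATP": {"1abc:A:12", "1abc:A:15", "2xyz:B:33"},
    "ADP": {"2xyz:B:33", "3def:C:7"},
}
secondary_structure = {
    "alpha helix": {"1abc:A:12", "3def:C:7", "4ghi:D:90"},
    "hydrogen bonded turn": {"2xyz:B:33"},
}

def simple_query(feature, *annotations):
    """Union of the residues carrying any of the requested annotations
    (a simple query on one feature)."""
    result = set()
    for a in annotations:
        result |= feature.get(a, set())
    return result

# Complex query: residues in an alpha helix AND binding ATP or ADP.
nucleotide = simple_query(binding_sites, "ATP", "ADP")
helical = simple_query(secondary_structure, "alpha helix")
hits = nucleotide & helical     # Boolean AND is set intersection
```

OR and NOT map to set union and difference in the same way, so complex queries can nest arbitrarily.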
For example, with the 'binding sites' feature, users can select all residues interacting with ATP, but also all residues interacting with ATP, ADP or phosphate. The complete selection of all the annotations of a feature is also possible, e.g., all residues involved in the binding of any ligand. Complex queries can be created by combining pairs of simple queries generated with selections in different features. Combinations are created using the Boolean operators AND, OR and NOT. As an example of a complex query, combining with an AND operation a simple query on the 'binding sites' feature and a simple query on the 'secondary structures' feature could select all residues in the PDB that are located in an alpha helix and are able to bind ATP or ADP. Complex queries can also combine other complex queries, not only simple ones. After each selection (simple or complex query) the DBMS can return three different levels of information: i) the total number of PDB residues selected by the query and the total number of PDB chains with at least one selected residue; ii) the complete list of chains with at least one selected residue, together with the total number of residues selected in each chain; iii) the complete list of selected residues, together with the complete list of annotations of each residue, in each selected protein chain. Selecting 'ATP' and 'ADP' in the 'binding sites' feature, we obtain a list of all the 8998 residues, distributed among 840 chains of the PDB, that are able to bind ATP or ADP. Selecting 'hydrogen bonded turn' in the '2D structures' feature, we obtain a second list of all the 1400109 residues in 49278 chains that are in a 'loop'. Intersecting the two lists of residues with the 'AND' operator, we obtain a new list of 1164 residues in 570 chains that are in a loop and are also involved in ATP or ADP binding. The structure comparison method in Query3d is designed to find the largest subset of matching 
aminoacids between two complete protein chains or between two sets of selected residues. The program can search for local structural similarity between the selected residues of any pair of user queries. The matching process is completely sequence independent, so local similarity is to be understood between residues that are neighbours in space. The output of the program is, for each pair of compared chains, a list of the residues found to be similar. A detailed description of the comparison algorithm is given in 'Methods'. The method was found to function correctly in the authors' previous works. It was applied in different test cases and proved capable of finding significant local structural similarities, even in the absence of protein sequence or protein fold similarities. More specifically, the algorithm proved capable of recognizing five different known difficult cases of local structural similarity and has been extensively used in a structural genomics function prediction experiment. Query3d is open source. It can be accessed through the web or, if special conditions of use are required, installed locally. Through the pdbFun interface, all major features of Query3d are available. Help and tutorials on the website facilitate use. Selections of residues can be created and listed in tables. Users can combine selections and manually refine them by adding and removing single residues. Structural comparisons can be launched between selections, and structural matches can be visualized instantly. Moreover, pdbFun provides a Java viewer of protein structures to help the user in selecting residues and analyzing structural comparison results. A server running the program can be accessed through a web interface. 
The web interface is called pdbFun. There are two cases where local installation of Query3D becomes necessary: the need to perform long and computationally intensive structural comparisons, or to calculate a large number of selections or comparisons. The public server cannot guarantee all the CPU time needed for an all-versus-all comparison; likewise, if many different selections of residues have to be generated and compared, running a batch job on a personal computer is the fastest and most effective approach. The software is available for UNIX/Linux platforms. Communication with the server program is carried out through text files and the PostgreSQL database. We have developed Query3d. By using this program, the structural biologist can easily select a set of interesting residues according to their biological or structural properties in the whole PDB. The selected residues can be analyzed, counted and manually modified. When the user is satisfied with the selections, structural comparisons can be launched. Query3d is a new and flexible methodology dedicated to the study and analysis of protein structures. Given the amount of functional information associated with each residue, the method can answer an extremely high number of possible questions of biological relevance. The number of possible combinations of queries is so high that it is difficult to envisage all the possible applications of the method. Future directions include a higher degree of flexibility in the type of possible residue searches. For example, we are going to introduce pattern matching on arbitrary residue features. Possible patterns could be defined in protein sequence or in a volume of space. These could use not only residue type but also residue features, such as secondary structure or solvent accessibility. 
The final goal is to transform Query3d into an instrument for searching the space of known protein structures, with simple operations, for arbitrarily chosen functional and structural conformations of residues. Query3d loads annotated residue data together with residue coordinates and other structural information. An important characteristic of the program is that it is independent of the type of data stored. Different versions of PdbScan data can therefore exist, containing different functional information or even customized features. If a version of PdbScan is used in which ten features are implemented, each residue can carry a maximum of ten annotations. The features available are those currently generated by the PdbScan package; they embed nine data sources, including solvent exposure as given by the naccess program. Each protein chain in the PDB is stored locally and described as a set of non-connected, and therefore sequence-independent, residues. Each residue is characterized by a set of attributes, such as the type of residue, the list of neighbour residues, the position in space and a list of functional/structural information. Two residues are considered neighbours if the distance between their C-alpha atoms is less than 7.5 Angstroms. Two points describe the three-dimensional position of each residue: the first corresponds to the C-alpha atom, while the second is calculated as the geometric average of all the residue's side-chain atoms. This second set of coordinates gives information on the direction in space in which the side chain is pointing. The last type of information is a complete list of functional and structural properties of the aminoacid in this structure. This information comes from PdbScan (see previous paragraph) and is used by Query3D to permit the user to select the residues that have to be counted or considered for a structural comparison (see next paragraph). During the comparison, the program tries to match the maximum number of residues between two protein chains. 
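The two-point residue representation and the 7.5 Å neighbour rule described above can be sketched as follows. Class and function names are invented for illustration; only the C-alpha/side-chain-centroid representation and the distance cutoff come from the text.

```python
import numpy as np

CA_CUTOFF = 7.5  # Angstroms, the neighbour threshold given in the text

class Residue:
    """Two-point residue representation: the C-alpha position plus the
    centroid of the side-chain atoms (which encodes side-chain direction),
    together with a set of functional/structural annotations."""
    def __init__(self, name, ca, side_chain_atoms, annotations=()):
        self.name = name
        self.ca = np.asarray(ca, dtype=float)
        side = np.asarray(side_chain_atoms, dtype=float)
        # a residue with no side-chain atoms (e.g. glycine) falls back
        # to the C-alpha position itself
        self.side_centroid = side.mean(axis=0) if len(side) else self.ca
        self.annotations = set(annotations)

def neighbour_lists(residues, cutoff=CA_CUTOFF):
    """Residues are neighbours if their C-alpha atoms lie within `cutoff`."""
    out = {r.name: [] for r in residues}
    for i, a in enumerate(residues):
        for b in residues[i + 1:]:
            if np.linalg.norm(a.ca - b.ca) < cutoff:
                out[a.name].append(b.name)
                out[b.name].append(a.name)
    return out
```

The quadratic pairwise scan is the naive version; a spatial grid or k-d tree would be the usual optimization for whole-PDB preprocessing.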
Two sets of residues are considered similar if they fulfil three criteria: neighbourhood, structural similarity and biochemical similarity. The first criterion demands that every residue in the set of matching aminoacids be a neighbour of at least one of the other residues in the set. This guarantees that all matched aminoacids are neighbours in space, saving a lot of comparison time. The biological motivation for this constraint is that local comparison algorithms are always used to find similarities between active sites, binding sites and other localized areas in protein structures; in these regions, two areas of distant residues are not expected to be conserved. The second matching criterion concerns structural similarity. It demands that every set of matched residues have a root mean square deviation (r.m.s.d.) lower than a certain threshold. The lower the threshold, the faster the program, since a higher number of matches is excluded in the early stages of the comparison without the need to explore them further (see next paragraph). However, using too low an r.m.s.d. threshold increases the probability of missing evolutionarily distant similarities. The present threshold is 0.7 Angstrom and represents a good compromise between speed and accuracy. The r.m.s.d. of the match is calculated by using all the matched residue points, both the C-alpha and the side-chain points. The inclusion of a side-chain point in the calculation of the global match r.m.s.d. ensures that the direction of the side chains also needs to be conserved between two sets of matching residues. The last criterion is based on the biochemical similarity of residue types. To evaluate this type of similarity, we use a substitution matrix; the default matrix is the Dayhoff one. Given two protein structures, Query3D is guaranteed to find the two largest sets of matching residues that fulfil the matching criteria described in the previous paragraph. 
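The superposition r.m.s.d. used by the structural-similarity criterion can be evaluated in closed form without iterative fitting. Below is a minimal numpy sketch of the quaternion (Horn) method, which the text later mentions as the technique behind the program's optimized C routine; this Python version is illustrative only, not the authors' implementation.

```python
import numpy as np

def quaternion_rmsd(a, b):
    """Minimum r.m.s.d. between two equal-length point sets after optimal
    rigid superposition, via the quaternion (Horn) method: the largest
    eigenvalue of a 4x4 key matrix yields the optimal rotation directly."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean(axis=0)            # remove translation
    b = b - b.mean(axis=0)
    m = a.T @ b                       # 3x3 correlation matrix
    sxx, sxy, sxz = m[0]
    syx, syy, syz = m[1]
    szx, szy, szz = m[2]
    k = np.array([
        [sxx + syy + szz, syz - szy,        szx - sxz,        sxy - syx],
        [syz - szy,       sxx - syy - szz,  sxy + syx,        szx + sxz],
        [szx - sxz,       sxy + syx,        -sxx + syy - szz, syz + szy],
        [sxy - syx,       szx + sxz,        syz + szy,        -sxx - syy + szz]])
    lam = np.linalg.eigvalsh(k)[-1]   # largest eigenvalue = best correlation
    e = (np.sum(a * a) + np.sum(b * b) - 2.0 * lam) / len(a)
    return float(np.sqrt(max(e, 0.0)))
```

A set compared against a rotated and translated copy of itself gives an r.m.s.d. of essentially zero, which is what makes the 0.7 Å threshold meaningful as a pure shape criterion.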
To find these largest matching sets, an exhaustive depth-first search is performed, exploring all the possible combinations of aminoacids belonging to the two different proteins. The algorithm starts by creating all the possible length-1 matches. These are composed of a single aminoacid from the probe structure matched to a single aminoacid belonging to the target structure. For example, if the probe and target structures are composed of n and m aminoacids, respectively, n × m length-1 matches can be generated. All these matches are evaluated using the matching criteria and, if necessary, discarded. Only those matches that are considered valid are extended. Match extension consists of the generation of new possible length-2 matches starting from each valid length-1 match. All possible pairs of neighbour aminoacids are added to the first two. For example, let the first aminoacid in the probe structure have i neighbours and the corresponding matched aminoacid in the target structure have j neighbours; the algorithm then generates i × j new matches of length 2. Again, all these new matches are evaluated and, if possible, iteratively extended further to length-3 matches. The process of validation and extension is repeated until no more valid matches can be generated. At this point, the algorithm stops and saves the longest match found. By doing so, the algorithm guarantees that all possible combinations of subsets valid for a structural match have been explored. Note that in the match extension phase only residues selected by the user are considered among all the available neighbours. This simple fact ensures that structural matches can only include aminoacids that have been chosen by user queries. In order to save time when comparing very similar protein chains, the algorithm is stopped when a match reaches a certain number of residues. 
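The seed-and-extend search just described can be sketched as below. Names are hypothetical; the validity test (r.m.s.d. threshold, substitution-matrix score, user selection) is abstracted into an `is_valid` callback, and candidate extensions are generated from every matched pair without deduplication, so this is an illustrative rather than efficient rendering of the algorithm.

```python
def longest_match(probe, target, neighbours_p, neighbours_t, is_valid,
                  size_limit=10):
    """Exhaustive depth-first search for the largest valid match between two
    residue sets.  A match is a list of (probe_residue, target_residue)
    pairs; `is_valid(match)` applies the matching criteria to a candidate
    before it is extended further."""
    best = []

    def extend(match):
        nonlocal best
        if len(match) > len(best):
            best = list(match)          # save the longest match found
        if len(match) >= size_limit:    # early stop on near-identical chains
            return
        used_p = {p for p, _ in match}
        used_t = {t for _, t in match}
        for p_m, t_m in match:          # grow from any already-matched pair
            for p in neighbours_p[p_m]:
                if p in used_p:
                    continue
                for t in neighbours_t[t_m]:
                    if t in used_t:
                        continue
                    candidate = match + [(p, t)]
                    if is_valid(candidate):
                        extend(candidate)

    for p in probe:                     # n * m length-1 seed matches
        for t in target:
            if is_valid([(p, t)]):
                extend([(p, t)])
    return best
```

Restricting `neighbours_p`/`neighbours_t` to user-selected residues reproduces the property that matches can only contain residues chosen by queries.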
We noticed that a match size limit of 10 residues is enough to prevent the program from spending too much time trying all possible combinations of good matches among globally similar structures. One important feature of Query3d is its speed, both in running queries and in comparing structures. All queries can be run in a few tenths of a second. The PDB currently exceeds 12 million aminoacids, and this performance would not be possible using common databases. Query3d relies on a fast query algorithm written in C that avoids all disk accesses. All the essential information necessary for performing queries and structural comparisons is stored in a compressed format in less than 1 GB of RAM. We reckon that, with this compression level, the foreseeable increase in the size of the PDB in the near future will remain compatible with the RAM sizes available for simple desktop PCs. Protein structure superposition has also been optimized, and we managed to keep structural comparison time very low. The time needed to compare a protein structure composed of 200 residues with a medium-size protein chain in the PDB is only 60 milliseconds on a common 3 GHz Pentium 4 processor. The search for a local structural similarity between a 200-residue protein and a non-redundant PDB database at 30% identity, composed of about 4000 structures, can be completed in less than 4 minutes. To obtain these results we developed an optimized C function implementing the quaternion method for calculating the r.m.s.d. between two sets of points.

AV participated in the design of the work. MHC participated in the design of the work and helped to draft the manuscript. All authors read and approved the final manuscript.

Carbohydrates play a critical role in human diseases, and their potential utility as biomarkers for pathological conditions is a major driver for characterization of the glycome. 
However, the additional complexity of glycans compared to proteins and nucleic acids has slowed the advancement of glycomics in comparison to genomics and proteomics. The branched nature of carbohydrates, the great diversity of their constituents and the numerous alternative symbolic notations make the input and display of glycans less straightforward than, for example, the amino-acid sequence of a protein. Every glycoinformatic tool providing a user interface would benefit from a fast, intuitive, appealing mechanism for the input and output of glycan structures in a computer-readable format. A software tool for building and displaying glycan structures using a chosen symbolic notation is described here. The "GlycanBuilder" uses an automatic rendering algorithm to draw the saccharide symbols and to place them on the drawing board. The information about the symbolic notation is derived from a configurable graphical model, as a set of rules governing the aspect and placement of residues and linkages. The algorithm is able to represent a structure using only a few traversals of the tree and is inherently fast. The tool uses an XML format for the import and export of encoded structures. The rendering algorithm described here is able to produce high-quality representations of glycan structures in a chosen symbolic notation. The automated rendering process enables the "GlycanBuilder" to be used both as a user-independent component for displaying glycans and as an easy-to-use drawing tool. The "GlycanBuilder" can be integrated into web pages as a Java applet for the visual editing of glycans. The same component is available as a web service to render an encoded structure into a graphical format. Finally, the "GlycanBuilder" can be integrated into other applications to create intuitive and appealing user interfaces: an example is the "GlycoWorkbench", a software tool for the assisted annotation of glycan mass spectra. 
The \"GlycanBuilder\" represent a flexible, reliable and efficient solution to the problem of input and output of glycan structures in any glycomic tool or database. Chemically, monosaccharides are aldehydes or ketones with two or more hydroxyl groups. They can exist as linear molecules but more usually they cyclize to form ring structures. Complex carbohydrates are formed by combinations of monosaccharides covalently linked by glycosidic bonds: two types of glycosidic bonds can be formed, \u03b1 and \u03b2, depending on the orientation of the anomeric centres of the monosaccharides involved. Each hydroxyl group of a monosaccharide constitutes a possible point of formation for a glycosidic bond. Therefore, glycans can have very complex structures with many branching points. The monosaccharides are classified according to the number of carbon atoms they contain, the position of the anomeric centre, and its chiral handedness. Glycans are classified accordingly to the number of monosaccharide units they contain: large polysaccharides containing many thousand repeating units whereas smaller oligosaccharides can contain between two and ten monosaccharides.Carbohydrates are the most abundant type of biological molecules. The smallest structural unit of a carbohydrate is called a glycoconjugates. Apart from their well known use in energy storage and expenditure, the roles of carbohydrates in living organisms are varied and fundamental. Glycans can have structural and modulatory functions by themselves or can modulate the function of the molecules to which they are attached by the specific recognition of the glycan structure by carbohydrate-binding proteins (lectins). Glycans regulate both the folding and degradation of proteins. Moreover, since the outer cell membrane is covered by carbohydrates, they mediate interactions with other cells of the same organism or other pathogenic organisms such as viruses, bacteria and multi-cellular parasites. 
The critical role of glycans in diseases and their utility as biomarkers for pathological conditions have now been widely recognized. Carbohydrates can be found as homo-polymers, but are usually attached to other biomolecules such as lipids or proteins to form complexes named glycoconjugates. The additional complexity of glycan structures compared to proteins and nucleic acids has slowed the advancement of glycomics. The branched structure of glycans means that most of the bioinformatic tools developed for the analysis of linear proteins and nucleic acids cannot simply be adapted to the sequencing of glycan structures. Moreover, no extensive database of known glycan structures and related primary data is currently available, although various initiatives in this direction are in place. The current situation therefore sees a critical lack of software tools for almost every aspect of glycomic research. The branched, non-sequential nature of glycan structures and the great diversity of their constituents imply that the input of a structure into a computer-readable format is not as straightforward as writing a sequence of characters to represent the amino-acid sequence of a protein chain. Additionally, numerous alternative symbolic notations are commonly adopted to represent glycan structures in publications. Every glycoinformatic software tool providing a user interface would benefit from a fast, intuitive, graphically appealing mechanism for the input and output of glycan structures. For example, database applications would profit from an easy-to-use interface for performing structure searches and displaying search results. A user-friendly input/output tool for glycan structures should satisfy three fundamental requirements: provide an intuitive interface to build structures with minimal user interaction, allow encoding in a standard format readable by other software tools, and create conventional and appealing graphical representations of glycans. 
These requirements usually diverge. A common practice is to employ graphical editors for drawing the structures to be published in research papers. Graphical editors give the user the greatest freedom in creating the graphical representation. However, it is realistically impossible for a software tool to extract useful information about the glycan structure from the resulting geometrical representation. Moreover, graphical editors require a large amount of user interaction to convey the desired result. Some input tools that have been developed for searching structure databases mimic the operations of a graphical editor. The symbols for saccharides and linkages are usually predetermined, but the user can freely position the saccharides on the drawing board; KCAM is one editor that follows this approach. The EUROCarbDB design studies aim to address these needs. The representation of glycan structures in a computer-readable format is an essential prerequisite for the exchange of information between databases and for the use of stored data by bioinformatic tools. One such format is the string encoding for glycan structures proposed by IUPAC-IUBMB. A critical problem with previous formats is exemplified by the representation of monosaccharide components: the convention used for naming monosaccharide monomers results in multiple names for the same basic chemical configuration. This ambiguity is not easily addressed by software tools. One of the efforts undertaken by the EUROCarbDB project has been the development of a controlled vocabulary for monosaccharides centred on a database of monosaccharide units. Each unit is represented as an unmodified saccharide base type (e.g. glucose) plus a list of substituents and modifications. EUROCarbDB has proposed a glycan encoding format based on this monosaccharide representation, Glyco-CT. 
Glyco-CThe \"GlycanBuilder\" uses the Glyco-CT format for the import/export of glycan structures, while suitable libraries are being developed by EUROCarbDB for translating Glyco-CT into other common encoding formats and vice versa.Symbolic representations of glycans consist of a series of geometric symbols that represent monosaccharide units connected by lines to indicate glycosidic linkages. Symbolic notations are widely used in publications, especially for representing mammalian structures, because they enable a very compact way of describing complex glycans, useful when a large series of structures needs to be displayed (i.e. the profile of glycans expressed by a cell population). Moreover, symbolic representations are much easier for humans to recognize and allow a rapid comparison of structures when determining differential expressions in biological contexts. Unfortunately, no standard notation exists and many different types of symbols and conventions can be found in the literature.The Consortium for Functional Glycomics (CFG) has issued a recommendation for the symbolic representation of the glycan structures present in mammalian organisms. This notation is used by all Consortium databases and is being employed in the second edition of the book \"Essentials of Glycobiology\" . Due to A second proposal for representing linkage information, originally suggested in , has beeThe \"GlycanBuilder\" is based on an automatic rendering algorithm able to transform a computer encoded glycan structure into a pictorial representation determined by the chosen symbolic notation. The encoded structure is first parsed and transformed into a data object that stores all the information about the glycan . The data object is then rendered into the desired output media using a graphical model representing the symbolic notation of choice. The rendering process is completely automatic: the spatial placement and aspect of residues depends only on the symbolic notation. 
This means the \"GlycanBuilder\" can be used as an automated component for displaying glycan structures. Additionally, when the \"GlycanBuilder\" is employed as an interactive drawing tool, this feature makes the user free from any responsibility regarding the drawing and creates faster and easier-to-use interfaces.The internal object used to store and manage information about glycans in the \"GlycanBuilder\" is a tree structure whose nodes are the glycan residues and whose edges are their linkages. Each node represents a drawable object, an enlarged concept of structural constituent comprising: saccharides, reducing-end specificators and markers, substituents and saccharide modifications. Reducing-end specificators are used to identify possible modifications at the reducing-end terminal (or no modifications). A special node is also defined for collecting glycan terminals with unspecified linkages, a common way of describing structures with incomplete topological information . Each condition matches a certain parent, linkage or child attribute: residue type, linkage position, anomeric state, position of the anomeric carbon or residue class . The attributes of a rule are: a set of possible positions for this residue, a flag identifying if the residue is on a border region, a flag identifying if the orientation of the child is the same as the parent or should rotate accordingly to the position angle, a flag identifying if the position is \"sticky\". If a position is \"sticky\", all the subsequent children are placed in position 0: in this way it is possible to force sub-trees in regions with a change in orientation to be drawn as a sequences, thus avoiding the creation of spiralling series of residues Figure . 
Example rules:

UOXF:
If linkagePosition == 6 then
    position = 45;
    border = false;
    change orientation = false;
    sticky = false;

CFG:
If residueType == Fucose then
    position = {-90, 90};
    border = false;
    change orientation = true;
    sticky = true;

Both:
If residueClass == Substituent then
    position = {-90, 90};
    border = true;
    change orientation = false;
    sticky = false;

After the rules have been applied, the positioning algorithm has to decide the actual position of each node among the possible choices. All the children of a residue are considered together in order to better distribute them in the space around the parent. The placement of each node is decided starting from the residues with the strictest constraints. Firstly, all the residues with a single possible position are assigned. Secondly, the border residues are sequentially placed in an empty border position if one is available, or in the least crowded position otherwise; when deciding the least crowded position, the booking algorithm makes no distinction between border and non-border regions. Finally, the remaining children are sequentially placed in the least crowded positions. The automatic placement can be overridden on user request, in case a specific arrangement of the structure is needed to represent particular conditions.

The algorithm for computing the bounding boxes of residues is the core component of the rendering process. The bounding box indicates at which coordinates a residue will be displayed on the output media.
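The rule lookup and the placement ordering described above can be sketched as follows. The encoding is ours (first matching rule wins, like a style-sheet lookup) and the rule values are taken from the examples above; the crowding heuristic is a simplification of the booking algorithm, not the published implementation.

```python
# Hypothetical encoding of the positioning rules: each rule pairs a condition
# on parent/linkage/child attributes with placement attributes.
RULES = [
    # (condition, possible positions, border, change_orientation, sticky)
    (lambda r: r["linkage_position"] == 6, (45,), False, False, False),    # UOXF
    (lambda r: r["residue_type"] == "Fuc", (-90, 90), False, True, True),  # CFG
    (lambda r: r["residue_class"] == "substituent", (-90, 90), True, False, False),
    (lambda r: True, (0,), False, False, False),                           # default
]

def placement(residue: dict) -> dict:
    # First matching rule wins.
    for cond, positions, border, change, sticky in RULES:
        if cond(residue):
            return {"positions": positions, "border": border,
                    "change_orientation": change, "sticky": sticky}

fuc = placement({"linkage_position": 3, "residue_type": "Fuc",
                 "residue_class": "saccharide"})
assert fuc["sticky"] and fuc["positions"] == (-90, 90)

# Placement order, as described: most constrained residues first (single
# position, then border), then the rest into the least crowded positions.
def assign_positions(children: dict) -> dict:
    crowding, out = {}, {}
    ordered = sorted(children.items(),
                     key=lambda kv: (len(kv[1]["positions"]), not kv[1]["border"]))
    for name, info in ordered:
        pos = min(info["positions"], key=lambda p: crowding.get(p, 0))
        crowding[pos] = crowding.get(pos, 0) + 1
        out[name] = pos
    return out
```

Two children sharing the candidate positions {-90, 90} are thus spread over both positions instead of piling up on one side.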
The procedure is described here only for a right-to-left orientation: the others are simply rotated by multiples of 90 degrees with respect to it. The algorithm recursively navigates the tree structure, performing the following phases for each node Figure :

1. compute the bounding box of the current residue with a free spatial placement;
2. recursively compute the bounding boxes for all the sub-trees, grouping by region, and align the sub-trees in each region;
3. align the centres of border regions -90 and 90 with the current residue and place these regions at the bottom and top of the residue;
4. align the left regions and place them on the left of the current residue without clashing with the border regions;
5. align the centres of regions -90 and 90 with the current residue and place these regions at the bottom and top of the residue without clashing with the other regions.

During phase 2, each region is computed separately. Firstly, the bounding boxes are computed recursively for each residue of every sub-tree. Once the bounding boxes of the residues of a sub-tree are computed, they are aligned with the residues of the adjacent sub-tree in the region. The alignment of two sub-trees is computed clockwise, and is performed to minimize the spatial distance between the sub-trees and to avoid collisions between their residues. An example of top-to-bottom alignment of one sub-tree on the right of another is given in Figure :

1. align the roots of the two sub-trees on the right of their bounding boxes;
2. compute the size of the shift needed to move one sub-tree to the bottom of the other while maintaining a minimum distance between all their nodes;
3. translate all the residues of the sub-tree according to the computed shift value.

This alignment and collision-solving algorithm is used during all the other phases to align entire regions.
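The shift computation in step 2 above can be sketched as follows. This is our simplification (axis-aligned boxes, vertical stacking only, an assumed minimum gap of 5 units), not the actual rendering code.

```python
# Sketch of the collision-solving shift: move one sub-tree down just far
# enough that none of its boxes overlaps any box of the fixed sub-tree.
def vertical_shift(fixed, moving, min_gap=5):
    """Smallest downward shift of `moving` avoiding all boxes in `fixed`.
    Boxes are (x, y, width, height) with y growing downwards."""
    shift = 0
    for fx, fy, fw, fh in fixed:
        for mx, my, mw, mh in moving:
            # Only boxes that overlap horizontally can collide vertically.
            if fx < mx + mw and mx < fx + fw:
                needed = (fy + fh + min_gap) - (my + shift)
                shift = max(shift, needed)
    return shift

def translate(boxes, dy):
    # Step 3: apply the computed shift to every residue of the sub-tree.
    return [(x, y + dy, w, h) for x, y, w, h in boxes]

a = [(0, 0, 10, 10)]   # already-placed sub-tree
b = [(0, 0, 10, 10)]   # sibling sub-tree to be placed below it
dy = vertical_shift(a, b)
assert dy == 15        # 10 (box height) + 5 (minimum gap)
b = translate(b, dy)
```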
The alignment and collision-solving algorithm is the most delicate part of the rendering process, and it is responsible for the optimal placement of residues. The alignment of left regions with the current residue has three special cases Figure .

The computation of bounding boxes for the bracket residue and its children (antennae with uncertain linkage) is performed separately. The bounding boxes for these residues are computed after the rest of the structure. The bracket residue is placed on the left of the structure, and its bounding box is as large as the whole structure. Its children are aligned top to bottom on the right of the bracket. For each antenna, the bounding boxes are computed as for the normal structure.

Residues and linkages are drawn as geometrical shapes and not as bitmaps, so that resolution is not affected at different sizes and scales. A residue is drawn by fitting the chosen shape inside the bounding box, while a linkage is displayed by drawing a styled line connecting the centre of the parent's bounding box with that of the child. The aspect of residues and linkages is part of the graphical model and is also stored in configuration files. A residue style matches a single residue type. Various graphical attributes can be set for the residue style: shape, fill colour (RGB format), fill style, associated text, and text colour. Some of the shapes, like those for reducing ends, are oriented to point towards either the parent or the child residue. The linkage style is decided using the same matching operator described for the positioning rules. The graphical attributes that can be set for a linkage are: line style, line shape, and which linkage information to display. Anomeric state and linkage position can be displayed never, always, or only if known.
Linkage information is displayed near the linkage line, next to the corresponding residue: the anomeric state is displayed next to the child and the linkage position next to the parent.

The automatic rendering algorithm detailed in the previous section enables the "GlycanBuilder" to be employed in several different applications. A visual editor for glycan structures has been developed as a stand-alone Java application [see additional file]. Mass spectrometry is the main analytical technique currently used to address the challenges of glycomics.

The "GlycanBuilder", a software tool for building and displaying glycan structures, has been presented here. The tool is based on an automatic rendering algorithm that can draw glycan structures in a chosen symbolic notation with no user intervention. The symbolic notation is encoded in a graphical model as a set of rules specifying the style and placement of structure residues. The graphical model is independent of the rendering algorithm and can readily be exchanged to specify different symbolic notations. The rendering algorithm produces high-quality, compact representations of structures which are ready for publication purposes. The type of structures that can be represented is not restricted to any particular type or biological context. The algorithm is inherently fast and is able to draw a structure using only a few traversals of the glycan tree, with no iterative optimization steps. The automated rendering process enables the "GlycanBuilder" to be used both as a user-independent component for displaying glycan structures and as a rapid and easy-to-use drawing tool. Computer-encoded structures can be imported into the tool and exported from it using the Glyco-CT format. The "GlycanBuilder" can thus be integrated into other applications to create intuitive user interfaces for the input of glycans or to display encoded structures using symbolic notations.
The tool is available as a stand-alone Java application [see additional file].

• Project name: EuroCarbDB – GlycanBuilder
• Project home page:
• Operating system(s): Platform independent
• Programming language: Java
• Other requirements: Java 5.0 or higher
• License: GNU GPL
• Any restrictions to use by non-academics: no licence needed

The author(s) declare that they have no competing interests.

AC developed the software tool and tested it, created the website and drafted the manuscript. AD oversaw the project, recognized the validity of the approach, and edited the paper. SH participated in the definition of the requirements of the software, tested the software tool and helped to draft the manuscript. All authors read and approved the final manuscript.

The ZIP archive contains the stand-alone version of the "GlycanBuilder" structure editor. To install the tool, extract the content of the ZIP archive into a folder of your choice. To run the tool, double-click on the file GlycanBuilder.jar. The tool has been tested under Windows, Linux and Mac OS X. In order to run the GlycanBuilder tool, the Java Runtime Environment (JRE) version 5.0 must be installed on the computer.
Others are less obvious and intuitive. In particular, we found that hinges tend to coincide with active sites, but unlike the latter they are not at all conserved in evolution. We evaluate the potential for hinge prediction based on sequence.

Motions play an important role in catalysis and protein-ligand interactions. Hinge bending motions comprise the largest class of known motions. Therefore it is important to relate the hinge location to sequence features such as residue type, physicochemical class, secondary structure, solvent exposure, evolutionary conservation, and proximity to active sites. To do this, we first generated the Hinge Atlas, a set of protein motions with the hinge locations manually annotated, and then studied the coincidence of these features with the hinge location. We found that all of the features have bearing on the hinge location. Most interestingly, we found that hinges tend to occur at or near active sites and yet, unlike the latter, are not conserved. Less surprisingly, we found that hinge residues tend to be small, not hydrophobic or aliphatic, and to occur in turns and random coils on the surface. A functional sequence-based hinge predictor was made which uses some of the data generated in this study. The Hinge Atlas is made available to the community for further flexibility studies.

Motions play an essential role in catalysis and protein-ligand interactions. In particular, hinge bending motions account for 45% of motions in a representative set from the Database of Macromolecular Motions. There are three levels of hinge prediction. The easiest case occurs when the atomic coordinates are available for two or more conformations of a given protein. In this case it is possible to visually inspect the motion to determine the hinge location, as we have done here.
The process can also be automated with various available packages, including FlexProt and others. The problem of finding flexible hinges between rigid regions based on sequence is in some ways similar to the problem of finding domain boundaries, which can be flexible or inflexible. Although little work has been done on the former problem, several algorithms exist to address the latter. In one significant contribution, Nagarajan and Yona analyzed the problem of detecting domain boundaries from sequence.

In this article we focus on the characterization of these hinges based on sequence. To that end, we compiled the Hinge Atlas, a manually annotated dataset of hinge bending motions, as well as a separate computer-annotated dataset, both available for further studies. The Hinge Atlas has several applications. First, the statistical properties of hinges can be studied. Second, it can be used to benchmark hinge prediction programs. Third, hinge annotations could potentially be transferred by homology to proteins where the existence and location of a hinge are unknown. Fourth, the annotations could conceivably be used in future protein motion prediction programs. The first application was of most interest to us in the current work.

Our molecular motions database serves a wide variety of purposes, helping investigators understand the motion characteristics of individual proteins, as well as the statistical properties of large groups of motions. It is the ideal platform for the current study, since it contains over 19000 morphs. A morph is a set of atomic coordinates for two homologous protein structures, plus several structures which our morph server generates as interpolations between the two. Our server allowed us to ask the following questions:

1. Are certain residue types differentially represented in hinges?
2. Do certain pairs of amino acids coincide with hinges?
3. Can sequence be used to predict hinges?
4. Do hinges coincide with active sites?
5. Do hinges prefer certain secondary structural elements?
6. Do hinge residues share physicochemical or steric properties?
7.
Are hinge residues conserved in evolution?

As our first task, we computed the rate of occurrence of each residue type in the Hinge Atlas. Certain amino acids were found to be differentially represented in hinges in a statistically significant fashion. We also investigated whether certain consecutive pairs of residues were differentially represented in hinges. In the course of the above, we observed that one of the overrepresented residues (serine) is potentially catalytic; this was the original motivation for question 4 above. To answer that question, we searched the Catalytic Sites Atlas (CSA) for residues close to hinges.

Our next task was to investigate hinge coincidence with secondary structure. Hinges are generally believed to occur in disordered regions, but to our knowledge this belief has never been tested or quantified rigorously.

Following up on our finding that hinges coincide with active site residues, we went on to the question: are hinge residues more likely to be conserved than other residues, as active sites are? We ranked the residues by relative conservation and examined the differences between hinge and non-hinge residues.

Significant correlations between sequence features and hinges were found in the above analyses. We computed Hinge Indices for each of these, which may be used to relate sequence features to flexibility. We then sought to determine what predictive value sequence might have on its own, and whether various sequence features collectively could be used for prediction.

We first made a simple GOR-like (Garnier-Osguthorpe-Robson) predictor. As a second approach, we made a composite Hinge Index, which we call HingeSeq, from the Hinge Indices of each of the sequence features found to be the strongest indicators of flexibility. The statistical significance of this measure was computed much as for the individual sequence features.
To show that the measure is predictive, we again divided the Hinge Atlas into training and test sets and recomputed the relevant Hinge Indices to include only training set data. We used the regenerated HingeSeq to predict hinges in the test set and generated a Receiver Operating Characteristic (ROC) curve.

As a final step, we examined MolMovDB as a whole to determine whether any particular database bias was in evidence. We also used resampling to check the robustness of our results.

Prior to generating the manually annotated Hinge Atlas, we used computational methods to generate a dataset of hinge residues for our statistical studies. We began by running FlexProt, a leading tool for this purpose, and kept morphs that satisfied the following criteria:

1. Motion was domain-wise, i.e. two or more domains could be observed moving approximately as rigid bodies with respect to each other.
2. The identified hinge was located in the flexible region connecting two rigid domains, rather than in the domains themselves.
3. The morph trajectory was sterically reasonable, i.e. chains were not broken in the attempt to interpolate motion.

Note that the definition of a hinge given in the introduction allows for a hinge of zero length. FlexProt indeed often returned such hinges. To deal with this, in all cases one residue on each side of the hinge was taken to also belong to the hinge; thus most hinges are two residues long. At the end of this process, the computer-annotated set contained 273 morphs.

As described, the computer annotation of hinges requires significant human intervention, and the results were often debatable. Many of the hinge annotations differed slightly but visibly from the boundary between rigid domains, such that the backbone flexions that could account for the domain motion were not seen in the predicted hinge region. In other cases hinges were missed, and some annotations appeared where no hinge existed.
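The ROC construction described above can be sketched as follows: sweep a score threshold and record the true-positive and false-positive rates at each setting. The scores and labels below are invented toy data, not Hinge Atlas values.

```python
# Sketch of ROC curve generation for a threshold-based hinge predictor.
def roc_points(scores, is_hinge, thresholds):
    points = []
    P = sum(is_hinge)            # actual hinge residues
    N = len(is_hinge) - P        # actual non-hinge residues
    for t in thresholds:
        tp = sum(1 for s, h in zip(scores, is_hinge) if s >= t and h)
        fp = sum(1 for s, h in zip(scores, is_hinge) if s >= t and not h)
        points.append((fp / N, tp / P))   # (FPR, TPR)
    return points

scores   = [0.9, 0.8, 0.4, 0.3, 0.2]
is_hinge = [True, False, True, False, False]
pts = roc_points(scores, is_hinge, thresholds=[1.0, 0.5, 0.0])
assert pts[0] == (0.0, 0.0)    # strictest threshold: nothing predicted
assert pts[-1] == (1.0, 1.0)   # loosest threshold: everything predicted
```

A predictor better than chance yields points above the diagonal from (0, 0) to (1, 1); the diagonal itself corresponds to random guessing.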
The more flagrantly misannotated hinges were removed from the dataset, but making the manual culling too stringent would simply have resulted in a dataset too small to be statistically meaningful. For these reasons, the computer-annotated dataset was not used in most of this work. Nonetheless, the computer-annotated dataset is arguably more objective than the manually annotated set described below, and so is made available to the community.

To address the accuracy issues, we decided to generate a manually annotated set of hinges – the Hinge Atlas. To generate this set we first created the Hinge Annotation Tool, which can also be used by the public, as we will now explain. The creation of publicly accessible tools for manual annotation of hinges involved significant changes to the morph page, the primary point on MolMovDB for the analysis of individual morphs. Highlighting the Hinge Atlas hinges (described below) on the animated morph movie is a matter of going to the morph page, clicking on the "Hinge Analysis" tab, and clicking the "Show Hinge Atlas hinge" button. The annotated hinge location will be rendered in green spacefill style, which contrasts with the white trace used elsewhere in the protein.

The tools described above answer only the technical question of how we annotated hinges. In this section we clarify the motivation for the Hinge Atlas and its applications, and answer the scientific question of how we decided on the precise location of the hinge for each morph. For each morph in the Hinge Atlas, we used the Hinge Annotation Tool as described to select the hinge location.
Motivated in part by our long-term goal of providing a resource that could be used in motion prediction work, and in part by a desire to deepen basic understanding of protein motion, we asked ourselves the following question: would it be possible to approximately reproduce the observed motion by allowing flexure at the hinge points while keeping the regions between hinges rigid? In order for this question to be answered in the affirmative, the hinge selection should best meet the following criteria:

1. The φ, ψ, and α (effective α-carbon to α-carbon) torsion angles must change significantly at the hinge between the two conformations.
2. Amino acids on either side of the hinge residues must be co-moving with their respective rigid regions.
3. Rotations of one of the rigid regions about the hinge region must not result in significant and irreconcilable steric clashes.

In order to use (1) as a useful guide to selecting the hinge location, we made use of the torsion angle charts and graphs in the structure analysis tools section on the morph page. However, large rotations of the main chain are often induced by multiple cooperative torsions in the hinge, and these may be individually small, particularly in α-helices. Criterion (2) restates the definition of a hinge. Sometimes the hinge was slightly longer than others, and in those cases we added more residues to the hinge, up to a limit of about five residues in total. If the hinge was distributed over so many residues that no one short stretch could be said to constitute the entire hinge, then the morph was discarded from the Hinge Atlas, since the motion was not hingelike. Criterion (3) is a practical requirement of a working hinge.
If substantial flexure at points outside the hinge is required to avoid domain interpenetration, then the choice of hinge location is incorrect, or the motion is not hinge but rather shear or unclassifiable.

The next question was how to select the morphs to be annotated and included in the Hinge Atlas. Candidate morphs were drawn from the entire Database of Macromolecular Motions (MolMovDB). The Hinge Atlas annotations can be downloaded in mySQL format; the same data is available in tab-delimited text format, which is human readable and importable into MS Excel and other packages. Another link on the same page facilitates the download of the interpolated structure files associated with each morph in the Hinge Atlas set. Clicking on the thumbnail image leads to the "movies" page, where users can browse through the 214 proteins in the Hinge Atlas. Clicking on any of the protein thumbnail images, in turn, leads to the corresponding morph page, where the hinge annotation can be viewed as described in the "Hinge Annotation Tool" section.

Throughout this study, we will be comparing how often a particular entity occurs in hinges versus everywhere in the Hinge Atlas or another of the datasets described above. The statistical analysis is the same regardless of the particulars, so we here present the general approach and later mention only the adjustments particular to the specific question addressed. First we defined the following variables:

D = total number of residues in the dataset
H = total number of residues in hinges in the dataset
C = classification scheme used to create groups of residue positions. For example, C could be secondary structure, degree of conservation, etc.
c = a particular grouping of residues, where c ∈ C. For instance, if C = secondary structure, then c = helix is the class of all residues in helices, c = strand is the class of all residues in strands, etc.
Another example might be C = evolutionary conservation, with c = cons1 = the top 20% most conserved residues, c = cons2 = the second 20% most conserved, etc.

ac = set of all residues of class c in the dataset.
dc = number of times residues of class c occur anywhere in the dataset.
hc = number of times residues of class c occur in hinges.

These can be used to estimate various probabilities as follows:

p(ac) = dc/D is the prior probability of c – in other words, the probability that residues of class c occur anywhere in the dataset.
p(ac|h) = hc/H is the conditional probability that a residue belongs to class c, given that it is a hinge.

A quantity that is of interest in hinge prediction is the posterior probability p(h|ac), the probability that a residue is a hinge given that it is in ac. We obtain this from Bayes' rule:

p(h|ac) = p(ac|h) p(h) / p(ac)    (Equation 1)

where the prior probability that a residue is a hinge is given by p(h) = H/D. We further define the hinge index HI, similar to the domain linker index used in Armadillo:

HI = log2 [ (hc/H) / (dc/D) ]    (Equation 2)

The argument of the log is the ratio of the observed frequency of occurrence of classes of amino acids in hinges over the expected frequency. Note that this argument is close to the likelihood ratio used in statistics, because H is so small compared to D. The quantity HI yields an intuitive measure of the enrichment of certain classes of residues in hinges, with positive numbers indicating enrichment and negative numbers indicating scarcity. Just because the HI is nonzero, however, does not mean that the differential representation has statistical significance. To establish the latter, we considered two statistical hypotheses:

H0, the null hypothesis: hc is a randomly distributed random variable with mean μh.
The null hypothesis states that p(ac|h) is given by the hypergeometric distribution (Equation 3). If this is true, then the hinge set is chosen without replacement, in an unbiased fashion, from the dataset.

H1, the alternate hypothesis, states that p(ac|h) is not p(ac), and therefore the null hypothesis can be rejected. We test this as follows. Our p-value is the probability that hc or more residues of class ac could be found in hinges, assuming H0 and given H, D, and dc:

p-value = Σ x = hc .. min(dc, H) of P(x)

where P(x), the hypergeometric function, gives the probability that dc residues taken without replacement from a set of D residues, of which H are hinges, would contain exactly x hinges:

P(x) = C(H, x) C(D-H, dc-x) / C(D, dc)    (Equation 3)

(here C(n, k) denotes the binomial coefficient). Choosing a significance threshold of 0.05, we reject H0 iff our p-value is less than 0.05.

We applied the described statistical formalism to the problem of amino acid frequency of occurrence in hinges by taking C = amino acid type, and c to designate each of the 20 canonical amino acids. HI scores and p-values were thus calculated for each of the 20 identifications of c. We also investigated pairs of amino acids in hinges, but since 400 sequential pairs are possible, the significance of the results was much lower and no conclusion could be drawn.

We found that glycine and serine are overrepresented in a highly significant fashion. We also found phenylalanine, valine, alanine, and leucine to be underrepresented, albeit with lower significance Figure , Table 1.

As mentioned earlier, the fact that one of the overrepresented residues is potentially catalytic led us to suspect that hinge residues are more likely to occur in active sites, or within a few residues of an active site, than would be expected by chance. This would make sense from a biochemical and mechanical perspective.
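The HI score of Equation 2 and the enrichment p-value of Equation 3 can be computed with Python's standard library alone. The counts below are toy numbers chosen for illustration, not Hinge Atlas data.

```python
from math import comb, log2

def hinge_index(h_c, H, d_c, D):
    """Equation 2: HI = log2[(h_c/H) / (d_c/D)], enrichment of class c in hinges."""
    return log2((h_c / H) / (d_c / D))

def p_value_enrichment(h_c, H, d_c, D):
    """P(x >= h_c) under the hypergeometric null of Equation 3."""
    total = comb(D, d_c)
    return sum(comb(H, x) * comb(D - H, d_c - x)
               for x in range(h_c, min(d_c, H) + 1)) / total

# Toy dataset: 1000 residues, 50 of them hinge residues, and a residue class
# with 100 members of which 12 fall in hinges.
hi = hinge_index(12, 50, 100, 1000)
p = p_value_enrichment(12, 50, 100, 1000)
assert hi > 1          # more than two-fold enriched: log2(0.24/0.10) ≈ 1.26
assert 0 < p < 0.05    # significant at the stated threshold
```

The expected count under the null is H·dc/D = 5 hinge residues of this class, so observing 12 is a strong enrichment, which is what both the HI score and the p-value report.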
Hinge motions are often opening and closing motions of domains intended to expose the active site, which would often be located at the centre of the motion, i.e. the hinge. Prior work supports this connection.

In order to annotate the active site locations, we BLASTed the morphs against the Catalytic Sites Atlas and defined the following variables:

C = distance from the nearest active site, in residues.
c = successively: active site residues, amino acids 1 residue away from the nearest active site residue, 2 residues away, etc.
D = 28050 residues in the dataset of 94 proteins
H = 378 hinge residues in the dataset
dc = residues of class c in the dataset
hc = residues of class c in hinges.

The results are shown in Figure .

It is generally accepted that hinges tend to avoid secondary structure. However, this belief has, to our knowledge, never been tested on a quantitative basis, and indeed numerous counterexamples can be found, for instance the hinge in calmodulin. We calculated HI scores and p-values as before, letting C = secondary structural element type and c designate e.g. helix, coil, etc., with secondary structure assigned by STRIDE. We found that three types of secondary structure were differentially represented in hinges with extremely high significance. We conclude that hinges are less likely to occur in α-helices, and are more likely to occur in turns or random coils Figure .

We suspected that small and tiny residues would occur more frequently in hinges, and that this would help explain the amino acid propensities reported earlier in this work. To check and quantify this, we grouped amino acids into several non-exclusive physicochemical categories and calculated HI scores and p-values, letting C = physicochemical grouping, and c = aliphatic, polar, charged, etc. We discovered that aliphatic and hydrophobic residues were very significantly underrepresented; small and tiny residues were overrepresented Figure .

We next investigated whether hinge residues are conserved. Since certain residue classes are preferred in hinges, one might suspect that hinge residues would be conserved.
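The distance-from-active-site classification used above can be sketched as follows (the function name and positions are ours, for illustration): each residue is labelled by its sequence separation from the nearest annotated active site residue, and the per-class counts dc then feed the same HI and p-value machinery as for residue types.

```python
from collections import Counter

def distance_classes(chain_length, active_sites):
    """Class c for each residue: residues to the nearest active site residue."""
    return [min(abs(i - a) for a in active_sites) for i in range(chain_length)]

# Toy chain of 10 residues with active site residues at positions 2 and 7.
d = distance_classes(10, [2, 7])
assert d == [2, 1, 0, 1, 2, 2, 1, 0, 1, 2]

# Tally d_c, the number of residues in each distance class c.
d_c = Counter(d)
assert d_c[0] == 2 and d_c[1] == 4 and d_c[2] == 4
```

Restricting the same tally to hinge residues gives hc, completing the inputs needed for Equations 2 and 3.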
First, we BLASTed each of the Hinge Atlas proteins against a sequence database and computed an information content (conservation) score for each residue position. We sorted the residues in Hinge Atlas morphs according to the magnitude of the information content scores. We then divided the residues into five bins of equal size. If hinge residues are conserved, then there should be an enrichment of hinge residues in the top bins, which correspond to the most conserved residues. On the other hand, if hinge residues are hypermutable, there should be more of them in the bottom bins, corresponding to the least conserved residues. Because it is widely agreed that active sites should be conserved, we used the conservation of active sites as a control.

To quantify the enrichment, we calculated the HI scores as described previously. Here, c is a label applied to residues that ranked in a given percentile bin, e.g. the top 20% most conserved. For that bin, p(ac|h) = hc/H is the ratio of the number of hinge residues in the bin to the total number of hinge residues. Similarly, p(ac) = dc/D is the ratio of the number of residues in the dataset in the bin to the grand total of residues in the dataset. To determine the statistical significance of the HI scores, we calculated the p-values using the hypergeometric distribution with the dc, hc, D, and H defined above.

For the control set, we performed the same calculation but made the following changes to the variable definitions:

1. Our dataset was no longer the Hinge Atlas, but rather the "Catalytic Sites Atlas (nonredundant)" set described earlier; D is the total number of residues in this set. ac still represents residues in the dataset belonging to a given conservation rank bin, and dc is the total number of residues in that bin.
2. hc now represents the number of active site residues in a given bin corresponding to c. Similarly, H represents the total number of active site residues in the dataset.

We found that hinge residues distribute evenly in the top 80%, and have a slight but statistically very significant enrichment in the bottom 20% bin Figure .
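The binning step above can be sketched as follows: rank residues by conservation score, split the ranking into five equal-size bins, and count hinge residues per bin. The scores and labels below are toy data, and the function names are ours.

```python
def quintile_bins(scores):
    """Bin index per residue: 0 = most conserved 20%, 4 = least conserved 20%."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n = len(scores)
    bins = [0] * n
    for rank, i in enumerate(order):
        bins[i] = min(4, rank * 5 // n)
    return bins

def hinge_counts_per_bin(bins, is_hinge):
    """h_c for each conservation bin c."""
    counts = [0] * 5
    for b, h in zip(bins, is_hinge):
        if h:
            counts[b] += 1
    return counts

scores   = [0.9, 0.1, 0.5, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.0]
is_hinge = [False, True, False, False, True, False, False, False, False, True]
bins = quintile_bins(scores)
# In this toy example the hinge residues sit in the least conserved bins.
assert hinge_counts_per_bin(bins, is_hinge) == [0, 0, 0, 1, 2]
```

An excess in the last bin relative to H/5 is the hypermutability signal described in the text; its significance is then assessed with the hypergeometric p-value as before.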
Thus hinge residues appear to be hypermutable rather than conserved. The Hinge Atlas pools enzymes together with non-catalytic proteins. We reasoned therefore that perhaps only hinges in non-catalytic proteins are hypermutable, and that if we analyzed a set consisting only of enzymes, then the propensity of active sites to occur in hinges would lead to conservation, rather than hypermutability, of hinge residues for that set. To test this idea, we calculated the propensity of hinges to occur in specific bins of conservation score for the 94 proteins in the Hinge Atlas with CSA annotation, rather than for the larger set of 214. For this set we also found that hinge residues occur more frequently among the 20% least conserved residues for each protein (Figure, Table 5).

Even this test, however, pools together hinges that are near the active site (or contain one or more active site residues) with hinges that occur at some distance from it. So we selected from the 94 proteins a small set that had at least one active site residue in the hinge, and removed the active site residues themselves. We then calculated the propensity of hinge residues to occur in the five conservation bins. This set was found to be too small, however, and statistical significance was too low to draw a conclusion (data not shown). A study using the set of fragment hinge motions described earlier was similarly inconclusive.

The hypermutability of hinge residues that we found is reasonable because hinge residues tend to be on the surface of proteins (see below) rather than in the more highly conserved core. Hinges are less likely to be buried inside domains because they would then be highly coordinated with near neighbors and hence less flexible. The apparent contradiction of hypermutability on the one hand and enrichment of active sites on the other is dealt with in the Discussion section. To support our argument that hinge residues are hypermutable partly because they occur on the surface, we quantified the degree to which the latter is the case.
To do this, we used a solvent accessible surface area (ASA) calculation program [53].

Perhaps the simplest hinge consists of a single point on the chain separating two rigid regions. However it is also possible for the chain to pass multiple times through the same region, or to have multiple independent hinge regions. This leads to the question, how many proteins had single hinge points, versus a larger number of hinge points? We answer this question in Table.

We also asked whether local sequence alone predicts hinges, using a method similar to GOR [31]. Once the table of scores was generated, it was used on the test set. The score for a given residue was taken to be the sum of the scores for the residues in positions -8 to +8 from that residue. The scores were computed for all residues in the test set, except those less than eight residues from either end of the chain. The idea is that a threshold score can be chosen and residues scoring higher than this threshold are considered more likely to be hinges. Note that where Robson and Suzuki used a different fitting parameter for each type of secondary structure, we used no fitting parameter, since we were interested in only one "secondary structure": the hinges. The rates of true and false positives and negatives were calculated for each choice of score threshold over a range. Our training set numbered 136 proteins from the computer annotated set. We tested the method on a test set of 137 proteins from the same set and obtained a ROC curve (Figure).

We also combined the individual propensities into a single hinge score HS(i), the sum of three terms: HIamino·acid(i) is assigned according to residue type by looking up the corresponding value in Table; HIsecondary·structure(i) is obtained according to secondary structure type from Table; and HIactive·site(i) is set to 0.4 for residues four or fewer amino acid positions away from the nearest active site residue, and 0.0 elsewhere. The highest values of HS(i) correspond to residues most likely to occur in hinges. Clearly, extending this method is only a matter of obtaining amino acid propensities to occur in hinges according to additional classifications.
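The combined score (called HingeSeq below) can be sketched as follows. The propensity values in the two lookup tables are illustrative placeholders, not the paper's published numbers; only the 0.4 active site term and the unweighted sum follow the text.

```python
# Assumed illustrative propensity values; the real numbers would come
# from the paper's tables of HI scores.
HI_AMINO = {"GLY": 0.6, "SER": 0.4, "PHE": -0.5}
HI_SECONDARY = {"helix": -0.4, "coil": 0.3, "turn": 0.5}

def hinge_score(res_type, ss_type, dist_to_active_site):
    # HS(i): unweighted sum of the three HI terms described in the text
    hi_active = 0.4 if dist_to_active_site <= 4 else 0.0
    return (HI_AMINO.get(res_type, 0.0)
            + HI_SECONDARY.get(ss_type, 0.0)
            + hi_active)
```

Because each term is itself a log-propensity-style index, no adjustable weighting factors are needed when summing them.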
The resulting index can then simply be included as an additional term in the above formula, with no need for adjustable weighting factors.

We evaluated the statistical significance of this measure much as for the individual sequence features. We counted the number of residues in the Hinge Atlas with a HingeSeq score above 0.5, and within that set the number of hinge residues. We compared this to the total number of hinges and the population size of the Hinge Atlas (Table). Using the hypergeometric distribution we obtained a p-value on the order of 10^-12; thus the measure shows high statistical significance. However since only about 5% of the residues scoring over 0.5 were annotated hinges, HingeSeq is not likely to be sensitive enough to be used alone for hinge prediction.

We nonetheless wished to show that HingeSeq is predictive, rather than simply reflecting peculiarities of the dataset. To this end, we divided the 214 proteins of the Hinge Atlas into a training set numbering 161 proteins, and a test set numbering 53. Of the 214 Hinge Atlas proteins, the 94 proteins with annotation from the CSA were apportioned such that 71 were included in the training set and 23 in the test set. We tested the performance of the predictor by means of ROC (Receiver Operating Characteristic) curves. We need to define a few terms in order to use these:

Test positives: Residues with HS(i) greater than or equal to a certain threshold.
Test negatives: Residues with HS(i) less than a certain threshold.
Gold standard positives: Residues annotated as hinges in the Hinge Atlas.
Gold standard negatives: Residues which are not in hinges according to the Hinge Atlas annotation.
True positives (TP): Those residues that are both test positives and gold standard positives.
True negatives (TN): Residues that are both test negatives and gold standard negatives.
False positives (FP): Residues that are test positives and gold standard negatives.
False negatives (FN): Residues that are test negatives and gold standard positives.
The ROC curve is simply a plot of the true positive rate (same as sensitivity) vs. false positive rate (1-specificity), for each value of the threshold, as the threshold is varied from +1 to -1, a range which included all possible values of HS(i). For a good predictor, the true positive rate will increase faster than the false positive rate as the threshold is lowered, and the area under the curve will be significantly greater than 0.5. The ROC curve is shown in Figure.

These findings assume that the dataset used does not contain significant bias or artifacts, either in the composition of the entire dataset or of the hinges within it. To substantiate this, we performed various studies as follows.

In order to find out whether the MolMovDB database contained any bias in amino acid composition, we extracted the sequences of all the morphs in MolMovDB and counted the total occurrence of each residue type. Suspecting that redundancies might bias the result, we clustered the sequences and recounted the amino acid residues in the same way. We compared these numbers to publicly available amino acid frequencies of occurrence for the PDB (Protein Data Bank) (Figure).

We also sought to determine whether there existed a bias towards particular protein classes, in either the Hinge Atlas or the nonredundant set of MolMovDB morphs from which it was compiled. To do this, we first counted the number of times each top-level Gene Ontology (GO) term under the "molecular function" ontology was associated with a protein in the Hinge Atlas. Where the annotation was given for deeper levels, we traced up the hierarchical tree to retrieve the corresponding top level term in the ontology. Thus we found, for example, that 14 proteins in the Hinge Atlas were associated with the term "nucleic acid binding." We repeated this procedure for the PDB as a whole as well as for the non-redundant set of 1508 morphs in MolMovDB from which the Hinge Atlas was compiled.
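The ROC construction described above can be sketched generically; the function name and data layout are ours, but the computation (TPR vs. FPR as the threshold sweeps) is the standard one the text describes.

```python
def roc_points(scores, labels, thresholds):
    # One (FPR, TPR) point per threshold; labels are True for annotated hinges.
    P = sum(1 for l in labels if l)       # gold standard positives
    N = len(labels) - P                   # gold standard negatives
    pts = []
    for t in thresholds:
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l)
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and not l)
        pts.append((fp / N, tp / P))
    return pts
```

Sweeping the threshold from +1 down to -1 traces the curve from (0, 0) toward (1, 1); the area under it summarizes predictive power.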
The results for the 10 most frequently encountered GO terms are shown in Table. To compare the Hinge Atlas counts to the PDB counts in an overall fashion, we used the chi-square distribution with 162 degrees of freedom (from 163 GO terms and 2 datasets) and obtained a chi-square value of 121.1. This corresponds to a p-value of 0.9931, so there is no statistically significant difference in the distribution of these terms in the Hinge Atlas vs. the entire Protein Data Bank.

The Hinge Atlas and computer annotated sets were compiled differently, therefore one might suspect that the hinges from one set might comprise a statistically different population from the hinges of the other set. If this were the case, then one of the two sets would be preferable to the other; otherwise, if the populations were essentially the same, then the two sets could potentially be used interchangeably. It is therefore necessary to quantitatively compare these two populations. It is also necessary to confirm that within one set, the hinge residues are a statistically distinct population from the rest of the set; if this were not true then the amino acid propensity data reported earlier would not be meaningful.

Only three morph pairs were comprised of two proteins from different kingdoms. Thus the conformational changes are likely to reflect experimentally observable motions rather than evolutionary effects.

As a further test of confidence in the Hinge Atlas, we decided to look for sampling artifacts in the hinge set. Resampling, or bootstrapping, estimates the variability of the frequency of occurrence of amino acid types. The method consists of drawing random samples and computing the frequency of occurrence of a given amino acid type in that sample. We present the results for glycine, the residue type most overrepresented in hinges. We randomly chose 1/8 of the 214 proteins in the Hinge Atlas and within that sample counted the following:
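The chi-square comparison of GO term counts can be sketched as a standard test of homogeneity over two count vectors. This is a generic illustration, not the paper's code; the statistic would then be compared against the chi-square distribution with the stated degrees of freedom.

```python
def chi_square_stat(counts_a, counts_b):
    # Chi-square statistic for homogeneity of two count vectors
    # (e.g. GO term counts in the Hinge Atlas vs. the PDB) over the
    # same list of categories.
    total_a, total_b = sum(counts_a), sum(counts_b)
    grand = total_a + total_b
    stat = 0.0
    for a, b in zip(counts_a, counts_b):
        col = a + b
        ea = col * total_a / grand  # expected count in dataset A
        eb = col * total_b / grand  # expected count in dataset B
        stat += (a - ea) ** 2 / ea + (b - eb) ** 2 / eb
    return stat
```

Two datasets with proportionally identical category distributions give a statistic of zero; the more they diverge, the larger the statistic.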
The sample was labelled with an index j. Within sample j we counted:

the number of glycine residues in hinges in sample j,
the number of glycines in NON-hinge residues in sample j,
the sample frequency of occurrence of glycines within hinges in sample j, and
the sample frequency of occurrence of glycines among NON-hinge residues in sample j.

We repeated the above for j = 1 to 10000, randomizing the sample each time. For glycine, we generated bins 0.02 wide and counted the number of times values of the two sample frequencies occurred in each interval. The results for glycine are shown in Figure.

Since the distribution is approximately Gaussian, the standard deviation of the difference between means should be obtainable by summing the standard deviations of the two sample frequencies in quadrature. The z-score is then obtained by dividing the difference between the average sample frequencies by the thus-obtained standard deviation. From the cumulative Gaussian distribution, this z-score corresponds to the probability of obtaining the observed difference by chance.

Correlations were found between hinges and several sequence features. We found that some amino acid types are overrepresented in hinges, and much of this can be explained on the basis of physicochemical properties. Small residues appear to be preferred, especially the "tiny" Ser, Gly, and Ala. Aliphatic and hydrophobic residues tend not to be in hinges. We found that residues within four amino acid positions of an active site are significantly more likely to be hinges. This is most likely related to the fact that hinge bending motion is often related to the catalytic mechanism of the enzyme. Active site residues most logically occur inside the binding cleft and therefore are likely to be in the hinge or close by. Some of these results are intuitive, but are nonetheless useful in buttressing the less expected results. Further, even the intuitive results have in many cases never been rigorously tested or put on a quantitative footing.

Surprisingly, hypermutable residues are more likely than conserved residues to occur in hinges.
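The bootstrap z-score described above can be sketched as follows. Note one simplification: for brevity this sketch resamples residues directly, whereas the paper resamples 1/8 of the proteins; the quadrature combination of the two sample standard deviations follows the text.

```python
import random
import statistics

def bootstrap_freq_z(hinge_res, nonhinge_res, aa="GLY",
                     n_samples=10000, frac=0.125):
    # Draw repeated random samples (frac ~ 1/8, as in the text) and compare
    # the frequency of residue type `aa` in hinge vs. non-hinge residues.
    fh, fn = [], []
    for _ in range(n_samples):
        hs = random.sample(hinge_res, max(1, int(len(hinge_res) * frac)))
        ns = random.sample(nonhinge_res, max(1, int(len(nonhinge_res) * frac)))
        fh.append(hs.count(aa) / len(hs))
        fn.append(ns.count(aa) / len(ns))
    # standard deviations summed in quadrature, per the text
    sd = (statistics.stdev(fh) ** 2 + statistics.stdev(fn) ** 2) ** 0.5
    return (statistics.mean(fh) - statistics.mean(fn)) / sd
```

A large positive z-score means glycine is more frequent in hinges than elsewhere by an amount unlikely to arise from sampling noise.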
This was found to be true not only for the Hinge Atlas set of 214 proteins (which includes proteins with no annotated active sites), but also for the subset of 94 enzymes with CSA annotation (Figure). This may explain the success of structure-based hinge predictors which analyze the interactions within the domains and between the domains and the solvent, but which pay no particular attention to the hinge region itself, or which implicitly or explicitly find highly connected regions.

This raises the question, why would residues that are functionally important not be conserved? The answer may be that it is the intricate network of interactions within the hydrophobic core of rigid regions on either side of the hinge that needs to be conserved, and not the hinge itself. Conserved residues stabilize proteins, for instance by periodically bridging consecutive turns of α-helices or by interacting across the contact interface between two such helices.

One might also ask, is it possible that co-evolution occurs in hinge residues even in the absence of independent (single-site) conservation? Repeatedly, investigators have found that co-evolving residue pairs tend to be proximal in space and to stabilize such structural elements.

Sequence in the immediate neighborhood of a hinge was not found to be sufficient for substantive hinge prediction by a GOR-like method, although the latter is successful at predicting secondary structure. Similarly, no particular sequential pairs of amino acid types were found to be overrepresented in hinges. However, we did find that combining amino acid propensity data with hinge propensities of active sites and secondary structure yielded some predictive information. The prediction method we present can easily be extended as additional hinge propensity data is reported. Indeed the publicly available Hinge Atlas can be used not only to obtain such data but also to test the resulting predictors. As an additional application, the Hinge Atlas can potentially be used to help find hinges by homology.
We note, for instance, that a hinge occurring in the helix connecting the two EF hands of calmodulin has also been found in the evolutionarily related Troponin C.

We found that the amino acids glycine and serine are more likely to occur in hinges, whereas phenylalanine, alanine, valine, and leucine are less likely to occur. No evidence was found for sequence bias in hinges by a GOR-like method, nor for propensity towards sequential pairs of residues. Hinge residues tend to be small, but not hydrophobic or aliphatic. They are found less often in α-helices, and more often in turns or random coils. Active site residues were found to coincide significantly with hinges. Interestingly, however, the latter were not conserved. Lastly, hinges are also more likely to occur on the protein surface than in the core.

A consistent picture of hinge residues is suggested. In this view, hinges often occur near the active site, probably to participate in the bending motion needed for catalysis. They avoid regions of secondary structure. They are hypermutable, possibly due to the fact that they occur more often on the surface than in the core. These correlations yield insights into protein flexibility and the structure-function relationship. Strong sequence-based hinge prediction, however, remains a goal for future work.

SF annotated hinge locations, performed the statistical studies and wrote the manuscript, web tools, and most of the algorithms. LL computed the evolutionary conservation of hinge and active site residues. NC ran FlexProt and generated graphics for all morphs in MolMovDB, and in other ways provided high performance supercomputing support to the hinge prediction project. MG supervised the project and edited the paper. All authors read and approved the final manuscript.

Protein motions play an essential role in catalysis and protein-ligand interactions, but are difficult to observe directly. A substantial fraction of protein motions involve hinge bending.
For these proteins, the accurate identification of flexible hinges connecting rigid domains would provide significant insight into motion. Programs such as GNM and FIRST have made global flexibility predictions available at low computational cost, but are not designed specifically for finding hinge points.

Here we present the novel FlexOracle hinge prediction approach, based on the ideas that energetic interactions are stronger within structural domains than between them, and that fragments generated by cleaving the protein at the hinge site are independently stable. We implement this as a tool within the Database of Macromolecular Motions, MolMovDB.org. For a given structure, we generate pairs of fragments based on scanning all possible cleavage points on the protein chain, compute the energy of the fragments compared with the undivided protein, and predict hinges where this quantity is minimal. We present three specific implementations of this approach. In the first, we consider only pairs of fragments generated by cutting at a single location on the protein chain and then use a standard molecular mechanics force field to calculate the enthalpies of the two fragments. In the second, we generate fragments in the same way but instead compute their free energies using a knowledge based force field. In the third, we generate fragment pairs by cutting at two points on the protein chain and then calculate their free energies.

Quantitative results demonstrate our method's ability to predict known hinges from the Database of Macromolecular Motions.

Proteins fold reliably into conformations essential for their function. The coordinates reported as representing a protein structure, however, are in fact averages over an ensemble at low temperature, at least when solved by X-ray crystallography. Specific motions are thermodynamically permitted about this equilibrium position and often play an important role in enzyme catalysis and protein-ligand interactions.
The motions can be classified according to the size of the mobile units, which may be fragments, domains or subunits [2].

The mechanism of motion is difficult to observe directly. NMR studies can yield root mean square fluctuations and order parameters. Computational simulations have been used for several decades to predict protein dynamics. However expense generally prohibits the all-atoms modeling of large systems without substantial simplifications. Normal mode analyses have shown that two nontrivial modes are correlated with active site location, and argue that this is the hinge point. Similarly, Rader et al use the sign of the displacement, and also perform some physically motivated postprocessing of the results.

Much work has also been done to solve the related problem of finding domain boundaries, which can be flexible or inflexible; Nagarajan and Yona have addressed this problem. 45% of motions in a representative set from the Database of Macromolecular Motions have been found to move by a hinge bending mechanism [1-3].

Numerous valuable contributions have been made to the computational prediction of protein hinges. If the structure has been solved in two different conformations, then the hinge can be identified by visual inspection or by use of FlexProt or DynDom. The harder case is when only one conformation is known; in an early contribution, Janin and Wodak developed an approach to this problem.

Domains can move relative to each other only if the motion is permitted energetically. Thus if two domains have many interdomain interactions they are unlikely to separate.
Similarly, if a motion results in the exposure of large hydrophobic areas on the protein, then the energetic and entropic cost of solvation will make that motion less likely to occur.

For these reasons, we argue that if two or more domains are joined by a hinge, and if a peptide bond is broken on the protein, the energetic cost of separating and solvating the two resulting fragments will be lowest if that break is in a hinge. Conversely, if the break is inside a rigid domain, the energetic cost will be high. We will show how this idea leads to a hinge prediction method.

The idea of evaluating the cost of separating two fragments can be implemented using the minimization and single point energy evaluation features available in almost any molecular mechanics engine. This energy of separation is equivalent, up to an additive constant, to the difference in enthalpies between the two fragments generated by introducing a single cut on the protein chain on the one hand, and the original, undivided chain on the other hand. This energy evaluation can be carried out for every choice of cut location, and the resulting energy vs. cut location graph should have minima at locations that coincide with flexible hinges between domains. We will explain the methodology in detail.

We start with an energy minimization step, to relieve any close contacts or unnatural bond lengths or angles in the undivided chain which would bias the results. For this we use TINKER's minimize routine with the OPLS-All Atom force field and a continuum solvent model. We then cut the chain between residues i - 1 and i. This divides the protein into two fragments, numbered 1 and 2. Next we compute the single point energy of each fragment and of the undivided protein; this includes bonded and non-bonded interactions. In the energy evaluation step we again use the OPLS-All Atom force field, with the SASA implicit solvent model. Note that this step, and this step alone, will change in the second variant of FlexOracle. For each choice of cut location i, we compute fragment single point energies Efrag1(i) and Efrag2(i).
We argue that ΔE(i) = Efrag1(i) + Efrag2(i) - EC is related to the energy change associated with hinge motion about the selected hinge, as follows. The quantity ΔE(i) represents the intra-fragment energy gained or lost by breaking all of the interactions between fragment 1 and fragment 2, as might occur in an opening motion. It also includes the solvation energy which might be gained or lost. The quantity EC is a constant independent of the cut location and can be set to zero without consequence.

Even when the actual motion of the protein is not an opening one, the method should have predictive value, because for incorrect choices of the hinge location, i.e. cut locations that are actually inside one of the domains, many inter-fragment interactions would be broken. Also, significant hydrophobic areas would be exposed on the surfaces of fragments 1 and 2. In either case, ΔE(i) would be relatively high.

Clearly, we can repeat the procedure of cutting the protein before residue i and computing ΔE(i) for values of i that are scanned from 2 through N. We then plot ΔE(i) vs. i and expect that minima on this graph will correspond to hinge locations.

It is to be expected that there exists a "single-cut" error associated with the fact that we are cutting the backbone at only one location. In many proteins, the backbone crosses the hinge region two or more times. Thus the single-cut predictor gives significantly clearer results for single-stranded hinges (see Discussion of specific proteins) than for double, triple, etc. stranded hinges (e.g. GluR2). We will return to this point later.

Standard molecular mechanics force fields do not account for the backbone and side chain entropy, which is not needed to calculate dynamics. For our purposes entropy is important, since it is possible that changes in freedom of motion influence conformational change.
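The single-cut scan can be sketched as follows. The energy functions here are caller-supplied placeholders standing in for the molecular mechanics single point energy calls (e.g. TINKER with OPLS-AA); only the ΔE(i) bookkeeping is shown.

```python
def single_cut_scan(chain_energy, fragment_energy, n_residues):
    # dE(i) = Efrag1(i) + Efrag2(i) - EC for each cut point i = 2..N,
    # where EC is the (constant) energy of the undivided chain.
    # `fragment_energy(first, last)` is a stand-in for a single point
    # energy evaluation of the fragment spanning those residues.
    ec = chain_energy()
    return {i: fragment_energy(1, i - 1) + fragment_energy(i, n_residues) - ec
            for i in range(2, n_residues + 1)}

def predicted_hinge(profile):
    # the minimum of the dE(i) profile is the predicted hinge location
    return min(profile, key=profile.get)
```

In practice one would look at all local minima of the profile, not just the global one, as discussed below for the two-cut predictor.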
Therefore we sought to improve the method by using the FoldX [33] force field. In the FoldX version of the single-cut predictor, the energy minimization step described above (for the TINKER version) was still carried out using the OPLS-All Atom force field, but in the energy evaluation step, also described above, calculation of the fragment energy was now carried out using the FoldX force field. All other steps were carried out exactly as for the TINKER version.

Although accounting for the entropy was an important improvement, the method described above is still implicitly geared towards the detection of single-stranded hinges since it cuts the chain at a single location. One obvious way to deal with double stranded hinges is to make not one but two cuts in the backbone, at residues i and j. To do this the single index i was replaced with the indices i and j. These define two fragments consisting of the following residues:

Fragment 1: residues 1 to (i - 1) and j to N
Fragment 2: residues i to (j - 1)

We initially tried using CHARMm with the Born Solvation Model to compute the enthalpies of the fragments, but the computational expense was prohibitively high and the accuracy relatively low. We found that if instead we computed the free energy using FoldX, the predictor became accurate and the expense reasonable.

In order to find the choice of i and j corresponding to the hinge location one should ideally generate two fragments for every possible choice of i, j, but in practice we found that restricting i and j to multiples of four was sufficient to locate the hinge in most cases, and the resulting 16-fold reduction in computational expense brought the method into the realm of practical calculation on a single processor. Additional savings were obtained by restricting the range of i, j to no fewer than 5 residues from either terminus and requiring that i ≤ (j - 8), although numbers greater than 8 could potentially be used for even greater savings.
To put this more concisely, the calculation scheme looks like this:

for (i = 8 to N - 5 - 8 step 4)
   for (j = i + 8 to N - 5 step 4)
      compute FoldX_energy (stability of fragment 1 + fragment 2)

The free energy of folding for each of the two fragments was computed separately by means of a 'Stability' run in FoldX 2.5.2. FoldX_energy is the sum of the two energies. Once FoldX_energy was calculated for all such pairs of fragments it was plotted, with energies coded with blue = lowest energy and yellow = highest, as shown in the figures. Several cases arose:

1. The i, j indices of a minimum were near the diagonal, meaning the corresponding fragment 2 was small. Such minima were discarded since the diagonal energies are generally small and we are not interested in small fragment motions.
2. Both i and j were near the termini. These minima were also discarded, because the termini are usually flexible but we are not studying those motions.
3. Of the minima that did not fall in cases 1 or 2, the lowest minimum sometimes had one of its two indices near a terminus, but the other substantially far from either terminus. In this case the former index was discarded for the reasons cited in (2) but the latter index tended to coincide with a single-stranded hinge.
4. Of the minima that did not fall in cases 1, 2, or 3, the lowest very often indicated the location of a double stranded hinge.
5. Lastly, on occasion the minimum reported following cases (3) or (4) did not correspond to the known hinge location, but one of the higher minima not eliminated per cases 1 and 2, did.

To identify and deal with the various cases, some clustering and postprocessing were needed, as follows. As a preliminary step, we flagged all choices of i, j that resulted in

FoldX_energy < min(FoldX_energy) + (max(FoldX_energy) - min(FoldX_energy))·0.1

If this resulted in fewer than 30 fragment pairs, we instead flagged the 15% of pairs with lowest energy.
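The grid scan and the low-energy flagging step can be sketched as follows. The `stability` callable is a hypothetical stand-in for the sum of the two FoldX 'Stability' energies; the grid bounds, 4-residue step, and flagging thresholds follow the text.

```python
def two_cut_scan(stability, n_residues, margin=5, gap=8, step=4):
    # Grid scan over cut pairs (i, j): fragment 1 = residues 1..i-1 plus j..N,
    # fragment 2 = residues i..j-1.
    energies = {}
    for i in range(gap, n_residues - margin - gap + 1, step):
        for j in range(i + gap, n_residues - margin + 1, step):
            energies[(i, j)] = stability(i, j)
    return energies

def flag_low_energy(energies, frac=0.1, min_flagged=30):
    # Flag pairs within 10% of the energy range above the minimum;
    # if fewer than 30 qualify, flag the lowest-energy 15% instead.
    lo, hi = min(energies.values()), max(energies.values())
    flagged = [p for p, e in energies.items() if e < lo + (hi - lo) * frac]
    if len(flagged) < min_flagged:
        ranked = sorted(energies, key=energies.get)
        flagged = ranked[:max(1, int(len(ranked) * 0.15))]
    return flagged
```

The flagged pairs are then clustered to separate local minima, as described next.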
All the remaining (unflagged) elements were not considered to be candidates for the hinge location.

The next step was to identify and separate the local minima, for which we employed the k-means clustering algorithm. Centroids were initially generated in a regular grid spaced 50 residues apart, starting at i, j = 25, 25. The pairs flagged in the culling step were each assigned to the nearest centroid. The location of each centroid was then recomputed for each resulting cluster, and the pairs were once again reassigned to the nearest recomputed centroid. This process was repeated until all centroids stopped moving. The lowest-energy element of each cluster was taken as the local minimum corresponding to that cluster.

The minima found in the preceding step were recorded in order of energy, with the lowest corresponding to the global minimum. Any minima such that i ≥ (j - 24) were discarded since they border the diagonal, per case (1) above. If for any minimum both i and j were within 20 residues of the termini, that minimum was also discarded, per case (2). For the lowest remaining minimum, if only one of the two indices was within 20 residues of a terminus, then the protein was identified as having a single-stranded hinge, per case (3). The index near the terminus was discarded and the remaining index was taken to be the location of the single-stranded hinge. Otherwise, both indices were taken together to indicate the location of a double stranded hinge, per case (4). Since the calculation was done only for every fourth residue, the hinge prediction was reported as a range:

Hinge 1: residues i - 2 to i + 1
Hinge 2: residues j - 2 to j + 1
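The k-means step above can be sketched as follows. This is a generic k-means over (i, j) pairs with the grid seeding the text describes; the function name is ours, and tie-breaking details are simplified.

```python
def kmeans_pairs(points, spacing=50, start=25):
    # Cluster flagged (i, j) pairs; centroids seeded on a regular grid
    # spaced `spacing` residues apart starting at (start, start).
    lim = max(max(max(p) for p in points) + 1, start + 1)
    centroids = [(float(x), float(y))
                 for x in range(start, lim, spacing)
                 for y in range(start, lim, spacing)]
    while True:
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids,
                          key=lambda c: (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2)
            clusters[nearest].append(p)
        # recompute centroids; drop any that attracted no points
        new = [(sum(q[0] for q in m) / len(m), sum(q[1] for q in m) / len(m))
               for m in clusters.values() if m]
        if sorted(new) == sorted(centroids):
            return [m for m in clusters.values() if m]
        centroids = new
```

Each returned cluster would then be represented by its lowest-energy member, per the text.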
Case (5) occurred somewhat less frequently, and so although our program outputs the remaining local minima, these are much less accurate than the primary hinge prediction and were not used in the statistical evaluation. We do, however, discuss these secondary predictions in the Discussion of specific proteins section.

We tested our method against 20 pairs of protein structures in the Hinge Atlas Gold (HAG), a dataset of manually annotated hinges publicly available on our Database of Macromolecular Motions [34-36].

The HAG provides a carefully curated collection of 20 homologous pairs of single-chain protein structures satisfying the following criteria:
1. The structure is soluble and independently stable, rather than relying on other chains or molecules to maintain its conformation.
2. The structural coordinates were obtained by x-ray crystallography, with the exception of calcium-free calmodulin.
3. At least two sets of atomic coordinates are available, and together they represent a domain motion that is biologically relevant or thermodynamically feasible.
4. The motion involves two or more rigid domains moving about a flexible hinge.

Each of these pairs of protein structures, also known as morphs, has an annotated hinge location. This location was chosen prior to running any hinge prediction codes, by visual inspection of the corresponding morph movie. We have found manual annotation to be more reliable than the use of automated methods such as FlexProt, DynDom, or Hingefind, which depend on user-adjustable parameters and sometimes incorrectly assign the hinge location. The process of inspection and annotation was aided by the "Hinge Annotation Tool" available on the morph page for each morph in MolMovDB. It consists of a set of arrow buttons which adjust the position of a window of residues, which are highlighted as the protein moves. This tool can also take annotations from the public for various uses.
The result of the annotation effort is a set of hinge residues for structural pairs against which FlexOracle and other hinge predictors can be tested.

One must bear in mind that the hinge annotation is not encyclopedic. It is based on the comparison of two sets of structural coordinates, but other motions not reflected by this measure may be thermodynamically feasible. In some cases FlexOracle predicted hinges not annotated in HAG but for which we later found experimental evidence in the published literature. Since the point of the HAG is to be objective rather than comprehensive, in these cases we did not change the annotation or our scoring of the predictor results. Some of these discrepancies are discussed in the Discussion of specific proteins section. First, however, we evaluate the performance of FlexOracle on the HAG as a whole.

As mentioned in the Methods section, FlexOracle assumes hinges do not simply correspond to points of globally lowest energy, but rather to local minima identified and postprocessed in various ways. The set of residues reported as predicted hinge locations by any of the three versions of FlexOracle are referred to as test positives, and the number of residues in this set we will call M. The residues annotated as hinges in the HAG are referred to as gold standard positives, and the number of these we will call H. In this section we compare the test positives to the gold standard positives to objectively evaluate the predictor.
Before we do so, however, we need to define a few more standard statistical terms as they relate to the current context:
Gold standard negatives: The residues in the HAG that are NOT annotated as hinges.
TP (true positives): The number of residues that were both test positives and gold standard positives.
FP (false positives): The number of residues which were test positives and gold standard negatives.
TN (true negatives): The number of residues which were test negatives and gold standard negatives.
FN (false negatives): The number of residues which were test negatives and gold standard positives.
Population: All of the residues in the HAG. We will call the number of these residues D.
Sensitivity (true positive rate) = TP/(TP + FN) = TP/H. This is the ratio of true positives to gold standard positives.
Specificity (true negative rate) = TN/(TN + FP) = TN/(D - H). This is the ratio of true negatives to gold standard negatives.
Null hypothesis: The statistical hypothesis that the set of test positives is not different from the population in a statistically significant fashion.
Alternate hypothesis: The hypothesis that the set of test positives is different from the population in a statistically significant fashion.
p-value: The probability that a set of residues numbering as many as are in the test positive set, and selected randomly from the population, would contain TP or more gold standard positive residues. If the p-value is above 0.05 we conventionally accept the null hypothesis; otherwise we reject the null hypothesis in favor of the alternate hypothesis. Clearly, the smaller the p-value, the better the predictor.
The p-value is computed for all predictors in this study using the cumulative hypergeometric function, where the function gives the probability of finding x of the H gold standard positive residues in a set of M residues randomly chosen from a population numbering D. We will use the sensitivity, specificity, and p-value in our statistical evaluation.
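The definitions above can be made concrete in a short script. It follows the text's notation (TP, M, H, D), computing the upper tail of the hypergeometric distribution directly with the standard library; the example counts are hypothetical, not taken from the paper.

```python
from math import comb

def sensitivity(TP, H):
    # Ratio of true positives to gold standard positives: TP/H
    return TP / H

def specificity(TN, D, H):
    # Ratio of true negatives to gold standard negatives: TN/(D - H)
    return TN / (D - H)

def hinge_pvalue(TP, M, H, D):
    """Probability that M residues drawn at random from the D-residue
    population contain TP or more of the H gold standard positives
    (upper tail of the hypergeometric distribution)."""
    total = comb(D, M)
    return sum(comb(H, x) * comb(D - H, M - x)
               for x in range(TP, min(M, H) + 1)) / total

# Hypothetical example: a 500-residue population with 20 annotated hinge
# residues, and a predictor reporting 30 test positives, 8 of them true.
p = hinge_pvalue(TP=8, M=30, H=20, D=500)  # far below 0.05: reject the null
```

With an expected overlap of only 30 × 20 / 500 = 1.2 residues by chance, finding 8 true positives yields a vanishingly small p-value, illustrating why the p-value "compares directly to random picking."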
The p-value is a particularly useful quantity, since it compares directly to random picking. The three quantities will be used to evaluate the three versions of FlexOracle and to compare them to GNM. For FlexOracle, we take as test positives those residues identified as local minima according to the algorithm described in the Methods section, then tabulate the various statistical quantities per the above definitions. GNM requires a slightly different treatment. To evaluate this predictor, we compute the absolute value of the first normal mode displacements and normalize this quantity to range from 0 to 1. The nodes, or points of zero displacement, are taken to correspond to the hinge location. Therefore we take all residues with normalized displacement smaller than 0.02 to be test positives. We begin our statistical evaluation with the TINKER and FoldX versions of the single-cut predictor, whose behavior we also observed qualitatively in the figures. We call the exact-match definition of test positives the strict criterion and use it for our statistical benchmark. The results are shown in the table; the p-values are extremely small (on the order of 10^-66), indicating very high predictive power. The two-cut predictor was run on the 40 proteins in the HAG and the results were compared to the hinge annotation. Note that, as explained earlier, test positives are reported by the two-cut predictor in windows 4 residues wide due to the 4-residue grid spacing. We refer to this window width as the strict criterion. In practice, a prediction that is close enough to the correct hinge may for practical purposes be considered a true positive even if it does not coincide exactly. Therefore for a more operational benchmark we widened the definition of the test positives to include 5 residues to the left and right of the predicted hinge location, for a window width of 14 residues (the loose criterion). When a gold standard positive residue was found within the 14-residue window, this was considered a true positive. The test was considered a success for a given protein if there were no false positives or false negatives under this criterion.
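The GNM node-picking step described above reduces to a few lines. This is a sketch under the stated 0.02 cutoff, applied to a synthetic displacement vector rather than a real normal mode.

```python
def gnm_test_positives(mode, cutoff=0.02):
    """Return indices of predicted hinge residues: take the absolute
    first-mode displacement per residue, normalize to the range 0-1,
    and keep residues whose normalized displacement falls below the
    cutoff (the nodes of the mode)."""
    mags = [abs(x) for x in mode]
    peak = max(mags)
    norm = [m / peak for m in mags]
    return [i for i, m in enumerate(norm) if m < cutoff]

# Synthetic mode crossing zero between residues 2 and 3:
hinges = gnm_test_positives([-1.0, -0.5, -0.01, 0.005, 0.4, 1.0])  # [2, 3]
```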
The test was considered a partial success if there were one or more true positives but also one or more false positives and/or false negatives. Finally, the test was considered a failure if there were no true positives for that protein. The results are shown in the table; they establish the statistical significance of the test. Under this criterion there were 47 true positive hinge points. For these, the average distance between the center of the gold standard positive residues and the center of the test positive residues was 1.66 residues. For 29 out of the 47, the distance was 1 or 0 residues. Thus even under the loose criterion the predictions had a tendency to line up closely with the HAG hinges. This can be appreciated in the figure, where one can also observe that the predictor did not work well for the two pairs of proteins with triple-stranded hinges. One must keep in mind that, as we mentioned earlier, the HAG annotations reflect hinges chosen under a very specific crystallographic criterion and are not encyclopedic. Therefore for some of these "failures" it is possible that the prediction is correctly suggesting a motion which is thermodynamically permitted but is not reflected in the pairs of structures used to generate the hinge annotations. We will discuss this for specific cases in the following section. We chose six representative proteins from the 40 structures in the HAG for detailed discussion. These reflect some of the diversity of the set and illustrate the salient features of the algorithm. For each of these, we present structural images with the annotated hinges highlighted. We also present and discuss the results of running the three versions of FlexOracle on the structure. The FlexOracle results for all 40 HAG structures can be viewed online. The single-cut version of FlexOracle naturally works best on single-stranded hinges.
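The loose criterion used in the benchmark above can be expressed as a small scoring routine. The window arithmetic follows the text (a 4-residue predicted window padded by 5 residues on each side, for 14 residues total); the residue numbers below are hypothetical.

```python
def loose_criterion(pred_windows, gold, pad=5):
    """Classify each predicted hinge window. A window (start, end),
    inclusive, is padded by `pad` residues on each side; it counts as
    a hit (true positive) if any annotated hinge residue falls inside
    the padded window, and a miss otherwise."""
    hits, misses = [], []
    for start, end in pred_windows:
        if any(start - pad <= g <= end + pad for g in gold):
            hits.append((start, end))
        else:
            misses.append((start, end))
    return hits, misses

# Hypothetical annotation at residue 96 versus predictions at 100-103
# (close enough under the padding) and 200-203 (a miss):
hits, misses = loose_criterion([(100, 103), (200, 203)], {96})
```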
This condition is less common; in fact most proteins in the HAG have two strands in the hinge, and a couple even have three. We will show that the single-cut predictor nonetheless has predictive ability in these cases, although the two-cut predictor is much more accurate. The two-cut predictor, in contrast, is specifically designed to handle double-stranded hinges. It is also designed to respond to single-stranded hinges by discarding one cut of the pair, as described earlier. We did not attempt to extend the method to explicitly treat the case of triple-stranded hinges. Under either scheme, only one chain is analyzed at a time, in the absence of ligands, bound metals, or additional subunits of a complex. We show that the method is robust under removal of small ligands from co-crystallized coordinate sets. The method obtained mixed results with calmodulin (see discussion below), so we recommend only careful use with metal-bound proteins. Similarly, care should be taken with single subunits taken from complexes, since these have not been tested rigorously. Folate is a vitamin essential for cell growth and replication, in its sole function mediating the transfer of one-carbon units [40]. Ionotropic glutamate receptors (iGluRs) are responsible for fast synaptic transmission between mammalian nerve cells. iGluRs are a class of transmembrane proteins that form glutamate-gated ion channels, including the AMPA receptors GluR1-4. The transmembrane gate of iGluRs opens briefly in response to glutamate released by a presynaptic cell. The GluR2 ligand binding core has been crystallized in progressively more tightly closed conformations, in the order of ligand binding apo > DNQX > kainate > glutamate ≈ AMPA. This progression follows the binding affinity, except that AMPA binds with ~20-fold higher affinity than glutamate but produces the same effect on the conformation of the ligand binding core.
The degree of closure, in turn, appears to control receptor activation, as measured in terms of either peak current or steady-state current in the presence of the desensitization blocker cyclothiazide. Thus glutamate and AMPA are full agonists and produce the same maximal domain closure and consequent activation, whereas kainate is a partial agonist and results in lesser activation. The well-characterized, progressively stronger binding of the four ligands mentioned provides potentially fertile ground for motion prediction and ligand binding studies. Under the loose criterion, the two-cut predictor was successful in predicting the hinge. The LIR family is composed of eight human proteins sharing significant sequence identity with LIR-1. LIR proteins are believed to be inhibitory receptors, similar to killer inhibitory receptors (KIRs) on human NK cells. LIR and KIR proteins belong to the immunoglobulin superfamily (IgSF). The extracellular region of LIR-1 contains four IgSF domains. The structure examined here is a fragment containing domains D1 and D2. The single-cut predictor results are clearly successful, as shown in the figure. Protein kinases modify substrates by transferring a phosphate from a nucleotide to a free hydroxyl on a Ser, Thr or Tyr residue. The open conformation of cAPK appears to be stable in the apo form, as well as in complex with a peptide inhibitor. The closed form is stable in complex with peptide inhibitor and ATP. ATP precedes the peptide in an apparently preferred binding order. A cut at residue i, if i is a hinge residue, can be expected to require less energy without ligand than with. This argues that removing ligands from the structure should increase accuracy over the alternative. In fact the single-cut predictions are roughly as accurate for the closed conformer as for the open; the closed form is analyzed in the figure.
RBP belongs to a sizeable family of soluble gram-negative bacterial periplasmic binding proteins with diverse ligands and functions. They are abundant and bind their substrates with high affinity and specificity, and thus easily sequester nutrients appearing sporadically in the environment. Results for the apo form are shown in the figure. Both proteins bind Ca2+ at the C-terminal lobe, but only CaM binds Ca2+ at the N-terminal lobe. Correspondingly, the C-terminal lobes in the two proteins are structurally very similar to each other, while the N-terminal lobes are very different. CaM is a major calcium-binding protein, regulating enzymes in many tissues. At this time only the single-cut predictor is run automatically on all submissions, but users may contact the author to have the two-cut predictor run on any submitted protein. The user should bear in mind that results may be of limited accuracy for membrane proteins and proteins bound to complexes or large substrates. If metals strongly affect the stability and motion of the protein, as is the case for EF hands, this may also limit accuracy. Lastly, if the hinge seems sterically unreasonable the reader should consider the possibility that the hinge has three or more strands or that the motion is not hingelike. Users may submit PDB-formatted files through our Hinge Prediction page, linked to from the MolMovDB front page. The results of running FlexOracle and other hinge prediction algorithms on the HAG can be seen on our website. The ability of FlexOracle to predict the hinge location for domain hinge bending proteins was demonstrated. We found that FlexOracle gives similar results for apo and ligand-bound structures when the ligand is a small molecule or molecules. However, mixed results for the calcium-bound form of calmodulin suggest care should be exercised when applying the method to proteins with bound metals.
We further found that hinges often coincide with minima of the single-cut FlexOracle energy, but in the case of two-domain proteins comprised of one contiguous and one discontiguous domain, the hinge can occur instead near the boundary between a broad "mountain" of high energy (corresponding to the contiguous domain) and wide "shoulders" of low energy (corresponding to the discontiguous domain). Further, if the linker consists of closely spaced parallel strands, the hinge tends to occur a few residues into the "mountain" side of this boundary. Aside from the matter of bound metals, these issues are not a concern for the two-cut predictor, which is significantly more accurate than the single-cut predictor. The two-cut predictor works well for single- as well as double-stranded hinges, but not for triple-stranded hinges. The FlexOracle method directly addresses the problem of locating the primary hinge for hinge bending proteins."} {"text": "Recent analyses of human genome sequences have given rise to impressive advances in identifying non-synonymous single nucleotide polymorphisms (nsSNPs). We developed Bongo (Bonds ON Graph) to predict structural effects of nsSNPs. Bongo considers protein structures as residue-residue interaction networks and applies graph theoretical measures to identify the residues that are critical for maintaining structural stability, assessing the consequences of single point mutations on the interaction network. Our results show that Bongo is able to identify mutations that cause both local and global structural effects, with a remarkably low false positive rate. Application of the Bongo method to the prediction of 506 disease-associated nsSNPs resulted in a performance similar to that of PolyPhen and PANTHER. As the Bongo method is solely structure-based, our results indicate that the structural changes resulting from nsSNPs are closely associated with their pathological consequences.
By contrast, the annotation of nsSNPs and their links to diseases is progressing at a much slower pace. Many of the current approaches to analysing disease-associated nsSNPs use primarily sequence and evolutionary information, while structural information is relatively less exploited. In order to explore the potential of such information, we developed a structure-based approach, Bongo, which describes protein structures as interlinked amino acids and can identify conformational changes resulting from nsSNPs that are closely associated with pathological consequences. Bongo requires only structural information to analyze nsSNPs and thus is complementary to methods that use evolutionary information. Bongo helps us investigate the suggestion that most disease-causing mutations disturb structural features of proteins, thus affecting their stability. We anticipate that making Bongo available to the community will facilitate a better understanding of disease-associated nsSNPs and thus benefit personalised medicine in the future. Non-synonymous single nucleotide polymorphisms (nsSNPs) are single base differences between individual genomes that lead to amino acid changes in protein sequences. They may influence an individual's susceptibility to disease or response to drugs through their impacts on a protein's structure and hence cause functional changes. In this paper, we present a new methodology to estimate the impact of nsSNPs on disease susceptibility, made possible by characterising the protein structure and the change of structural stability due to nsSNPs, as implemented in our computer program Bongo. The introduction of large-scale genome sequencing technologies has dramatically increased the number of single nucleotide polymorphisms (SNPs) in public databases, such as the NCBI dbSNP database. Genetic variations, such as SNPs, are likely to contribute to susceptibility to complex diseases such as cancer.
Graph theory has found many applications in the study of protein structures during the past two decades. For example, Ahmed and Gohlke used graphs to identify rigid clusters for modelling macromolecular conformational changes. Bongo uses graphs to represent residue-residue interaction networks within proteins and to assign key residues that are important for maintaining the networks. The novelty lies in the application of a graph theory concept, vertex cover, by which key residues are identified for analyzing structural effects of single point mutations. Here we begin by describing the use of interaction graphs to represent protein structures. We then introduce the ‘key residues’ that Bongo uses to evaluate structural impacts of point mutations, and explain their roles in terms of stabilising protein structures. We further describe the algorithm of Bongo, in which the graph concept of vertex cover was adapted to identify key residues, and we calibrate Bongo over eight single point mutations that result in a range of different structural changes in the p53 core domain. We evaluate the false positive rate of Bongo for 113 mutations where wild-type and mutant-type crystal structures have been demonstrated to have negligible differences in backbone conformation. Finally, we evaluate the performance of Bongo by testing its ability to distinguish disease- and non-disease-associated nsSNPs in protein structures in the PDB (Protein Data Bank). A point mutation in a protein may often give rise only to a rearrangement of amino acid side chains near the mutation site, although sometimes a more substantial movement of the polypeptide backbone, locally or globally, results. The former changes can be analysed by looking at the inter-residue interactions that a mutation creates or abolishes between its neighbouring residues.
However, the same approach may not be applicable to the latter, since simply paying attention to interactions immediately around a mutation site is not sufficient to predict structural effects on a larger scale. Bongo provides an alternative approach by operating on interaction graphs, which are computationally more convenient. In our model, residue-residue interactions occur either through direct connection or through indirect links that involve intermediate residues. Such connectivity is based on ‘key residues’ that are important in maintaining the overall topology of the network, and thus the stability of the folded structure. These key residues eventually serve as reference points to evaluate whether a mutation can induce structural changes in a protein away from the mutation site. In order to understand structural changes at a longer distance, we represent a protein as a residue-residue interaction graph, in which vertices represent residues and edges represent interactions between residues. Bongo measures the impact of a mutation according to its effects on key residues; it formulates the structural changes in a protein as changes of the key residues in a corresponding interaction graph. Here we adapt a variant of the vertex cover, defined in graph theory as a minimum set of vertices (residues) that are crucial to forming all the edges (interactions), to represent the key residues; we discuss the exact criteria under which a mutation is deemed damaging below. Bongo derives the interaction graph of a protein by considering each residue as a vertex and each residue-residue interaction, including hydrogen bonds, π–π, π–cation, and hydrophobic interactions, as an edge. The weight on each edge differs according to the total number of cross-secondary structure interactions as well as the number of interactions with individual residues.
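A minimal sketch of this edge weighting: each interaction starts at weight 1 and is then divided by the number of interactions between the same pair of secondary-structure elements. The Voronoi-area normalisation for hydrophobic contacts is omitted, and the residue and element names are hypothetical.

```python
from collections import defaultdict

def interaction_graph(interactions, ss_of):
    """Build Bongo-style edge weights: every interaction gets weight 1,
    then each edge is normalised by the total number of interactions
    between the same pair of secondary-structure elements (intra-element
    interactions are normalised the same way)."""
    pair_count = defaultdict(int)
    for u, v in interactions:
        pair_count[frozenset((ss_of[u], ss_of[v]))] += 1
    return {frozenset((u, v)): 1.0 / pair_count[frozenset((ss_of[u], ss_of[v]))]
            for u, v in interactions}

# Two H-bonds bridging helix H1 and strand S1, one contact inside H1:
ss = {"A10": "H1", "A14": "H1", "B3": "S1", "B5": "S1"}
w = interaction_graph([("A10", "B3"), ("A14", "B5"), ("A10", "A14")], ss)
```

The two cross-element edges each end up with weight 0.5, while the lone intra-helix edge keeps weight 1.0, so no single pair of secondary structures dominates the graph.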
The weighting scheme was calibrated against eight disease-associated mutations in the p53 core domain analysed by Fersht and co-workers. Bongo defines the key residues as the minimum weighted vertex cover. To assess whether the key residues are structurally meaningful, we examined predicted stability changes (ΔΔG, computed with a method that has accuracy around 80% for predicting stability changes resulting from mutations when the three-dimensional protein structure is known) of key residues identified from the p53 core domain (PDB: 1TSR). We consider only mutations that cause |ΔΔG| < 3 kcal/mol, since they affect the stability without totally abolishing the overall structure of the protein, and the median |ΔΔG| is used to calculate the correlation with the priority of key residues in order to avoid data skewness. When we considered the top half of the key residues ranked by their priorities, ΔΔG relates to the priority of key residues with a Pearson correlation r = 0.61 and a significantly small p-value of less than 0.001. By contrast, the correlation is much weaker (r = −0.04) between assumptive priority and ΔΔG of non-key residues, and also weakens when the lower half of key residues, ranked by their priorities, is included. This is likely due to uncertainties in the definitions of key residues that are ranked with lower priorities: since Bongo stops selecting key residues only when no edges are left in a graph, the key residues that have lower priorities may not have structural meaning but are simply chosen in order to complete the selection process. In an attempt to exclude the uncertain key residues, we analysed how far the correlation remains valid by gradually including key residues that have priorities in the lower half, in order of decreasing priority. There is an acceptable correlation of r = 0.52 when we consider up to three fourths of overall key residues, which suggests that the bottom one quarter of key residues are not reliable indicators of structural effects.
Thus Bongo does not consider the bottom quarter of key residues, so that their uncertainty does not affect the prediction results. The impact value I of a mutation is calculated according to the key residues affected by the mutation, i.e., I = (ΣjKj)/N, where I is the total impact value, Kj is the priority of each key residue that is in Kwt but not in Kmt, and N is the total number of key residues in Kwt, which normalises for protein size. Since the structures of the mutant proteins are not often available for nsSNPs, Bongo's treatment of mutant structures is illustrated in the figure. Thus each mutation is systematically quantified by its impact value I, and Bongo considers mutations with I > 1 to cause structural effects, which is the criterion calibrated over mutations in the p53 core domain. In the previous sections, we have shown that Bongo uses structure as input. To compare with sequence-based scoring, we calculated sensitivity and specificity for each predictor. The results show that PANTHER has PPV and NPV values comparable to those of PolyPhen and Bongo. The sensitivity of Bongo is low compared to that of PolyPhen (50.7%) and PANTHER (76.6%), and its specificity (82.4%) is high compared to that of PolyPhen (65.8%) and PANTHER (31.8%). This suggests that, although Bongo has a similar predictive value to that of PolyPhen and PANTHER, Bongo's high specificity and low sensitivity yield many fewer false positive predictions. We can thus be more confident about the cases that are predicted as disease-associated by Bongo than about those predicted by PolyPhen. Regarding the low sensitivity of Bongo, we suppose this is due to the fact that Bongo is not able to predict mutations that only affect the function of proteins, e.g., mutations in active or other interaction sites.
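The impact formula above reads directly as code. This is a sketch of the stated formula only (sum of priorities of wild-type key residues lost in the mutant, normalised by the number of wild-type key residues); the residue names and priority values are invented for illustration.

```python
def impact_value(wt_keys, mt_keys):
    """I = (sum of priorities K_j of key residues in K_wt but not in
    K_mt) / N, where N is the number of key residues in K_wt.
    `wt_keys` maps residue -> priority; `mt_keys` is the mutant set."""
    lost = [p for res, p in wt_keys.items() if res not in mt_keys]
    return sum(lost) / len(wt_keys)

# Hypothetical example: the mutation destroys two of three key residues,
# but only the two with the lowest priorities.
wt = {"R175": 2.0, "G245": 0.5, "R282": 0.5}
I = impact_value(wt, {"R175"})   # (0.5 + 0.5) / 3
damaging = I > 1                 # Bongo's calibrated criterion
```

Here I ≈ 0.33, below the calibrated I > 1 threshold, so this hypothetical mutation would not be called structurally damaging despite removing two key residues.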
We may improve Bongo's ability in predicting functional site mutations in future work. Among the 506 disease-associated nsSNPs in our test set, Bongo predicted 142 to cause structural effects, which suggests that about 28% of nsSNPs involved in Mendelian diseases resulting from single protein mutations may cause extensive structural effects in proteins. However, the figure for nsSNPs involved in multigenic diseases, like diabetes, may not be so high, as such variants exist individually in the population as a whole at high levels but contribute only rarely to multigenic diseases when occurring with several other nsSNPs. We have developed a method, Bongo, which uses graph theoretic measures to evaluate the structural impacts of single point mutations. Our approach has shown that identifying structurally important key residues in proteins is effective in predicting point mutations that cause extensive structural effects, with a substantially lower false positive rate. Furthermore, our approach gives clues about the effects of nsSNPs on the structures of proteins, thus providing information complementary to methods based on sequence. By comparing our approach with PolyPhen and PANTHER in analyzing nsSNPs, we have also shown that structural information can provide results of quality comparable to those that use sequence and evolutionary information in predicting disease-associated nsSNPs. Bongo considers structural information including hydrogen bonds, π–π, π–cation, and hydrophobic interactions, as well as secondary structure information.
(1) Hydrogen bonds: we use HBPLUS; other interactions and secondary structure assignments are derived with Provat and DSSP. Computations on the residue-residue interaction graphs were carried out in MATLAB (http://www.mathworks.com/products/matlab/), where the best solution was chosen on the basis of the best calibration result over the eight mutations listed in the table. The performance of Bongo on the 506 disease-associated nsSNPs, which are distributed in proteins from many different families, is comparable to that of PolyPhen. All the structural information is transferred into graphs using Graphviz, an open source graph visualization project from AT&T Research. An interaction graph G = (V, E) is a graph such that V is the set of residues and E is a set of edges. An edge is defined between residues u and v if they exhibit one of the following interactions: backbone bonding, hydrogen bonds (H-bonds), π–π, π–cation, or hydrophobic interactions. Each edge is initially given a weight of 1. We then normalise interactions between two secondary structures by dividing the weight by the total number of cross-secondary structure interactions. Intra-secondary structure interactions are normalised in the same way. For interactions involving a group of residues, namely hydrophobic interactions, we normalise by the Voronoi surface area of each residue. A vertex cover S of a graph G = (V, E) is a set of vertices such that for every edge (u, v), either u or v is included in S. In interaction graph terms, this amounts to picking a set of residues that covers every interaction in the graph. In Bongo, since the interactions are weighted, we consider the weighted vertex cover problem on G = (V, E, c), where c: V → R+ is the function that assigns a weight to each vertex.
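One simple way to approximate such a weighted vertex cover, assigning a priority rank in the process, is a greedy loop of the kind the text describes. This sketch follows the prose (pick the highest-weighted residues each round, ties together); the round number serving as the priority rank is an assumption for illustration.

```python
def key_residues(edges, weight):
    """Greedy approximate weighted vertex cover: repeatedly pick the
    highest-weighted vertices that still touch an edge, record the
    round in which each was picked as its priority rank (0 = highest),
    then delete them and their edges until no edge remains."""
    edges = {frozenset(e) for e in edges}
    priority = {}
    rank = 0
    while edges:
        live = {v for e in edges for v in e}
        top = max(weight[v] for v in live)
        picked = {v for v in live if weight[v] == top}
        for v in picked:
            priority[v] = rank
        edges = {e for e in edges if not (e & picked)}
        rank += 1
    return priority

# A 4-residue chain where B carries the largest weight:
cover = key_residues([("A", "B"), ("B", "C"), ("C", "D")],
                     {"A": 1, "B": 5, "C": 2, "D": 1})
```

For this toy graph the cover is {B, C}: B is picked in round 0 (removing edges A-B and B-C), leaving only C-D, from which C is picked in round 1. Because the selection order is deterministic, repeating the process on the same graph yields the same cover, as the text notes.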
A vertex cover set is said to be minimum if it contains the set of vertices that covers all interactions with the smallest possible weight. Since the key residues capture the vertices that are essential to maintaining the interactions, we model them through the vertex cover set of the graph. The algorithm used to select key residues captures the concept of pulling out one piece at a time from a tower of wooden pieces, with the difference that in our case the pieces pulled out are key pieces rather than redundant ones:
1. Given a graph G = (V, E, c), pick the residue with the highest weighting; if more than one residue has the same weighting, pick them all. That is, pick the set U = {v ∈ V: c(v) ≥ c(u) ∀ u ∈ V}.
2. Remove all picked key residues and the edges connected to them. That is, replace the graph with G = (W, F), where W = V\U and F = E\{(v, w) ∈ E: v ∈ U ∨ w ∈ U}.
3. Repeat (1) and (2) until no edge is left in the graph, i.e., F is empty.
The algorithm reflects the importance of key residues in order of selection: key residues selected earlier are more important, in terms of having higher priorities in maintaining the interaction network, than others identified later. Since there is a specific order of choosing vertices, the approximate vertex cover chosen by Bongo for a specific graph will be the same when Bongo repeats the selection process. Taking advantage of the priorities assigned to each key residue, Bongo eventually quantifies the effect of a point mutation by considering the priorities of the key residues affected.
Dataset S1. The 113 mutations that have negligible structural effects."} {"text": "ADP-glucose pyrophosphorylase (AGPase), a key allosteric enzyme involved in higher plant starch biosynthesis, is composed of pairs of large (LS) and small (SS) subunits. Current evidence indicates that the two subunit types play distinct roles in enzyme function.
Recently the heterotetrameric structure of potato AGPase has been modeled. In the current study, we have applied the molecular mechanics generalized Born surface area (MM-GBSA) method and identified critical amino acids of the potato AGPase LS and SS that interact with each other during formation of the native heterotetrameric structure. We have further shown the role of the LS amino acids in subunit-subunit interaction by yeast two-hybrid, bacterial complementation assay and native gel analyses. Comparison of the computational results with the experiments indicated that the backbone energy contribution (rather than the side chain energies) of the interface residues is more important in identifying critical residues. We have found that the lateral LS-SS interaction is much stronger than the longitudinal one, and that it is mainly mediated by hydrophobic interactions. This study will not only enhance our understanding of the interaction between the SS and the LS of AGPase, but will also enable us to engineer proteins to obtain better-assembled variants of AGPase which can be used for the improvement of plant yield. ADP-glucose pyrophosphorylase (AGPase) is a key heterotetrameric allosteric enzyme involved in plant starch biosynthesis. In this study, we have applied computational and experimental methods to identify critical amino acids of the AGPase large and small subunits that interact with each other during heterotetrameric structure formation. In comparing the computational with the experimental results, we also noted that the backbone energy contribution of the interface residues is more important in identifying critical residues. This study will enable us to use a rational approach to obtain better-assembled mutant AGPase variants and use them for the improvement of plant yield.
These two subunits are encoded by two distinct genes. ADP-glucose pyrophosphorylase (AGPase) is a key regulatory allosteric enzyme involved in starch biosynthesis in higher plants. It catalyzes the rate-limiting reversible reaction and controls the carbon flux in the α-glucan pathway by converting glucose-1-phosphate and ATP to ADP-glucose and pyrophosphate, using Mg2+. No heterotetrameric (α2β2) plant AGPase structure has been solved yet, owing to the difficulty of obtaining AGPase in stable form. However, it is critical to elucidate the native heterotetrameric AGPase structure and identify the key residues taking part in subunit-subunit interactions to obtain a more detailed picture of the enzyme. Understanding the structure and the hot-spot residues in the subunit interface will enable us to manipulate the native enzyme to obtain a stable form which can be utilized for improving the yield of crops. The feasibility of such an approach has been shown previously. Recently, the crystal structure of the SS was solved in a homotetrameric form by Jin et al. We find that residues Asn97, Pro327, Ile330, Ile335, Ile339, Ile340, and His342 are involved in lateral interaction with the potato AGPase SS, whereas residues Arg45, Arg88, Arg92, and Trp135 are involved in longitudinal interaction with the potato AGPase SS. The effects of mutations at these positions on the interactions of the LS and the SS of potato AGPase were further characterized in vivo using the bacterial complementation and yeast two-hybrid methods. Also, the experimental results indicated that the backbone binding ΔG energy of the interface amino acids is a decisive parameter for the subunit-subunit interaction, rather than the side chain binding ΔG or total binding ΔG energies. This study highlights important structural aspects of AGPase and provides insights for further attempts to engineer a more functional form of the enzyme. Favorable ΔGelec terms are compensated by unfavorable ΔGpolar terms.
Hence, the total electrostatic interactions (E_ele + ΔG_polar) favor binding of the subunits. Contributions from the van der Waals and non-polar solvation energies also favor the interactions, making them the major forces that drive the association of the subunits. These results are in agreement with our previous work. To determine the critical amino acid residues of the potato AGPase LS that interact with the potato AGPase SS, we applied the MM-GBSA method, which calculates the binding free energy and decomposes that energy at the amino acid level. The binding free energy differences for the longitudinal (D2) and lateral (D1) dimers of the modeled heterotetramer were computed; if a residue shows a large free energy change upon subunit complexation (|ΔG_binding| > 3.0 kcal/mol), it is considered a hot spot. Hot-spot residues for D1 and D2 and their binding free energy components, together with the standard deviations, are reported. To be classified as an interface residue, a residue must lose solvent-accessible surface area upon subunit complexation, and it must satisfy this condition in at least 160 of the 200 snapshots. Based on these requirements, a total of 79 residues in D1 were classified as part of the interfaces. A total of 19 out of the 79 interface residues (8 in LS and 11 in SS) in D1 are hot spots. The hot-spot residues in LS are mostly non-polar, with the exception of Asn97, Thr328, and His342. Seven of the hot spots in SS for D1 are also non-polar. Residues SS-Lys288, SS-Tyr308, SS-Lys313 and SS-Thr320 make up the polar region in this interface. Overall, interaction in the lateral dimer is mediated by amino acids with hydrophobic side chains. The remaining two interface hot-spot residues are Trp135 in LS and Trp120 in SS. The number of hot spots in D2 (five) is smaller than in D1. In contrast to the D1 hot spots, which are generally non-polar, there are three basic hot-spot amino acids in D2 (among them Arg20 in SS). Tyr308 in SS (in D1) shows the highest free energy difference upon complexation, with a |ΔG_binding| value of 6.75 kcal/mol.
We see that the favorable contributions to ΔG_binding for this residue are dominated by E_ele and E_vdw. Indeed, several H-bonds are formed between Tyr308 and several polar residues. Tyr308 is also in close contact with non-polar residues, such as Pro322 in LS, which accounts for the favorable van der Waals interactions. The unfavorable contribution of the polar solvation energy arises from these interactions and is compensated by the favorable electrostatic term. The backbone and side-chain contributions to the total free energy for this residue are −1.99 and −4.75 kcal/mol, respectively. Pro327 in LS has the second highest |ΔG_binding| energy difference, with a value of 5.03 kcal/mol. It should be noted that this residue is highly conserved and makes van der Waals contacts with Gly40, Ala41, Ile285, Ile324 and the aromatic ring of Tyr43 in SS. These interactions explain the hydrophobic contribution of Pro327 to the total |ΔG_binding|. The backbone and side-chain contributions to the total free energy for Pro327 are −1.80 and −3.24 kcal/mol, respectively. Also notable are Ile330, Ile335 and Ile340, which constitute a hydrophobic core at the inner layer of the β-helix domain. The bulky side-chain groups of these residues make strong hydrophobic interactions with each other as well as with their counterparts in SS. In fact, the favorable ΔG_binding for these amino acids is mainly driven by the van der Waals forces. This is especially important when deciding whether side-chain or backbone interactions define critical residues (hot spots) in the AGPase complex. The E_vdw term makes no contribution to the stabilization of Arg45 in LS during dimerization. Trp120 in D4 was also classified as a hot spot in our AGPase model. All the other residues, except for Glu94, also have negative ΔG_binding values, which means that they are stabilized upon complex formation.
However, they were not considered hot spots, since their changes in ΔG_binding according to the free energy decomposition did not reach our cutoff value. We see that while the important amino acids reported by Jin et al. contribute favorably to the binding energy in our model, they are less stabilized in the homotetrameric SS, with a ΔG_binding value of −29.03 kcal/mol. For the bacterial complementation assay, the LS mutants were co-expressed with the wild-type (WT) SS in E. coli glgC− cells (containing pML10), which lack the structural gene glgC for the endogenous bacterial ADP-glucose pyrophosphorylase. The ability of the LS mutants to form a functional heterotetrameric AGPase was assessed by exposing mutant colonies to I2 vapor to monitor glycogen accumulation. The LS residues listed above, together with the residues at positions 334 and 336 adjacent to Ile335, were replaced with Ala. These LS mutants were transformed into E. coli glgC− cell lines containing the WT SS, and the cells were subjected to iodine staining to see the effect of each mutation on the heterotetrameric assemblies. As seen from the staining, several mutants failed to assemble in E. coli. Our results are in agreement with previously reported data showing that lateral interaction is mainly mediated by hydrophobic amino acids within the αβ domain in the homotetrameric enzymes of the potato SS and Agrobacterium AGPases. For example, when Ile339 and Ile340 were changed to Ala, there were no heterotetrameric assemblies between the potato LS and SS AGPase subunits in E. coli. We next examined the role of the potato AGPase LS residues in longitudinal interaction with the potato AGPase SS by bacterial complementation and yeast two-hybrid assays. Residues Arg45, Arg88, and Arg92 were mutated to Ala, whereas Trp135 was mutated to Arg, by site-directed mutagenesis in the pML7 vector. Mutants were transformed into E. coli glgC− (with pML10). Only the LS-Arg88Ala mutants had a glycogen-deficient phenotype: they were unable to complement the glgC− gene, in contrast to cells containing wild-type AGPase genes. We also mutated His89 to Ala; the bacterial complementation result indicated that cells harboring this mutant LS construct can complement glgC− in E.
coli. Mutant LS constructs were then co-expressed with the WT SS and the resulting assemblies were analyzed. Comparing the backbone energy contributions of Arg45, Arg88, Arg92, and Trp135 of the LS in interaction with the SS, Arg88 had the highest backbone energy. Double mutants were transformed into E. coli glgC− containing pML10 (WT SS). As seen from the staining, the combined mutations failed to complement the glgC− gene in E. coli and in turn abolished glycogen production. These results point out that the backbone energies of these residues have an additive effect when combined, causing disruption of the heterotetrameric assemblies. Similar interactions for the Agrobacterium AGPase have been reported. A hydrogen-bond network between LS-Ile340 and SS-Ile324 is provided by two water molecules in D1; these water molecules form several hydrogen bonds with the Ile residues separately during the simulation. Some discrepancies remain in the E_ele and ΔG_polar values for Lys288 and Lys313 in SS (D1) and Arg45, Arg88 and Arg92 in LS (D2). Based on these considerations, although our MM-GBSA calculations may not be perfect, they are fairly consistent with the experimental results. A total of 79 residues (38 in LS and 41 in SS) in the lateral interaction and 53 residues (27 in LS and 26 in SS) in the longitudinal interaction were classified as part of the interfaces. A free energy decomposition scheme was applied to identify the critical residues in the LS-SS interfaces. In both cases, residues that showed a 3.0 kcal/mol energy drop upon complexation of the LS and the SS were defined as hot spots. A total of 19 out of the 79 lateral interface residues (8 in LS and 11 in SS) and 5 of the 53 longitudinal interface residues (4 in LS and one in SS) were accepted as hot spots. Interestingly, the identified hot-spot residues in LS are highly conserved among different species. His333 of the maize endosperm LS AGPase has been reported to participate in interactions with the SS; our analysis of the interface residues of the potato LS indicated that Tyr275 (corresponding to maize LS AGPase His333) is not involved in the interaction. This specific residue may be responsible for heat stability rather than for any interaction between the subunits.
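The hot-spot definition used above (a free energy drop of at least 3.0 kcal/mol upon complexation) amounts to a simple filter over the per-residue decomposition. A minimal sketch in Python follows; the residue labels and most energy values are hypothetical illustrations, with only the Pro327 and Tyr308 values quoted from the text:

```python
# Select hot spots from a per-residue MM-GBSA free energy decomposition.
# A residue is a hot spot if it is stabilized by more than 3.0 kcal/mol
# upon complexation, i.e., its dG_binding is -3.0 kcal/mol or lower.

CUTOFF = -3.0  # kcal/mol

def hot_spots(per_residue_dg):
    """Return the residues whose free energy drop passes the cutoff."""
    return sorted(res for res, dg in per_residue_dg.items() if dg <= CUTOFF)

# Hypothetical decomposition for a few interface residues (kcal/mol);
# the Pro327 and Tyr308 values are the ones quoted in the text.
decomposition = {
    "LS:Pro327": -5.03,
    "SS:Tyr308": -6.75,
    "LS:Glu94":  +0.80,  # destabilized upon complexation -> not a hot spot
    "LS:Thr328": -1.20,  # stabilized, but below the cutoff
}
print(hot_spots(decomposition))  # ['LS:Pro327', 'SS:Tyr308']
```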
Many studies have been performed to understand the subunit-subunit interaction of the AGPase subunits, mainly by carrying out domain swaps between subunits from different species. Plant AGPases contain two different subunits encoded by two different genes. In this study, the LS residues Asn97, Pro327, Thr328, Ile330, Ile335, Ile339, Ile340, and His342 were found to be involved in the lateral LS-SS interaction, whereas the LS residues Arg45, Arg88, Arg92, and Trp135 were involved in the longitudinal LS-SS association. These residues were mutated, and the effect of these mutations on the interactions of LS and SS was characterized in vivo using the yeast two-hybrid method and the bacterial complementation assay. Mutating the LS residues at positions 88, 327, 330, 335 and 339/340 abolished the interaction between the LS and the SS subunits. On the other hand, the LS residues at positions 45, 92, 97, 328 and 342 made no significant contribution to the subunit-subunit interactions. When we compared the MM-GBSA results with the experimental data, we made a very interesting and insightful observation: residues with favorable backbone free energy terms actually correspond to critical residues, whereas the Thr328 and Asn97 residues have an insignificant effect on the lateral interaction. Changing the LS residues at positions 327, 330, 335 and 339/340 abolished the interaction between LS and SS and inhibited glycogen synthesis; however, His342Ala mutants were able to synthesize glycogen. The experimental results thus confirmed the computational MM-GBSA analysis and exhibited a remarkable concordance with the backbone ΔG_binding energy values, rather than the side-chain or total ΔG_binding. As discussed above, the heterotetrameric structure involves both lateral and longitudinal interactions.
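The concordance between backbone energy contributions and the mutational outcome described above can be sketched as a simple classification check. All the energy values and the cutoff below are hypothetical placeholders, not data from the study; only the qualitative experimental outcomes follow the text:

```python
# Sketch: classify LS residues as critical when their backbone free energy
# contribution is strongly favorable, then compare with the experimental
# outcome of the Ala mutations. Energies and cutoff are invented.

BACKBONE_CUTOFF = -1.5  # kcal/mol, hypothetical threshold

backbone_dg = {  # residue -> hypothetical backbone contribution (kcal/mol)
    "Arg88": -2.4, "Pro327": -1.8, "Ile335": -2.0,
    "Arg45": -0.3, "Thr328": -0.5, "His342": -0.4,
}
experiment = {  # residue -> True if the Ala mutation abolished LS-SS assembly
    "Arg88": True, "Pro327": True, "Ile335": True,
    "Arg45": False, "Thr328": False, "His342": False,
}

predicted = {r: dg <= BACKBONE_CUTOFF for r, dg in backbone_dg.items()}
agreement = sum(predicted[r] == experiment[r] for r in experiment) / len(experiment)
print(agreement)  # 1.0 for this invented, fully concordant example
```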
The side chain of the LS residue Ile339 (Table 6) is excluded from the inner part of the β-helix domain, and van der Waals energies make the dominant contribution to the favorable ΔG_binding of this residue, mainly through interactions with Ile338 of the SS. Mutation of this residue to Ala will not only decrease the E_vdw term, but may also force the side chain of the alanine into the inner layer of the β-helix domain. This is highly possible since Ile340, whose side chain is involved in the inner layer, is mutated to alanine at the same time, and the residue at position 321 of the SS is also an alanine. Such an inclusion would certainly result in steric clashes between the side chains and disturb the interface structure of the β-helix domain, where it makes important interactions with the SS. In other words, this mutation can affect the interactions both energetically and structurally. We have previously mentioned that the Arg88 residue has the highest backbone free energy value; accordingly, the mutant cells were defective in glycogen synthesis. The backbone free energy decomposition of the Trp135 residue is less favorable than that of Arg88, but more favorable than those of Arg45 and Arg92. Consistently, glycogen synthesis was reduced in cells expressing Trp135Arg compared with Arg45Ala and Arg92Ala. Although mutations at Arg45 and Arg92 displayed no effect on the subunit-subunit interactions, coupling these mutations with Trp135 abolished heterotetrameric structure formation and hence inhibited glycogen synthesis, underlining the importance of each identified residue and the cooperative effect among these residues. The backbone free energy decomposition likewise suggests why the Thr328/Ile330 double mutation to alanine affects D1 formation according to the yeast two-hybrid result (data not shown): these residues account for a total of −8.17 kcal/mol.
We see that the side chain of Thr328 makes only a minor contribution to ΔG_binding compared with its backbone. A high backbone energy increases the chance that an alanine mutation will disturb the interactions between the subunits. In addition, an alanine mutation at this position might result in a decrease in the electrostatic interactions, which amount to −4.21 kcal/mol; these balancing changes decrease the overall effect of the alanine mutation. It is obvious that an alanine mutation at the Ile330 position will certainly decrease the van der Waals contribution of this residue. As previously mentioned, Ile330 makes two H-bonds with Ala317 in the SS, which account for −2.99 kcal/mol of electrostatic interactions. The data presented in this paper allow us to reach the following conclusions. First, the critical amino acid residues of the potato LS AGPase subunit that interact with the SS subunit were identified using MM-GBSA and experimental methods. The amino acids Asn97, Pro327, Ile330, Ile335, Ile339, Ile340, and His342 are critical for the lateral interaction with the SS of AGPase, whereas the longitudinal interaction of the LS with the SS subunit is mediated by Arg45, Arg88, Arg92, and Trp135. The lateral interaction between the LS and SS subunits is mainly mediated by hydrophobic amino acids, as shown previously for homotetrameric AGPase, and for the first time we have identified the amino acids of the LS subunit that are important for such interactions. Second, we found that dimer 1 is much more stable than dimer 2, owing to the hydrophobic interactions in dimer 1. Finally, backbone energy is an important deterministic parameter for the protein-protein interaction. Potato tuber AGPase large and small subunits share 53% sequence identity according to a CLUSTALW alignment. Counter-ions were added in order to neutralize the systems, and all the histidine residues were assigned a +1 charge at their Nε atoms for consistency.
The Particle Mesh Ewald (PME) method was used to treat the long-range electrostatic interactions. The systems were first minimized for 10^4 steps using the conjugate gradient method while keeping the backbone atoms of the solute fixed. Minimization was completed with an additional 10^4 steps in which all atoms were relaxed, to remove bad contacts. The systems were then gradually heated from 0 K to 300 K over 150 ps in the canonical (NVT) ensemble, during which the Cα atoms of the solutes were restrained with force constants of 2 kcal mol−1 Å−2. The simulations were then shifted to the isothermal-isobaric (NPT) ensemble, the harmonic restraints on the Cα atoms were gradually removed over 80 ps, and the systems were equilibrated for an additional 100 ps. NPT simulations were performed for 8 ns at 300 K, and the last 4 ns was used to extract snapshots at 20 ps intervals. These 200 snapshots were used for interface residue identification and for the binding free energy calculations with the free energy decomposition scheme; interface residues were determined with NACCESS from the snapshots taken over the last 4 ns of the simulations. The binding free energy was computed as

ΔG_binding = G_complex − G_receptor − G_ligand    (1)

where G_complex, G_receptor and G_ligand are the free energies of the complex, the receptor and the ligand, respectively. Each term on the right-hand side of Eq 1 can be represented as

G = E_MM + G_sol − TS    (2)

where E_MM is the total molecular mechanics energy of the molecule in the gas phase, G_sol is the solvation free energy and TS is the entropic term. E_MM comprises the bonded and non-bonded interactions as a sum of the electrostatic (Coulombic), van der Waals (Lennard-Jones) and internal strain energies; it is calculated by classical molecular mechanics methods using a standard force field such as parm96. The polar part of the solvation free energy (G_polar) is computed in a continuum solvent environment by using the GBSA method.
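The free energy bookkeeping of Eqs 1 and 2, with G_sol split into polar and non-polar parts and the non-polar part approximated from the solvent-accessible surface area (as described in the Methods), can be sketched as follows. All numeric energy and SASA terms are invented placeholders, since real values would come from sander/MMPBSA-type output, and the entropic term TS is set to zero here for simplicity:

```python
# Eq 2: G = E_MM + G_sol - T*S, with G_sol = G_polar + G_nonpolar and the
# non-polar part approximated as alpha * SASA (Methods).
# Eq 1: dG_binding = G_complex - G_receptor - G_ligand, averaged over snapshots.

ALPHA = 0.005  # kcal mol^-1 A^-2

def g_nonpolar(sasa):
    """Non-polar solvation term: alpha * solvent-accessible surface area."""
    return ALPHA * sasa

def g_total(e_mm, g_polar, sasa, t_s=0.0):
    """Eq 2 for one species in one snapshot (kcal/mol); TS neglected here."""
    return e_mm + (g_polar + g_nonpolar(sasa)) - t_s

def dg_binding(g_complex, g_receptor, g_ligand):
    """Eq 1 for one snapshot."""
    return g_complex - g_receptor - g_ligand

# Two invented snapshots; each entry holds (E_MM, G_polar, SASA) for the
# complex, the receptor (LS) and the ligand (SS), in that order.
snapshots = [
    ((-5000.0, -800.0, 8000.0), (-2575.0, -450.0, 5000.0), (-2332.0, -420.0, 4400.0)),
    ((-5010.0, -795.0, 8200.0), (-2577.0, -448.0, 5000.0), (-2334.0, -419.0, 4200.0)),
]
dgs = [dg_binding(g_total(*c), g_total(*r), g_total(*l)) for c, r, l in snapshots]
mean_dg = sum(dgs) / len(dgs)
print(mean_dg)  # -31.0 for these invented numbers
```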
The non-polar solvation energy (G_nonpolar), which is considered to be the sum of a solute-solvent van der Waals interaction energy and a solvent-solvent cavity formation energy, is approximated with an empirical formula of the form G_nonpolar = α × SASA. According to this formula, the non-polar solvation energy of a molecule is proportional to the solvent-accessible surface area (SASA) of that molecule, where α was taken as 0.005 kcal·mol−1·Å−2. (For details of the calculation of relative binding free energies, see the cited reference.) In this study, the gas-phase energies (E_MM) of the proteins were calculated with the SANDER module, applying no cutoff for the non-bonded interactions. The dielectric constants of the solute and solvent were taken as 1 and 80, respectively, and the solvent probe radius was set to 1.4 Å. Residues in the subunit interfaces that showed an energy decrease of at least 3 kcal/mol upon complexation, according to the per-residue free energy decomposition, were considered hot spots. In all the calculations the LS was treated as the receptor and the SS as the ligand. Site-directed mutations of the specified hot-spot residues were introduced into the potato AGPase LS by PCR, using plasmids pML7, pGBT7K-LS, or pGAD-SS as templates. The PCR reaction was performed in a total volume of 50 µl containing approximately 50 ng of plasmid, 20 pmol of each primer, 0.2 mM dNTPs, and 2.5 units of Dream Taq DNA polymerase (MBI Fermentas) with the appropriate primers indicated in Table I. The conditions for the 18 cycles of amplification were 95°C for 30 s, 50°C for 30 s and 68°C for 14 min; before the first cycle the reaction mixtures were kept at 95°C for 4 min, with a final incubation at the end of the 18 cycles. Yeast two-hybrid assays were performed as described previously. Bacterial complementation assays used E. coli AC70R1–504 (glgC−), carrying the SS cDNA expression plasmid pML10.
The contribution of each mutant to the LS-SS interaction was evaluated by its ability to complement the glgC− mutation and synthesize glycogen on Kornberg medium enriched with 2% glucose. Glycogen accumulation phenotypes were detected by iodine staining. The WT or mutant LS cDNA-containing pML7 plasmids were transformed into AC70R1–504 (glgC−) cells; the cells were grown in 25 ml of LB medium and then induced with 10 mg/L nalidixic acid and 200 µM isopropyl-β-D-thiogalactopyranoside (IPTG) at room temperature for 20 h once the culture OD600 reached 1–1.2. The cells were harvested by centrifugation and disrupted by sonication in 1 ml lysis buffer. The crude homogenate was centrifuged at 14,000 g for 10 min and the resulting supernatant was used. Protein levels were determined by Bradford assay (Bradford 1976) according to the manufacturer's (Bio-Rad) instructions. Proteins were transferred to a membrane with a Mini-Trans-Blot electrophoretic transfer cell (Bio-Rad) at 90 V for 1 hr. After pre-blocking with 5% BSA dissolved in Tris-buffered saline (TBS), the membrane was incubated with anti-LS or anti-SS primary antibodies (1∶2000 diluted in 0.15% Tween20/TBS) for 1 hr at room temperature. After a series of washes, the membrane was incubated with an HRP-conjugated secondary anti-rabbit IgG antibody (1∶5000 diluted in 0.15% Tween20/TBS) for 1 hr. Proteins were visualized with the Amersham ECL Plus western blotting detection system, and the blot was exposed to autoradiography film. Native-PAGE was performed using a Bio-Rad Mini-PROTEAN III electrophoresis cell. Cell lysates containing 10 µg total protein were mixed with Laemmli's sample loading buffer without β-mercaptoethanol or reducing agent. Samples were electrophoresed on a 3–13% polyacrylamide gradient gel (pH 7.0) with 1X running buffer at a constant 100 V at 4°C for 2 hrs. Western blotting and protein visualization were performed as described above.
The observed positions of the protein complexes were compared with the BSA oligomer running pattern.

Biological evolution conserves protein residues that are important for structure and function. Both protein stability and function often require a certain degree of structural co-operativity between spatially neighboring residues, and it has previously been shown that conserved residues occur clustered together in protein tertiary structures, enzyme active sites and protein-DNA interfaces. Residues comprising protein interfaces are often more conserved than those occurring elsewhere on the protein surface. Here we investigate the extent to which conserved residues within protein-protein interfaces are clustered together in three dimensions. Out of 121 and 392 interfaces in homodimers and heterocomplexes, 96.7% and 86.7%, respectively, have the conserved positions clustered within the overall interface region. The significance of this clustering was established by comparison with subsets of the same size of randomly selected residues from the interface. Conserved residues occurring in larger interfaces could often be sub-divided into two or more distinct sub-clusters. These structural clusters of conserved residues indicate functionally important regions within the protein-protein interface that can be targeted for further structural and energetic analysis by experimental scanning mutagenesis. Almost 60% of experimental hot-spot residues were localized to these conserved residue clusters. An analysis of the residue types that are enriched within these conserved subsets compared with the overall interface showed that hydrophobic and aromatic residues are favored, but charged residues (both positive and negative) are less common.
The potential use of this method for discriminating binding sites (interfaces) from random surface patches was explored by comparing the clustering of conserved residues within each of these regions; in about 50% of cases the true interface is ranked among the top 10% of all surface patches. Protein-protein interaction sites are much larger than small-molecule binding sites, yet conserved residues are not randomly distributed over the whole interface and are distinctly clustered. The clustered nature of evolutionarily conserved residues within interfaces, as compared with surface patches not involved in binding, has important implications for the identification of protein-protein binding sites and applications in docking studies. The analysis of sequence conservation in a protein family is a useful method for identifying residues that are functionally important, for catalytic activity or binding, or responsible for providing stability to the folded structure. The question addressed in this paper is whether the subset of conserved residues in a protein-protein interface occurs scattered across the interface or clusters together in three dimensions. It is possible that the conserved residues form one or more localized clusters within the interface, as this would enable the formation of "functional motifs". It has recently been shown in protein-DNA interfaces that the most stabilizing residues (putative 'hot spots') are those that form clusters of conserved residues at the interface. Of the large number of residues comprising a protein-protein interface, only a few contribute significantly to the free energy of binding. These "hot spot" residues are generally occluded from bulk solvent, being surrounded by other, less important residues.
Residues that lose a defined amount of surface area upon complexation are considered as belonging to the interface. The sets of interfaces used comprised 122 homodimers and 204 heterocomplexes. The sequence variability at each interface residue position is calculated as the Shannon entropy s(i) over sets of homologous protein sequences:

s(i) = −Σ_k p_i(k) ln p_i(k)    (1)

where p_i(k) is the probability that the ith position in the multiple sequence alignment is occupied by a residue of class k, and s(i) is the sequence entropy of that position. A low value of sequence entropy s(i) implies that the position has been subjected to relatively higher evolutionary pressure than another position in the same alignment with a higher sequence entropy value. Multiple sequence alignments were obtained from the Homology-Derived Secondary Structure of Proteins (HSSP) database, and the amino acids were grouped into classes for this calculation. Eq 1 makes use of the probability (or frequency) of occurrence of each residue class at a given aligned position; however, it does not take into account the "background" frequencies of these amino acids. It has been shown previously that the use of background frequency information significantly improves entropy-based functional site prediction within protein structures. In order to incorporate this, a relative entropy was also used:

r(i) = Σ_k p_i(k) ln [p_i(k) / p_back(k)]    (1a)

where p_back(k) denotes the background frequency of the amino acids in group k, and the remaining terms are the same as in Eq 1. This relative entropy measure is similar to the one used by Wang and Samudrala; a higher relative entropy indicates a more conserved position. For each interface with n residues, the average sequence entropy <s>_int and its standard deviation σ were calculated; these interface-specific values were used to select the conserved residues.
We used three different criteria with increasing levels of stringency to identify the conserved interface residues, and compared the results: (1) interface residues with sequence entropy values lower than the average <s>_int; (2) interface residues with sequence entropy lower than the average minus the standard deviation (<s>_int − σ); and (3) only those residues with a sequence entropy value of 0.0, i.e., the fully conserved residues. The degree of spatial clustering of a set of residues was quantified as

M_s = (1 / N_pairs) Σ_{i<j} (1 / r_ij)    (3)

where N_s is the number of residues in the set, N_pairs = N_s(N_s − 1)/2 is the number of distinct pairs of residues in the set, and r_ij is the distance between the centers of mass of the two residues i and j. The greater the value of M_s, the greater the degree of spatial clustering of the residues in the set. The advantage of this inverse-distance formula is that one or a few outlier positions cannot significantly influence the overall value of M_s for the entire set; the M_s values obtained are continuous and can be used to rank different sets of residues. For each interface, Eq 3 was employed twice, once for the subset of conserved residues (M_s,cons) and once for the whole interface (M_s,int). The contrast between the spread of inter-residue distances in the two sets,

ρ = M_s,cons / M_s,int    (4)

is an indicator of the extent of clustering of the evolutionarily conserved residues: ρ > 1 means the conserved residues are clustered within the interface. (The occurrence of isolated conserved residues is dealt with below, in considering cluster size.) The degree of clustering of the conserved interface residues (M_s,cons) was also compared with the M_s values obtained for 1000 random subsets of interface residues of the same size in each structure; the average (and SD) of the M_s values calculated for the 1000 random subsets was compared with the M_s,cons obtained for each interface. The clustering analysis was additionally carried out on a set of 26 protein-protein complexes for which experimental alanine-scanning mutagenesis of the interface residues has been performed; the list of complexes used has been described in our earlier paper.
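The sequence-entropy measures above (Eqs 1 and 1a) can be sketched as functions over a single alignment column. The columns and the background frequency below are hypothetical, and for simplicity each letter is treated as its own residue class:

```python
import math
from collections import Counter

def shannon_entropy(column):
    """Eq 1: s(i) = -sum_k p_i(k) * ln p_i(k) over the classes in a column."""
    n = len(column)
    return -sum((c / n) * math.log(c / n) for c in Counter(column).values())

def relative_entropy(column, background):
    """Eq 1a: r(i) = sum_k p_i(k) * ln(p_i(k) / p_back(k)); higher = more conserved."""
    n = len(column)
    return sum((c / n) * math.log((c / n) / background[k])
               for k, c in Counter(column).items())

# A fully conserved column has zero entropy; a maximally variable one does not.
assert shannon_entropy("LLLLLLLL") == 0.0
print(round(shannon_entropy("LIVFAKDE"), 3))  # ln(8) ~ 2.079

# Hypothetical uniform background frequency of 0.1 for class 'L':
print(round(relative_entropy("LLLL", {"L": 0.1}), 3))  # ln(10) ~ 2.303
```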
Interface ranking: for each surface patch, the M_s values (Eq 3) for both the conserved residues and all the residues in the patch were computed, the procedure being repeated for every patch. Finally, the surface patches were arranged in descending order of ρ (Eq 4), and the rank of the true interface relative to all the other surface patches was determined. Three different procedures were used for the identification of surface patches. Method 1: NACCESS was run to identify surface residues, and patches were assembled around each central surface residue using a standard radial cutoff. Method 2: instead of using standard cutoffs for all the proteins in the dataset, individual cutoffs were used for each protein, depending on the size of the particular interface: for each interface the maximum distance between any two atoms was found, and the radial cutoff was set to half that value. This step is likely to generate surface patches whose size more closely approximates that of the true interface than a cutoff based on the average value over the whole database. Method 3: in addition to using individual cutoffs for each protein, vector constraints were applied while selecting surface neighbors around each central residue. All three definitions of the surface patches result in approximately contiguous, circular regions of the protein surface which overlap each other. We also evaluated how the generated patches sampled the true interface region by calculating a percentage overlap, the fraction of residues common between the real interface and the surface patch relative to the total number in the interface:

overlap (%) = 100 × N_common / N_rI

where N_rI is the number of residues in the true interface patch, N_rC is the number of residues in the generated surface patch, and N_common is the number of residues shared by the real interface and the calculated patch. M_s (Eq 3) is a simple but useful measure for assessing the degree of spatial clustering of a group of points (residues, in this case) in space. Because M_s uses an inverse-distance relationship, residues that are close together mainly determine its value, and one or a few outliers do not unduly affect it. A high value of M_s indicates that the set of residues under consideration is mostly clustered together. M_s is calculated both for the subset of conserved residues and for the whole set of interface residues; the ratio ρ of M_s for the conserved subset to that for the entire interface indicates the clustered (or dispersed) nature of the distribution of the evolutionarily conserved subset, with ρ > 1.0 meaning that the conserved residues are relatively more clustered than the whole interface. A few representative examples of interfaces where the evolutionarily conserved residues are clearly clustered together are shown in the Additional file, where M_s,cons versus M_s,int is plotted for each interface; a point lying above the diagonal indicates that M_s,cons is greater than M_s,int, implying that the conserved residue subset is more clustered in space relative to the overall interface. We repeated the clustering analysis with the sequence entropies calculated using Eq 1a (which takes the amino acid background frequencies into account); this additional step did not affect the clustering results significantly. Fewer residues from each interface are labeled conserved, but the conclusion that the conserved residues are clustered within the interface remains the same. From the same interface, subsets of residues of the same size as the number of conserved residues were selected randomly, and their M_s values were determined. The average M_s,random value of 1000 such random subsets was computed for each interface and compared against M_s,cons.
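Eqs 3 and 4, together with the random-subset control just described, can be sketched with hypothetical center-of-mass coordinates:

```python
import random
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def m_s(coords):
    """Eq 3: mean inverse distance over all pairs of residue centers of mass."""
    pairs = list(combinations(coords, 2))
    return sum(1.0 / dist(a, b) for a, b in pairs) / len(pairs)

def rho(conserved, interface):
    """Eq 4: clustering of the conserved subset relative to the whole interface."""
    return m_s(conserved) / m_s(interface)

# Hypothetical toy interface: three conserved residues packed together,
# three other residues spread far apart.
interface = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (20, 0, 0), (0, 20, 0), (20, 20, 0)]
conserved = interface[:3]
assert rho(conserved, interface) > 1.0  # the conserved subset is more clustered

# Control: average M_s over 1000 random subsets of the same size (as in the text).
random.seed(0)
ms_random = [m_s(random.sample(interface, len(conserved))) for _ in range(1000)]
print(m_s(conserved) > sum(ms_random) / len(ms_random))  # True here
```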
In 96.7% (117/121) of the homodimeric and 87.7% (341/389) of the protein complex interfaces, we found that the randomly selected groups of residues were indeed less clustered than the conserved residue subset in the interface, and this difference was statistically significant at the 1% level (P < 0.01). Subsets of evolutionarily conserved interface residues are thus significantly more clustered than subsets of the same size consisting of randomly selected interface residues. On average, the homodimer interfaces contain 27 (± 16) conserved interface residues per subunit (out of 52 ± 29 interface residues), with an average interface area of 1941.2 (± 1108.2) Å2, whereas the protein complexes possess, on average, an interface area of 1000 (± 422) Å2 per chain. Since, on average, homodimer interfaces are almost twice the size of protein complex interfaces, a more comparable measure is the number of conserved interface residues per 1000 Å2 of interface area, which was 13.9 and 14.9 for homodimers and complexes, respectively. The number of conserved interface residues per subunit (or chain) as a function of the interface size is plotted in the Additional file. The larger interfaces often comprise multiple distinct sub-clusters of evolutionarily conserved residues. Among the homodimers, all but one of the interfaces with 3 or more sub-clusters possess interfaces larger than 2000 Å2. A similar observation can be drawn from the complexes: of the 193 single-cluster interfaces, 94.3% (182 cases) have interface areas within 1200 Å2, whereas 54 of the 69 (~80%) interfaces with 3 or more sub-clusters have interfaces larger than 1200 Å2.
The number of sub-clusters formed by the subset of conserved residues can be identified by a simple geometric clustering algorithm using the average-linkage method. Certain amino acid types are enriched in the conserved residue clusters. The relative enrichment of a particular amino acid X is defined as the ratio of the frequency of occurrence of that amino acid in the conserved residue subset to its frequency in the whole of the interface region; the relative enrichment of each of the 20 amino acid types in the conserved subsets compared with the overall interface has been calculated (Figure). The same types of residues are found to be preferred in conserved residue clusters in both homodimeric and protein complex interfaces, namely the hydrophobic residues, Cys, Gly, and the aromatic residues. Except for Gly, the observed preference matches the propensity of residues to occur in the interface core. Residues targeted for alanine-scanning mutagenesis are distributed over all the residue classes and have a wide range of sequence conservation. The ratio ρ (Eq 4) was then used to sort the surface patches in descending order, and a ranking of the true interface patch relative to all the other surface patches was calculated. A rank of 1 indicates that the true interface is present in the top 10% of all surface patches, and a rank of 10 indicates a location in the lowest 10% of the distribution of ρ over all surface patches. To study whether the results depend on the nature of the complex, we classified the heterocomplexes functionally into enzyme-inhibitor, antigen-antibody, signaling complexes, and others, and determined the prediction accuracy for each of these four types separately (Table). In all cases the statistical significance of the clustering was evaluated with a Z score,

Z = (ρ_int − <ρ>) / σ

where ρ_int is the value (Eq 4) for the real interface, <ρ> is the average value over all surface patches of the protein, and σ is the standard deviation.
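The enrichment ratio E(X), the decile ranking of the true interface by ρ, and the Z score defined above can be sketched as follows. The residue strings and patch ρ values are hypothetical (one letter per interface residue), and the sample standard deviation is used for σ:

```python
from collections import Counter
from statistics import mean, stdev

def enrichment(aa, conserved, interface):
    """E(X): frequency of residue type X among the conserved residues
    divided by its frequency in the whole interface."""
    return (Counter(conserved)[aa] / len(conserved)) / (Counter(interface)[aa] / len(interface))

def decile_rank(true_rho, all_rhos):
    """Rank of the true interface by rho: 1 = top 10% of patches, 10 = bottom 10%."""
    position = sorted(all_rhos, reverse=True).index(true_rho)  # descending rho
    return int(10 * position / len(all_rhos)) + 1

def z_score(rho_int, rho_patches):
    """Z = (rho_int - <rho>) / sigma over all surface patches of the protein."""
    return (rho_int - mean(rho_patches)) / stdev(rho_patches)

# Hypothetical residue-type strings for one interface and its conserved subset:
interface = "LLIIVVFFRRKKDDEE"
conserved = "LLIIVF"
print(round(enrichment("L", conserved, interface), 2))  # 2.67: Leu enriched
print(round(enrichment("R", conserved, interface), 2))  # 0.0: Arg depleted

# Hypothetical rho values for 20 surface patches; the true interface is 1.8.
patch_rhos = [1.8, 1.5, 1.2, 1.1] + [0.9 + 0.01 * i for i in range(16)]
print(decile_rank(1.8, patch_rhos))  # 1 -> within the top 10%
print(z_score(1.8, patch_rhos) > 0)  # well above the patch average
```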
For the homodimers, about 40% (49/121) of interfaces contain conserved residues which are significantly more clustered than conserved residues present within other surface patches. For the complexes, such significant clustering of conserved residues within the interface was observed in 38% (148/389) of cases. Hence, for these interfaces, the clustered nature of the conserved residues alone is sufficient to distinguish the true interface from the remaining surface patches. We further examined the statistical significance of the degree of clustering of conserved residues within true interfaces as compared to that in random regions of the protein surface; the Z test, defined as Z = (ρint − <ρ>)/σ, was used for this purpose. Three aspects are examined: (1) the distribution of conserved residues in interfaces, (2) the degree of overlap between the subset of conserved residue positions and experimentally determined binding hot spots, and (3) the prediction of the interface using the distribution of conserved residues. Cases where Ms,cons > Ms,int can be seen in the Figures; a ρ value > 1.0 indicates the clustering of conserved residues relative to all the residues in the interface. Experimental approaches for the identification of functionally important residues on a protein surface involve mutagenesis of a large number of residues and recording the change in activity or binding to other proteins; however, such approaches are laborious considering the large size of protein-protein interfaces. It is known that interface hot spot residues form clusters within densely packed 'hot regions', where they form networks of interactions contributing cooperatively to the stability of the complex. We therefore investigated the potential use of the clustering of conserved residues for the identification of the binding site by comparing this feature in the real interface against all other surface patches (Figure).
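The Z statistic described above, Z = (ρint − <ρ>)/σ over all surface patches, can be sketched as follows; the ρ values are invented for illustration.

```python
# Z test for clustering of conserved residues in the true interface patch:
#   Z = (rho_int - mean(rho over all surface patches)) / stdev
from statistics import mean, pstdev

def z_score(rho_int, rho_all):
    """Z score of the true-interface rho relative to all surface patches."""
    return (rho_int - mean(rho_all)) / pstdev(rho_all)

rho_patches = [0.8, 0.9, 1.0, 1.1, 1.2]   # hypothetical rho for surface patches
z = z_score(1.6, rho_patches)              # hypothetical true-interface rho
# A large positive Z means conserved residues are significantly more clustered
# in the true interface than in random surface patches.
```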
A question may be asked whether a direct assessment, by first identifying conserved residues on the protein surface and then searching for spatial clusters, could have been performed. Methods like the Evolutionary Trace (ET) use this approach. Recently, machine learning techniques, such as Support Vector Machines and Neural Networks, have also incorporated the use of sequence conservation metrics to enhance the likelihood of predicting which surface residues of a given protein form an interface. PC conceptualized the work that was carried out by MG. MG and PC participated in interpretation of the data and writing the manuscript. Both authors have read and accepted the final version of the manuscript. The additional file contains three tables (numbered S1 to S3) and nine figures (numbered S1 to S9).
Table S1. Values of the parameters indicating the clustering of conserved residues in individual interfaces.
Table S2. Location of experimental hot spots within the conserved residue clusters in the interface.
Table S3. Distribution of 462 alanine-scanned interface residues among the seven residue classes.
Figure S1. Representative examples of interfaces showing the clustered nature of evolutionarily conserved residues.
Figure S2. Plots of Ms,cons versus Ms,int.
Figure S3. Plots of Ms,cons versus <Ms,random>.
Figure S4. Number of interface residues and conserved residues as a function of interface area.
Figure S5. Multiple clusters of evolutionarily conserved residues in protein interfaces.
Figure S6. Distribution of cluster size.
Figure S7. The level of sequence conservation of residues subjected to alanine scanning experiments.
Figure S8. Comparison of the clustering of conserved residues within the subunit interface and other surface patches.
Figure S9.
Plot of Ms,cons versus Ms,int for interfaces from the bound forms of 124 protein complexes described in the Protein-protein Docking Benchmark version 3.0.

The number of available structures of large multi-protein assemblies is quite small. Such structures provide phenomenal insights into the organization, mechanism of formation, and functional properties of the assembly; hence, detailed analysis of such structures is highly rewarding. However, a common problem in such analyses is the low resolution of these structures. In recent times, a number of attempts that combine low resolution cryo-EM data with higher resolution structures determined using X-ray analysis or NMR, or generated using comparative modeling, have been reported. Even in such attempts, the best result one arrives at is a very coarse idea of the assembly structure in terms of a trace of the Cα atoms, which are modeled with modest accuracy. In this paper, we first present an objective approach to identify potentially solvent-exposed and buried residues solely from the positions of Cα atoms and the amino acid sequence, using residue type-dependent thresholds for accessible surface areas of Cα. We extend the method further to recognize potential protein-protein interface residues. Our approach to identify buried and exposed residues solely from the positions of Cα atoms resulted in an accuracy of 84%, sensitivity of 83–89% and specificity of 67–94%, while recognition of interfacial residues corresponded to an accuracy of 94%, sensitivity of 70–96% and specificity of 58–94%. Interestingly, detailed analysis of cases of mismatch between recognition of interface residues from Cα positions and all-atom models suggested that recognition of interfacial residues using Cα atoms only corresponds better with the intuitive notion of what constitutes an interfacial residue.
Our method should be useful in the objective analysis of structures of protein assemblies when only Cα positions are available, as, for example, in cases of integration of cryo-EM data with high resolution structures of the components of the assembly. The chemical nature and structural context of residues in a protein generate diversity in the contribution of residues towards the stability and function of the protein. In recent times, cryo-electron microscopy has emerged as a very important technique to obtain structural information about large assemblies. Owing to the advent of high-throughput proteomic studies in combination with computational methods, a vast amount of information is becoming available on protein assemblies and protein-protein interaction networks. In the present study we first present an objective method to recognize buried and exposed residues in structures of proteins for which only the positions of Cα atoms are available. Given the reasonable success of this approach, and given the importance of interactions between proteins in an assembly, we extend the method to the recognition of interface residues. Interestingly, in-depth assessment of our approach to identification of interaction interface residues solely from Cα positions points to structural contexts where the proposed approach identifies interface residues more effectively than the traditional approaches which use positions of other atoms, such as those in the sidechains. The general approach to recognize protein-protein interaction interfacial residues solely from the positions of Cα atoms mimics the popular approach used for protein-protein complex structures with all the atomic positions available, which relies on solvent accessibility calculations.
Though there are a few criteria for identifying interfacial residues in complex structures with all the atomic positions available, in our approach based solely on Cα positions we mimic the following criterion, which has been used commonly in the literature: for a residue to be considered part of a protein-protein interface, the solvent accessibility of the residue in the complex should be ≤7%, and in the absence of the interacting subunit the accessibility should be ≥10%. The primary challenge in using an alteration of this criterion for complex structures with positions of only the Cα atoms available is to identify the equivalents of 7% and 10% sidechain accessibility in terms of the accessible surface area of Cα atoms as a function of residue type. Sidechain orientation is a key factor that determines the extent of solvent accessibility. The absence of sidechain positions in low resolution structures with only Cα positions available makes recognition of solvent-exposed and buried residues non-trivial. However, the relative orientation of virtual bonds connecting contiguous Cα atoms gives a rough indication of sidechain orientation. Our approach to recognize solvent-exposed and buried residues based solely on Cα positions involves calculation of accessible surface area values of Cα using a probe sphere of appropriate radius. In this analysis we have used 1464 high resolution (≤2Å) crystal structures of proteins, largely non-homologous, with positions of all the non-hydrogen atoms available. Solvent accessibilities of all the residues in these proteins were calculated employing the standard probe radius of 1.4Å, which is commonly used for all-atom models. We have generated a separate coordinate dataset of only Cα atoms in these protein structures by consciously deleting the coordinate data for all non-Cα atom types. We refer to this dataset as "Cα-only structures".
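The interface criterion quoted above (≤7% accessibility in the complex, ≥10% in the isolated subunit) is simple to express in code. A minimal sketch, with hypothetical accessibility values:

```python
# A residue is interfacial if it is buried on complexation:
#   accessibility <= 7% in the complex AND >= 10% in the isolated chain.
def is_interface(acc_complex, acc_isolated):
    return acc_complex <= 7.0 and acc_isolated >= 10.0

residues = {          # residue id -> (% acc. in complex, % acc. in isolated chain)
    "A45": (2.0, 35.0),   # buried on complexation -> interface
    "A46": (2.0, 5.0),    # buried in both forms   -> protein core, not interface
    "A47": (30.0, 32.0),  # exposed in both forms  -> surface, not interface
}
interface = [r for r, (c, i) in residues.items() if is_interface(c, i)]
# interface == ["A45"]
```

The two-condition form matters: a residue buried in both forms belongs to the protein core, not the interface, which the single ≤7% condition alone would miss.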
This dataset is not entirely equivalent to a dataset of low resolution structures with only Cα positions available, as the accuracy associated with Cα positions in the dataset of Cα-only structures is expected to be higher (owing to the higher resolution) than that of true low-resolution structures; however, as shown earlier, this approximation is reasonable. In order to recognize the radius of the probe sphere that is appropriate for structures with only Cα positions available, we calculated accessible surface area values of Cα atoms for the entries in the dataset of Cα-only structures using a series of probe radii, namely (in Å) 2.1, 2.5, 3.0, 3.2, 3.4, 3.5, 3.6, 3.8 and 4.0. The accessible surface area (expressed in square Angstroms) of a Cα atom corresponding to a specific residue, calculated using a specific probe radius in a given protein structure, is compared to the accessibility value (expressed as %) of the same residue calculated using all the available atomic positions and a probe radius of 1.4Å. Two measures have been employed to assess the correspondence between the accessibility values and the accessible surface area values. First, a simple correlation coefficient has been calculated, corresponding to a specific probe radius, for every protein structure in the dataset of Cα-only structures. The distribution of correlation coefficients has been studied over the range of probe radii for every structure in the data set. We seek to choose the probe radius that generally provides the highest correlation coefficient for most of the structures in the data set. Second, the parameter per defines the deviation in the rank correlation between the two distributions for a given probe radius: the rank order of the buried residue positions, corresponding to the increasing order of accessible surface area of the Cα atoms for a specific probe radius, is compared to the rank order of the buried residues in the same protein using the all-atom model and the probe radius of 1.4Å.
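The first measure, selecting the probe radius whose Cα ASA values correlate best with the reference accessibilities, can be sketched as follows. All numbers are invented; the real study compares many radii over 1464 structures.

```python
# For each candidate probe radius, correlate Calpha ASA (Ca-only model)
# against reference % accessibility (all-atom model, 1.4 A probe),
# and keep the radius with the highest correlation.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

ref_acc = [2.0, 10.0, 25.0, 40.0]       # % accessibility, all-atom model (invented)
asa_by_radius = {                        # Calpha ASA per candidate probe radius
    2.1: [5.0, 6.0, 5.5, 7.0],           # poor correspondence
    3.5: [1.0, 4.0, 9.0, 15.0],          # tracks the reference well
}
best = max(asa_by_radius, key=lambda r: pearson(asa_by_radius[r], ref_acc))
# best == 3.5, consistent with the radius the study settles on
```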
Here RiAll and RiCα correspond to the accessibility rank of a buried residue (characterized by ≤7% solvent accessibility) from the full-atom structure and the ASA rank of the same residue in the Cα-only structure calculated for a specific probe radius; N corresponds to the number of buried residues. No standard cut-off values in terms of ASA are available to determine buried residues solely from the positions of Cα atoms. Hence, we identified residue type-dependent cut-offs for accessible surface area values of Cα atoms corresponding to 7% and 10% solvent accessibility. Towards this, the correlation between surface area values of Cα atoms from the Cα-only records, obtained for each of the 20 residue types, and the accessibility values for the same residues obtained using the whole-atom records and a 1.4Å probe radius was examined. The value of Cα accessible surface area corresponding to 7% and 10% accessibility was then calculated from the regression lines. The ASA values obtained in this way were then used as cut-offs to identify residues with ≤7% accessibility and ≥10% accessibility from the Cα-only structures. Having identified the residue type-dependent equivalents of 7% and 10% solvent accessibility for Cα-only coordinate sets, it is a straightforward exercise to use the criteria of ≤7% and ≥10% to recognize interfacial residues in protein-protein complex structures with only Cα positions available. As mentioned in the Protocol section, various radii for the probe sphere have been used to calculate accessible surface areas of Cα atoms, and the correlation coefficient has been calculated between accessibility values from full-atom models and ASA of Cα atoms in Cα-only structures for the various probe radii.
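The cutoff derivation described above, fitting a regression line per residue type and reading off the ASA values equivalent to 7% and 10% accessibility, can be sketched as follows. The data points are invented for illustration.

```python
# Per residue type: fit % accessibility (all-atom) as a linear function of
# Calpha ASA (Ca-only), then invert the line at 7% and 10% to get ASA cutoffs.
def fit_line(x, y):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

asa = [0.0, 5.0, 10.0, 20.0]        # Calpha ASA (A^2), one residue type (invented)
acc = [1.0, 11.0, 21.0, 41.0]       # % accessibility from the all-atom model
slope, intercept = fit_line(asa, acc)   # acc ~= slope * asa + intercept
cut7 = (7.0 - intercept) / slope        # ASA equivalent of 7% accessibility
cut10 = (10.0 - intercept) / slope      # ASA equivalent of 10% accessibility
```

With these per-residue-type cutoffs in hand, the ≤7% / ≥10% interface criterion can be applied directly to Cα-only structures.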
For a dataset of 1464 high resolution, largely non-homologous protein structures we calculated the percentage solvent accessibilities of residues using the all-atom model and the classical probe radius of 1.4Å. A dataset of Cα-only structures was formed by deleting the positions of all the non-Cα atoms from the dataset of 1464 proteins. As mentioned in the Protocol section, the parameter per defines the correlation between the ranks of buried residues arranged in increasing order of percent solvent accessibility and the ranks of the same residues arranged according to the ASA of Cα atoms, calculated using various probe radii, from the dataset of Cα-only structures. The number of protein structures with per values of under 20% was examined as a function of probe radius; at about 3.5Å of probe radius, the number of protein structures having a good per value of under 20% reaches almost the maximum. Thus, from two independent analyses we identified 3.5Å as the appropriate probe radius for accessibility calculations on Cα-only structures. We have also used the rank correlation of buried residues in identifying, independently, the most suitable probe radius for use with Cα-only structures. As mentioned in the Protocol, for each of the 20 residue types we have analyzed the relationship between percentage solvent accessibility calculated from full-atom models using a probe radius of 1.4Å and the ASA of the Cα atom from Cα-only structures for a probe radius of 3.5Å. The sensitivity for buried residues (Sens_bur) can be defined as the number of buried residues identified out of the total number of actual buried residues, while the specificity is the number of true buried residues out of the total number of residues identified as buried.
As indicated in the Table, for the heterogeneous dataset used here, the method recognized the buried residues with a high accuracy of about 85%, covering about 90% of the buried residues out of the total number of buried residues. For any method, while it is very important to correctly recognize the positives, it is equally important (sometimes even more important) to recognize the negatives correctly. Hence, we defined sensitivity and specificity values in terms of non-buried (exposed) residues as well. The sensitivity for exposed residues is defined as the number of residues identified as exposed out of the total number of actually exposed residues; the specificity is the number of truly exposed residues out of the total number of residues identified as exposed. The correlation, sensitivity, specificity and accuracy values are listed in the Table. Using an independent data set of 1100 high resolution protein structures, we recognized buried and exposed residues using the positions of Cα atoms only and the thresholds defined for each of the 20 residue types. The buried and exposed residues thus identified were assessed by calculating sensitivity and specificity values for the two classes of residues, namely buried and non-buried (exposed), as well as the overall accuracy and the correlation coefficient, using the expressions given in the Protocol. An alternate approach to identifying solvent-exposed and buried residues starting solely from Cα positions is to generate all-atom models from the Cα trace and employ the traditional solvent accessible surface area calculations on the resulting all-atom coordinates.
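The sensitivity, specificity, accuracy and correlation measures used throughout this assessment (their formulas are given in the Methods) reduce to simple arithmetic on the confusion-matrix counts. A sketch with invented counts:

```python
# Performance measures from true/false positive and negative counts.
# Note: "specificity" here follows the paper's definition, TP/(TP+FP).
def performance(tp, fp, tn, fn):
    n = tp + fp + tn + fn
    mcc = ((tp * tn - fp * fn)
           / ((tp + fn) * (tp + fp) * (tn + fp) * (tn + fn)) ** 0.5)
    return {
        "sensitivity": tp / (tp + fn),   # buried residues found / actual buried
        "specificity": tp / (tp + fp),   # true buried / predicted buried
        "accuracy": (tp + tn) / n,
        "correlation": mcc,              # Matthews correlation coefficient
    }

stats = performance(tp=90, fp=10, tn=80, fn=20)   # invented counts
# sensitivity = 90/110 ~ 0.82, specificity = 0.90, accuracy = 0.85
```

Swapping the roles of positives and negatives in the same function yields the exposed (or non-interface) class measures.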
For this purpose we employed two methods to generate positions of sidechain atoms, including the sidechain modeling approach employed by Sali and Blundell in their comparative modeling software MODELLER. Having obtained these encouraging results, the method was then further extended to recognize residues in the interfaces of protein-protein complexes. Interface residues were recognized for a high resolution dataset of 1100 protein-protein complex structures using the accessibility criteria mentioned in an earlier section: residues were tagged as interface residues if the accessibility value in the complexed form was less than or equal to 7% and, in the isolated chain, the accessibility of the same residue increased to greater than or equal to 10%. In the case of the Cα-only structures of the protein-protein complexes, the ASA cutoff values corresponding to the above accessibility cutoffs were calculated for each amino acid type as described previously. As in the case of the buried residues, to validate the results obtained for the Cα-only structures, sensitivity and specificity values were calculated for the two classes of residues, namely interface and non-interface residues; the accuracy and correlation coefficient values were also calculated using the formulas mentioned in the Protocol. A few residues were identified as interface residues while apparently they are not interfacial residues; hence, these apparent false positives were examined more closely by visual inspection in PyMOL. It is possible that residues in the periphery of the interface, with solvent accessibility values greater than 7% even in the complexed form, interact with the associated protein. Such residues may not be considered interfacial residues owing to accessibility values greater than 7% in the complexed form.
Our method based solely on Cα positions captures these cases successfully despite the absence of sidechain positions. Further, these "false positives" were found to be fairly conserved in the course of evolution (data not shown), reinforcing the important role of these residues in the formation of the protein-protein interaction interface. Apart from the accessibility-based method there are several other methods for defining interfacial residues. A set of protein structures at low resolution, with only Cα positions available, was also considered. An approach has been developed to identify the buried and exposed residues in proteins solely based on the positions of Cα atoms. As shown using a large number of protein structures with complete atomic positional entries available, the method works with very good accuracy, sensitivity and specificity. It is interesting to note that the specificity, sensitivity, accuracy and correlation of the results of the proposed method are better than those obtained using all-atom models generated starting solely from Cα positions. In addition, the proposed method does not involve the otherwise additional step of sidechain modeling in order to identify solvent-exposed and buried residues solely from Cα positions. The approach has been extended to recognize residues in protein-protein interfaces, and assessment of the performance reveals that the proposed method works well. In fact, the structural roles of residues that are recognized as interfacial in our approach, but not in the approach using the full-atom model, suggest that our approach is useful even if the complex structure has positions of all the atoms available. The proposed approach seeks to mimic the solvent accessibility-based identification of protein-protein interfaces as applied to all-atom structures.
The extent of agreement between the results of the proposed approach and an inter-subunit distance-based approach is a reflection of the difference in perceptions and definitions of protein-protein interfacial residues. The proposed method is highly relevant in the analysis of low resolution structures with only Cα positions available. Our work has a specific impact on the emerging low resolution pictures of fundamentally important protein assemblies obtained by embedding atomic resolution structures in cryo-EM maps. Results of our approaches employed on such structures should highlight the fundamental principles of stability and specificity of multi-protein assemblies and the evolution of such complexes. Two different datasets have been used in the present study, namely a set of 1464 high resolution structures (comprising monomers) and a set of 1100 structures of protein-protein complexes. These datasets were culled using PISCES, and solvent accessibilities were calculated using NACCESS. Performance of the method was measured by calculating the following parameters:
Sensitivity (buried) or Sensitivity (interface) = TP/(TP+FN)
Specificity (buried) or Specificity (interface) = TP/(TP+FP)
Sensitivity (exposed) or Sensitivity (non-interface) = TN/(TN+FP)
Specificity (exposed) or Specificity (non-interface) = TN/(TN+FN)
Accuracy = (TP+TN)/N
Correlation Coefficient = (TP*TN − FP*FN)/sqrt((TP+FN)(TP+FP)(TN+FP)(TN+FN))
where TP: true positives; FP: false positives; TN: true negatives; and FN: false negatives.
Figure S1. Accessibility plots for Asparagine, Glutamine, Aspartate and Glutamate (101.02 MB TIF).
Figure S2. Accessibility plots for Alanine, Valine, Leucine and Isoleucine (101.02 MB TIF).
Figure S3. Accessibility plots for Phenylalanine, Tyrosine, Tryptophan and Methionine (101.86 MB TIF).
Figure S4. Accessibility plots for Lysine, Arginine, Histidine and Proline (101.02 MB TIF).
Figure S5. Accessibility plots for Serine, Threonine, Cysteine and Glycine (101.67 MB TIF).
Table S1. Supporting information table (0.04 MB DOC).

Many protein structures determined in high-throughput structural genomics centers, despite their significant novelty and importance, are available only as PDB depositions and are not accompanied by a peer-reviewed manuscript. Because of this they are not accessible by the standard tools of literature searches, remaining underutilized by the broad biological community. To address this issue we have developed TOPSAN, The Open Protein Structure Annotation Network, a web-based platform that combines the openness of the wiki model with the quality control of scientific communication. TOPSAN enables research collaborations and scientific dialogue among globally distributed participants, the results of which are reviewed by experts and eventually validated by peer review. The immediate goal of TOPSAN is to harness the combined experience, knowledge, and data from such collaborations in order to enhance the impact of the astonishing number and diversity of structures being determined by structural genomics centers and high-throughput structural biology. TOPSAN combines features of automated annotation databases and formal, peer-reviewed scientific research literature, providing an ideal vehicle to bridge the gap between rapidly accumulating data from high-throughput technologies and the much slower pace of its analysis and integration with other, relevant research. Structural biology uses experimental methods, such as X-ray crystallography and NMR spectroscopy, to provide atomic-level information about the three-dimensional shapes of biological macromolecules. Such detailed information often delivers a critical component of the puzzle that has to be solved in order to understand the function of a macromolecule.
Information provided by structural biology, while essential, is by itself not enough to decipher protein function without associated information from biochemistry, molecular and cellular biology, genomics, and other fields of biology. Historically, experimental structure determination of a protein was a long and painstaking process, which was generally initiated only after a significant body of biochemical and biological evidence about the function of a specific protein had been assembled. The very length of the structure determination process, with many steps extending into months, offered ample time to perform additional analyses, integrate all available information, and even carry out additional experiments to clarify the most interesting questions that were raised during the investigation. Such a multi-pronged approach to the characterization and analysis of a structure of a single protein was typically conducted by single-investigator groups with broad experience or through collaboration among multiple laboratories with synergistic expertise. These collaborations were forged by standard mechanisms of communication in the scientific community. This standard model of a structural biology project is now changing, largely because of the development of technological platforms capable of high-throughput protein structure determination. These platforms have reduced the cost and shortened the time for determination of the structure of a novel protein. At the same time, the interest, relevant expertise, and resources to complete the characterization and analysis of the proteins determined by SG centers do exist in the broader community, i.e., mostly outside of the structure determination centers. However, the traditional mechanisms of forming collaborations and communicating results cannot keep pace with the high-throughput production of SG centers.
Such traditional mechanisms include forming personal networks that arise from extended discussions and interactions at meetings and from local collegial interactions within universities and institutes. These mechanisms have become woefully inadequate as the period for protein structure determination has shrunk to days/weeks rather than months/years. Given the very diverse set of proteins that are worked on by PSI centers, the expertise needed to fully analyze each of them is difficult to find even in the largest labs. Finally, another important difference between an SG center and a standard structural biology lab is the high-throughput aspect of structural genomics. With one structure being determined, on average, each working day, there is simply not enough time to integrate relevant, non-structural data and/or to follow a traditional approach to establishing collaborations with appropriate groups with synergistic experience.Structural biology is not the only field struggling to maintain a balance between high-throughput data accumulation and a much slower pace for its analysis and integration with other, relevant research. The former is driven by the rapid pace of technological development, whereas the latter is limited by the standard methods of data analysis and assimilation. The equivalent amount of time and effort that was once needed to sequence a single gene can now yield the sequence of an entire genome; similarly, the effort needed a few years ago to analyze the expression pattern for a single gene can now yield a genome-size DNA expression array. In contrast, the time needed to research a particular question and look for possible connections to other data and/or experiments, as well as the time to find and consult with a colleague who is knowledgeable in another, often connected field, are not easily changed by technology. 
As a result, a significant percentage of data obtained by high-throughput techniques remains suspended in the no-man's land of "unpublished results"; for instance, the almost 18,000 microarray experiments in the Stanford microarray database have led to only 449 publications. Wikipedia, notably, excludes original research by policy (http://en.wikipedia.org/w/index.php?title=Wikipedia:No_original_research). On the other hand, creating new knowledge is a critical part of science. Wikipedia's dependence on already verified information allows it to avoid issues such as impartial mechanisms for ensuring the reliability of information and assigning credit for individual contributions. These problems are addressed by peer-reviewed literature, but at the cost of slower dissemination, as well as other constraints imposed by the inherently inefficient structure of the institutionalized peer-review process. This growing gap between data creation and analysis has led to exploration of new approaches for exchanging and disseminating information, ranging, for example, from blogs to wikis to networking sites. Most of these recent attempts have been enabled by the emergence of the Internet, which has changed, and is still changing, the way people communicate, both in science and beyond. It is interesting to note that the most important developments in the Internet era, from the Internet itself to the concept of the World Wide Web, were driven by the needs of scientific communication. Similar issues of high-throughput data accumulation arise in other fields of biology, including those addressed by the PSI:Biology initiative (http://www.nigms.nih.gov/Initiatives/PSI/psi_biology/). Mechanisms similar to those we outline here could be employed to address similar issues in these fields.
Most biology-focused wikis, such as WikiPathways, Proteopedia, or PDBWiki, rely on community-contributed annotation. By focusing on protein structures, TOPSAN has a particular relationship to the recently developed Proteopedia. The Open Protein Structure Annotation Network, http://topsan.org, was developed at one of the PSI high-throughput production centers, the Joint Center for Structural Genomics (JCSG), and has since become a collaborative project involving personnel from other PSI centers, as well as from other institutions. Since its inception, the JCSG has recognized the need for high-throughput annotation and further analysis of its targets for structure determination and, subsequently, of solved structures; these processes are conducted through a combination of automated annotations, detailed knowledge-based analyses, and follow-up collaborations to advance the functional understanding of the structures being determined. A particularly illustrative example that predates our current instantiation of TOPSAN can be seen in our determination of a novel thymidylate synthase, TM0449, from Thermotoga maritima, previously known only from genomic complementation studies in Dictyostelium. In this article, we describe our ongoing implementation of TOPSAN. Authorship tracking mechanisms ensure both proper assignment of credit and accountability of contributors. Entries are peer-reviewed by users with established records of accomplishment and credentials to ensure content reliability; at this point, the review is largely performed by senior JCSG scientists. All users of the system implicitly accept the collaborative rules, as any TOPSAN entry is, in fact, an open invitation to collaborate with the SG center that solved that particular structure.
This invitation is supported by significant preliminary data and results, such as biological material and sequence and structure analyses that often provide detailed hypotheses on the function of the newly solved protein. Such collaborations can lead to traditional, peer-reviewed papers, and, encouragingly, many of the TOPSAN entries are, in fact, progressing this way. One view of the TOPSAN system, then, is that of a stepping-stone to peer-reviewed publications, where the openness of our system allows for the immediate establishment of collaborations, and TOPSAN pages are treated as a mechanism to link individual collaborators and unpublished results to achieve the ultimate goal of publishing in a standard, peer-reviewed journal. Another, more far-reaching view of TOPSAN is that of a "live" protein annotation and collaboration platform, i.e., a platform that might lead to novel forms of ongoing, virtually continuous scientific communication and knowledge creation. For collaborative science, this approach offers a new paradigm especially pertinent to the "omics" era. While we personally favor this view, the actual outcome will ultimately be determined by the general community of users and contributors. TOPSAN, http://www.topsan.org, is a new type of communication platform for annotations and follow-up analysis and research on proteins. At this point, TOPSAN focuses on proteins targeted or solved by structural genomics centers; however, users are free to create pages on any protein they are interested in, and many, actually most, of the TOPSAN pages are compiled for proteins whose structures are not yet determined. Furthermore, many TOPSAN pages increasingly do not focus on individual proteins, but on protein families, specific organisms, or pathways; there are also several hundred user-created pages that do not fall into the above categories.
Over 1,000 pages contain human-curated annotations, ranging from a few sentences to several pages, with a median of over 150 words, and in quality from relatively trivial annotations to publication-quality manuscripts. The content to date has been amassed from over 400 registered users, although the majority has been contributed by a smaller subset of the most active contributors. While most of the TOPSAN pages focus on experimentally determined protein structures, TOPSAN was also of great utility in the development of the complete metabolic reconstruction for Thermotoga maritima.

TOPSAN content is licensed under the Creative Commons license (http://creativecommons.org/licenses/by-nc-sa/3.0/legalcode), which allows content to be available for others to build upon and share legally. In addition to open-content licensing, TOPSAN provides an application programming interface (API) that enables external sources to retrieve content easily. API access is available to collaborating sites to retrieve content from TOPSAN protein pages. This has been implemented for PFAM, where the contents of an individual TOPSAN protein page are currently being imported to PFAM by a PFAM server via the TOPSAN API. This feature is standard in the PFAM 24.0 release. In this case, we developed a custom PHP script (files.topsan.org/retrieve.php?uniprotId = xxx) for PFAM servers to readily access TOPSAN page content in real time. A structured XML page is provided for each protein on TOPSAN. A Uniprot ID is passed as an argument to the script, which then queries the TOPSAN MySQL database for the current annotation of the page as well as for any file attachments/images. This information is formatted as an XML document and returned to the requesting client. The API is flexible and can easily be used by third parties to retrieve our content.
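A client for this kind of retrieval can be sketched in a few lines. The endpoint below is the one named in the text; the XML element names (`annotation`, `attachment`) are assumptions, since the exact response schema is not given here.

```python
# Sketch of a client for the TOPSAN retrieval API described in the text.
# The endpoint (files.topsan.org/retrieve.php?uniprotId=...) is taken from
# the article; the XML element names used below are assumptions.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

BASE_URL = "http://files.topsan.org/retrieve.php"

def build_request_url(uniprot_id):
    """Construct the retrieval URL for a given Uniprot ID."""
    return BASE_URL + "?" + urlencode({"uniprotId": uniprot_id})

def parse_annotation(xml_text):
    """Extract annotation text and attachment names from a response document.
    The element names ('annotation', 'attachment') are hypothetical."""
    root = ET.fromstring(xml_text)
    return {
        "annotation": (root.findtext("annotation") or "").strip(),
        "attachments": [a.get("name") for a in root.findall("attachment")],
    }

# Offline demonstration with a mock response document:
sample = """<topsan>
  <annotation>Putative hydrolase; structure solved by JCSG.</annotation>
  <attachment name="structure.png"/>
</topsan>"""
print(build_request_url("P12345"))
print(parse_annotation(sample))
```

In a real client, the URL returned by `build_request_url` would be fetched over HTTP and the response body passed to `parse_annotation`.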
TOPSAN pages are searchable with all Internet search engines; our internal analysis shows that about 50% of TOPSAN traffic "arrives" from Google and other search engines, and that users directly view specific protein pages that contain keywords that are the subject of an individual search. At the same time, the API access to TOPSAN allows for integration into other resources. For example, the Calit2 Visualization Lab at UCSD displays TOPSAN protein annotations from within their structure gallery CAVE 3D wall display. This protein structure display is utilized in courses both at UCSD and at The Scripps Research Institute.

TOPSAN aims not only to collect information about proteins whose structures have been determined, but also to share and disseminate this information. TOPSAN pages are free to view without registration, and TOPSAN content is licensed under the Creative Commons share-alike license.

The current implementation of TOPSAN was developed using MindTouch, an enterprise open-source collaboration and integration platform, which provides access to tools and scripting that can be utilized to develop a customized, interactive website. Scripts and customized templates defining TOPSAN-specific protein pages integrate several sources and types of information; namely, existing annotation automatically parsed from established resources and user-created content that can be edited by all registered users. The MindTouch platform offers some core features, such as open-source code; user account handling, including advanced security features; and extensive backup and version-tracking capabilities, that make it an excellent platform choice on which to build TOPSAN. At the same time, MindTouch is comparable in many ways to other platforms.
For instance, the initial version of TOPSAN, which was in use until late 2008, was developed using the now-defunct JOTSPOT platform; the TOPSAN server was then migrated to the MindTouch platform, maintaining its overall look, feel, and functionality.

A remote application (TopsanApp) is used to retrieve protein information from external resources and to create and store pages for specified proteins on the platform through an API. This application is coded in C# and built on the .NET framework (Figure).

Access to view or edit a page can be set for all protein pages at the individual or group level. As part of our customization, we have developed a registration process that allows for some level of initiation and management of new users. All unregistered guests have access to view all protein pages on TOPSAN. Upon registration, a profile page, which may be edited by the new user, is created. It is here that they may enter scientific interests, upload profile images, and add their publications. A user's profile page is linked to every protein page contribution thereafter. After registration, the site administrator conducts a basic verification procedure that includes checking the user's e-mail address and cursory qualifications to ensure a level of validity of the user and to protect the system from abuse and spam. Upon successful verification, a user is given "Contributor"-level permissions that allow him/her to edit all protein pages and respective discussion pages.

A new protein page can be created on TOPSAN when requested by a user, but can also be created automatically by the system. At this point, pages are created automatically for all JCSG targets when they pass the crystallization stage and for all targets from other PSI centers when structures are deposited in PDB. At any moment, two fields, protein summary and ligand summary, are available for editing by registered users.
Other fields on the page, including links to precalculated analyses, are prepared automatically and placed there by annotation scripts based on parsing of input from several databases, such as targetDB or PDB, and annotation/analysis servers, such as PFAM, SCOP, and PDBSum.

An important part of studying a protein is comparing it to other proteins. TOPSAN allows users to create groups of proteins using a tagging system. This feature gives users the ability to create customized higher-order groupings based on their own criteria, expanding on the predefined classification systems already in use in the scientific community, which were mapped onto TOPSAN proteins using a similar system. For instance, the SCOP hierarchical classification system was mapped onto TOPSAN, where each protein that has a SCOP entry can be annotated at its respective Class, Fold, Superfamily, or Family level, as defined in SCOP. In addition to allowing users to describe features at a level higher than the individual structure, community-based structure classification to the SCOP hierarchy could be propagated back to SCOP. Utilizing predefined classification systems on TOPSAN to create groupings of proteins for higher-order annotations is currently being expanded to use CATH, PFAM, and others.

The protein NE1406 from Nitrosomonas europaea, solved by the JCSG in 2006 and deposited in PDB with code 2ich, was the first structure for the PFAM09410 protein family (unpublished observation). This family, called DUF2006 (Domain of Unknown Function #2006), has over 400 homologs in bacteria, archaea, and fungi. The PFAM DUF2006 family overlaps with the COG5621 family of predicted secreted hydrolases and is distantly related to two other PFAM families, of hydroxyneurosporene synthases (PFAM0743) and of Svf1-like proteins (PF08622).
While this information was available prior to structure determination, structure analysis of NE1406 subsequently revealed that it consists of two repeats of a novel variant of an up-and-down β-barrel structure and is structurally similar to proteins from the calycin superfamily of integral transmembrane proteins in mitochondria and in the outer membrane of Gram-negative bacteria. This structural similarity, accompanied by very low sequence similarity, led to an ongoing debate about possible homology and functional similarity between these two groups of proteins. On one hand, the characteristic sequence signature of the calycin superfamily was present in the N-terminal half of NE1406 and, in addition, two (out of three) short conserved regions (SCRs), characteristic of the lipocalin family, were also present in NE1406. On the other hand, analysis of the structural superposition of NE1406 with members of the calycin superfamily revealed a number of systematic differences; the β-sheets forming the NE1406 barrel were both longer and flatter than those in calycins, resulting in a narrower opening at the bottom of the barrel, at the site of the binding site in calycins. Systematic differences were also seen in secondary structure elements, such as the long C-terminal α-helix characteristic of lipocalins, which represent a structurally and functionally distinct subclass of the calycin superfamily (Figure).

A TOPSAN page usually starts from a simple summary of the structure and (if known) functional annotations of the family to which it belongs. Very often, distant structural similarity provides additional hints, leading to further questions and hypotheses. The discussion of the crystal structure of NE1406, for instance, is available at http://www.topsan.org/explore?pdbid=2ich and has contributions from Alexey Murzin, Arne Skerra, Andrei L. Lomize, Darren Flower, and members of the JCSG team.
This type of interaction involving scientists in several locations around the globe could have happened naturally over a longer period of time using standard interaction mechanisms, such as chance encounters in scientific meetings. However, the TOPSAN platform allowed this interaction to happen in cyberspace over the course of a few days among participants who never met in person and some of whom did not even know each other. Experimental verification of the NE1406 connection to calycins or lipocalins would present the first evidence of a lipocalin-related protein in the archaea domain and would settle the question of whether this family may have arisen via horizontal transfer to eukaryotic cells from the endosymbiotic alpha-proteobacterial ancestor of the mitochondrion.

TOPSAN (http://topsan.org) is a collaborative environment developed by the Bioinformatics Core of the Joint Center for Structural Genomics to facilitate annotations and collaborative research in order to characterize protein structures solved by Protein Structure Initiative production centers and other structural genomics groups, and, in turn, to facilitate the integration of these structures with research in the broad biological and biochemical community. Structural genomics, in its quest to provide broad coverage of protein space, frequently targets uncharacterized proteins whose specific functions are unknown. Sometimes an analysis of distant homology relationships or of data from expression arrays suggests a possible relation to another, better-characterized protein family, but the reliability of such predictions and the extent of possible functional similarity vary from case to case and need expert analysis. For other structures, features that are novel and impossible to predict from the sequence can be gleaned from the structure and hint at possible, previously unknown twists in the evolution of some members of otherwise well-characterized protein families.
In many such cases, structure analysis may lead to a new structural or functional hypothesis or an interesting speculation, which, while very intriguing, would likely remain unpublished under standard criteria for peer-reviewed publications. TOPSAN opens up a venue for such information to become available to a wider audience, increasing the chance that such partial information would be debated, discussed, and eventually combined with other supporting information from experimental work in another lab or, encouragingly, would prompt another researcher to perform a critical experiment to evaluate and test the function or the hypothesis. TOPSAN strives to maintain quality approaching that of peer-reviewed publications by ensuring that contributions come from registered users, who during registration have to present credentials as to their expertise in biology, as well as by monitoring and evaluating contributions to identify, on one hand, spammers or abusers of the system and, on the other, expert users who are ready to assist in ensuring quality by providing oversight of other users. With a mechanism for tracking authorship of contributions, TOPSAN offers the possibility that disparate collaborators, often unknown to one another, can pool their information and resources to arrive at a body of significant new knowledge. With these goals, TOPSAN aims to occupy a niche different from that of the Wikipedia-type scientific wikis and that of databases or depository sites, since the primary goal of both of those types of resources is to provide easier access to already existing and validated information.
At the same time, with its emphasis on ease of use and its lack of requirements for a minimal contribution size, TOPSAN is different from the peer-reviewed, standard scientific literature and allows more spontaneous, rapid communication and quick interaction.

In this article, we describe TOPSAN, The Open Protein Structure Annotation Network. The TOPSAN project evolved from an internal JCSG effort to annotate and more fully characterize the proteins that were determined in our center. This history is responsible for the current implementation being strongly focused on protein structure, but the TOPSAN concept can be readily generalized to any high-throughput project, such as PSI:Biology, to novel structures of other macromolecules determined by a structural biology lab, and, more broadly, to any research domain where there is a need for wider collaboration and a chance that critical pieces of information or expertise for any given project or research area are already known or available from members of the scientific community. The essence of the TOPSAN approach is to encourage new collaborations and to explore the use of diverse, often disparate data to find new, integrative views on important biological problems.
Therefore, while at this point TOPSAN is an experiment in the annotation and analysis of proteins targeted by structural genomics, it is also a model for collaborations in the potentially much larger and more complex research fields that are emerging in biology and other research disciplines.

DW developed the TOPSAN website, including page scripting, and wrote sections of the paper; SSK developed the core concepts of TOPSAN, planned and supervised the development of the TOPSAN implementation, and wrote sections of the paper; CB developed sample TOPSAN pages and edited numerous specific pages, contributed to the discussions about the TOPSAN concept, and wrote sections of the paper; IAW helped interconnect the TOPSAN concept with the research of the JCSG and provided input into the shape and form of TOPSAN pages; AG helped develop the TOPSAN concept, supervised the TOPSAN project, and wrote the paper; JW helped develop the TOPSAN concept, supervised the TOPSAN project, and wrote the paper. All authors participated in discussions and revisions of the paper. All authors read and approved the manuscript.

A new algorithm is presented that allows protein specificity residues to be assigned from multiple sequence alignments alone. This information can be used, amongst other things, to infer protein functions. We use a new algorithm to identify specificity residues and functional subfamilies in sets of proteins related by evolution. Specificity residues are conserved within a subfamily but differ between subfamilies, and they typically encode functional diversity. We obtain good agreement between predicted specificity residues and experimentally known functional residues in protein interfaces. Such predicted functional determinants are useful for interpreting the functional consequences of mutations in natural evolution and disease.

The diversity of biologic phenomena arises from the complexity and specificity of biomolecular interactions.
Nucleic acid and protein polymers encode and express biologic information through the specific sequence of polymer units (residues). The sequences and corresponding molecular structures are under selective constraints in evolution. At specific sequence positions, changes in sequence alter intermolecular communication, affect the phenotype, and can lead to disease.

Identifying interaction sites on protein molecules is difficult, both experimentally and theoretically. Most proteins have complicated three-dimensional shapes with interaction sites that are composed of contributions from nonsequential residues. Even with the three-dimensional structure known, however, the sites of functionally important interactions may not be obvious. Mutational experiments to probe the contributions of individual residues to such interactions are expensive. Computational methods to simulate the interactions of biologic macromolecules in molecular detail do not yet have adequate power and accuracy. Fortunately, biologic evolution has recorded rich and highly specific information in genetic sequences. For proteins, this provides the opportunity to analyze conservation patterns in amino acid sequences and extract valuable information about specific protein-partner interactions. In particular, residues in protein active sites and protein binding sites are under sufficiently strong selective pressure to allow their identification from an analysis of protein family alignments.

In a sufficiently diverse family, globally conserved residues are easily identified and are likely to be conserved as a result of strong selective constraints. A number of research groups have developed sophisticated methods to identify additional key residues that are involved in protein structure and function, especially residues that are strongly conserved within each subfamily but differ between subfamilies.
We present a new algorithm with which to solve the combinatorially complex problem of identifying specificity residues and, simultaneously, the corresponding optimal division into subfamilies. In our approach, called combinatorial entropy optimization (CEO), we optimize a conservation contrast function over different assignments (clusterings) of proteins to subfamilies. Hierarchical clustering is used to explore the possible partitionings. We validate the method by comparing sets of predicted specificity residues with sets of experimentally known functional residues, such as interaction residues observed in three-dimensional macromolecular complexes, and we obtain good agreement between prediction and observation. Interestingly, the predictive power of the method goes beyond protein-protein interactions and is applicable to any functional constraint that conserves specific residue types in particular positions across all members of a protein subfamily.

The implementation of the method takes a single adjustable parameter, the granularity parameter A. We tested the robustness of the results with respect to parameter changes. To explore the choice of A, we conducted tests in a number of protein families with A ranging from 0.0 to 1.0, in 0.001 increments. Ideally, the selected set of characteristic residues varies slowly with A in a region of suboptimal A. The tests determined A = 0.6 to 0.9 to be the optimal range, and we tested all local minima of ΔS0(A) in this range. We tested the robustness of the results for many protein families, with representative results for two protein families in Additional data file 1. We conclude that the assignment of sequences to subfamilies is reasonably consistent with prior biologic knowledge and that the selection of characteristic residues is reasonably stable in the range A = 0.6 to 0.9.
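The parameter scan can be sketched as follows. The full CEO clustering is replaced here by a hypothetical placeholder `cluster_at`, so this shows only the scan-and-select logic; the default range and step follow the values reported in the text.

```python
# Sketch of the granularity-parameter scan described in the text: run the
# clustering at A = 0.6 .. 0.9 in steps of 0.025 and keep the value of A
# whose partitioning minimizes the contrast function dS0(A). `cluster_at`
# stands in for the full CEO hierarchical clustering.
def scan_granularity(cluster_at, a_min=0.6, a_max=0.9, step=0.025):
    """Return (best_A, best_result); `cluster_at(A)` must expose .delta_s0."""
    best_a, best = None, None
    n_steps = int(round((a_max - a_min) / step))
    for i in range(n_steps + 1):
        a = a_min + i * step
        result = cluster_at(a)
        if best is None or result.delta_s0 < best.delta_s0:
            best_a, best = a, result
    return best_a, best

# Demonstration with a toy stand-in whose contrast is minimized near A = 0.7:
class _ToyClustering:
    def __init__(self, a):
        self.delta_s0 = (a - 0.7) ** 2

best_a, _ = scan_granularity(_ToyClustering)
print(round(best_a, 3))
```

In the real method, `cluster_at` would run the full hierarchical clustering and return the optimal partitioning and its contrast value for that A.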
For example, for protein kinases, of the top 30 characteristic residues at the overall minimum (A = 0.68), ranked by the column-specific difference entropies, 26 are in the top 30 at the second-best local minimum (A = 0.72); likewise, for ras-like small GTPases, of the top 20 residues at A = 0.833, 19 are in the top 20 at A = 0.85.

The clustering algorithm partitions the sequences of a protein family into subfamilies and simultaneously selects a set of characteristic residues. The value of the contrast function, which is optimized; the number of subfamilies; and the set of the characteristic residues, which constitute the resulting optimal configuration, all depend on the value of the parameter A. As a practical consequence of these tests, for a given protein family alignment the current software implementation of the algorithm scans the values A = 0.6 to 0.9 in increments of 0.025 and reports results for the value of A for which ΔS0(A) is minimum. For typical protein families this procedure yields results that resonate well with the biologic intuition of protein family experts, and the selection of characteristic residues is a good starting point for detailed analysis and design of mutational experiments. After an initial scan, users can of course select any range of the granularity parameter A as input and obtain more fine-grained or more unified families as output.

To illustrate typical results of the CEO algorithm applied to families of amino acid sequences, we chose the small GTPases, a large and functionally diverse protein domain family with members, probably, in all eukaryotes. These GTPases are molecular switches, timed by their rate of GTP hydrolysis, which is regulated by a number of interaction partners. These multiple functional interactions provide an ideal testing ground for specificity analysis.
A plausible evolutionary scenario involves repeated genomic duplication of an evolutionary ancestor and subsequent selection of variants, following mutation, in which the new family members have taken on a specific function. For the more than 100 distinct small GTPases in, for instance, mammalian genomes, many functions are known but our knowledge is far from complete. It is therefore interesting to analyze in which way our specificity analysis agrees with known divisions into functional protein subfamilies and to make explicit predictions pointing to candidate residues for mutational functional experiments.

Our analysis of 126 unique human sequences in the Protein Families (PFAM) Ras family defines 18 subfamilies, with from 2 to 15 proteins per subfamily, and 22 specificity residues that optimally discriminate between these subfamilies. For example, all Ras and Rho proteins (as far as names have been assigned in the literature) are in distinct subfamilies. Finer levels of classification also appear to agree with known functional classifications; for example, Rab5A, Rab5B, and Rab5C are in a subfamily distinct from that of Rab6A, Rab6B, and Rab6C. As a result of the systematic focus on specificity conservation patterns in our method, the implied functional distinctions between subfamilies constitute predictions when the protein class is known but functional details are not yet known.

Many of the 22 specificity residues in the ras family of GTPases map to well-known interaction sites, triggers, and readout points of conformational change, such as the switch I region and the switch II region (residues 63 to 70), plus six additional residues involved in interactions with other biologic molecules.
Although such detailed predictions may be the subject of a subsequent analysis, we propose here that the following residues in the ras-type GTPases, which are not in the 'switch' regions and have not been observed in protein-protein contacts in three-dimensional structures, are particularly interesting (Figure): G75, E7…

Various functional constraints can give rise to patterns of specificity residues, including macromolecular interfaces. To assess the predictive utility of the method for the prediction of interactions, we compared the overlap between the set of predicted specificity residues and known binding sites in several protein complexes. Although evolutionary constraints on specificity residues can be the result of any kind of functional interaction, residues in protein-protein interactions and protein-nucleic acid (NA) interactions are particularly well defined in three-dimensional structures of macromolecular complexes. A strong overlap of predicted specificity residues with binding sites would indicate that the method correctly identifies functional constraints on binding site residues. If that is the case, then one would expect a reasonable fraction of specificity residues to be binding site residues. We therefore assess the predictive potential of the implied prediction method, aware of the risk for over-prediction in cases in which other functional constraints operate outside binding sites.

The overlap is statistically significant at the level P < 0.1 in 19 out of 21 complexes (at the level P < 0.05 in 14 complexes). In practice, interpreting specificity residues as predicted binding site residues would yield accurate predictions in about half of the cases, which is a reasonable level for planning mutational experiments. The remaining cases do not necessarily represent false-positive predictions, because other types of functional constraints, such as internal support of interaction sites or requirements of overall protein stability and correct folding, may also give rise to subfamily-specific conservation patterns.
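One standard way to compute such an overlap P value is the hypergeometric tail probability; the article does not state which test it used, so this sketch is only one reasonable choice, with illustrative numbers.

```python
# Sketch of an overlap-significance test between predicted specificity
# residues and known binding-site residues, using a hypergeometric tail
# probability. This is an illustrative choice of test, not necessarily the
# one used in the article.
from math import comb

def overlap_pvalue(n_residues, n_binding, n_predicted, n_overlap):
    """P(overlap >= n_overlap) if n_predicted residues were picked at random
    from n_residues positions, n_binding of which are binding-site residues."""
    total = comb(n_residues, n_predicted)
    tail = sum(
        comb(n_binding, k) * comb(n_residues - n_binding, n_predicted - k)
        for k in range(n_overlap, min(n_binding, n_predicted) + 1)
    )
    return tail / total

# Example: a hypothetical 100-residue protein with 20 binding-site residues;
# 5 of 10 predicted specificity residues fall in the binding site.
print(overlap_pvalue(100, 20, 10, 5))
```

A small P value indicates that the observed overlap is unlikely to arise if the predicted residues were scattered at random over the protein.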
We now present specific examples of the distribution of specificity residues within the context of three-dimensional structure complexes. To evaluate the overlap of predicted specificity residues (and conserved residues) with binding sites, we analyzed known three-dimensional structures of eight protein-protein/peptide complexes and five protein-NA complexes containing 19 unique proteins or protein domains belonging to 15 different so-called superfamilies from the Structural Classification of Proteins database.

Specificity residues computed from family alignments reflect functional constraints. The distribution of specificity residues is particularly interesting for proteins engaged in multiple interactions. An example is the cell cycle kinase cyclin-dependent kinase 2 (CDK2), which plays a key role in the cell cycle in all eukaryotes. CDK2 forms complexes with cyclins (E and A) and specifically phosphorylates numerous substrates, such as retinoblastoma protein (pRb), retinoblastoma-like protein 1 (p107), cell division control protein CDC6, cyclin-dependent kinase inhibitor p27, tumor suppressor p53, and transcription factor E2F1. Currently, 72 proteins are reported in the Human Protein Reference Database as interacting with CDK2. CDK2 is tightly regulated; it requires specific activating phosphorylation at position Thr160 by a CDK-activating enzymatic complex (CAK); it can be inhibited by the Ink4 and Cip1/Kip1 families of cell cycle inhibitors or by phosphorylation in the glycine-rich loop by the Wee1 or Myt1 kinase. To derive specificity residues in CDK2, we used 390 sequences of protein kinases related to CDK2. We also derived specificity residues for cyclin A (379 sequences for domain N and 238 sequences for domain C).

A second example, the inhibitor p19-INK4d of the cell cycle kinase CDK6, illustrates the potential power of specificity residue analysis in predicting binding site residues.
The 21 specificity residues for p19-INK4d, predicted from our analysis of the alignment of 1048 human ankyrin repeats, map primarily to one patch on the surface of the molecule. Specificity residues vary characteristically between subfamilies but are conserved within each subfamily. The computational procedure ranks the key residues by their contribution to the optimal value of the contrast function, defined in terms of combinatorial entropy. One can use this residue ranking to prioritize further analysis and design experiments. The method also provides a signal-to-background criterion that is used to automatically classify all residues into three broad classes: specificity residues, conserved residues, and 'neutral' residues.

As far as we know, the first algorithmic approaches to the problem of identification of specificity residues appeared in the mid-1990s, from the groups of Sander and Cohen. The algorithm performs well in practice and has been tested in many protein families in consultation with domain experts. In the future, one interesting refinement of the algorithm would be a strict distinction between paralogous (same species) and orthologous (different species) variation, provided that enough sequences are available. We are also interested in applying the method to signal enhancement in the derivation of evolutionary trees by restricting phylogenetic analysis to the subset of functionally constrained residues. Our earlier work has demonstrated the way in which evolutionary trees of this type appear less noisy and potentially reach further back in evolutionary time.

Our results and examples demonstrate that the method can be used to identify functionally important residues from sequence information alone, without the use of three-dimensional structure or experimental functional annotation. Multiple applications are possible.
The ability to locate functional determinants will be useful for the identification of residues in active sites that determine binding specificity; for the prediction of binding sites of protein complexes with other proteins, NAs, or other biomolecules; for assessing the biologic or medical significance of nonsynonymous single nucleotide polymorphisms; and for planning sharply focused mutation experiments to explore protein function. A particularly valuable application may be the design of therapeutic compounds that are highly specific to one (or a select few) of a series of paralogous proteins. The method is publicly accessible via a web server.

On the intuitive level, the algorithmic problem is as follows. First, divide a given multiple sequence alignment into subfamilies such that each subfamily has a characteristic conservation signature at a number of sequence positions. Then, optimize the information in the subfamily division to achieve a reasonable compromise between the number of proteins in a subfamily and the number of characteristic residue positions used to distinguish the subfamilies from each other (and vice versa; the two extremes of 'one sequence per subfamily' and 'all sequences in a single subfamily' are uninformative).

The total number of permutations of residues in column $i$ of subfamily $k$ is given by the formula:

$$W_{i,k} = \frac{N_k!}{\prod_\alpha N_{\alpha,i,k}!}$$

Here $N_k$ is the number of sequences in subfamily $k$, and $N_{\alpha,i,k}$ is the number of residues of type $\alpha$ in column $i$ of subfamily $k$. The numerator is the total number of permutations of $N_k$ symbols, and the product in the denominator divides out the number of indistinguishable permutations for each residue type $\alpha$.

We then use the statistical or combinatorial entropy, summed over subfamilies,

$$S_i = \sum_k \ln W_{i,k},$$

which is an additive measure for comparing different distributions of residues. The statistical entropy depends on subfamily size. The entropy of the union of two subfamilies is always greater than or equal to the sum of entropies of the individual subfamilies.
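The permutation count and entropy above can be sketched in a few lines; this is an illustrative reconstruction, not the authors' reference implementation.

```python
# Toy computation of the column-wise combinatorial entropy described in the
# text: S = ln(Nk! / prod_a N_a!), summed over subfamilies, and its
# difference from the entropy of the undivided ('uniformly mixed') column.
from math import lgamma
from collections import Counter

def ln_factorial(x):
    # X! = Gamma(X + 1); lgamma also handles non-integer counts.
    return lgamma(x + 1)

def column_entropy(residues):
    """ln of the number of distinguishable permutations of one column."""
    counts = Counter(residues)
    return ln_factorial(len(residues)) - sum(ln_factorial(n) for n in counts.values())

def delta_entropy(column, subfamilies):
    """S_i summed over subfamilies minus S_i^0 of the undivided column (<= 0).
    `subfamilies` is a list of lists of sequence indices."""
    s_i = sum(column_entropy([column[j] for j in fam]) for fam in subfamilies)
    return s_i - column_entropy(column)

# A perfectly subfamily-specific column gives a strongly negative difference,
# while no division at all gives zero:
col = list("AAAACCCC")
print(delta_entropy(col, [[0, 1, 2, 3], [4, 5, 6, 7]]))   # -ln(70), about -4.25
print(delta_entropy(col, [list(range(8))]))               # → 0.0
```

The more negative the difference for a column under a given division into subfamilies, the more that column behaves like a specificity position.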
The entropy is equal to zero when all sequences are separated into subfamilies of a single sequence each; the entropy is maximal when all sequences are united in one family. The dependence of the statistical entropy on subfamily sizes allows one to formulate an optimization problem, namely to find the distribution of sequences into subfamilies that is maximally different from a random distribution of sequences. Subfamilies of sequences with many conserved residue patterns (which change across subfamilies) will contribute the most to the optimal solution.

We define specificity residues as residues that are conserved in a subfamily but differ between subfamilies. Thus, one is challenged to determine simultaneously the best division of the set of sequences into subfamilies and the subset of residues that best discriminates between these subfamilies. 'Best' is defined in terms of a contrast function that aims to measure the degree to which the specificity residues are distinctly different in each subfamily. The value of the contrast function is minimal for the best solution, with the result reported as a set of specificity residues and corresponding sequence subfamilies. The sections below describe the contrast function, the meaning of 'best', the optimization algorithm, and a criterion for selecting the top-ranked specificity residues.

For each column $i$ of the alignment, one can compute the combinatorial entropy $S_i$, as defined by Equation 3 (above). At one extreme, the column-specific $S_i$ is zero if residues of one type populate this column in each of the clusters, no matter whether this residue type is the same in all clusters or differs between clusters. At the other extreme, $S_i$ is maximal for the undivided ('uniformly mixed') alignment, namely

$$S_i^0 = \ln \frac{N!}{\prod_\alpha N_{\alpha,i}!}$$

where $N_{\alpha,i}$ is the number of residues of type $\alpha$ in column $i$ and $N$ is the total number of sequences (lines) in the alignment.
= \u0393(X + 1) [and \u0393(X + 1) .)Si = Si - L columns of the alignment:As the numerical measure of order over disorder, the entropy difference \u0394S0 is a negative number, this means that the absolute value of \u0394S0 is maximized.)is the contrast function to be minimized in the process of finding the best decomposition into subfamilies. as in Equation 6, and then choose the partitioning with the lowest value of \u0394g method with eacN clusters, each containing one sequence, in each clustering step all pairs of clusters are considered as merger candidates. The pair of clusters with the lowest value of the guide function is merged into one cluster. The merger steps are repeated until all sequences are in one cluster. At this stage the result is a complete trajectory of merger steps, which can be represented as a tree (not shown) and the task is to choose the best partioning (tree level). The best partioning is defined as the one with the minimal value of \u0394S0, or the maximal absolute value of the combinatiorial entropy difference between the actual and uniformly mixed ('random') distribution of residue types (Equation 6). The complexity of the hierarchical clustering algorithm is of O(N**2 ln N), where N is the number of sequences in the multiple alignment [Starting from lignment .To explore different partitionings of sequences into subfamilies, the guide function includes a penalty term . The penk and m) is defined as follows:The guide function used to evaluate a particular clustering step after merging clusters of size Nk and Nm. This second term simply captures the mere size contribution to the entropy and counteracts the tendency toward trajectories with early emergence of dominant large clusters. This tendency is due to the fact that the entropy of a larger system is always greater than the sum of the entropy values of its subsystems. 
Whatever the trajectories explored and whatever the devices used to guide the exploration of trajectory space, the evaluation of the best partitioning is exclusively based on the combinatorial entropy difference of Equation 6. Different values of the granularity parameter A in Equation 7 lead to radically different trajectories in clustering space. When A is approximately 1, the main contribution to the guide function of Equation 7 comes from the entropy difference due to sequence assignment to subfamilies; when A is approximately 0, clustering is driven by cluster size and mergers into smaller clusters are favorable. Changing the granularity parameter A in the guide function over a reasonable range of values and repeating hierarchical clustering explores sufficiently diverse partitionings to reach an optimum. Typical optimal values of A in tests for diverse protein families range between 0.6 and 0.9. Note that although the guide function determines the details of each clustering step, the final optimum is chosen as the minimum of the combinatorial entropy difference (Equation 6) in the two-dimensional space of two variables, the clustering step (tree level) and the granularity parameter A.

Thus, if we sort residue columns by their entropy difference ΔS0 and plot the resulting distribution (Figure ), we can distinguish columns whose residues are conserved globally from columns whose residues are conserved only in particular subfamilies (specificity residues). Examples of conserved residues are active site residues in enzyme families, and examples of specificity residues are residues lining active sites configured to bind a particular substrate optimally. The combinatorial entropy difference (Equation 6) is greatest for alignment columns with specificity residues, but close to zero for 'nonspecific' columns that do not discriminate between subfamilies. Such 'nonspecific' columns have globally conserved residues or diverse nonspecific residue distributions. All other residue columns have intermediate values of ΔS0. We compared entropy plots for the original alignment with the entropy plot for a randomized alignment.
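The merger trajectory described above can be sketched as follows. The exact guide function (Equation 7) is not reproduced in the text, so the form g = A × (entropy increase) + (1 − A) × (size penalty) used here is an assumption that merely reproduces the described behaviour (A ≈ 1 entropy-driven, A ≈ 0 size-driven):

```python
import math
from collections import Counter
from itertools import combinations

def entropy(cluster):
    """Sum of per-column combinatorial entropies ln(N!/prod N_a!) for
    a cluster of equal-length aligned sequences."""
    total = 0.0
    for col in zip(*cluster):
        counts = Counter(col)
        total += math.lgamma(len(col) + 1) - sum(
            math.lgamma(c + 1) for c in counts.values())
    return total

def cluster_trajectory(seqs, A=0.8):
    """Greedy agglomerative merging guided by a hypothetical guide
    function g = A * entropy_increase + (1 - A) * size_penalty.
    Returns the list of partitions visited, one per merger step."""
    clusters = [[s] for s in seqs]
    trajectory = [list(map(tuple, clusters))]
    while len(clusters) > 1:
        def guide(pair):
            k, m = pair
            d_entropy = (entropy(clusters[k] + clusters[m])
                         - entropy(clusters[k]) - entropy(clusters[m]))
            size_penalty = math.log(len(clusters[k]) * len(clusters[m]))
            return A * d_entropy + (1 - A) * size_penalty
        k, m = min(combinations(range(len(clusters)), 2), key=guide)
        merged = clusters[k] + clusters[m]
        clusters = [c for j, c in enumerate(clusters)
                    if j not in (k, m)] + [merged]
        trajectory.append(list(map(tuple, clusters)))
    return trajectory

# Two subfamilies with distinct conservation signatures merge last.
traj = cluster_trajectory(["AAQ", "AAR", "GGQ", "GGR"])
```

On this toy alignment the two-cluster level of the trajectory recovers the two subfamilies {AAQ, AAR} and {GGQ, GGR}; in the real method the best tree level would then be selected by the minimum of ΔS0.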
We require \u27e8s\u27e9i < 0.03 and fi 21,< 0.5 for globally conserved columns; mathematical details related to Equations 10 and 11 are provided in Additional data file 3.is the average entropy per residue for the residue distribution in alignment column Specificity residues - and, of course, globally conserved residues - reflect functional constraints that operate in evolution. They are an informational fossil record, most clearly visible over large evolutionary intervals during which the background distribution may vary considerably. The constraints can be of diverse origin, but it is plausible that all constraints can be traced to the requirements of intermolecular interactions that are important for survival. Therefore, prediction of specificity residues has broad applicability for the identification of functional interactions and, as a consequence, for ranking genetic variation, for planning mutation experiments, or for the molecular design of specificity.Here, we test one particular application of the identification of specificity residues from multiple sequence alignments: the prediction of intermolecular interfaces. We use known three-dimensional structures of protein and DNA complexes from the Protein Data Bank (PDB) as defining experimental reality against which predictions are compared. A key limitation is that there may be several such interfaces in a given protein family and that the complexes in the PDB contain only a subset of these. Nonetheless, it is instructive to see the extent to which specificity residues, interpreted as predicted interface residues, overlap with known intermolecular interfaces. A large overlap indicates good prediction accuracy, but over-prediction is expected.N, the number of the known interface residues is L, the number of the specificity residues is S, and the number of the specificity residues in the interface is A. 
If the specificity residues are randomly distributed, then what is the probability of observing A or more of the S specificity residues in the interface? For reasons of permutational degeneracy, one must compute the total number of indistinguishable variants of A distinct residues assigned to four sets of size K, M, J and (N - K - M - J) residues:To assess whether an observed overlap between specificity residues and intermolecular interface residues is statistically significant, we estimate the expected size of overlap in a random model, in which specificity residues are scattered randomly in the protein and may or may not end up in the known interface by chance. Suppose that the total number of protein residues is A or more of S specificity residues among the L interface residues is given by the following ratio:Then, the probability to observe at random S and L have A or more common residues; and the denominator represents the total number of all possible assignments up to complete overlap of the two sets. To correct for the Nc globally conserved residues, which by definition are excluded from being identifies as specificity residues, we use N - Nc in Equation 12 in place of N.Where the numerator represents the number of all possible assignments for which the sets of size The multiple sequence alignments are the only source of information used in the predictions. Predictions are best for accurate, nonredundant alignments of diverse sequences without significant gap regions. In the interface prediction tests, we used alignments from the 'Superfamily' and PFAMCDK, cyclin-dependent kinase; CEO, combinatorial entropy optimization; NA, nucleic acid; PDB, Protein Data Bank; PFAM, Protein Families.BR and CS specified the problem and developed the algorithm. BR and YA wrote the software and performed the data analysis. All wrote the paper.The following additional data are available with the online version of this paper. 
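The overlap-significance estimate described above (Equation 12) behaves like a hypergeometric tail probability; a sketch under that assumption (to apply the conserved-residue correction, pass N − Nc in place of N):

```python
from math import comb

def overlap_pvalue(N, L, S, A):
    """Probability of observing A or more of S randomly placed
    specificity residues among the L interface residues of an
    N-residue protein (hypergeometric tail)."""
    return sum(comb(L, a) * comb(N - L, S - a)
               for a in range(A, min(S, L) + 1)) / comb(N, S)

# Small example: 100 residues, 20 in the interface, 10 predicted,
# 8 of the predictions in the interface -- far above chance.
p = overlap_pvalue(100, 20, 10, 8)
```

By the Vandermonde identity the tail from A = 0 sums to exactly 1, which gives a quick sanity check on the implementation.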
Additional data files: source code of the core method is available on request from the authors, subject to acceptance of a public domain license; a table reporting the results of a robustness analysis of the method, as described in the main text; a table reporting the results of optimal clustering of 126 GTPases of the human Ras superfamily; and a tutorial section that explains the link between the common notion of probability entropy (information entropy) and the less well known formulation of combinatorial entropy."} {"text": "Identification of protein-protein interface residues is crucial for structural biology. This paper proposes a covering algorithm for predicting protein-protein interface residues using features that include the protein sequence profile and residue accessible surface area. The method exploits the characteristics of the covering algorithm, which is simple, of low complexity, and accurate for high-dimensional data. The covering algorithm achieves performance comparable to a support vector machine and a maximum entropy model on our dataset: a correlation coefficient (CC) of 0.2893, 58.83% specificity and 56.12% sensitivity on the Complete dataset, and 0.2144 (CC), 53.34% (specificity) and 65.59% (sensitivity) on the Trim dataset, in identifying interface residues by 5-fold cross-validation on 61 protein chains. This result indicates that the covering algorithm is a powerful and robust protein-protein interaction site prediction method that can guide biologists in designing specific experiments on proteins. Examination of the predictions in the context of the three-dimensional structures of proteins demonstrates the effectiveness of this method. Revealing the mechanisms of protein-protein interactions is crucial for understanding the functions of biological systems. Furthermore, the ability to predict interfacial sites is also important in mutant and drug design.
The availability of more and more protein structures in the Protein Data Bank (PDB) makes prediction from structure practical. Traditional methods treat protein-protein interaction site prediction as a classification task and study each residue separately; Li Ming-Hui et al. take this approach . In this study, we mainly focus on a novel method developed for detecting interacting surfaces in proteins starting from their three-dimensional structure. This is particularly important in determining protein function, especially for proteins of known structure but unknown function. Ofran et al. investigated this problem as well .

2.2.1. The covering algorithm method is trained to predict whether or not a surface residue is located in the interface, based on the identity of the target residue and its sequence neighbors. A five-fold cross-validation strategy is adopted for our experiments. Specifically, each dataset is divided into five parts; the training set is composed of four parts and the remainder is the testing set, yielding five pairs of training and testing sets. We then carry out the experiment on these five pairs. For each dataset (see the collection of datasets), this is repeated ten times. In total, 2 × 5 × 10 = 100 experiments are performed, and the average performance of the results is used to evaluate each method.

2.2. The covering algorithm (CA) classifier is evaluated using 5-fold cross-validation on two kinds of datasets. In order to examine whether the CA method learns sequence characteristics that are predictive of target residue functions, we run a control experiment in which the class labels are randomly shuffled. The correlation coefficient (CC) obtained on the class-shuffled dataset is 0.0604 (our method: 0.2893 on the Complete data) and −0.0065 (our method: 0.2124 on the Trim data), showing that the covering algorithm performs better than a random predictor (CC ≈ 0).
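The cross-validation protocol above can be sketched as follows; the interleaved fold-assignment scheme is an arbitrary illustrative choice:

```python
def five_fold_splits(items, n_folds=5):
    """Yield (train, test) partitions for n-fold cross-validation:
    the data are divided into n_folds parts; each part serves once
    as the testing set while the remaining parts form the training
    set."""
    folds = [items[i::n_folds] for i in range(n_folds)]
    for i in range(n_folds):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i
                 for x in fold]
        yield train, test

# 2 datasets x 5 folds x 10 repetitions = 100 experiments in total;
# performance is averaged over all of them.
```

Each of the five splits uses four fifths of the data for training and the held-out fifth for testing, exactly as described in the text.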
2.3. In some situations (e.g. key interface residue recognition for site-specific mutagenesis), we need higher sensitivity and lower specificity. This requirement can be met by modifying the parameters used by the covering algorithm (CA).

2.4. Support vector machines (SVMs) and a maximum entropy model (ME; http://www-nlp.stanford.edu/software/classifier.shtml) are selected for comparison with our method. They are all discriminative classification methods. SVMs are a state-of-the-art method for predicting protein-protein interaction sites [15,16,28]. In order to illustrate the effectiveness of our approach, we plotted the ROC curves for the Complete and Trim datasets (shown in the figures).

2.5. Here we give two examples that are predicted by the CA, SVM and ME classifiers. The first example is the refined 2.8 Å structure of an alphabeta T cell receptor (TCR) heterodimer complexed with an anti-TCR Fab fragment derived from a mitogenic antibody . The second example is the Jel42 Fab fragment/HPr complex .

3. Each surface residue is predicted to belong to a particular interaction site on the basis of the characteristics of its residue spatial cluster. Interaction site residues and non-interaction residues are used as positive and negative data, respectively.

3.1. In our experiments protein-protein interaction data are extracted from a set of 70 protein-protein complexes in an independent study . Interfaces are formed mostly by residues that are exposed to the solvent if the partner chain is removed, so we mainly focus on surface residues. The solvent accessible surface area (ASA) is computed for each residue in the unbound molecule (MASA) and in the complex (CASA) using the DSSP program ; surface residues are then selected by a threshold on relative ASA. The fact that there are more non-interface residues than interface residues in the training set leads to higher specificity and lower sensitivity for many classifiers such as SVMs and ANNs [13].
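The surface and interface residue selection can be sketched as below. The actual cutoff values are not recoverable from the text, so the 16% relative-ASA threshold, the 1 Å² burial threshold, and the small MAX_ASA table are all assumptions for illustration:

```python
# Illustrative maximum ASA values (A^2); a real table covers all
# 20 residue types.
MAX_ASA = {"ALA": 106.0, "ARG": 248.0, "ASP": 163.0, "GLY": 84.0}

def is_surface(residue, masa, rel_threshold=0.16):
    """Surface test on the unbound chain: relative ASA (MASA divided
    by the maximum ASA of the residue type) must exceed a cutoff.
    The 16% value is an assumption, not taken from the text."""
    return masa / MAX_ASA[residue] >= rel_threshold

def is_interface(residue, masa, casa, buried_threshold=1.0):
    """Interface test: a surface residue whose ASA decreases by at
    least buried_threshold A^2 upon complexation (assumed cutoff).
    masa is the ASA in the unbound chain, casa in the complex."""
    return is_surface(residue, masa) and (masa - casa) >= buried_threshold
```

In practice the MASA and CASA values would come from two DSSP runs, one on the isolated chain and one on the complex.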
3.2. Interface prediction relies on characteristics of residues found in interfaces of protein complexes, which differ from those of other residues. The most prominent involve sequence conservation, proportions of the 20 types of amino acids, secondary structure, solvent accessibility, side-chain conformational entropy, etc. Most of these characteristics are structural information. In this article, we choose the sequence profile and residue accessible surface area as our test features.

3.2.1. Sequence profiles are sequence information denoting potential structural homologs. Protein function information is embedded in the protein sequence, but how it can be extracted is a pivotal problem. A good candidate technique for extracting such information is multiple sequence alignment (MSA). A protein sequence profile is a result of MSA which shows which kinds of amino acid appear at a given position of the protein primary structure. Herein, the protein sequence profiles are extracted from the HSSP database .

3.2.2. The accessible surface area (ASA) feature represents the relative accessible surface area; for convenience, we use ASA to denote the relative accessible surface area of a residue. The ASA of each residue is calculated using the DSSP program . Here Nn,j is the number of amino acids of type j at position n, Xn is the residue at position n, and ASA(Xn) denotes the accessible surface area of residue Xn. In order to include the environment of the target residue, the profiles of sequentially neighboring residues within a window of n residues are also included in the feature vector.

3.3. Suppose that we are given an input set K = {X1, X2, ..., XK}, with Xk = (xk, yk), where x1, x2, ... denote input vectors of the covering algorithm and y1, y2, ..., yk ∈ {1, −1} denote the labels of x1, x2, ..., xk. Now suppose K is divided into s subsets; in this paper, we discuss s = 2 (i.e. two classes corresponding to interface residues and non-interface residues).
Data-based machine learning explores rules to predict new data from observed data; the covering algorithm was proposed by Zhang Ling and Zhang Bo for classification. First, the original input space is transferred into a quadratic space by the use of a global projection function.

3.3.1.
Step 1. Make a cover C(i) (i = 1 at the beginning) which covers only points of K1; these points are added to the covered set D.
Step 2. Take a point p of K/D; suppose p belongs to Kj. Make a cover C(i) which covers only points of Kj, add the covered points to D, set i = i + 1, and repeat Step 2 until K/D = Φ.
Step 3. Suppose we obtain the cover set C = {C1, C2, ..., Ck}. For a test point, if it lies in a Ci which covers points of K1, output 1, otherwise −1.
In fact, C(i) is a sphere domain with center w and radius ri.

3.3.2.
Step 1. If K1 or K2 is empty, stop. Otherwise, supposing K1 ≠ Ø, randomly select ai ∈ K1.
Step 2. Seek a sphere domain C(ai) with center ai. Suppose C(ai) ∩ K1 = Di, i = 1, 2, ..., with D0 = Ø.
Step 3. Set Cj = C(ai), K1,j = Cj ∩ K1, K2 ← K1 / K1,j, K1 ← K2, j ← j + 1, and go to Step 1 of Algorithm 1.
More details about the covering algorithm can be found in [26].

Hence, by using the training set we can calculate all the parameters W = {ω = (ai), θ = (θi)} based on the above equations; by using the testing set, the performance of our algorithm can be evaluated.

3.4. In our experiment, predictors are generated using the covering algorithm (CA) to judge whether a residue is located on an interface or not. The CA is simple, of low complexity, and frequently demonstrates high accuracy on high-dimensional data. It can also handle large feature spaces and condense the information given by the training dataset.
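A minimal sketch of the sphere-cover construction, assuming two linearly separated classes and one simple radius choice (just inside the nearest opposite-class point); the paper's exact radius rule may differ:

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def build_covers(samples):
    """Greedy sketch of the covering algorithm: each cover is a
    sphere centred on an uncovered point, with radius chosen just
    inside the nearest opposite-class point. Assumes both classes
    are present and no point carries both labels."""
    covers = []
    remaining = list(samples)
    while remaining:
        center, label = remaining[0]
        enemies = [x for x, y in samples if y != label]
        radius = 0.999 * min(dist(center, e) for e in enemies)
        covered = [(x, y) for x, y in remaining
                   if y == label and dist(center, x) <= radius]
        covers.append((center, radius, label))
        remaining = [s for s in remaining if s not in covered]
    return covers

def classify(covers, point, default=-1):
    """A test point receives the label of the first cover
    containing it; points outside all covers get the default."""
    for center, radius, label in covers:
        if dist(center, point) <= radius:
            return label
    return default

# Toy data: two well-separated classes on a line.
samples = [((0.0, 0.0), 1), ((0.2, 0.0), 1),
           ((3.0, 0.0), -1), ((3.2, 0.0), -1)]
covers = build_covers(samples)
```

On this toy set two covers suffice, one per class, and nearby test points fall into the correct sphere.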
Here, we consider only surface residues in the training process; the target value is 1 (positive sample) if the residue is classified as an interface residue and −1 (negative sample) for a non-interface residue. We construct our CA predictor using sequence profile and ASA attributes, with the input encoded following the method used by Fariselli et al. .

3.5. Interface prediction has to fulfill two competing demands. The predictor should cover as many of the real interface residues as possible, but at the same time should predict as few false positives as possible. These two demands are measured by sensitivity and specificity, respectively. Let TP = the number of true positives; FP = the number of false positives (residues predicted to be interface residues that are in fact not interface residues); TN = the number of true negatives; FN = the number of false negatives; and N = TP + TN + FP + FN. Then:

sensitivity = TP / (TP + FN)
specificity = TP / (TP + FP)
CC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

Sensitivity measures the fraction of interface residues that are identified as such. Specificity measures the fraction of the predicted interface residues that are actually interface residues. The correlation coefficient measures how well the predicted class labels correlate with the actual class labels. It ranges from −1 to 1, where a correlation coefficient of 1 corresponds to perfect prediction and 0 corresponds to random guessing.

4. Generally speaking, identifying residues in protein-protein interaction sites is an extremely difficult task, let alone in the absence of any information about partner chains. In this paper, as we have presented above, given the absence of information about partner proteins, we propose a new approach to predict interface sites from protein sequence and structure characteristics. This method adequately exploits the characteristics of the covering algorithm, which is simple, of low complexity, and accurate for high-dimensional data.
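The three performance measures, with the definitions given in this section (note that 'specificity' here is the fraction of predicted interface residues that are real, i.e. the precision, and CC is the Matthews-style correlation):

```python
from math import sqrt

def prediction_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and correlation coefficient as
    defined in this section: sensitivity is the fraction of real
    interface residues recovered, specificity the fraction of
    predicted interface residues that are real, and CC the
    correlation between predicted and actual labels."""
    sensitivity = tp / (tp + fn)
    specificity = tp / (tp + fp)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    cc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, cc
```

A perfect predictor yields (1.0, 1.0, 1.0); a predictor indistinguishable from label shuffling yields CC near 0, matching the control experiment above.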
A relatively high false positive ratio is a troublesome problem in protein-protein interaction site prediction. Some investigators reduce the false positive ratio by eliminating isolated raw positive predictions ."} {"text": "While protein sequences and structures can be experimentally characterized, determining which residues build up an active site is not a straightforward process. In the present study a new method for the detection of protein active sites is introduced. This method uses local network descriptors derived from protein three-dimensional structures to determine whether a residue is part of an active site. It thus does not involve any sequence alignment or structure similarity to other proteins. A scoring function is elaborated over a set of more than 220 proteins having different structures and functions, in order to detect protein catalytic sites with a high precision. The scoring function was based on the counts of first-neighbours on side-chain contacts, third-neighbours and residue type. Precision of the detection using this function was 28.1%, which represents a more than three-fold increase compared to combining closeness centrality with residue surface accessibility, a function proposed in recent years. The performance of the scoring function was also analysed in detail over a smaller set of eight proteins. For the detection of 'functional' residues, which were involved either directly in catalytic activity or in the binding of substrates, precision reached a value of 72.7% on this second set. These results suggested that our scoring function was effective at detecting not only catalytic residues, but also any residue that is part of the functional site of a protein. Having been validated on the majority of known structural families, this method should prove useful for the detection of active sites in any protein with unknown function, and for direct application to the design of site-directed mutagenesis experiments.
In genetic studies, identifying mutations at or near an active site can help explain biological malfunctions. Knowledge of an active site, its geometry and physico-chemical properties, is essential for the efficient design of inhibitors of malignant proteins . Several families of methods for detecting active sites from protein structures have been proposed ; among them, graph-based approaches represent a protein structure as a network of interacting residues.

This last representation, which facilitates mathematical manipulations of protein structures, is used in the current work. In such networks, each protein residue is a node, and two residues are connected by an edge if they have atoms within a given distance from each other. In the original definition, only contacts between amino-acid Cα atoms were considered [18]; this definition was later extended [21].

Closeness centrality of a node (a residue) within a network (a protein structure), as used in recent studies for the detection of protein catalytic sites [15], takes into account the path lengths from that node to all other nodes of the network. The main features we thus focused on to describe protein residues were the number of 'local' neighbours of a node, i.e. nodes that are distant from this node by a path-length of one or two edges within the residue network. It has been shown that 2-connectivity, the count of the number of nodes distant by at most two edges from a given node, produced a similar efficacy at detecting protein active sites as closeness centrality .

Residue interaction networks were generated from the three-dimensional structures of a large test set of 226 proteins (see Methods for details), each belonging to a distinct SCOP superfamily and having identified catalytic site residues, as reported in the Catalytic Site Atlas . For each residue interaction network, different network parameters were analysed.
Individual scores were next transformed into Z-score values. Detection based on closeness centrality (see Methods for definition) yielded a precision of 8.22% for catalytic sites . Our scoring function combined three characteristics of a given residue: the number of residues in contact with it through side-chain atoms (Dg1SC), the number of residues located at a path-length of three (Dg3) and the type of the residue (Equation 1). It was used to detect catalytic residues over a set of 226 proteins belonging to different structural families. The score obtained for each residue was then transformed into a normalised MDev value. Moreover, the threshold value of MDev was optimized in order to produce a maximal value for a measure of performance that combined the precision and coverage values of the detection. Indeed, in order to have an efficient tool for the prediction of residues interesting for site-directed mutagenesis, it is important both to predict few non-catalytic residues (high precision) and to have a high likelihood that a catalytic site is effectively predicted as such (high coverage). Still, precision tends to increase with increasing values of thresholds, while coverage displays an opposite trend. We thus optimised our detection of catalytic sites for a maximal value of a measure of performance which combined precision and coverage, the F-measure . Besides 'catalytic' residues, we also considered 'functional' residues: residues involved in the binding of substrates or cofactor(s), as well as in the catalytic activity of the protein, even though not directly involved in the catalytic reaction.

Detection of 'catalytic' residues was run on the two threshold values which yielded maximal values for F1 and F2, MDev1 and MDev2 respectively, with results summarized in Table . TEM β-lactamase is responsible for bacterial resistance to penicillin and cephalosporin antibiotics. For this protein, catalytic Ser70 was not detected, while the two residues which are likely to play the role of a base for the activation of this serine, Lys73 and Glu166 , were detected at MDev1.

In pancreatic phospholipase, an enzyme involved in the metabolism of phospholipids, catalytic Asp99 was detected, but only at MDev1; active-site His48, calcium-binding Asp49 and substrate-binding Arg6 were also among the functional residues considered.

Alkylguanine transferase is a key enzyme in DNA repair which catalyses the dealkylation of O6 from guanine nucleotides. Prediction on this enzyme yielded numerous positive residues, among which catalytic Cys145 was not present. Still, the two residues proposed as activating this residue by deprotonation, His146 and Glu172 , were predicted.

For ubiquitin-conjugating enzyme 1, an enzyme involved in the transfer of ubiquitin entities to protein substrates, none of the three residues predicted as catalytic possessed a described role in enzyme activity . Still, the MDev values for our scoring function on each residue of this protein are shown in Figure .

Phenylalanine hydroxylase catalyzes the aromatic-ring hydroxylation of the amino-acid phenylalanine to produce tyrosine. Three of the ligands of the active-site iron of this enzyme were detected (Table ).

Prolyl-isomerase 1 catalyses the cis-trans isomerisation of proline residues, and recent studies have linked this protein to cancer and Alzheimer's disease . His59 was among the residues examined. In the Fe3+-binding protein, a protein involved in bacterial iron uptake, one residue detected at MDev2 is an iron ligand (Glu57), and the second one (Arg101) interacts with the substrate phosphate and is located close to the iron .

Dg1SC both produced a higher average MDev value over catalytic residues from the extended set and was less correlated to Dg3 than e.g. Dg1. The combined use of local (Dg1SC) and semi-local (Dg3) parameters within the residue-interaction network that describes a protein structure enabled a better detection of protein catalytic sites than closeness centrality, a parameter that considers path lengths between all residues of the network.
They therefore suggest that local or semi-local organisation of residues is more critical than whole-protein structural information to define them as catalytic or not, as shown by the increased precision of detection obtained over 226 representative protein structures .

The final performance of the detection was measured using both precision and coverage, instead of the more classical specificity and sensitivity (coverage) combination. This choice was motivated by two reasons: a practical one and a methodological one. The practical reason is the applicability of the method to the choice of protein amino-acids that would be interesting for site-directed mutagenesis experiments. Both a high rate of correctly predicted sites, i.e. a low false detection rate, and a high coverage of functional sites are the characteristics one would require for efficient prediction. Indeed, these two criteria will provide both a low rate of negative experiments and a high likelihood of detecting the active site for a given protein. The methodological reason has grounds in the rates of occurrence of catalytic residues in the extended set. The 226 proteins have 62083 amino-acids in total, with 777 catalytic residues. Therefore, the sample is highly unbalanced, with percentages of real positives (r+) and real negatives (r-) over this set of respectively 1.3% and 98.7% of the full sample. In such a case, small variations in the number of correct predictions (and therefore of non-correct predictions) will have a low influence on measures of performance that involve the number of residues predicted as non-catalytic (p-), e.g. specificity or the true negative rate. On the contrary, similar variations will have a high influence over coverage or the predictive value of positives, whose evaluation only involves positive residues . For these two reasons, precision and coverage were chosen as performance measures.

In order to obtain a single measure of performance for our detection, precision and coverage were combined into an effectiveness measure, the F-measure (see Methods). Thresholds on MDev that produced maximal values for this effectiveness measure were chosen in two conditions: one where an equal relative importance was conferred to precision and coverage , and one where precision was given a more important weight . The use of two distinct threshold values provides the user with two sets of residues to analyse of different sizes: a broad set presenting a high coverage, with low chances to miss an active site and more experiments to perform (MDev1), and a narrow set, with both fewer false positives and lower chances to hit an active site, and also fewer experiments to perform (MDev2).

Compared with the Z-score on closeness centrality and RSA criteria as proposed by Amitai et al. , respective F-measure values of 15.1% and 11.5% were obtained when applying the method of Amitai and coworkers at β = 1 and β = 2, while our method produced values superior to 20% . At MDev2, the scoring function also proved better at detecting 'functional' residues (65.8% vs. 31.6%) than at detecting purely catalytic sites . On a smaller set of 8 proteins, use of the same method produced a precision of 45.5% for the detection of catalytic sites and, when extending the measure of performance to all residues that were crucial to protein activity, which we coined 'functional', precision of the detection increased to 72.7%. The present scoring function, while optimised for 'catalytic' residues, thus proved even more efficient at detecting 'functional' residues. The high precision obtained with this method proved the influence of the local environment of residues in structurally organising protein active sites. The method should be of help in designing site-directed mutagenesis experiments with a low time-cost.
The method can be applied to any protein structure by submission of a PDB file to the corresponding author; an online version for direct submission will soon be available on our web-page . Two sets of residues will be produced: one that considers the residues predicted as catalytic or functional at high coverage and average precision (MDev ≥ MDev1), and another set, a subset of the previous one, with the residues predicted at high precision and average coverage (MDev ≥ MDev2).

A non-redundant set of enzymes was selected from the Catalytic Site Atlas . Superfamilies which included fewer than two proteins, as well as those belonging to the 'low resolution proteins' and to the 'designed proteins' classes, were excluded. A single protein was randomly selected for each remaining SCOP superfamily. The resulting set contained 226 proteins, as listed in an additional file.

Residue interaction networks were calculated from protein three-dimensional structures on all atom-to-all atom contacts. Two residues were considered in contact if they had a pair of not covalently-connected atoms that lay within a distance of 4.2Å. Side-chain-to-side-chain contacts represented contacts between any two atoms not belonging to the amino-acid moiety (Cα, N or carbonyl group) of two distinct residues.

Different network parameters were calculated for each residue within the resulting networks, such as direct neighbours defined on all-atom contacts (Dg1) or on contacts involving only side-chain atoms (Dg1SC). More generally, the Dgp value for a given node, with p an integer number, represents the number of nodes that are located at exactly p steps (or edges) from that node.

For each score x used to characterize a residue, the maximum (xmax), average (⟨x⟩) and standard deviation (σ(x)) for that score over each protein residue-residue contact network were calculated. Parameters were then normalised either as Z-scores or as MDev values,

MDev(x) = (x − ⟨x⟩) / (xmax − ⟨x⟩);

MDev was chosen in order to measure a deviation from the maximum, rather than a deviation from the average as in the standardised Z-score.
It was moreover preferred to a plain ranking with selection of a fixed number of residues for all proteins, since the number of residues that define an active site can differ from one protein to another and between catalytic functions. MDev produced a value of 0 for a residue with a parameter value x equal to its average over the protein it belonged to, 1 for the residue(s) with x equal to xmax for the protein, and negative values for residues with x values lower than the average parameter value over the protein.
For each network scoring function, precision and coverage of positives over the protein set were measured. Residues above the MDev threshold were considered as 'positives'. With r+ and r− the numbers of real catalytic and non-catalytic residues in the set under consideration, p+ and p− the numbers of protein residues respectively predicted as involved and not involved in catalysis, and TP the number of correctly predicted catalytic residues, the values for the different measures were: precision = TP/p+ and coverage = TP/r+. For measuring the performance of the detection method, a combination of precision and coverage was also used, the F-measure: Fβ = (1 + β²) · precision · coverage / (β² · precision + coverage).
Network parameters were retained on two criteria: i) they displayed a distribution of MDev values biased towards 1 for catalytic residues from the extended set, and ii) they possessed the smallest pairwise correlations between the parameters considered over these 1478 catalytic residues.
The likelihood of each amino-acid to be a catalytic residue was also considered in our scoring function. A subset of the Catalytic Site Atlas with no overlap with the extended test set was defined, with the following rules: only entries with literature evidence were included, a single chain was considered for PDB entries with multiple chains present in the Atlas, and proteins from the 'low resolution proteins' and 'designed proteins' classes were excluded. The resulting set included 546 proteins, for a total of 1478 catalytic residues.
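As a minimal sketch (our own code, with hypothetical variable names), the MDev score and the precision/coverage/F-measure defined above can be computed as follows:

```python
# MDev: 0 at the protein average, 1 at the protein maximum, negative below
# average (assumes max > mean, i.e. non-constant scores).
def mdev(values):
    mean = sum(values) / len(values)
    vmax = max(values)
    return [(v - mean) / (vmax - mean) for v in values]

def precision_coverage(predicted, real):
    """predicted, real: sets of residue indices."""
    tp = len(predicted & real)
    precision = tp / len(predicted) if predicted else 0.0
    coverage = tp / len(real) if real else 0.0
    return precision, coverage

def f_measure(precision, coverage, beta=1.0):
    if precision == 0 and coverage == 0:
        return 0.0
    return (1 + beta**2) * precision * coverage / (beta**2 * precision + coverage)

scores = [1.0, 2.0, 3.0, 6.0]            # mean 3.0, max 6.0
md = mdev(scores)                        # last residue scores exactly 1
predicted = {i for i, m in enumerate(md) if m >= 0.0}  # threshold MDev at 0
p, c = precision_coverage(predicted, real={3})
```

Thresholding `md` at two values would give the broad (MDev1) and narrow (MDev2) residue sets discussed above.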
The combined scoring function thus attributed to each residue a score combining its network parameters and its amino-acid type. The variable parameters kexp and ktype were chosen in order to produce a maximal performance value for the detection, and had final values of 0.25 and 50, respectively.
For a validation at the residue scale of the scoring parameter defined on the extended set, eight proteins belonging to different functional classes were chosen for detailed analysis. These proteins were as follows, with the respective PDB three-dimensional structures used to generate the residue contact networks: TEM β-lactamase from Esch. coli, porcine pancreatic phospholipase, DNA-alkylguanine transferase, ubiquitin-conjugating enzyme 1 from Sacch. cerevisiae, phenylalanine hydroxylase from Chr. violaceum, human peptidyl-prolyl isomerase, ferric binding protein from Hæm. influenzae and bovine β-trypsin.
ML provided funding and working tools to IF and PS. IF contributed to the creation of the computer codes and performed part of the calculations. PS created and analysed the extended and validation sets and created and optimised the scoring function. PS and IF wrote the manuscript.
Distribution of MDev values calculated on different network parameters over the catalytic residues present in the extended set of proteins. The figure presents the distribution of MDev values for the different network parameters that were considered, in order to reveal biases towards the maximum MDev value of 1. Click here for file
Pairwise correlations for different network parameters for the catalytic residues present in the extended set of proteins. Correlation values for different network parameters over the residues labelled as 'catalytic' in the Catalytic Site Atlas are given for the 226 proteins from the extended set of proteins.
Parameters used are closeness centrality, used as a benchmark, and the neighbour counts Dg1, Dg2 and Dg3, as well as the normalised count Dg1SC-R. Click here for file
Receiver-operator characteristic curve for the detection of catalytic sites over the extended set of proteins when using the scoring function defined in Equation 1. The curve shows the relationship between specificity and coverage when using our scoring function for the detection of catalytic sites. Each point corresponds to a different threshold on MDev values. Click here for file
Description of the extended set of proteins. PDB identity and chain for all proteins from the extended set are provided, as well as the corresponding SCOP domain of the chain used. Click here for file

Shannon entropy applied to columns of multiple sequence alignments as a score of residue conservation has proven one of the most fruitful ideas in bioinformatics. This straightforward and intuitively appealing measure clearly shows the regions of a protein under increased evolutionary pressure, highlighting their functional importance. The inability of the column entropy to differentiate between residue types, however, limits its resolution power. In this work we suggest generalizing Shannon's expression to a function with similar mathematical properties that, at the same time, includes the observed propensities of residue types to mutate to each other. To do that, we revisit the original construction of BLOSUM matrices, and re-interpret them as mutation probability matrices.
These probabilities are then used as background frequencies in the revised residue conservation measure. We show that joint entropy with BLOSUM-proportional probabilities as a reference distribution enables detection of protein functional sites comparable in quality to a time-costly maximum-likelihood evolution simulation method (rate4site), and offers greater resolution than the Shannon entropy alone, in particular in cases when the available sequences are of narrow evolutionary scope.
As groundwork for the mutational study of a protein, many researchers will choose the comparative analysis of the protein's homologues. Column entropy in the multiple sequence alignment has proven useful for this purpose, as illustrated in Figure. While the entropy is by no means the only method to estimate residue conservation, the background frequencies enable the estimate of the statistical (im)probability of an observed mutation occurring at random. The background frequencies, we suggest, are already available in terms of BLOSUM matrices, even though some adjusting is needed to turn them into matrices of transition probabilities. In distinction from earlier works using joint entropy with a Kullback-Leibler background distribution to detect co-evolution across multiple alignment columns, we propose its use on single columns.
A column in a multiple sequence alignment can be thought of in the following way: if the sequence set were a fair sample of all possible orthologs, and the variability of each residue depended only on its type, the amino acid population in each column would reflect the ease with which the residue types are exchangeable in a general case.
Setting aside the problem of the fairness of the sample, which we do not attempt to address here, the difference from the expected distribution is a result of the particular evolutionary forces on the residue, or the lack thereof.
The Shannon entropy of an alignment column – represented by a distribution of residue types X – is evaluated as
S(X) = −Σx f(x) ln f(x),    (1)
where x is one of the 20 residue types, and the probability of occurrence of x, P(x), is estimated by f(x), the frequency of the appearance of the residue type within the alignment column:
f(x) = N(x)/L,    (2)
where N(x) is the number of appearances of residue type x, and L is the length of the column. To find an expression which will incorporate residue mutation propensity, we first look at the expression for the joint entropy of two distributions X and Y,
H(X,Y) = −Σx Σy P(x,y) ln P(x,y),    (3)
and apply it to a single distribution X:
H(X,X) = −Σx1≤x2 P(x1,x2) ln P(x1,x2).    (4)
P is now estimated by the frequency of residue type pairs which can be formed from the residues in the column:
P(x1,x2) = N(x1,x2)/N,    (5)
where N(x1,x2) is the number of unordered pairs of residue types x1 and x2 in the column, N is the total number of unordered pairs which can be formed by taking both x1 and x2 from the distribution X, and L is the column length. The quantity defined in Eq. 4 behaves the same way as the Shannon entropy, as illustrated in Figure A and B, which corresponds to a column taken from an alignment of 30 sequences that happens to contain only two residue types. Just as in the case of Shannon's entropy (dashed line), the entropy function defined in Eq. 4 is zero when the set contains only one type of element (i.e. only one residue type), and maximal when the two types are equally represented.
The joint entropy also has the advantage that it allows for easy incorporation of information about the mutational preference of amino acids, following the approach of Kullback and Leibler:
HBB = Σx1≤x2 P(x1,x2) ln [Q(x1,x2)/P(x1,x2)],    (6)
where Q plays the role of the "background" mutation propensity. In particular, P greater than Q will result in a negative HBB, indicating that the residue is more conserved than its average mutation propensity would dictate.
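A small self-contained illustration of Eqs. 1-5 (our own code, not from the paper): the Shannon entropy of a column and the pair-based joint entropy, both of which vanish for a fully conserved column and are maximal when two residue types are equally represented.

```python
# Column entropies over residue types (Eq. 1-2) and unordered residue pairs
# (Eq. 4-5); natural logarithms are assumed throughout.
import math
from collections import Counter
from itertools import combinations

def shannon(column):
    L = len(column)
    freqs = Counter(column)
    return -sum((n / L) * math.log(n / L) for n in freqs.values())

def pair_entropy(column):
    pairs = [tuple(sorted(p)) for p in combinations(column, 2)]
    N = len(pairs)                       # number of unordered pairs
    counts = Counter(pairs)
    return -sum((n / N) * math.log(n / N) for n in counts.values())

conserved = "IIIIII"   # one residue type: both measures are zero
mixed = "IIIPPP"       # two equally represented types: both are maximal
```
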
The most conserved residue still has the minimal score (as in the case of Shannon entropy), which can in this case be less than zero.
To estimate Q, we take a matrix of raw pair frequencies originally assembled for the calculation of BLOSUM matrices. The quantity P described in Eq. 5 and used in Eq. 6 has no way of distinguishing between the two possible orderings of its arguments; that is, in this model we do not know which residue type was "earlier" and which one was "later" – mutations in both directions are equally probable. Q should therefore be a symmetric matrix whose rows and columns sum up to 1. To find Q we use a Monte Carlo procedure: starting from the 20 × 20 identity matrix, we subtract (add) a small quantity from a randomly chosen off-diagonal element, and add (subtract) it from the two corresponding diagonal elements. The optimized quantity is the root-mean-square distance of the matrix elements to the starting (BLOSUM frequency) matrix. The Q matrix used in this work was derived from the frequencies in 35% clustering blocks, and can be found in the Additional file.
To illustrate the way HBB scores residue columns, we look at two simple examples. First we compare the scoring of two completely conserved columns, one with isoleucines, and one with prolines. Since Q(I,I) = 0.14 and Q(P,P) = 0.29 (and P is equal to 1 for any conserved column), the value of HBB for the first column is −1.9, and for the second −1.2. Remembering that, just as in the case of the Shannon column entropy, the lower number indicates a higher degree of conservation, the isoleucine column is by this reasoning under higher evolutionary pressure than the proline one. That is, since isoleucine is quite prone to mutation, we find it an element of surprise that it is completely conserved, and attribute this to a special role isoleucine plays at this particular position in the protein.
In a slightly more complex example we compare two columns with two amino acid types each. Perhaps counterintuitively, the second column scores better (HBB = −1.4, compared with HBB = −1.1 for the first column), largely because of the contribution of the much rarer residue pair it contains (Q = 0.04, against Q = 0.12 for the corresponding pair in the first column). If it is true that in the evolutionary history of our hypothetical protein the isoleucine at this position was replaced by a proline, then this position must be very special, claims this model, perhaps conferring specificity to the protein's function.
For some proteins this procedure – using the HSSP alignment as a starting point – still resulted in a set of very similar homologues. In these cases we resorted to 4 iterations of PSIBlast search.
The figures compare the performance of HBB in detecting the protein interface with the column entropy (green) and rate4site (blue). The results are presented in terms of sensitivity versus surface coverage curves. The definitions of sensitivity and coverage stem from our use of methods which, in one way or another, rank residues by the evolutionary pressure they experience. Coverage in this context refers to the fractional overlap of a certain percentage of top-ranking residues with the set of surface residues, while sensitivity is the overlap of the same top-ranking residues with the target set of interface residues. The question of the optimal choice of coverage is left open, with the understanding that a higher coverage choice detects a larger number of test residues, but also leads to a larger number of false positives. The quality of any method consists precisely in its ability to maximize this hit-to-miss ratio. Even so, HBB is then still able to extract information beyond the reach of Shannon's entropy.
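The conserved-column examples above can be reproduced with a short script. This is a sketch assuming natural logarithms; the two diagonal Q entries are the values quoted in the text for Ile and Pro, while the off-diagonal entry is purely illustrative and not the BLOSUM-derived value used in the paper.

```python
# H_BB of Eq. 6 with pair frequencies of Eq. 5 and a toy background matrix Q.
import math
from collections import Counter
from itertools import combinations

# Diagonal entries from the text; the (I, P) entry is an assumed placeholder.
Q = {("I", "I"): 0.14, ("P", "P"): 0.29, ("I", "P"): 0.04}

def h_bb(column, Q):
    pairs = [tuple(sorted(p)) for p in combinations(column, 2)]
    N = len(pairs)
    counts = Counter(pairs)
    # Sum over pairs of P * ln(Q / P); negative when P exceeds Q.
    return sum((n / N) * math.log(Q[pair] / (n / N)) for pair, n in counts.items())

ile_column = "I" * 30   # fully conserved: H_BB = ln Q(I,I) ≈ -1.97
pro_column = "P" * 30   # fully conserved: H_BB = ln Q(P,P) ≈ -1.24
```

For a conserved column the pair distribution P collapses to 1 on a single pair, so the score reduces to ln Q for that pair, matching the rounded values quoted in the text.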
As shown in Figure 3, HBB is capable of detecting parts of the interface down to one percent coverage of the entire surface, even using an evolutionarily narrow selection of sequences. At the same time, the results are quite comparable to those obtained using a full-blown simulation of evolutionary events. Taking the area under the sensitivity vs. coverage curve as an indicator of prediction quality, in a Wilcoxon signed-rank test the areas obtained using HBB are indeed different from those using entropy, with a p-value < 6 × 10^−5. Using the same test, the quality of the predictions by HBB and rate4site is statistically indistinguishable. Rate4site and HBB average areas of 0.73 and 0.72, respectively, for this selection of sequences, while the entropy averages 0.62, indicating that both HBB and rate4site make the prediction more reliable. The last result is mostly a consequence of the inability of the entropy to achieve resolution at small coverage, which decreases the area under the curve.
In the following figure, presenting information analogous to the previous ones for a sequence sample of lower homology, HBB still performs comparably to rate4site, and even somewhat better than the column entropy. The average areas under the sensitivity-coverage curve are 0.74, 0.73, and 0.77 for HBB, entropy and rate4site, respectively. On the Wilcoxon test, in this case the results by HBB are more similar to those produced by column entropy than by rate4site.
The usefulness of the method is not limited to protein interfaces – it works as well as rate4site, and better than the entropy oblivious to amino acid type, in the detection of catalytic sites for enzymes, as indicated in the corresponding figure.
The model behind this approach acknowledges that, starting from the alignment column alone, it is not possible to establish the residue type in the ancestral allele.
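The sensitivity-versus-coverage curves and their areas, used above to compare the methods, can be sketched as follows. This is a simplified, assumed implementation with toy residue sets, not the authors' evaluation code.

```python
# Rank-based evaluation: walk down a best-first residue ranking, tracking the
# overlap with the surface set (coverage) and the interface set (sensitivity).
def sensitivity_coverage_curve(ranked_residues, surface, interface):
    """ranked_residues: residue ids ordered best-first.
    Returns a list of (coverage, sensitivity) points."""
    points = []
    top = set()
    for r in ranked_residues:
        top.add(r)
        coverage = len(top & surface) / len(surface)
        sensitivity = len(top & interface) / len(interface)
        points.append((coverage, sensitivity))
    return points

def area_under(points):
    """Trapezoidal area under the (coverage, sensitivity) curve from (0, 0)."""
    area, prev_x, prev_y = 0.0, 0.0, 0.0
    for x, y in points:
        area += (x - prev_x) * (y + prev_y) / 2
        prev_x, prev_y = x, y
    return area

surface = {1, 2, 3, 4}
interface = {1, 2}
perfect = sensitivity_coverage_curve([1, 2, 3, 4], surface, interface)
```

A ranking that places all interface residues first, as here, gives the largest possible area; a random ranking would hug the diagonal.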
Instead, the reasoning goes, in the lack of evolutionary pressure, the observed distribution should reflect the statistical propensity of residues to mutate to each other: if a residue type A is just as likely to mutate to type B as not to mutate at all, and vice versa, we expect to find the two types equally represented in a fair sample of existing alleles. A deviation from the uniform distribution, then, points to an external pressure to maintain a particular type, calling attention to the corresponding position in the protein sequence. This interpretation of the model makes its pitfalls obvious: a sequence sample produced automatically from currently available protein sequence databases is highly unlikely to be fair (Valdar's method, for example, addresses this through sequence weighting). Consideration of the inherent problems may yet lead us to an improved approach.
We have shown that a simple heuristic modification of Shannon entropy can match the prediction power of an elaborate evolution simulation. It is worth noting the advantages this brings: HBB is simple, which makes it applicable as part of a more complex approach, and its evaluation is fast in practice.
The data set used in this work is available at the Lichtarge Lab website.
IM conceived of the study and implemented the necessary software. The method was developed and the manuscript written through collaborative work of all authors. All authors read and approved the final manuscript.
Background frequencies for residue variability estimates: BLOSUM revisited – Supplementary Material. The supplement contains the reference distribution Q, the structural classification of the used proteins according to SCOP, and an additional comparative analysis of the method presented here with methods already available in the literature. Click here for file

We study apo and holo forms of the bacterial ferric binding protein (FBP), which exhibits the so-called ferric transport dilemma: it takes up iron from the host with remarkable affinity, yet releases it with ease in the cytoplasm for subsequent use. The observations fit the "conformational selection" model, whereby the existence of a weakly populated, higher-energy conformation that is stabilized in the presence of the ligand is proposed. We introduce a new tool that we term perturbation-response scanning (PRS) for the analysis of the remote control strategies utilized. The approach relies on the systematic use of computational perturbation/response techniques based on linear response theory, sequentially applying directed forces on single residues along the chain and recording the resulting relative changes in the residue coordinates. We further obtain closed-form expressions for the magnitude and the directionality of the response. Using PRS, we study the ligand release mechanisms of FBP and support the findings by molecular dynamics simulations. We find that the residue-by-residue displacements between the apo and the holo forms, as determined from the X-ray structures, are faithfully reproduced by perturbations applied on the majority of the residues of the apo form. However, once the stabilizing ligand (Fe) is integrated into the system in holo FBP, perturbing only a few select residues successfully reproduces the experimental displacements. Thus, iron uptake by FBP is a favored process in the fluctuating environment of the protein, whereas iron release is controlled by mechanisms including chelation and allostery. The directional analysis that we implement in the PRS methodology implicates the latter mechanism by leading to a few distant, charged, and exposed loop residues.
Upon perturbing these, irrespective of the direction of the operating forces, we find that the cap residues involved in iron release are made to operate coherently, facilitating release of the ion.
Upon binding ligands, many proteins undergo structural changes compared to the unbound form. We introduce a methodology to monitor these changes and to study which mechanisms arrange the conformational shifts between the liganded and free forms. Our method is simple, yet it efficiently characterizes the response of proteins to a given perturbation on systematically selected residues. The coherent responses predicted are validated by molecular dynamics simulations. The results indicate that iron uptake by the ferric binding protein is favorable in a thermally fluctuating environment, while release of iron is allosterically moderated. Since the ferric binding protein exhibits a high sequence identity with human transferrin, whose allosteric anion binding sites generate large conformational changes around the binding region, we suggest mutational studies on the remotely controlling sites identified in this work.
Functional proteins are complex structures, which may remain mainly unmodified as a result of a multitude of mutations. Linear response theory (LRT) has recently been used to study conformational changes undergone by proteins under selected external perturbations. PRS relies on systematically applying forces at singly selected residues and recording the linear response of the whole protein. The response is quantified as both the magnitude of the displacements undergone by the residues and their directionality. Closed-form expressions that summarize the theoretical implications of the PRS technique in the limit of a large number of perturbations introduced at a given residue are provided.
We note that we have previously studied the stability of proteins using a similar sequential perturbation-response approach, based on inserted displacements followed by energy minimization of the system.
Using PRS, we analyze the ferric binding protein A (FBP) as an example system, and describe alternative approaches that may have evolved in the structure to control function. The validity of the methodology is supported by molecular dynamics (MD) simulations. FBP is involved in the shuttling of Fe+3 from the mammalian host to the cytoplasm of pathogenic bacteria. To make iron unavailable to such pathogens, host organisms have iron transport systems, such as the protein transferrin, that tightly sequester the ion. Pathogens have developed strategies to circumvent this approach, one of them being the development of receptors for the iron transport proteins of the host. FBP resides in the periplasm, and receives iron from these receptors to eventually deliver it to the cytosol. FBP belongs to the same Fe+3 binding family as the host protein transferrin; these host/pathogen iron uptake proteins are thought to be distantly related through divergent evolution from an anion binding function.
Fe+3 is bound to FBP with remarkable affinity, with association constants on the order of 10^17–10^22 M^−1 depending on the measurement conditions, posing the Fe+3 transport dilemma and suggesting another necessary step for the release of the ion. It is of interest to understand how Fe+3 is eventually released from the binding site for subsequent use by the pathogen.
One mode of action that has been suggested involves the control of the Fe+3 release kinetics by the exchange of synergistic anions forming relatively stable intermediates. FBP is referred to as bacterial transferrin due to the similarities with transferrin in the structural folds, the highly conserved set of iron-coordinating residues, and their usage of a synergistic anion.
In the current work, we study FBP in detail due to an extensive literature on the iron uptake mechanisms of this and evolutionarily related proteins; moreover, the molecular dynamics (MD) results of the apo structure have previously been analyzed by perturbing a singly selected residue with linear response theory.
The new tool introduced in this work for the analysis of remote control strategies utilized by proteins is based on applying forces at a given residue as a perturbation, and recording the displacements of all the residues as the response. Since the procedure is repeated sequentially for all the residues in the protein, we term the technique perturbation-response scanning (PRS). Below, we first review the theory and then outline the details of the PRS technique. Finally, we describe the MD simulations.
The protein is modeled as a network of nodes centered on the Cα atoms. Any given pair of nodes is assumed to interact via a harmonic potential if they are within a cut-off distance cr of each other, giving m interactions for a given residue and a total of M interactions for the system of N residues. In the absence of an external force acting on the system, the equilibrium condition for each residue, i, necessitates that the summation of the internal, residue-residue interaction forces must be zero for each residue. Therefore, the 3×m coefficient matrix b consists of the direction cosines of each force representing a residue-residue interaction; the row indices of b are x, y, or z. Here Δfi is an m×1 column matrix of forces, each aligned in the direction of the bond between the two interacting residues.
For instance, a residue i with six contacts has a Δfi that is a 6×1 column matrix, and the equilibrium condition is written along the x, y, and z-axes. This algebra gives rise to three independent equations involving six unknown interaction forces, which are the residual interaction forces of residue i with its contacting neighbors. One can write the equilibrium condition (equation 1) for each residue. This results in a total of N sets of equations, each of which involves the summation of forces in the three respective directions. Consequently, generalizing equation 1 to the whole system of N nodes and a total of M interactions, one can write an algebraic system of 3N equations in the M unknown residue-residue interaction forces, in terms of the 3N×M direction cosine matrix B and the M×1 column matrix of residue-residue interaction forces, Δf. It is straightforward to generate the matrix B from the topology of the native structure file and the cut-off distance cr. As an example, apo FBP has 309 residues and a total of 1542 interactions when a cut-off distance of 8 Å is selected.
In the notation used, the resulting force-constant matrix is equivalent to the Hessian H, and its inverse, G = H^−1, may be used to predict the auto- and cross-correlations of residues. G may be viewed as an N×N matrix whose ijth element is the 3×3 matrix of correlations between the x-, y-, and z-components of the fluctuations ΔRi and ΔRj of residues i and j. We use G as a kernel to predict the response of other residues to applied perturbations on selected ones, as we discuss next.
Thus, rearranging equations 3–5, one gets the forces necessary to induce a given point-by-point displacement of residues. Our detailed PRS analysis is based on a systematic application of equation 7, ΔR = G ΔF. We apply a force on the Cα atom of each residue by forming the ΔF vector in such a way that all the entries, except those corresponding to the residue being perturbed, are equal to zero; for a selected residue i, the applied force is ΔFi.
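The coarse-grained model above can be sketched with a standard anisotropic elastic-network construction. This is our own minimal code, not the authors' implementation: a uniform spring constant γ is assumed, and the six rigid-body zero modes are handled with a pseudo-inverse.

```python
# Elastic-network Hessian: nodes at Cα positions, harmonic interactions for
# pairs within the cut-off rc; G = pseudo-inverse of H is the response kernel.
import numpy as np

def hessian(coords, rc=8.0, gamma=1.0):
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 <= rc * rc:
                block = -gamma * np.outer(d, d) / r2   # off-diagonal super-element
                H[3*i:3*i+3, 3*j:3*j+3] = block
                H[3*j:3*j+3, 3*i:3*i+3] = block
                H[3*i:3*i+3, 3*i:3*i+3] -= block       # diagonal = -sum of off-diagonals
                H[3*j:3*j+3, 3*j:3*j+3] -= block
    return H

# Four pseudo-residues; all pairs fall within the 8 Å cut-off.
coords = np.array([[0., 0., 0.], [3., 0., 0.], [3., 4., 0.], [0., 4., 3.]])
H = hessian(coords)
G = np.linalg.pinv(H)   # pseudo-inverse skips the rigid-body zero modes
```

By construction each diagonal super-element is minus the sum of the off-diagonal ones, so rigid translations lie in the null space of H, as they must.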
We then compute the resulting response (ΔR) vector of the protein through equation 7, as we explain in detail in the following. Let the elements of G in equation 8 be glm, where l and m denote the indices for the second-order partial differentials of the total energy with respect to the coordinate directions. When a force is applied only at residue i, equation 7 in expanded form involves only the three columns of G that belong to the residue being perturbed. The response on a specific residue k due to this perturbation on i is the vector ΔRki, obtained as the product of the 3×3 submatrix Gki, which couples residues i and k, with the applied force ΔFi.
Using PRS, we scan the protein by consecutively perturbing each residue and recording the associated displacements as the linear response of the protein. The ΔkS are the displacements between the apo and the holo forms obtained from the PDB structures. The goodness of the prediction is quantified, for each perturbed residue i, as the Pearson correlation coefficient Ci between the calculated displacement magnitudes and the ΔkS (equation 19), where Rσ and sσ are the respective standard deviations of the calculated and experimental displacements. Gerstein and coworkers have demonstrated that when comparing two structures, the results from a selected subset of the residues may be more informative.
If the collection of forces applied on a specific residue is independent and large in number, they will appear in a spherically symmetric set of directions. The responses, however, may be distributed along a line or in a plane so that the net response is still zero. Thus, although the perturbations are isotropic, the response may well be anisotropic. Deviations from such a spherically symmetric distribution of responses hint at the roles of certain residues in the remote control of the ligand, as will be shown below for residues i that are distant from the ligand binding site, l. For the selected residue i, k forces are applied such that Σk ΔFki = 0; k is large and ensures that a spherically symmetric region around i is covered.
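The scan itself can be sketched as follows. This is our own minimal implementation: it samples random force directions on each residue (where the paper derives closed-form expressions instead) and scores each perturbation with the Pearson correlation between predicted and target displacement magnitudes; the kernel and target below are synthetic.

```python
# PRS sketch: for each residue i, place a unit force on its entries of dF,
# compute dR = G dF (equation 7), and correlate the per-residue displacement
# magnitudes with a target profile (equation 19 in spirit).
import numpy as np

def prs_scan(G, target_disp, n_dirs=50, seed=0):
    """G: 3N x 3N response kernel; target_disp: length-N magnitudes.
    Returns the best Pearson correlation C_i per residue over n_dirs forces."""
    n = len(target_disp)
    rng = np.random.default_rng(seed)
    best = np.full(n, -1.0)
    for i in range(n):
        for _ in range(n_dirs):
            dF = np.zeros(3 * n)
            f = rng.normal(size=3)
            dF[3 * i:3 * i + 3] = f / np.linalg.norm(f)  # unit force on residue i
            mags = np.linalg.norm((G @ dF).reshape(n, 3), axis=1)
            best[i] = max(best[i], np.corrcoef(mags, target_disp)[0, 1])
    return best

# Synthetic demo: a small random symmetric kernel, and a target generated by
# an actual unit force on residue 0, so residue 0 should score highly.
n = 4
rng = np.random.default_rng(1)
A = rng.normal(size=(3 * n, 3 * n))
G = A @ A.T                                   # symmetric positive semi-definite
dF0 = np.zeros(3 * n); dF0[0] = 1.0           # x-directed unit force on residue 0
target = np.linalg.norm((G @ dF0).reshape(n, 3), axis=1)
C = prs_scan(G, target)
```
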
The sum of the responses on each residue j is zero, Σk ΔRkj = 0. The results are visualized as vector plots on the protein structure.
For an analysis that probes the directionality of the recorded responses, we proceed as follows. We first concentrate on those residues for which the Pearson correlation between experimental and theoretical displacements is large. Amongst them, we further locate those residues that are distant from the ligand binding site. One may then analyze the eigenvalue structure of the Gki submatrix to obtain an understanding of the nature of the response, by decomposing it with the singular value decomposition Gki = U Λ V^T. The columns of U, uj, give the three principal axes of the line of action of residue k in response to perturbations in i, and the sizes of the associated elements, λj, provide their contributions. Thus, if there is one dominant eigenvalue in Gki, i.e., λ1 ≫ λ2, λ3, then whatever the directions of the applied forces ΔFi, the responses will be projected on the associated eigenvector, u1, and the collection of responses ΔRki to a number of perturbations will all be collected in a line along u1. Similarly, if two eigenvalues dominate, i.e., λ1, λ2 ≫ λ3, the responses will be distributed in the plane spanned by u1 and u2.
The degree of collectivity of the response of a group of neighboring residues to a perturbation on i may also be measured using equation 20. If the response of the neighbors possesses collectivity, then various symmetries in their action may be expected. For example, if the residues collectively move in a line to open a cap, not only is each expected to have a single dominant eigenvalue, but the eigenvectors belonging to these dominant eigenvalues are also expected to be parallel; i.e., cos θ = u1(k1) · u1(k2) ≈ ±1 (equation 21), where θ is the angle between the two eigenvectors, and k1 and k2 are the two residues whose responses are being compared. The directionality analysis is summarized in Algorithm B.
We analyze FBP in detail, using both PRS and MD. In addition, the PRS methodology is applied to several other cases to demonstrate how the approach may be used.
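The directionality analysis of the 3×3 response blocks can be illustrated as follows (our own sketch, using the singular value decomposition; the blocks are synthetic, not derived from FBP):

```python
# Principal response axes of 3x3 blocks G_ki, and the |cos θ| collectivity
# measure between the dominant axes of two residues (equation 21 in spirit).
import numpy as np

def dominant_axis(Gki):
    """Principal response axis of the block and its fractional weight."""
    U, s, _ = np.linalg.svd(Gki)
    return U[:, 0], s[0] / s.sum()

def collectivity(Gk1i, Gk2i):
    """|cos θ| between dominant response axes: near 1 means the two residues
    respond along parallel directions, i.e. collectively."""
    u1, _ = dominant_axis(Gk1i)
    u2, _ = dominant_axis(Gk2i)
    return abs(u1 @ u2)

# Toy blocks: both residues respond essentially along the x-axis.
G_k1 = np.diag([5.0, 0.1, 0.1])
G_k2 = np.array([[4.0, 0.2, 0.0],
                 [0.2, 0.3, 0.0],
                 [0.0, 0.0, 0.2]])
```

A single dominant singular value indicates a line-like response; two comparable values indicate a planar one.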
These are included in the supporting information. When the comparison is restricted to such a subset, the correlations Ci (equation 19) are lower in value, but their relative ordering does not change; for example, the largest correlations in the apo form reduce from 0.98 to 0.90 and those in the holo form reduce from 0.89 to 0.80.
The apo and holo forms of FBP have PDB codes 1D9V and 1MRP, respectively; in the latter case the Fe ion is treated as an additional node of the network. The protein has two domains, and upon binding one moves relative to the other. As a cut-off distance, we seek the smallest value for which the experimental B-factors are well reproduced, and we find that cr = 8.0 Å on the Cα atoms of the protein (equation 1) is suitable; the results are the same over a range of values. We have also verified that the main conclusions of the study are not affected for a range of values between 7.0–8.5 Å; the system exhibits six zero eigenvalues at all these cut-off distances. At cr = 8.0 Å, there are 1542 and 1587 interactions for the apo and holo initial conformations, respectively.
Both systems were first subjected to energy minimization with the conjugate gradients algorithm until the gradient tolerance was less than 10^−2 kcal/mol/Å. 500 ps MD runs in the NVT ensemble at 310 K were carried out on the resulting systems. The final structures were then run in the NPT ensemble at 1 atm and 310 K until volumetric fluctuations were stable, to maintain the desired average pressure. This process required 500 ps long MD runs, at the end of which the average volume was maintained at 196900±700 and 203300±600 Å3 in the apo and holo structure runs, respectively. Finally, the runs in the NPT ensemble were extended to a total of 10 ns. The coordinate sets were saved at 2 ps intervals for subsequent analysis. The RMSD of the trajectories was calculated.
For the analysis based on the MD trajectories, the covariance matrix A substitutes for G. The correlations between residue pairs derived from the MD trajectories are of particular interest; the snapshots recorded during the MD simulations are organized in the fluctuation trajectory matrix of order 3N×T, from which A is computed.
Note that each nanosecond of an MD simulation takes ca. 9.8 hours on a server with 2 GB memory and eight CPUs, each with a 2.4 GHz quad-core architecture. Five complete scans of the protein with the PRS method use three minutes on a single CPU of the same server.
The apo structure has a known Fe3+ binding location, and the structure of the holo form is also known. The overall RMSD between the two forms is 2.48 Å. FBP has two domains, termed the fixed and the moving domain, respectively. We find that the coarsened protein model reproduces the residue-wise MD correlations; the Pearson correlations between the data are 0.76 and 0.74, respectively, for the apo and holo forms.
The relative magnitudes of the residue-by-residue displacement vectors between the experimental apo–holo structures, after superimposing their fixed domains, are shown in the bottom curve of the corresponding figure. The correlation C57 (equation 19) between the calculated curves and the experimental curve is 0.95 and 0.92, respectively, for these example cases. The close agreement of the residue-by-residue displacements between the current methodology and the all-atom approach in reference 19 justifies the assumptions that (i) the Hessian obtained from the elastic network adequately describes the system, and (ii) it suffices to take the contacts to be homogeneous. We next investigate whether residue 57 is unique in reproducing the conformational response of the protein by performing PRS.
We apply a force ΔF on each residue in turn and record the response ΔR obtained, as outlined in the subsection Correlations between predicted and experimental structures. We then compute the correlation Ci between the calculated and the experimental data for every residue (equation 19).
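The MD-side covariance computation can be sketched as follows (our own code; the "trajectory" is synthetic random data rather than an actual simulation):

```python
# Snapshots are collected into a T x 3N coordinate array; subtracting the
# time average gives the fluctuation matrix ΔR, and A = ΔRᵀ·ΔR / T is the
# 3N x 3N covariance that substitutes for G in the MD-based analysis.
import numpy as np

def covariance_from_trajectory(snapshots):
    """snapshots: T x 3N array of coordinates. Returns the 3N x 3N covariance A."""
    fluct = snapshots - snapshots.mean(axis=0)   # ΔR, one row per snapshot
    return fluct.T @ fluct / len(snapshots)

rng = np.random.default_rng(0)
T, n3 = 500, 12                                  # 500 snapshots, N = 4 residues
traj = rng.normal(size=(T, n3))                  # synthetic stand-in trajectory
A = covariance_from_trajectory(traj)
```

In the actual workflow, `traj` would hold the Cα coordinates saved every 2 ps after superimposing the frames on a reference structure.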
Note that each point on these figures is the result of comparing the displacements of the 309 residues in response to a perturbation applied at the selected residue, i, such as that obtained from the correlation between the middle and bottom curves. For (i) the apo form and (ii) the holo form with the Fe ion as an additional node in the network, nearly all applied forces lead to displacements well correlated with those from x-ray structures, with the worst perturbations having a correlation of 0.6±0.1. These relatively low correlations (iC = 0.58–0.75) are not due to high coordination numbers of the involved residues, which span a wide range of 4–13 contacts, implying a more intricate set of interactions leading to these results. In perturbing the residues of the apo FBP, the residues that give the worst correlations are 105 and 205–207, all in the fixed domain of the protein. Residue 105 is in the core of the β sheet structure located in this domain, and 205–207 are at the turn adjoining a helix to a β strand. Additional residues with the largest deviations between the experimental and predicted values of the structural differences are due to perturbations in the fixed domain. Finally, residues 23 and 249, located in the core of the moving domain, also lead to poorer predictions. To probe the directionality of the response, we introduce a large collection of perturbations around the Fe ion in directions that are spherically symmetric around it, so that their vectorial sum is zero. For each perturbation, we monitor the resulting response in the residues directly contacting the Fe3+. The directional response is also analyzed analytically by applying equations 20 and 21. In the previous subsection we have shown that in holo FBP, singly placed forces on selected residues reproduce the conformational change; the residues that yield iC > 0.9 and are far from the Fe ion (di-Fe >> rc) are D47 and D52, both at the tips of distant loops, as well as the loop spanning 232–236.
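The scanning procedure behind these points can be sketched in a few lines. This is a toy reconstruction under an explicit assumption: in linear response the displacement to a force F is ΔR = C·F, with the covariance matrix C proportional to the inverse Hessian of the elastic network. The function name, the use of random force directions, and the 10-trial count are illustrative, not taken from the paper.

```python
import numpy as np

def prs_scan(cov, exp_disp):
    """Toy perturbation-response scan (PRS).

    cov: (3N, 3N) covariance matrix, taken as proportional to the inverse
         Hessian, so the linear response to a force F is dR = cov @ F.
    exp_disp: (N,) experimental residue displacement magnitudes (apo - holo).
    For each residue i, forces in random directions are applied on its three
    coordinates; the best Pearson correlation between predicted and
    experimental displacement magnitudes is recorded.
    """
    n = len(exp_disp)
    rng = np.random.default_rng(0)
    best = np.empty(n)
    for i in range(n):
        corrs = []
        for _ in range(10):                       # several random directions
            f = np.zeros(3 * n)
            f[3 * i:3 * i + 3] = rng.normal(size=3)
            dr = (cov @ f).reshape(n, 3)          # linear response
            mag = np.linalg.norm(dr, axis=1)
            corrs.append(np.corrcoef(mag, exp_disp)[0, 1])
        best[i] = max(corrs)
    return best
```

Residues whose best correlation stays low (the 0.58–0.75 band above) are the poor predictors discussed in the text.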
The results for the perturbations on D47 are shown in the corresponding figure. The Cα atoms of several residues are within 8 Å of the ion; three are located in the moving domain, and the remaining are in the fixed domain. The volume they take up in the protein is shaded in the figure. The responses of these cap residues are coherent: a single principal direction dominates (p1 > 0.8), and the associated eigenvectors are parallel to each other (cos θ > 0.95). The coherence is also obtained for forces applied on residues 52 and 232–236 (iC > 0.9 and di-Fe > 21 Å). We also perform the same analysis on the correlation matrix obtained from MD simulations, with consistent results. On the other hand, directly perturbing the Fe ion, as well as other local residues for which iC > 0.9 and di-Fe ≈ rc, destroys this coherence of the cap residues; i.e., they move in a much wider range of directions, as exemplified by perturbing the Fe ion and residue 57. Thus, by jointly focusing on the specific distantly located residues which (i) invoke a large amount of correlations in the whole protein, and (ii) induce local cooperativity in the binding domain, one may be able to uncover sites that remotely control function in the protein. Interestingly, the same analysis carried out on the N-lobe of human transferrin implies K278 in that protein as an allosteric controller, due to its observed ability to mechanically control the cap over the ligand binding region. For FBP, the former type of control has been evidenced by a plethora of experiments where the exchange of synergistic anions forming relatively stable intermediates, or the direct action of chelators on the ion, has been observed. H. influenzae strains expressing mutant proteins that are defective in binding the phosphate anion are capable of donating iron, calling for mechanisms of iron transport that do not involve a synergistic anion. The high charge of the ion (Fe3+) has called for anion binding sites on the surface of FBP, similar to those found in transferrin. For the particular case of FBP, our analysis suggests the existence of two alternative mechanisms of Fe ion release: (i) local control of the ion by synergistic anions and chelators acting in the binding groove, and (ii) remote control by ions acting on distant charged residues located in solvent-exposed loops. Click here for additional data file. Text S1: Sample applications of the PRS method. (1.45 MB DOC) Click here for additional data file."} {"text": "To understand how symmetric structures of many proteins are formed from asymmetric sequences, the proteins with two repeated beta-trefoil domains in the Plant Cytotoxin B-chain family and all presently known beta-trefoil proteins are analyzed by structure-based multi-sequence alignments. The results show that all these proteins have similar key structural residues that are distributed symmetrically in their structures. These symmetric key structural residues are further analyzed in terms of inter-residue interaction numbers and B-factors. It is found that they can be distinguished from other residues and have significant propensities for structural framework. This indicates that these key structural residues may conduct the formation of symmetric structures although the sequences are asymmetric. Ricin Toxin B is composed of two domains with the same beta-trefoil structure of three-fold symmetry, and hidden three-fold sequence symmetry has been detected in both domains.
It was further shown that these residues are characteristic of the beta-trefoil fold. Multi-domain proteins provide ideal models to study the problem above, since many of them consist of more than one domain evolved from the same ancestor and have similar structural symmetry but different sequence symmetry. For example, in the Structural Classification Of Proteins (SCOP) databank, all proteins in the Plant Cytotoxin B-chain (PCB) family contain two domains with beta-trefoil structure. The MRP of a protein sequence is constructed as follows: if the similarity of two segments at position i separated by distance d is larger than the degree of symmetry you want to find, we plot a point at (i, d). The MRP is formed when this is done for all possible i and d. Two segments are similar if the percentage of their similar residues, obtained by using pair-wise global sequence alignment with the PAM250 score matrix, is larger than a chosen number r and the p-value is lower than 0.05. R is the Pearson's correlation coefficient between iMRP and rMRP, where iMRP denotes the ideal symmetric MRP corresponding to the real MRP (rMRP) of the protein sequence. R reports the presence of non-overlapping repetitive patterns. Because the R value cannot definitively tell us the degrees of similarity of different patterns, and so the degree of sequence symmetry, we introduce a parameter S to do this. S is the average value of the Pearson's correlation coefficients between all different patterns and describes the average similarity of different patterns. Therefore, the S value is a measure of the degree of sequence symmetry. For a sequence to be symmetric, both R and S should have large values. The details of this method can be found in ref. 12. It is noted that other methods exist to find repeats in a protein sequence. The polar solvation energy Epol between the solute and solvent is calculated by a generalized Born expression, where rij is the distance between atom i and atom j, qi and qj are the charges of atom i and atom j, and ε is the dielectric constant of the solvent.
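The R parameter above is simply a Pearson correlation between the flattened iMRP and rMRP matrices. A minimal sketch (the function name is ours; the MRP construction itself is not reproduced here):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two flattened matrices,
    as used for the R parameter (iMRP vs. rMRP) described in the text."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()                      # center both vectors
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```

A real MRP that matches its ideal symmetric counterpart perfectly gives R = 1; the S parameter applies the same coefficient pairwise between repeated patterns and averages the results.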
\u03b1i is the effective Born radius of atom i, which is related to the effective Born free energy of solvation. The molecular mechanics software we used is Tinker with Charmm27 force field The residue interaction number (RIN) of a residue is the number of the interaction pairs between this residue and other residues that are more than four residues apart along sequence and their potential energies are lower than \u22120.5kcal/mol r\u200a=\u200a0.3 as in the previous paper R values of all domains are larger than 0.5, and all the S values are larger than 0.4 only with one exception 3, ([I/L/V]X[I/L/M])3 and (QXW)3, where X denotes any residue. They are totally composed of twenty-four residues and show three-fold repetitions motifs in detail. The distribution of these motifs in the structure is illustrated in We use another approach to confirm the FTR motifs acting as key structural residues in PCB family. We calculate their inter-residue interactions. The key structural residues should have more interactions with others. RTB is selected as an example too. The average residue interaction number (RIN) of all residues, buried residues, and all residues in FTR motifs is 4.98, 6.31 and 8.50 respectively . The ave3, have stronger interactions with other motifs. This may be that the second motifs are closer to other three motifs buried residues, (ii) symmetrically located in the structure, and (iii) have large residue interaction numbers and small B-Factors. This result may be helpful to design de novo proteins.Supporting File S1Supplementary data (3.50 MB DOC)Click here for additional data file."} {"text": "A multiple sequence alignment (MSA) generated for a protein can be used to characterise residues by means of a statistical analysis of single columns. 
In addition to the examination of individual positions, the investigation of co-variation of amino acid frequencies offers insights into the function and evolution of the protein and its residues. We introduce conn(k), a novel parameter for the characterisation of individual residues. For each residue k, conn(k) is the number of most extreme signals of co-evolution. These signals were deduced from a normalised mutual information (MI) value U computed for all pairs of residues k, l. We demonstrate that conn(k) is a more robust indicator than an individual MI-value for the prediction of residues most plausibly important for the evolution of a protein. This proposition was inferred by means of statistical methods. It was further confirmed by the analysis of several proteins. A server, which computes conn(k)-values, is available at . The algorithm H2r, which analyses MSAs and computes conn(k)-values, characterises a specific class of residues. In contrast to strictly conserved ones, these residues possess some flexibility in the composition of side chains. However, their allocation is sensibly balanced with several other positions, as indicated by conn(k). Most easily, conserved residues can be identified, which indicate positions crucial for function or structure. Therefore, MSAs are frequently the basis for the prediction of important residues; approaches based on mutual information, chi-squared statistics, and related measures, beginning with the work of Göbel et al., have been used to identify correlated residues. An in silico perturbation is a constraint that limits the occurrence of amino acids at a certain position. Each choice selects a specific subset of MSA sequences and may cause variations in the column-specific occurrence of amino acids. Analysing these patterns, Ranganathan and co-workers have proposed the existence of energetically coupled residues. The above methods rely on the computation of a global co-variation statistic for the identification of correlated residues.
In contrast to these concepts, methods based on the idea of "perturbations" have been introduced recently. The enormous increase of sequence information resulting from genome sequencing projects has broadened the data basis for coupling analysis. Therefore, methods can be used that examine a large number of parameters. Moreover, the existence of a high-quality MSA is crucial for the analysis of correlated mutations. The sequence space of a protein has to be sampled correctly; otherwise, the quality of the predictions will deteriorate. If similar sequences originating from closely related species dominate an MSA, signals caused by a shared evolution of the proteins may be stronger than correlation patterns. Such bias will influence any calculation. However, methods based on the analysis of perturbations may be susceptible to less accented distortions. In this case, smaller sets of sequences determine predictions and may constitute signals interpreted as perturbations. If these subalignments are dominated by closely related sequences, the predictions may be wrong. This is why we prefer algorithms exploiting exhaustively the information deposited in each column of an MSA. In the following, we introduce H2r, a novel algorithm of that kind. H2r combines classical and well-proven concepts of computer science. It was our aim to focus on reliability even at the expense of sensitivity. We will confirm H2r's robustness and show that coupled residue pairs identified by H2r constitute tightly interconnected networks. Parameters will be introduced that allow the characterisation of these networks and individual residues. It will be demonstrated that the mode of generating MSAs does not markedly influence H2r's results. We study predictions in protein 3D-space and discuss possible reasons for the evolution of correlation patterns.
A large number of algorithms, utilising quite different principles, have been introduced to identify correlated mutations. The co-variance algorithm proposed earlier requires that columns k be found where an amino acid X occurs with a certain minimal frequency fmin. A parameter frequently used for quantifying the composition of an individual column k is its entropy H(k), which originates from Shannon's information theory; see formula (1). The entropy H(k,l) of two variables (columns) k and l is defined analogously from the pairwise frequencies; see formula (2). Using formulas (1) and (2), the mutual information MI = H(k) + H(l) − H(k,l) can be computed. MI-values have been the basis for several analyses. However, it has been shown that raw MI-values are a poor indicator for the prediction of co-evolution. More recently, normalised MI-values have been introduced; for synthetic MSAs, the ratios MI/H(k,l) or MI/(H(k) + H(l)) have performed best. In the following, we report U, which is a measure for the dependency of k on l and vice versa. It follows that 0 ≤ U ≤ 1.0: If columns k and l are completely independent, then H(k,l) = H(k) + H(l) and U vanishes. If the two columns are completely dependent, then H(k) = H(l) = H(k,l) and U equals 1.0. For the analysis of correlated mutations in MSAs, high values of U indicate a strict pair-wise co-occurrence of amino acids in columns k and l. In more detail, formula (3) has been discussed elsewhere. H(k,l) can directly be deduced from frequencies f(ai, aj) for all i = 1..20 and j = 1..20 amino acids and all combinations of positions k and l. This implies that the MSA has to be large enough to allow a reliable estimation of these frequencies. For a similar approach, a lower limit of approximately 125 sequences has been determined. For synthetic MSAs, U-values range as expected; see Additional File. Additionally, these values allow the assessment of individual positions k.
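The entropy and U computation can be sketched directly from the definitions above. One assumption is made explicit here: the symmetric normalisation U = 2·MI/(H(k) + H(l)) is used, since it reproduces the stated limits (U = 0 for independent columns, U = 1 for completely dependent ones); the paper's formula (3) is not reproduced verbatim in the text.

```python
import math
from collections import Counter

def entropy(col):
    """Shannon entropy H(k) of one MSA column (gaps already removed)."""
    n = len(col)
    return -sum((c / n) * math.log2(c / n) for c in Counter(col).values())

def joint_entropy(col_k, col_l):
    """Joint entropy H(k,l) from pairwise amino-acid frequencies f(ai, aj)."""
    return entropy(list(zip(col_k, col_l)))

def U(col_k, col_l):
    """Normalised mutual information, assumed as U = 2*MI / (H(k) + H(l)).

    U = 0 for independent columns, U = 1 for completely dependent ones;
    two strictly conserved columns are treated as uncoupled (U = 0).
    """
    hk, hl = entropy(col_k), entropy(col_l)
    if hk + hl == 0.0:
        return 0.0
    mi = hk + hl - joint_entropy(col_k, col_l)
    return 2.0 * mi / (hk + hl)
```

For example, two columns whose residues co-occur strictly pairwise give U = 1, while columns with independently shuffled residues give U ≈ 0.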
For the ATP synthase \u03b5 subunit of Escherichia coli, it has been made plausible that residues with highest Z-scores deduced from normalised MI-values are more likely to change the activity than those with low values [i.e. for lower U-values. Merely by chance and due to random fluctuations, residue pairs might be assigned a relatively large value resulting in a strikingly high Z-score.To begin with, the outcome of a mutational analysis identifies coupled residue pairs w values . Howeveri.e. by adding up several analyses. We applied this principle for the identification of conspicuous positions. In agreement with previous findings [H2r form tightly connected networks; see Additional File conn(k) as the number of high-scoring pairs a residue k is an element of. Connectivity values differ significantly: For the MSA associated with the PFAM [conn(388) was 10 and conn(111) was 1. In order to illustrate how networks of interconnected residues are located in 3D-space, Figure conn(388)-value. The MSA of PF01053 (Cys_Met_Meta) and the related protein structure (pdb-code 1QGN) have already been a test bed for in silico analysis [conn(k)-approach is the SH3 domain: For a chi-squared approach it has been shown that 5 residues participate in 53 of 92 significant co-variations [For any noisy signal, the quality of a prediction can be enhanced by sampling, findings , high scthe PFAM entry PFanalysis . A furthconn(k) in detail, we examined the outcome of H2r on several datasets. The first two experiments were carried out to estimate the probability of conn(k)-values for MSAs bearing no coupling signals of real proteins.In order to assess the parameter H2r_train. We used H2r to determine the occurrence and frequency of conn(k)-values for these MSAs. However, for the following assessment we had randomly assigned U-values in the range of 0.0 to 1.0 to all pairs k, l. For this test, 500 individual experiments (one MSA each) were analysed. 
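Counting conn(k)-values from a set of high-scoring pairs (e.g. the 75 largest U-values mentioned above) is straightforward; a minimal sketch, with an illustrative function name:

```python
def conn_values(high_scoring_pairs, n_residues):
    """conn(k): the number of high-scoring coupled pairs residue k belongs to.

    high_scoring_pairs: iterable of (k, l) residue-index pairs, e.g. the
    pairs with the largest U-values, as described in the text.
    Returns a list with conn(k) for every residue index 0..n_residues-1.
    """
    conn = [0] * n_residues
    for k, l in high_scoring_pairs:
        conn[k] += 1      # each pair contributes to both of its residues
        conn[l] += 1
    return conn
```

Residues that recur across many of the top-scoring pairs accumulate high conn(k)-values, which is exactly the sampling effect the text relies on to suppress random fluctuations in individual U-scores.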
A second test was based on PF01053 that was introduced above. Here, we did 1000 independent experiments by assigning U-values randomly and analysing the distribution of conn(k)-values. Results of both experiments are summarised in Table conn(k)-values \u2265 4 are highly unlikely to occur merely by chance. The frequency for conn(k) = 4 is < 2.5\u00b710-3, a connectivity conn(k) > 6 was not observed in any of these experiments. Please note that only the 75 largest U-values were analysed for each MSA. This has to be considered when interpreting the above frequencies.For parameter optimisation has been determined [SH3_filt, which consisted of 471 sequences. Its U-values were relatively low, the largest one, U was 0.28. This observation indicates that these correlations are much weaker than those observed in Cys_Met_Meta, which possesses a maximal U-value of 0.72. As PF00018 contains 3506 sequences, we used a bootstrapping approach to repeat the experiment several times and to analyse the results statistically. We generated 20 datasets by randomly selecting 210 sequences in each case. The results are summarised in Figure SH3_filt results (compare Panel B) and proposed to accept conn(k)-values \u2265 5. Furthermore, this cut-off was supported by the following correspondences of function and conn(k)-values.As a further test for the robustness of our approach, we analysed PF00018. This dataset subsumes 3506 sequences of SH3 domains. The domain consists of approximately 60 residues occurring in a large number of eukaryotic proteins involved in signal transduction. The 3D-structure of the related Fyn domain -values occurring for individual datasets of the bootstrapping approach made clear that the filtering of the input sequences is a critical step. A random selection of 210 sequences resulted e.g. for residue 135 in conn(k)-values ranging from 2 to 8.However, the extreme variation of U-value determined for an MSA indicates the strength of the coupling signal. 
Our analysis of synthetic MSAs allows a rough estimation of the values; see Additional File conn(k)-value \u2265 4 can be considered reliable in this case. For PF00018, the maximal U-value was 0.28. For this dataset, biochemical findings allowed us to explain the role of residues with conn(k)-values \u2265 5. As a (conservative) rule of thumb, we propose a cut-off of 0.5: If the maximal U-value is \u2265 0.5, H2r lists residues with conn(k)-values \u2265 4 otherwise those \u2265 5.The maximal conn(k)-values. In order to characterise the outcome of H2r for a sampling on filtered data, we created datasets by randomly choosing 75% or 60% of the remaining sequences. Resulting conn(k)-values showed that the composition of these MSAs did not markedly affect H2r's results -values remained stable.The above random sampling of sequences without any filtering induced a large variation of \u03b5 subunit of E. coli has been extensively mutated and the effects of mutations have been characterised and compiled -values and normalised MI-values were identical for all high scoring residues. This indicates that both parameters allow equally well to quantify the coupling of residues. The maximal U-value was 0.37 in this case. Therefore, H2r considered conn(k)-values \u2265 5 as reliable. This was true for positions 12, 65, 72, and 81. For positions 65 and 81 their susceptibility to mutational effects is known, none has been reported for the remaining two positions. If the four largest U-values were used for predicting conspicuous residues k, a comparison of this approach and the conn(k) method gave the following result: In both cases, 2 positions known to be susceptible to mutational effects were predicted correctly. It is unknown, how mutations affect the other two residues. Thus, if one utilises the concordance with known mutational effects as an indicator for prediction quality, both approaches have a similar performance.The ATP synthase led (see and refeZ-scores . 
For eaceviously and deteconn(k) and the maximal U-value deduced from PF00018 representing the SH3 domain. The largest U-value is 0.28. For none of the extra residues possessing maximum U-values ranked 2 or below 3, a clear function has been reported in [conn(k)-values are more robust and more reliable than individual U-scores.However, if the coupling signals were less pronounced the number of unclear predictions increased drastically for the maximum score approach. In Table orted in . In summ\u03b5 subunit had in both of the above experiments high MI-values; however no high Z-score has been reported [Interestingly, residues 72 and 73 of the ATP synthase reported . This diH2r to a perturbation based method, we analysed the MSA of globin sequences, as compiled in [U-value was 0.72. 9 residues gained a conn(k)-value \u2265 4. These were \u2013 projected onto 2DN1 \u2013 residues 97, 40, 57, 93, 131, 37, 85, 2, 39. Only 3 of these predictions (printed in bold) were in agreement with previous findings as reported in [In order to compare the output of piled in . For thiorted in .1QGN) [1SHF) [\u03b5 subunit [in silico methods. Please note that we did not compile specific MSAs but used the precompiled full PFAM alignments for the following tests.For the following comparisons, we used the above introduced proteins represented by a PFAM entry and a related protein structure: Cys_Met_Meta , SH3 dom) [1SHF) , and ATP3, 1AQT) . We seleHSRP (data not shown). CorrMut is a server identifying correlations in the evolution of amino acid sequences [HSRP in all three cases (data not shown).The server based on ,24 did nequences . After sH2r: 85 (15), 86 (-), 97 (-), 114 (22), 115 (-). Thus, H2r did not confirm any of the 14 top ranking predictions. Interestingly, our implementation of a G\u00f6bel like algorithm [MI-based methods like H2r and the above algorithms differ quite significantly in their predictions. 
This statement is further supported by an analysis of a larger dataset reported in the Additional File Based on a chi-squared statistical method, the 25 top co-varying SH3 residues have been computed and ranked . Our resH2r: 85 5, 86 (-)\u03b1\u03b2\u03b2\u03b1 tryptophan synthase complex, which catalyses the final reaction from indole-3-glycerole phosphate + L-serine to L-tryptophan + H2O. The \u03b1 subunit (TrpA) cleaves indoleglycerol-3-phosphate to glyceraldehyde-3-phosphate and indole. The latter is transported through a hydrophobic tunnel to the associated \u03b2 subunit (TrpB), where it is condensed with L-serine to yield L-tryptophan. A sophisticated mechanism of allostery links the \u03b1 and \u03b2 monomers of the synthase[H2r for TrpA and TrpB are plotted as projected onto pdb-entry 1KFJ [conn(k)-value, is an element of the TrpA/TrpB interface [As illustrative examples, we analysed three enzymes of the tryptophan synthesis pathway. TrpA and TrpB constitute the synthase. Both pr synthase. In Figutry 1KFJ . For Trpnterface . Residuenterface . Residueconn(k)-value \u2265 4; conn(90) = 8 is the highest value. Residue 90 is near the lysine, which binds PLP and catalyses the reaction. Residue 19 is an element of the TrpA/TrpB interface [For TrpB, 5 residues possessed a nterface and is anterface and partnterface . The rolH2r predicted 5 residues as suspicious; see Figure Conn(284) was 11. Please note that the residues 235, 297 (distmin = 0.89 \u00c5) and 50, 54 (distmin = 0.72 \u00c5) are contacting residue pairs. For all these residues, the reason for high conn(k)-values is unclear.The anthranilate phosphoribosyl transferase (TrpD) catalyses the group transfer of 5'-phosphoribose from D-5-phosphoribosyl-1-pyrophosphate to the nitrogen atom of anthranilate, which is the third step in L-tryptophan biosynthesis. For TrpD, H2r as a web tool [H2r determines bootstrap supported conn(k)-values and reports the results via email. 
The web-interface can be utilised to change parameters like the number of HSRPs or the usage of pseudo counts. For parameter selection, please see the Additional File We have implemented a server offering web tool . After ue.g. the prediction of protein 2D-structure [e.g. relevant for the identification of binding sites [H2r supplements the repertoire of entropy-based methods of single residues by extending it to residue pairs. The information associated with high conn(k)-values is comparable to that of strictly conserved positions: Both signals, which are based on statistical analyses, identify suspicious residues. However, in both cases the origin of these signals can only be elucidated by exploiting additional knowledge. A typical example for this enigmatic information is TrpD. 4 of the 5 residues constitute two contacting residue pairs, which supports the significance of the related signal. Nevertheless, the conn(k)-values alone do not explain the function of these residues or the origin of the signals.Incorporating MSAs turned out to improve the outcome of many applications like tructure or fold-tructure . The reang sites -38. A prng sites ,40. Corre.g. sequence logos are frequently used to assess individual columns in MSAs [MI (as defined by formula (4)) was the basis for the work presented in [MI describes the extent of association between residues k and l.Shannon's theory of communication has turned out to be useful in many fields of application. In computational biology, in MSAs . A mutuaented in . In biolMI-values are a poor indicator for the prediction of co-evolution [MI-values have been introduced [However, it turned out that unfiltered volution . Therefotroduced .U as it takes into account the entropy values H(k) and H(l), which express the degree of conservation at positions k and l. 
U-values are normalised and the results deduced from synthetic MSA_1 has two major advantages: 1) It is less susceptible to signals of a common evolution that might dominate those sequences constituting a perturbation. Generally, these signals are quite strong [U-values plotted in Additional Figure 2 clearly illustrates the inferiority of the perturbation approach. Comparing e.g. the columns representing frac values 0.4 and 0.8 illustrates the loss of information. If (say) a perturbation is due to an amino acid occurring in 40% of the sequences, the information content of the remaining 60% of the sequences is ignored. If a second amino acid induces a similarly strong perturbation, the U-value increases significantly; compare Additional Figure 2. The same is true for other combinations. A perturbation-based approach does not distinguish between these cases. This example makes clear that the analysis of all frequencies We prefer e strong ,42. In aA well-known problem in the analysis of MSA is the interpretation of columns containing gaps. For the identification of correlation patterns, a gap cannot be treated as 21st amino acid when calculating frequencies. In this case, columns consisting mostly of gaps would be identified as strictly coupled. Figure H2r, frequencies U-values. These frequencies are deduced from those sequences possessing a gap neither at position k nor at l. Thus, all dependencies, where gaps are not involved, are determined in a correct manner. Therefore, the U-values will at all positions solely depend on the signals induced by the amino acid propensities. Thus, ignoring gaps is equivalent to an analysis with 20 instead of 21 symbols. This limitation has to be considered when interpreting conn(k)-values.On the other hand, ignoring gaps is not appropriate, too. It could be that a substitution of a small side-chain with a large one induces the loss of a residue position. 
Columns 5 and 6 of Figure HSRPs, simple algorithms are sufficient for cluster and network generation; see Additional File \u03b3 synthase (1QGN) consists of two domains [1QGNA01 and 1QGNA02. 1QGNA01 binds PLP and consists of residues 48 \u2013 307. 1QGNA02 binds the substrate cysteine [conn(k)-values \u2265 4 are located at the interface of these two domains; compare Figure conn(k)-value of 10, is an element of 1QGNA02, which is not the PLP binding domain. However, this residue is located directly opposite of PLP. For this example, the findings support the notion that conn(k)-values identify residues that signal the concerted co-evolution of domains to form a novel protein function. The functional role of residues possessing high conn(k)-values in the SH3 domain indicates their importance, too. The same is true for most of the conspicuous TrpA/TrpB residues.Correlation signals can be used to compile networks of residues . In the domains , which h domains \u2013 been dcysteine and consH2r were not markedly affected by the mode of MSA generation. This indicates that state of the art programmes and datasets like PFAM offer MSAs of similar quality, which proved to be adequate for coupling analysis. Nevertheless, the composition of the samples fed into an algorithm has to be controlled. Assessing the local quality of MSAs as introduced with T-Coffee [For a reliable prediction of residues that play a major role in protein function or evolution, robustness has to be implemented on all levels of algorithmic design. For success, the generation of high quality MSAs is a critical step. Both the advent of novel algorithms ,49 and mT-Coffee and the T-Coffee could bee.g. by applying a BLOSUM-like scoring function. This is why we are planning to model the biological context more specifically.In addition, it should be possible to improve the above core algorithm. 
Shannon's theory does only consider the frequency of symbols and does not regard the features of the represented objects. In the case of MSAs, it would be reasonable to analyse the composition of columns and to assess the properties of occurring amino acids conn(k) is a novel parameter for the characterisation of a specific class of residues. In a robust way, it indicates the strength of co-variation detectable among residues. In contrast to strictly conserved residues, amino acid composition is allowed to vary for these residues. However, the instrumentation of these positions is sensitively balanced with several other ones. Just as strictly conserved residues, these ones offer an enigmatic signal of protein evolution or function. For a complete decoding, knowledge about the protein, its function, and evolution has to be considered.k, whose values are linked to a discrete set of frequencies f(ai) of amino acids, the entropy H can be computed according to formula (1). The entropy H of two variables (columns) k and l is defined by formula (2). In order to measure the dependency of k and l, the coefficient U can be computed according to formula (3). All frequencies have to be deduced from an MSA. An implementation for computing U is described in [For a random variable (column) ribed in .f is the total occurrence of all amino acids at position k, S are Blosum50 [\u03bb is a weight factor, with 0 \u2264 \u03bb \u2264 1.0. For H2r, we used \u03bb = 1.0.Blosum50 scores aThe frequencies ai at position k and aj at position l. n(k) is the sum of all ncorr-values were normalised so that the sum of the ncorr-values was equal to the sum of the (uncorrected) ai was determined. If fmax(ai) \u2265 0.95, the column and the related residue were regarded as strictly conserved.For each column of an MSA, the largest frequency of any amino acid S1 ... Sn be the n sequences constituting the input (MSA) sequ_in. 
For the computation of sequence identity values ident, the number of identical residues (ignoring gaps) was determined. The two parameters identmin and identmax defined the minimal and the maximal sequence identity values used for comparison. In pseudo-code the algorithm works as follows:

Input: sequ_in = {S1, ..., Sn}
Output: the set filtered
Add S1 to filtered
For i = 2 to n do {
   Compare Si to all sequences of filtered and determine ident
   If identmin ≤ ident ≤ identmax for all comparisons
      Add Si to filtered
}

Due to the results of parameter optimisation, identmin was 20% and identmax was 90%. Columns possessing more than 25% gaps were masked and not processed further. Please note that the first sequence of the input is always an element of the set filtered.

For measuring distances of residues, we used routines compiled by M. Gerstein. We defined the distance of two residues k, l as the minimal space between the van der Waals radii of any pair of atoms belonging to k or l, respectively. Thus, a distance of 0 Å indicates that at least two atoms of k and l are in direct contact in 3D-space.

MZ prepared datasets and multiple sequence alignments and assisted in manuscript writing. RM designed and implemented the algorithm and wrote the manuscript. Both authors read and approved the final version.

Additional file: parameter optimisation and performance tests for H2r, comprising the computations used for parameter optimisation and additional performance tests."} {"text": "Retinal perfusion variability impacts ocular disease and physiology. The aim was to evaluate the response of central retinal artery (CRA) blood flow to temperature alterations in 20 healthy volunteers, in a non-interventional experimental human study. Baseline data recorded were: ocular surface temperature (OST) in °C (thermo-anemometer), and CRA peak systolic velocity (PSV) and end diastolic velocity (EDV) in cm/s using Color Doppler.
Ocular laterality and temperature alteration (warming by electric lamp/cooling by ice-gel pack) were randomly assigned. Primary outcomes recorded were: OST and intraocular pressure (IOP) immediately after warming or cooling and ten minutes later, and CRA-PSV and EDV at three, six and nine minutes of warming or cooling. Repeated measures ANOVA was used for analysis.

Pre-warming values were: OST 34.5 ± 1.02°C, CRA-PSV 9.3 ± 2.33 cm/s, CRA-EDV 4.6 ± 1.27 cm/s. OST significantly increased by 1.96°C (95% CI: 1.54 to 2.37) after warming, but returned to baseline ten minutes later. Only at three minutes did the PSV rise significantly, by 1.21 cm/s (95% CI: 0.51 to 1.91). Pre-cooling values were: OST 34.5 ± 0.96°C, CRA-PSV 9.7 ± 2.45 cm/s, CRA-EDV 4.7 ± 1.12 cm/s. OST significantly decreased by 2.81°C (95% CI: −3.37 to −2.30) after cooling, and returned to baseline at ten minutes. There was a significant drop in CRA-PSV by 1.10 cm/s (95% CI: −2.05 to −0.15) and in CRA-EDV by 0.81 cm/s (95% CI: −1.47 to −0.14) at three minutes. At six minutes both PSV (95% CI: −1.38 to −0.03) and EDV (95% CI: −1.26 to −0.02) were significantly lower. All values at ten minutes were comparable to baseline. The IOP showed insignificant alteration on warming (95% CI of difference: −0.17 to 1.57 mmHg), but was significantly lower after cooling (95% CI: −4.30 to −2.95 mmHg). After ten minutes, IOP had returned to baseline. This study confirms that CRA flow significantly increases on warming and decreases on cooling, the latter despite a significant lowering of IOP.

The ocular circulation is geared to meet nutritional needs and respond to physiological changes so that the specialized tissues can function optimally. Ocular warming in healthy individuals increases the retinal blood flow (RBF), while decreasing the choroidal blood flow (CBF).
Chilling the eye is often resorted to, as in cyclocryotherapy and retinal cryopexy. People at high altitude, polar explorers, and those exposed to snowstorms, avalanches or freezing waters risk cryo-injury. No studies were found evaluating the effect of cooling on ocular blood flow (MEDLINE). Katsimpris demonstrated in rabbits that trans-palpebral ocular cooling significantly lowered aqueous and vitreous temperatures, comparable to the effect of direct corneal chilling. Utilizing CDI, we studied the changes in central retinal artery (CRA) blood flow on warming and cooling normal eyes.

The study was approved by the institutional review board of the Institute of Ophthalmology, Aligarh Muslim University, and informed consent was taken. Twenty healthy young volunteers were recruited from amongst the junior residents of the ophthalmology department. The inclusion criteria were best corrected visual acuity (BCVA) of 20/20 Snellen, with a normal anterior segment on slit-lamp biomicroscopy and a normal fundus on indirect ophthalmoscopy. Volunteers with fever, a history suggestive of a rheological disorder such as diabetes or hypertension, glaucoma, maculopathy, pathological myopia or tear film abnormalities (on Schirmer testing), and subjects who had undergone any ocular surgery or were contact lens wearers, were excluded. A carefully supervised pilot study was done on five volunteers, in whom warming and cooling were applied for gradually increasing periods (up to ten minutes) and VA testing and biomicroscopy were carried out at each interval to assess any adverse effect of the intervention.

Subjects abstained from drinking tea, coffee and smoking for two hours prior to the test and rested for ten minutes before the tests. The study was performed in an air-conditioned room with a controlled temperature range of 20°C-24°C, humidity range of 20-25%, and constant brightness.
Baseline measures recorded were: ocular surface temperature (OST) in °C, using a thermo-anemometer with a resolution of 0.1°C, a range of −20 to 200°C and an accuracy of ±0.8°C (the probe was placed in contact with the lower bulbar conjunctiva), and the CRA velocity (cm/sec), measured by Color Doppler imaging.

The laterality of the eyes (right or left) and the temperature alteration (warming or cooling) were randomly assigned by the toss of a coin. Between temperature alterations each subject was allowed a ten-minute break. For ocular warming an electric lamp with one 60 watt tungsten bulb was placed ten cm from the closed lids for ten minutes. Cooling was achieved using an ice-gel pack with a surface temperature ≤5°C placed in contact (avoiding pressure) with the eye through closed lids for ten minutes.

The following primary outcome parameters were recorded sequentially:

Immediately after warming (or cooling), at time = 0 min: OST. The intraocular pressure (IOP) was recorded as a secondary outcome parameter with the Pulsair Easy Eye air puff tonometer, as the median of three readings, taken within a minute of warming or cooling.

After 3, 6 and 9 min: peak systolic and end diastolic CRA velocity.

After 10 min: OST and IOP were re-measured.

Data were analyzed using repeated measures ANOVA (RANOVA) and the Tukey test for multiple comparisons, with significance set at P < 0.05 (GraphPad Prism version 5.02); 95% confidence intervals are quoted.

We chose ten male and ten female healthy postgraduate volunteers from the department. Their mean age was 26.4 years (SD 1.6). Baseline measures of pulse (μ ± SD: 81.6 ± 6.3 beats/min), systolic BP (μ ± SD: 119.1 ± 8.7 mmHg) and diastolic BP (μ ± SD: 77.1 ± 6.7 mmHg) were within the normal range. The baseline values of OST, CRA-PSV, EDV, RI and IOP are shown in the table. The OST increased to a mean of 36.4 ± 0.90°C immediately after warming.
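The mean changes with 95% confidence intervals reported in the results can be reproduced with a simple paired-difference computation. This is a generic sketch, not the GraphPad RANOVA/Tukey procedure itself; the two-sided t critical value for 19 degrees of freedom (20 subjects) is hard-coded as an assumption:

```python
import math
from statistics import mean, stdev

T_CRIT_DF19 = 2.093  # two-sided 95% t critical value for df = 19 (20 subjects)

def paired_ci(before, after, t_crit=T_CRIT_DF19):
    """Mean within-subject change (after - before) with its 95% CI."""
    diffs = [a - b for a, b in zip(after, before)]
    m = mean(diffs)
    se = stdev(diffs) / math.sqrt(len(diffs))  # sample SD / sqrt(n)
    return m, (m - t_crit * se, m + t_crit * se)
```

For example, feeding the 20 pre- and post-warming OST readings into `paired_ci` would yield the mean rise and an interval analogous to the 1.96°C (1.54 to 2.37) quoted in the results.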
This reflects a significant mean rise of 1.96°C (95% CI 1.54 to 2.37). Ten minutes after cessation of warming, the OST had returned to near baseline values (μ: 34.4 ± 0.92°C). Compared to baseline, the changes in CRA parameters on warming are shown in the table.

The OST decreased to a mean of 31.7 ± 1.38°C immediately after cooling. This reflects a significant mean decrease of 2.81°C (95% CI −3.37 to −2.24). Ten minutes after cessation of cooling, the OST had returned to near baseline values (μ: 34.2 ± 0.93°C). Compared to baseline, the changes in CRA parameters on cooling are shown in the table.

At baseline, in 20 eyes, the IOP ranged from 10-19 mmHg. On warming, the change in IOP was not significant (P = 0.15; 95% CI −0.17 to 1.57). On cooling there was a significant drop in IOP by 3.32 mmHg. The IOP had returned to pre-cooling levels ten minutes after cessation of cooling.

We studied the effect of warming (20 eyes) and (for the first time) cooling (20 eyes) on the CRA parameters of 20 healthy young volunteers. Our baseline OST (34.5 ± 0.98°C), recorded with a thermo-anemometer, was similar to that of Nagaoka, measured with non-contact infrared radiation thermography; Mori also recorded comparable values. Our baseline CRA-PSV of 9.5 ± 2.4 cm/s was similar to that reported by numerous authors. The baseline CRA-EDV of 4.7 ± 1.2 cm/s in our series appears higher than that of Guthoff. In Nagaoka's study the OST rose by 3.3°C from a baseline of 34.5 ± 0.2°C to 37.8 ± 0.3°C on warming for ten minutes and, as in our study, returned to baseline within ten minutes after cessation of warming. In our study, the OST rose significantly after ten minutes of warming; on cooling it fell significantly, but returned to near baseline values after ten minutes (95% CI: −0.93 to 0.21). Ortiz demonstrated a much greater lowering, by 21.5°C, using an extremely cold air stream of −19°C for 40 min.
Both CRA-PSV and EDV significantly decreased on cooling at three and six minutes, returning to baseline values thereafter. Ten minutes after cessation of cooling the IOP had risen to match the baseline values (P = 0.7). Perhaps of greater importance is that despite a decrease in IOP, the CRA PSV and EDV were significantly lower on chilling. This suggests that cold exposure of the eyes causes a significant reduction in ocular blood flow to the posterior segment, and that cryo-trauma to the eye would likely be an ischemic insult.

It is important to consider changes in IOP on warming or cooling the eyes, since alterations of IOP may modify ocular perfusion; IOP has been reported both to decrease and to change in other ways with thermal stimuli. Although the Goldmann applanation tonometer is considered the gold standard, we preferred the air puff tonometer on account of it being quicker, and time was important for us, since the very nature of the study demanded numerous measurements in a limited time. Equally importantly, we were not studying IOP per se, but the change induced by our experimental intervention; for 'change' we felt the air puff tonometer would suffice.

A limitation of our study is that we were not able to co-evaluate the choroidal circulation and flow, largely on account of a lack of equipment. Subjective estimates of the choroidal blood flow, although made, were considered to lack the objectivity to be seriously analyzed. This may be of greater import since choroidal perfusion abnormalities are reported in pathologies like age-related macular degeneration.

Our study confirms that CRA blood flow significantly rises in response to ocular warming. In addition, this is the first study to demonstrate a significant lowering of the CRA blood flow on cooling the eye. Cryo-injury to the eye has great importance for people who live at high altitudes or in extreme climatic zones like Ladakh (Tibet) and the Antarctic.
The effect of cooling the eye on retinal blood flow sheds some light on this little-researched aspect of ophthalmology. Additional research will help us understand the effects of heating and cooling modalities used in ocular therapy. Moreover, we can better predict their adverse effects."} {"text": "We previously demonstrated that the plant-derived agent α-bisabolol enters cells. Here we studied the activity of α-bisabolol in acute leukemia cells. We tested ex vivo blasts from 42 acute leukemias for their sensitivity to α-bisabolol in 24-hour dose-response assays. Concentrations and times were chosen based on the sensitivity of CD34+ and CD33+my cells and of normal peripheral blood cells to increasing α-bisabolol concentrations for up to 120 hours.

A clustering analysis of the sensitivity over 24 hours identified three clusters. Cluster 1 (14 ± 5 μM α-bisabolol IC50) included mainly Ph-B-ALL cells. AML cells were split into clusters 2 and 3 (45 ± 7 and 65 ± 5 μM IC50). Ph+B-ALL cells were scattered, but mainly grouped into cluster 2. All leukemias, including 3 imatinib-resistant cases, were eventually responsive, but a subset of B-ALL cells was fairly sensitive to low α-bisabolol concentrations. α-bisabolol acted as a pro-apoptotic agent via direct damage to mitochondrial integrity, which was responsible for the decrease in NADH-supported state 3 respiration and the disruption of the mitochondrial membrane potential. Our study provides the first evidence that α-bisabolol is a pro-apoptotic agent for primary human acute leukemia cells.

Aliquots of the homogenates were loaded on SDS-polyacrylamide gels at the appropriate concentrations. Electrophoresis was performed at 100 V with a running buffer containing 0.25 M Tris-HCl (pH 8.3), 1.92 M glycine, and 1% SDS. The resolved proteins were electroblotted onto a nitrocellulose membrane using the iBlot™ system.
Membranes were then incubated with a mouse monoclonal IgG antibody to poly(ADP-ribose) polymerase (PARP), with a rabbit polyclonal IgG antibody to BID, or with a rabbit polyclonal IgG antibody to α-tubulin. The membranes were then washed and incubated with an anti-mouse or anti-rabbit IgG peroxidase-conjugated antibody. The blots were washed again and then incubated with enhanced chemiluminescent detection reagents according to the manufacturer's instructions. Proteins were detected using the ChemiDoc XRS Imaging System (Bio-Rad). Cells were homogenized at 4°C in 50 mM Tris-HCl (pH 8) containing 0.1% Nonidet-P40 (NP-40), 200 mM KCl, 2 mM MgCl2, 10 mM Tris-HCl (pH 7.5), 1 mM sodium orthovanadate, and complete EDTA-free protease inhibitor cocktail.

For subcellular fractionation, cells were chilled on ice for 10 minutes and gently lysed by adding 0.3% (v/v) NP-40. In order to restore an isotonic environment, a solution containing 525 mM mannitol, 175 mM sucrose, 12.5 mM Tris-HCl (pH 7.5), 2.5 mM EDTA, and protease inhibitor cocktail was added. Lysates were first centrifuged at 600 × g at 4°C in order to remove nuclei, and the supernatants were then centrifuged at 17,000 × g for 30 minutes at 4°C. The resulting supernatants were collected and used as the cytosolic fraction. The pellets, which contained mitochondria, were washed once with the same buffer and then resuspended in sample buffer. The cytosolic and the mitochondrial fractions were separated on a 15% SDS-PAGE and probed using a rabbit polyclonal IgG antibody to BID. The membranes with the cytosolic and mitochondrial fractions were then probed with a rabbit polyclonal IgG antibody to α-tubulin and with a mouse monoclonal IgG antibody to Hsp60, respectively.

For respiration measurements, cell pellets were suspended in 100 μL of a solution containing 10 mM NaCl and 1.5 mM MgCl2. Leukemic cells and normal lymphocytes were centrifuged and washed with ice-cold buffer A. The pellet was resuspended in 2 mL of buffer A containing 80 μg of digitonin.
After a 1-minute incubation on ice, 8 mL of buffer A were added and the cells were centrifuged. The pellet was resuspended in 100 μL of buffer A containing 1 mM ADP and 2 mM KH2PO4 (respiration buffer) and immediately used for the polarographic assay. Cell number and permeabilization were measured by the trypan blue exclusion method. State 3 respiration was started by adding 5 mM glutamate plus malate (G/M) and 5 mM succinate plus glycerol-3-phosphate (S/G3P), which are complex I and complex III/glycerol-3-phosphate dehydrogenase substrates, respectively. The maximal respiration rate (uncoupled respiration) was empirically determined by the addition of 200 nM carbonylcyanide-4-(trifluoromethoxy)-phenylhydrazone (FCCP). Oxygen consumption was completely inhibited by adding 4 μM antimycin A at the end of the experiments.

Cells resuspended in CM at 1 × 106/mL were treated with 40 μM α-bisabolol for 3 or 5 hours at 37°C. They were then washed with pre-warmed CM, 4 μM of the potential-sensitive dye JC-1 was added, and they were placed back into the incubator. After 30 minutes they were washed twice with pre-warmed PBS. An aliquot of each sample was spotted onto a slide, mounted with a coverslip and immediately recorded with an Axio Observer inverted microscope. Visualization of JC-1 monomers (green fluorescence) and JC-1 aggregates (red fluorescence) was done using filter sets for fluorescein and rhodamine dyes (emission 488 and 550 nm, respectively). Images of random fields were captured using fixed imaging parameters, and previously unviewed areas of cells were captured to avoid photobleaching.

For internucleosomal DNA laddering analysis, 5 × 106 cells were resuspended in 0.3 mL of culture medium containing 10% FBS and incubated for 90 minutes at 65°C and then overnight at 37°C in the presence of 0.4 M NaCl, 5 mM Tris-HCl (pH 8), 2 mM EDTA, 4% SDS and 2 mg/mL proteinase K.
The lysates were brought to a final concentration of 1.58 M NaCl and centrifuged twice for 10 minutes at 6,000 × g to separate the DNA fragments from intact DNA. The supernatants were recovered, and DNA was precipitated by the addition of three volumes of absolute ethanol at -80°C for 1 hour. The DNA pellets were recovered by microcentrifugation and resuspended in a minimal volume of 40 μl of 10 mM Tris-HCl (pH 7.4), 1 mM EDTA, and 1 mg/mL DNase-free ribonuclease A. Aliquots of 5 μg of DNA were then loaded onto a 1% agarose gel containing 0.25 μg/mL ethidium bromide. After electrophoresis, the DNA was visualized by UV light using the ChemiDoc XRS Imaging System (Bio-Rad).

Student's t-test for means, chi-squared tests, the Mann-Whitney U test and Kruskal-Wallis analysis of variance by ranks were used, with p values < 0.05 considered significant. The 24-hour IC50 was approximated by using mean cytotoxicity data in the different groups.

Due to the lipophilic properties of α-bisabolol, a preliminary evaluation was performed of the dose-dependent solubilization in the culture medium over 24 hours by an RP-HPLC method. The addition of α-bisabolol at time 0 was followed by a rapid increase of the measured concentrations during the first 3 hours. After 24 hours, concentrations may be considered roughly constant, though with a slightly downward trend (Figure).

The viability of normal blood cells was evaluated after different times and doses of exposure to α-bisabolol. The cytotoxicity increased in a dose- and time-dependent manner. Normal blood cells had an IC50 of 59 ± 7 μM and were only marginally sensitive to 40 μM α-bisabolol over 120 hours. We also tested CD33+my and CD34+/33+ or CD34+/19+ cells from 5 normal bone marrow samples.
These subpopulations were assumed to represent the normal counterpart of acute leukemia blasts and the hematopoietic compartment that is responsible for bone marrow renewal and, eventually, for drug toxicity. The 24-hour α-bisabolol IC50 was 95 ± 7 and 62 ± 9 μM in CD33+my and CD34+ cells, respectively (p < 0.05). By contrast, no difference was observed between CD34+/33+ and CD34+/19+ cells.

Based on these data from normal cells, we performed ex vivo dose-response cytotoxicity assays at 24 hours in 42 different samples of leukemic cells obtained from patients before any treatment, comprising Ph-B-ALL, Ph+B-ALL, and AML cells. The 24-hour dose-response assays showed that α-bisabolol was cytotoxic to primary Ph-B-ALL cells (33 ± 15 μM IC50). Though less sensitive, Ph+B-ALL cells, including Ph+ cells resistant to imatinib mesylate, and AML cells were also killed. Thus, α-bisabolol is a pro-apoptotic agent for acute leukemia cells ex vivo, particularly for Ph-B-ALL.

A clustering analysis identified groups differing significantly (p < 0.05) in their responsiveness to the apoptotic signals induced by α-bisabolol. The group with the highest sensitivity to α-bisabolol (cluster 1: 14 ± 5 μM IC50) included 2 Ph+ and 6 Ph- B-ALL cases. Thus, a proportion of the Ph-B-ALL cases shared a high sensitivity to α-bisabolol, although some other Ph-B-ALL cases were scattered over different sensitivity groups. The AML cases were split into two groups with intermediate and lower sensitivity. Unlike Ph-B-ALL, AML cases as a whole were less sensitive to α-bisabolol. The Ph+B-ALL cases were scattered over all three groups but were mainly clustered with the intermediate-sensitivity AML.
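The three-cluster split of per-sample IC50 values can be illustrated with a simple deterministic 1-D grouping. The original clustering method is not specified in the text, so the largest-gaps heuristic below is only an illustrative stand-in, not the authors' procedure:

```python
def gap_clusters(values, k=3):
    """Group 1-D values (e.g. per-sample IC50s in uM) into k clusters by
    splitting the sorted list at the k-1 largest gaps between neighbours."""
    vs = sorted(values)
    # indices i where the gap vs[i+1] - vs[i] is among the k-1 largest
    cuts = sorted(sorted(range(len(vs) - 1),
                         key=lambda i: vs[i + 1] - vs[i],
                         reverse=True)[:k - 1])
    clusters, start = [], 0
    for c in cuts:
        clusters.append(vs[start:c + 1])
        start = c + 1
    clusters.append(vs[start:])
    return clusters

def summarize(cluster):
    """Mean and (population) SD for one cluster, for a 'mean +/- SD' summary."""
    m = sum(cluster) / len(cluster)
    sd = (sum((v - m) ** 2 for v in cluster) / len(cluster)) ** 0.5
    return m, sd
```

Applied to hypothetical IC50 values concentrated near 14, 45 and 65 μM, this recovers three groups whose means match the cluster summaries reported above.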
Interestingly, introducing both CD34+ and CD33+my cell sensitivity to α-bisabolol into the clustering analysis made it evident that ALL cells as a whole were more sensitive to α-bisabolol than their normal counterpart (grouped into cluster 3 among the less sensitive cells). This analysis demonstrated that some Ph-B-ALL cases may be highly sensitive to the apoptotic mechanisms activated by α-bisabolol and indicated that the Ph+B-ALL cases, and especially the AML cases, may be characterized by variable degrees of resistance to these mechanisms. Still, all leukemia cases were eventually responsive to 65 μM α-bisabolol for 24 hours, alone or associated with imatinib mesylate (3 μM for 24 hours, representative of the in vivo effective concentration). Cells sensitive to imatinib mesylate shared a significant increase in cytotoxicity to α-bisabolol; for instance, in cells from patient Ph+B-ALL #05 the combination was clearly synergistic, denoted by CI values < 1 for any given Fa. Thus α-bisabolol was also active against Ph+ cells.

We have previously demonstrated that α-bisabolol binds to the BCL-2 family member BID. To evaluate whether α-bisabolol treatment affects mitochondrial state 3 and uncoupled respiration, we tested leukemic cells (Ph-B-ALL, 1 Ph+B-ALL, 2 AML) and healthy lymphocytes from 6 donors. In treated leukemic cells, G/M-supported state 3 respiration was significantly reduced (compared with 280.7 ± 11.9 pmol O2/minute/106 cells; p < 0.05). In contrast, the oxygen consumption sustained by S/G3P oxidation was not affected by α-bisabolol treatment, and the mitochondrial respiration was not stimulated by the addition of FCCP. These data are in line with a loss of mitochondrial integrity in treated leukemic samples, which is responsible for the matrix NADH decrease. This behavior is confirmed by the observation that the respiration in the presence of S/G3P was unaffected.
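The combination index (CI) criterion mentioned above (CI < 1 for any given fraction affected, Fa, denoting synergism) follows the Chou–Talalay formulation; a minimal sketch, in which the dose values are purely hypothetical:

```python
def combination_index(d1, d2, Dx1, Dx2):
    """Chou-Talalay combination index at one effect level (Fa).

    d1, d2:   doses of drug 1 and drug 2 used together to reach Fa
    Dx1, Dx2: doses of drug 1 and drug 2 alone that reach the same Fa
    CI < 1 indicates synergism, CI = 1 additivity, CI > 1 antagonism.
    """
    return d1 / Dx1 + d2 / Dx2

# Hypothetical illustration: a quarter of each single-agent equi-effective
# dose suffices in combination, giving CI = 0.25 + 0.25 = 0.5 (synergy).
ci = combination_index(d1=10.0, d2=1.0, Dx1=40.0, Dx2=4.0)
```

Computing CI over the whole range of Fa values, as done in the text for the α-bisabolol/imatinib combination, amounts to repeating this calculation at each effect level.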
Healthy lymphocyte respiration was not statistically modified by α-bisabolol treatment in state 3, using G/M and S/G3P as substrates or FCCP as a mitochondrial uncoupler. This is in agreement with the resistance to α-bisabolol observed in lymphocytes.

In healthy cells, JC-1 is more concentrated in the mitochondria (driven there by the ΔΨm), where it forms red-emitting aggregates, than in the cytosol, where it exists as a green-fluorescent monomer. Accordingly, the ratio of red to green JC-1 fluorescence can be used as a sensitive measure of ΔΨm. Mitochondrial depolarization (which accompanies cytochrome c translocation and the start of the apoptotic process) is indicated by a loss of red fluorescence and an increase in green fluorescence. Representative data are shown for Ph-B-ALL #01 out of the 6 cases tested. Microscopy revealed that in untreated leukemic cells well-polarized mitochondria were marked by punctate red fluorescent staining. Blasts exposed to 40 μM α-bisabolol underwent a progressive loss of red fluorescence, indicated by a shift right and downward over 3 and 5 hours. In contrast, normal lymphocytes used as a negative control did not show any changes in their microscopy or cytofluorimetric pattern when exposed to a similar α-bisabolol concentration, indicating that there was no mitochondrial damage.

The third subgroup included mainly, but not exclusively, AML samples, with an IC50 value of 65 ± 5 μM. Thus, Ph-B-ALL cases were definitely more sensitive than AML cases, whose IC50 was near to that observed in vitro in normal leukocytes (except lymphocytes) and in hematopoietic precursors. Nevertheless, previous studies in animal models suggested that similar α-bisabolol concentrations may be safely administered through daily oral supplementation, even on a long-term basis. The α-bisabolol concentrations active in vitro are also lower than, or similar to, the concentrations that we measured in the blood and in the brains of healthy mice sacrificed after treatment with 1.4 g/Kg α-bisabolol.
In these mice the blood parameters of liver and kidney functionality were preserved and, remarkably, the concentration in the brain exceeded 50 μM without toxicity. Therefore, an active concentration of α-bisabolol safely accumulated in a body environment where lymphoid blasts have a tendency to localize and survive protected from a number of curative drugs. By cluster analysis, we separated out three subgroups of leukemias with different sensitivities over 24 hours, and α-bisabolol was eventually effective in all of them.

Ph+B-ALL cells were also sensitive to α-bisabolol. In three cases (Ph+B-ALL #01, #04, #06 in the Table), α-bisabolol acted synergistically with imatinib mesylate, as previously observed in the Ph+ human cell line CML-T1. It is not clear, however, whether the synergism depends on internalization mechanics or on intracellular modulation of the damaging actions of each or both drugs.

Our biochemical data suggest a direct effect on mitochondrial integrity as a possible mechanism of α-bisabolol damage to leukemic cells. This is supported by the observed decrease in oxygen consumption in the presence of glutamate/malate and by the unaffected respiration rates in the presence of succinate/glycerol-3-phosphate. Microscopy and flow cytometry data show that α-bisabolol disrupts ΔΨm, which induces outer membrane permeabilization and leads to the apoptotic death of blasts. Our data not only implicate α-bisabolol for the first time in mitochondrial impairment in human leukemic cells but also suggest that this proceeds through a peculiar model of cell death, i.e., the formation of a cellular population with intermediate ΔΨm, which is a feature of apoptosis seen in only a few cell types and never described to date in leukemic blasts.

In all leukemia samples treated with α-bisabolol, BID was found to be expressed in a full-length form that was suitable for binding to α-bisabolol.
We failed to demonstrate full-length BID translocation to the mitochondria in leukemic cells as a pro-apoptotic mechanism. Thus, according to our previous and present work, α-bisabolol enters cells via lipid rafts and directly promotes mitochondrial permeability transition pore opening, although the precise trigger of pore opening remains to be defined.

We provide here the first evidence that α-bisabolol is an effective pro-apoptotic agent in primary ALL cells at concentrations and durations that spare normal blood and bone marrow cells. It retains cytotoxic potential in both imatinib mesylate-resistant and -sensitive Ph+B-ALL, and it is also active against primary AML cells at slightly higher concentrations. Our findings support α-bisabolol as a possible candidate for the treatment of acute leukemias and establish a basis for studies in animal models.

The authors declare that they have no competing interests.

EC, AR, ACdP performed the research, analyzed data, and performed statistical analysis; MB, EG, CB, RF contributed analytical tools, performed selected experiments and analyzed data; GP contributed criticism; HS suggested the research, contributed ideas and critical scientific knowledge, analyzed and interpreted data; FV chose the clinical setting, designed and performed the research, analyzed and interpreted data, and wrote the paper; all authors checked the final version of the manuscript."} {"text": "Road traffic accidents are the second largest cause of burden of disease in Thailand, largely attributable to behavioural risk factors including drinking and driving, speeding, substance abuse and failure to use seatbelts. The aim of this study was to assess the prevalence and associated factors of non-seatbelt use among drivers during the Songkran festival in Thailand. A cross-sectional survey was performed to determine the prevalence of seatbelt use among Thai drivers (N=13722) during four days of the Songkran festival.
For this sample the population of drivers was consecutively selected from 12 petrol stations in four provinces from each of the four main geographical regions of Thailand. The study was conducted at petrol stations on roads in town, outside town and on the highway, at different time intervals, when trained field staff administered a structured questionnaire and completed an observation checklist on seatbelt use. An overall prevalence of non-seatbelt use among drivers of 28.4% was found. In multivariable analysis, demographics, environmental factors, seatbelt use experiences and attitudes, and lower exposure to the road safety awareness (RSA) campaign were associated with non-seatbelt use. In conclusion, the rate of non-seatbelt use by Thai drivers during the Songkran festival was 28.4%, and lower exposure to the RSA campaign was associated with non-seatbelt use.

The Road Traffic Injury (RTI) fatality rate in Thailand was 40 per 100,000 population, i.e., double the world average for low- and middle-income countries. Songkran is the New Year celebration in Thailand, set by the solar calendar since ancient times. It takes place between 13 and 15 April. The Songkran festival days are major holidays that encourage millions of travellers to journey to and from their hometowns and take part in festival activities; unfortunately, these holiday periods are also marked by high rates of road traffic injuries.

A cross-sectional survey was performed to determine the prevalence of seatbelt use among drivers. The recruitment period of this project was the four days of the Songkran festival, from 13–16 April 2007. For this sample, the population of drivers was recruited at petrol stations selected from four provinces in each of the four main geographical regions of Thailand, excluding Bangkok.
Provinces were Chiang Mai, Lampang, Nakhon Sawan and Phichit in the northern region; Nakhon Ratchasima, Khon Kaen, Udon Thani and Loei in the northeastern region; Songkhla, Phuket, Surat Thani and Trang in the southern region; and Phra Nakhon Si Ayutthaya, Chonburi, Chachoengsao and Phetchaburi in the central region. In total 48 petrol stations (three petrol stations per province) were selected using quota sampling. In town, the petrol station on the road with the largest shopping mall was selected; out of town, the petrol station on the road leading to the largest district was selected; as for the petrol station on the highway, each province has only one highway. If there was more than one petrol station on the selected road or highway, the largest petrol station was selected. The study team spent four days at each petrol station venue, from 7:00–9:00, 13:00–15:00, 17:00–19:00 and 22:00–24:00. All consecutive motor vehicle occupants who entered the petrol station were asked to participate by trained personnel (students from Chiang Mai University trained by the research team) while they were having their fuel tank filled. The number of vehicles and the time interval for vehicle selection were determined by the availability of field staff to conduct the driver observation, interview and alcohol test. The target sample size was 100 drivers from each of the petrol stations per time period, except during 22:00–24:00, for which 50 drivers were targeted. Trained field staff administered a structured questionnaire and completed an observation checklist. The project was approved by the Ethics Committee for research in human subjects of the public health programme, Chiang Mai University. The primary outcome of the study was seatbelt use, which was assessed by observation.
The questionnaire covered demographic data, vehicle characteristics, history of road traffic accidents, and known risk factors such as age, sex, environmental factors, seatbelt use experiences and attitudes, and exposure to the road safety awareness (RSA) campaign. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) for Windows, version 19.0. Frequencies, means and standard deviations were calculated to describe the sample. Data were checked for normality of distribution and outliers; non-parametric tests were used for non-normally distributed data. Associations with non-seatbelt use were identified using logistic regression analyses. Following each univariate regression, multivariable regression models were constructed. Independent variables from the univariate analyses were entered into the multivariable model if significant at the P < 0.05 level. For each model, the R2 is presented to describe the amount of variance explained by the multivariable model. Probability below 0.05 was regarded as statistically significant. The total sample included 13,722 drivers; 77.4% of the drivers were male and 22.6% female. The majority of the drivers (79.9%) were between 26 and 59 years old, and about half (50.7%) were driving a pickup. Driver participation in the study was equally distributed across Thailand's four regions, the four data collection times during the day, the four dates of data collection and the three locations of data collection. The overall prevalence of non-seatbelt use was 28.4%. Of drivers who had been in an accident before, most had been involved as drivers, followed by passengers (22.5%) and pedestrians (2.0%). A large group of participants (46.6%) indicated that they had not usually been using a seatbelt before, and 41.5% had not intended to use a seatbelt. The majority (73.7%) perceived a danger in not wearing a seatbelt, and 53.0% were highly aware of the danger of not wearing a seatbelt.
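The univariate screening step used here, where each factor is tested singly and carried into the multivariable model only if significant at P < 0.05, can be illustrated for a binary factor: for a 2 × 2 table, the unadjusted logistic-regression odds ratio equals the cross-product ratio, with a Woolf confidence interval. The counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: low RSA-campaign exposure vs. non-seatbelt use
or_, lo, hi = odds_ratio_ci(50, 50, 25, 75)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 3.00 (95% CI 1.65-5.46)
```

A factor whose interval excludes 1 (equivalently, P < 0.05) would then be entered into the multivariable model.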
A sizeable proportion (26.4%) indicated that they had been caught by the police for not wearing a seatbelt, and 67.3% perceived a moderate to high risk of being caught by the police for not wearing a seatbelt. Almost all (90.4%) had heard about the RSA campaign, and more than one-third (36.3%) had frequently heard or seen the RSA campaign on the radio or on TV. More than half (57.0%) of the participants had talked to others about the RSA campaign. One-third (33.3%) liked the RSA campaign very much, 31.4% frequently followed the TV news reports on road traffic injury (RTI) statistics, and more than half (54.7%) perceived that the RSA campaign had a high effect. In multivariable analysis, demographics, environmental factors, seatbelt use experiences and attitudes, and lower exposure to the RSA campaign were associated with non-seatbelt use in this study. Caution should be taken when interpreting the results of this study because of certain limitations. As this was a cross-sectional study, causality between the compared variables cannot be concluded. A further limitation was that some variables were assessed by self-report, and desirable responses may have been given. Other limitations include that other substance use (illicit drugs) was not assessed, although it has been found to be prevalent in other studies in Thailand. The rate of non-seatbelt use by Thai drivers and passengers during the Songkran festival was 28.4%. Lower exposure to the RSA campaign was found to be associated with non-seatbelt use among drivers during the Songkran festival. The authors declare that they have no competing interests. PS, KP and SP were the main contributors to the conceptualization of the study. KP, PS and SP contributed significantly to the first draft of the paper, and all authors contributed to the subsequent drafts and finalization.
All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/12/608/prepub

The aim of this study was to assess helmet use and associated factors among motorcycle riders during the Songkran festival in Thailand. A cross-sectional survey was conducted to determine the prevalence of helmet use among Thai motorcycle riders during four days of the Songkran festival. For this sample, the population of motorcycle riders was consecutively selected using quota sampling from 12 petrol stations in four provinces from each of the four main geographical regions of Thailand. The study was conducted at petrol stations on roads in town, outside town and on the highway at different time intervals, when trained field staff administered a structured questionnaire and performed an observation checklist. Results indicate that 44.2% of the motorcycle riders and 72.5% of the motorcycle passengers had not been using a helmet. In multivariable analysis, demographics, environmental factors, helmet use experiences and attitudes, and recalling a lower exposure to the road safety awareness (RSA) campaign were associated with non-helmet use among motorcyclists. It appears that the RSA campaign may have some positive effect on reducing non-helmet use among motorcycle riders during the Songkran festival. Surveys of seat belt and helmet use in countries including India, Iran, Nigeria and Vietnam have linked non-use to factors such as gender, location, time of day and day of the week. According to the Road Traffic Act 1979, section 122, motorcyclists and passengers are obliged to wear a helmet to protect themselves from harm while riding.
From 1997, an active public education programme was undertaken on a national scale to raise awareness about road safety and to support law enforcement. This included dissemination of knowledge through multiple channels, e.g., roadside posters, stickers on the back of vehicles, sporadic radio and TV programmes or spots, public announcements and press reports. Songkran is the New Year celebration in Thailand, set by the solar calendar since ancient times. It takes place between 13 and 15 April. The Songkran festival is a major holiday period that draws millions of travellers to and from their hometowns for celebrations. A cross-sectional survey was conducted to determine the prevalence of helmet use among Thai motorcycle riders. The recruitment period of this project was the four days of the Songkran festival, 13–16 April 2007. For this sample, motorcycle riders were selected using quota sampling from 12 petrol stations in four provinces from each of the four main geographical regions of Thailand, excluding Bangkok. Provinces were Chiang Mai, Lampang, Nakhon Sawan and Phichit in the northern region; Nakhon Ratchasima, Khon Kaen, Udon Thani and Loei in the northeastern region; Songkhla, Phuket, Surat Thani and Trang in the southern region; and Phra Nakhon Si Ayutthaya, Chonburi, Chachoengsao and Phetchaburi in the central region. In total, 48 petrol stations (three petrol stations per province) were selected. In town, the petrol station on the road with the largest shopping mall was selected; out of town, the petrol station on the road leading to the largest district was selected; for the highway location, each province has only one highway. If there was more than one petrol station on the selected road or highway, the largest petrol station was selected.
The study team spent four days at each petrol station venue, during 7:00–9:00, 13:00–15:00, 17:00–19:00 and 22:00–24:00. All consecutive motorcycle riders who entered the petrol station were asked to participate by trained personnel (students from Chiang Mai University trained by the research team) while they were having their fuel tank filled. The number of vehicles and the time interval for vehicle selection were determined by the availability of field staff to conduct a motorcycle rider observation, interview and alcohol test. The target sample size was 100 motorcycle riders from each of the petrol stations per time period, except during 22:00–24:00, for which 50 motorcycle riders were targeted. Trained field staff administered a structured questionnaire and performed an observation checklist. The project was approved by the Ethics Committee for research in human subjects of the public health programme, Chiang Mai University. The primary outcome of the study was helmet use. Helmet use was assessed by observation. The questionnaire covered demographic data, motorcycle characteristics, history of road traffic accidents, and known risk factors such as age, sex, environmental factors, helmet use experiences and attitudes, and exposure to the road safety awareness (RSA) campaign. For the model, the R2 is presented to describe the amount of variance explained by the multivariable model. Probability below 0.05 was regarded as statistically significant. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) for Windows, version 19.0. Frequencies, means and standard deviations were calculated to describe the sample. Data were checked for normality of distribution and outliers. Interaction between predictor variables was also examined, and it was found that none of the variables had a Variance Inflation Factor (VIF) value above 2.5.
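The collinearity screen just described (no predictor with a VIF above 2.5) can be illustrated for the two-predictor case, where VIF = 1 / (1 − r²) with r the Pearson correlation between the two predictors. The data below are invented for illustration, not taken from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / math.sqrt(sxx * syy)

def vif_two_predictors(x, y):
    """Variance inflation factor for either of two predictors."""
    r = pearson_r(x, y)
    return 1.0 / (1.0 - r ** 2)

# Hypothetical, strongly correlated predictor columns
x = [1, 2, 3]
y = [1, 2, 4]
print(round(vif_two_predictors(x, y), 1))  # 28.0, far above the 2.5 cut-off
```

With the study's cut-off of 2.5, either predictor in this toy pair would be flagged as collinear and reconsidered before model entry.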
Associations with non-helmet use were identified using logistic regression analyses. A multivariable regression model was constructed. Independent variables from the univariate analyses were entered into the multivariable model if significant at the P < 0.05 level. The sample included 18,998 motorcycle riders, of whom 320 refused to participate, giving a response rate of 98.3%. Overall, 44.2% of the motorcycle riders had not been using a helmet. Almost half of the motorcycle riders (49.6%) had a passenger, of whom 72.5% had not been wearing a helmet. The largest group of the motorcycle riders were between 18 and 25 years old (43.9%), followed by 26 to 59 year olds (41.8%); 2.2% were below the legal motorcycle riding age (<15 years), and among those who were 15 to 17 years old, 46.1% were illegally riding a motorcycle larger than 110 cc. The data collection was equally distributed across the four regions, the four days of data collection, the three locations of data collection and the different times during the day; only data collection in the evening yielded a smaller sample (see Table). About one third of the motorcycle riders (33.4%) indicated that they had been in an accident before, mostly as a rider (75.5%), followed by passenger (22.8%) and pedestrian (1.5%). Almost half had had the intention to use a helmet (44.8%). A large group agreed about the danger of non-helmet use (75.0%), and more than half (55.8%) were highly aware of the danger of not using a helmet. Almost half (47.2%) had previously been caught for non-use of a helmet. The majority (83.9%) had heard about an RSA campaign, 29.2% had frequently heard or seen the RSA campaign on the radio and/or TV, and 29.3% frequently followed TV news reports on road traffic injury (RTI) statistics.
Two in five (40.1%) had also talked to others about the RSA campaign, and most (89.0%) liked the RSA campaign (see Table). In multivariable analysis, it was found that the highest proportion of non-helmet use among motorcyclists was in the age group 15 to 17 years old and in riders from the northern region of Thailand. Motorcyclists carrying a passenger were significantly more likely not to use a helmet than those without a passenger (see Table). Non-helmet use was found to be more frequent earlier during the Songkran festival, later during the day, and on roads out of town and on highways. Respondents who had been in an accident before, had low awareness of the danger of non-helmet use, or had been caught for non-helmet use were more likely not to wear a helmet. Motorcyclists who recalled a lower exposure to the road safety awareness campaign were more likely not to wear a helmet than those who recalled a higher exposure. This study among a large sample of motorcyclists in Thailand found that 44.2% of the motorcycle riders and 72.5% of motorcycle passengers had not been wearing a helmet, which is similar to a 2007 survey in Thailand that reported 46% and 69.1% non-helmet use among motorcyclists and passengers, respectively. In contrast to other studies, this study did not find any association between gender and non-helmet use. Caution should be taken when interpreting the results of this study because of certain limitations. Since the sampling procedure was not truly random, this may be a limitation of the study. As this is a cross-sectional study, causality between the compared variables cannot be concluded. A further limitation is that some variables were assessed by self-report, and desirable responses may have been given. Other limitations include that substance use was not assessed. Rates of non-helmet use by Thai motorcycle riders and passengers during the Songkran festival seemed to be high.
It appears that the road safety awareness campaign may have a slight positive effect on reducing non-helmet use among motorcycle riders during the Songkran festival. The presented information concerning the different peaks of unhelmeted motorcyclists will be helpful in devising specific countermeasures against such risky behaviour.

Supplemental Digital Content is available in the text. Anorectal melanoma (AM) is a rare type of melanoma that accounts for 0.4% to 1.6% of all malignant melanomas. The incidence of AM has increased over time, and it remains highly lethal, with a 5-year survival rate of 6% to 22%. Considering the rare nature of this disease, most studies on AM comprise isolated case reports and single-center trials, which cannot provide a comprehensive assessment of the disease. Therefore, we conducted a population-based study using the Surveillance, Epidemiology, and End Results (SEER) program to provide the latest and best available evidence on AM. We extracted all cases of AM registered in the SEER database from 1973 to 2011 (April 2014 release) and calculated age-adjusted incidence. Only cases with active follow-up were included to predict factors associated with prognosis. Survival outcomes were also compared among different types of surgery. We identified 640 AM cases, consisting of 265 rectal and 375 anal melanomas. The estimated annual incidence rates of AM per 1 million population were 0.259 in males and 0.407 in females, and incidence increased with advanced age and over time. Tumor stage and surgical treatment were independent predictors of survival. Results implied that surgery improved the prognosis of patients with local- and regional-stage AM but could not prolong the survival of patients with distant-stage AM. Moreover, the outcome of less extensive excision was not statistically different from that of more extensive excision. This study provides an up-to-date estimation of the incidence and prognosis of AM by using SEER data.
The incidence of AM has continuously increased over time, despite its rarity. This disease also exhibits poor prognosis. Thus, AM must be further investigated in future studies. We also recommend surgery as the optimal treatment for local- and regional-stage AM patients, but not for those with distant metastasis. Melanoma is an aggressive, therapy-resistant malignancy of melanocytes. Melanoma is a major public health concern, and its incidence has continuously increased over the past 4 decades.14 Primary mucosal melanomas behave more aggressively and have poorer prognosis than cutaneous melanomas and are most common in the head and neck, anorectum, and female genital tract, with distributions of approximately 55%, 24%, and 18%, respectively.4 As the most frequent location of primary gastrointestinal tract melanoma, anorectal melanoma (AM) accounts for 0.4% to 1.6% of all malignant melanomas;6 the incidence rate of AM is about 2.7 cases per 10 million population per year in the United States.7 AM is likely to go unnoticed and be diagnosed at an advanced stage because of its unspecific symptoms, such as bleeding, presence of a mass, and a sensation of tenesmus, which are clinically consistent with benign hemorrhoid diseases.10 Only 20% to 30% of AMs are located in the rectum; the rest are found within the anal canal or anal verge.12 Therapy for AM has not been standardized because of the low incidence of this disease and the lack of clinical experience. Generally, surgical excision is the primary treatment option for AM, but the selection of either abdominoperineal resection or wide local excision remains controversial.13 Currently, AM remains a highly lethal disease, with a 5-year survival rate of 6% to 22%.13 The use of this large population-based database can avoid the limitations of small size, as well as selection or treatment bias.
Moreover, the results can be readily generalized and are considered more valid than institutional data because patients were treated in all types of clinical settings.17 This study aims to provide the best available evidence to help clinicians gain a better understanding of AM. Information on the epidemiology and prognosis of AM, particularly rectal melanoma, is limited because of the rarity of this disease. Most studies in the literature are isolated case reports and single-center trials, which cannot accurately reflect the situation of AM. In this study, we provide insights into the epidemiology and survival outcomes of AM by using the Surveillance, Epidemiology, and End Results (SEER) Program. We also investigated surgical treatment for AM, particularly in terms of survival differences among different surgery types. SEER is an authoritative source of information on cancer incidence and survival in the United States; this program contains data collected from 18 cancer registries, which cover 28% of the US population. We obtained internet access to the SEER database with the reference number 13504-Nov2013, and our study was approved by the Ethics Committee of the Second Affiliated Hospital of Zhejiang University School of Medicine. This observational study did not publish any information on individual patients; therefore, informed patient consent was not required.18 The included patients satisfied the following criteria: anatomic site of rectum or anus; histologically diagnosed as melanoma (Histologic type ICD-O-3 codes: 8720–8772); malignant behavior (Behavior code ICD-O-3 code: 3); and microscopic confirmation of diagnosis. In the second part of the study, cases without active follow-up were excluded to predict factors associated with overall survival (OS) and cause-specific survival (CSS).
Patients whose data were sourced solely from a death certificate (Type of Reporting Source code: 6) or who had 2 or more primaries in their lifetime (Sequence number code: 1–99) were also excluded. The screening procedure is shown in Figure. The SEER program is the largest publicly available cancer dataset and is updated annually. The program contains data on patient demographics, tumor characteristics, first course of treatment, and follow-up information. In this research, the SEER dataset from 1973 to 2011 (April 2014 release) was used for case extraction. In this study, surgery types were categorized into less extensive excision (LEE) and more extensive excision (MEE) to investigate differences in survival outcomes. LEE refers to tumor resection without dissection of lymph nodes, and MEE indicates tumor resection with lymph node removal. Specifically, in rectal melanoma, cases with RX Summ Surg Prim Site (1998+) codes of 10–28 or Site specific surgery (1983–1997) codes of 10–20 were identified as LEE; by contrast, cases with Summ–Surg Prim Site (1998+) codes of 30–70 or Site specific surgery (1983–1997) codes of 30–60 were categorized as MEE. LEE (Summ–Surg Prim Site codes: 10–27 or Site specific surgery codes: 10–40) and MEE in anal melanoma were extracted similarly. The Pearson χ2 test or independent-samples t test was employed to investigate significant differences between 2 groups. The incidence of AM in the United States was calculated as the number of new patients per 1 million people per year, with adjustment to the US 2000 population, and presented by sex and tumor site. Briefly, we extracted population data and calculated incidence rates by using the SEER*Stat Version 8.2.0 software. We also analyzed incidence trends using 9 age groups and 4 observation periods. A P value of <0.05 was considered statistically significant.
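Direct age adjustment of the kind described here weights each age group's crude rate by that group's share of the standard (here, US 2000) population. A minimal sketch with invented strata, not SEER counts:

```python
def age_adjusted_rate(groups, per=1_000_000):
    """Directly standardised incidence rate.
    groups: iterable of (cases, person_years, standard_weight) tuples,
    where the standard-population weights sum to 1."""
    return per * sum(w * cases / py for cases, py, w in groups)

# Hypothetical strata: (new cases, person-years at risk, US-2000 weight)
groups = [
    (2, 1_000_000, 0.4),  # younger stratum: crude rate 2 per million
    (5,   500_000, 0.6),  # older stratum: crude rate 10 per million
]
print(age_adjusted_rate(groups))  # roughly 6.8 cases per million per year
```

The adjusted rate (6.8 per million here) is a weighted average of the stratum-specific crude rates, so populations with different age structures can be compared on a common footing.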
All tests were 2 sided, and confidence intervals (CIs) were set at 95%. Univariate and multivariate models were constructed to evaluate factors correlated with survival. Survival was defined as the number of months between the date of diagnosis and the date of death from any cause (OS) or from their cancer (CSS). In the univariate model, Kaplan–Meier curves were plotted to display survival rates over time; the curves were then compared using log-rank statistics. Age at diagnosis, sex, race, stage, surgery, tumor location, and year of diagnosis were included as covariates. The multivariate Cox proportional hazards model was then fitted to estimate hazard ratios (HRs) between survival and covariates, which included variates with statistical significance in the univariate model. The Cox regression method was further applied to control for the influence of covariates and compare survival rates between the 2 groups. All statistical analyses were conducted using SPSS version 19.0 software. Age (P = 0.662) and sex (P = 0.476) were not statistically different between the 2 groups. About 83.9% of the included cases were White, and more Asian patients were observed in the anal group (P = 0.027). A total of 640 patients were assessed as eligible for inclusion in the study by using the patient selection algorithm described in the Methods section (Figure). The anal group included more regional-stage cases (P < 0.001) and fewer distant-stage cases (P = 0.018). In addition, surgery was administered in 84.1% of the total patients. The number of patients with AM almost doubled from 1973–2000 (N = 221) to 2001–2011 (N = 419). Radiation and lymph node status were excluded from the analysis because of the absence of corresponding data in 80.8% and 70.6% of the cases, respectively. In terms of stage, 232 (36.3%) patients were diagnosed at the localized stage, followed by the regional and distant stages.
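The Kaplan–Meier estimate used in the univariate model multiplies, across successive death times, the fraction of at-risk patients surviving each time; censored cases leave the risk set without contributing a death. A minimal product-limit sketch on invented follow-up data, not the SEER cohort:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up durations (e.g., months); events: 1 = death, 0 = censored.
    Returns a list of (time, S(t)) pairs at each observed death time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for tt, e in data if tt == t]
        deaths = sum(at_t)
        if deaths:
            s *= (n_at_risk - deaths) / n_at_risk  # survive this death time
            curve.append((t, s))
        n_at_risk -= len(at_t)  # deaths and censorings leave the risk set
        i += len(at_t)
    return curve

# Hypothetical follow-up: deaths at 1, 3 and 4 months; one case censored at 2
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1]))
# [(1, 0.75), (3, 0.375), (4, 0.0)]
```

Two groups' curves computed this way would then be compared with the log-rank test, as in the study.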
Compared with rectal melanoma, patients in the anal group were diagnosed at an earlier stage, with more cases at the regional stage. The incidence rate increased with advanced age in both sexes and at all tumor sites. Specifically, no patient with AM younger than 20 was reported, and the overall incidence rate increased from 0.013 (20–29 years) to 2.818 (≥85 years), with corresponding increases from 0.000 to 2.397 and from 0.026 to 3.000 for males and females, respectively. About 75.8% of the cases (N = 485) were eligible for survival analysis; patient demographics and therapies were not different between the total patients (N = 640) and the included cases. Therefore, these 485 patients were used in log-rank and Cox proportional hazard analyses to determine potential factors that affect the prognosis of AM. Patients undergoing surgical resection exhibited improved survival (P < 0.001). Moreover, tumor location and year of diagnosis were related to survival. As shown in Table, multivariable analysis implied that patients with AM who underwent surgery showed improved prognosis, with HR = 0.66 (P = 0.023) and 0.69 (P = 0.048) in the OS and CSS models, respectively. In the CSS model, regional stage could be a risk factor for survival. Conversely, survival benefits were not statistically different among patients with distant metastasis. The present study found a higher incidence of rectal melanoma than previous studies. Moreover, the incidence of rectal melanoma was higher than that of anal melanoma between 2006 and 2011.19 A total of 27 states and 1 metropolitan area participated in that registry study, and its data covered more of the population than those of the SEER program. However, that program had no updated evaluation of the incidence of AM in the past decade.
In Sweden, a population-based study covered about 95% of cancer patients and included 253 cases over a 40-year period (1960–1999).20 The reported incidence rates were 1.0 and 0.7 per million for females and males, respectively, significantly higher than those reported in the United States. Investigators in Australia also reported that the incidence of AM was 0.28 per million in 1985 to 1995, similar to the present results for the same period.21 The estimated annual incidence of AM was 0.343 per 1 million overall, and 0.259 and 0.407 for males and females, respectively. The incidence increased with age and over time. Specifically, the incidence peaked in patients over 85 years old and in the period between 2006 and 2011, with incidence rates of 2.818 and 0.460, respectively. This finding is in agreement with the incidence data obtained from the North American Association of Central Cancer Registries; in that report, the incidence rate was 0.4 per million between 1996 and 2000 (based on 299 cases), age adjusted to the 2000 U.S. population standard.22 However, a simplified clinical staging system was used to categorize vaginal and anorectal melanomas because of their rarity. Specifically, AM was classified based on disease distribution as stage I (localized disease), stage II (regional lymph node involvement), or stage III (distant metastasis).23 This is consistent with the SEER summary staging system adopted in the present study. We conducted univariate and multivariate analyses to predict the prognosis of AM. As predicted, stage was an independent factor for OS and CSS. Patients with local-stage AM showed better survival than those with regional- and distant-stage AM.
For the staging of mucosal melanoma, the American Joint Committee on Cancer Tumor, Node, and Metastasis classification is used for mucosal melanoma of the head, neck, and vulva.26 In 2010, Iddings et al9 analyzed 145 patients from the SEER database between 1973 and 2003 and concluded that the type of surgery did not affect OS or CSS. These findings are consistent with our results, which are based on the latest information and a larger population. Therefore, if technically possible, we advise patients with AM to receive LEE to avoid unnecessary injuries and improve their quality of life. Surgical resection is the standard of care for AM, and patients undergoing surgery showed improved prognosis. In this research, patients with distant metastasis could not obtain significant survival benefits from surgery. Hence, surgical resection may not be the optimal choice for distant-stage patients. With regard to the type of surgery, MEE with dissection of lymph nodes can control lymphatic spread and result in less local relapse; however, this technique confers a long hospital stay, slow recovery, and a need for a permanent stoma. In addition, a few studies reported that patients failed to achieve any survival benefit from such an aggressive surgical approach.28 Second, as patients were enrolled in the SEER program from 1973 to 2011, the coding and staging systems evolved and differed significantly over the past 4 decades, which requires a thorough understanding of the variables. Apparently, during such a long period, improvements in imaging and pathology may have contributed to increased incidence rates, and improved chemotherapy and surgical techniques may have affected prognosis as well.30 Third, sample sizes were small in subgroup analyses, which may contribute to false positives or negatives. This limitation is inevitable for studies on such a rare tumor.
Therefore, if possible, a prospective study with a large sample size should be performed in the future to validate the present results. However, this article still has several limitations. First, as a SEER-based observational study, we were unable to obtain data regarding adjuvant therapy and the detailed course of treatment, which are strongly associated with prognosis. Moreover, lack of information on comorbidity, recurrence, and treatment-related complications; migration of patients in and out of the SEER registry; and selection bias are factors that should be considered. This population-based study provides an up-to-date estimation of the incidence and prognosis of AM by using SEER data. The incidence of AM increased with age and over time. Tumor stage and surgery may be independent risk predictors, and patients with distant-stage AM could not obtain survival benefits from surgical treatment. Moreover, prognosis was not statistically different between LEE and MEE.

Cash transfer programs (CTPs) aim to strengthen financial security for vulnerable households. This potentially enables improvements in diet, hygiene, health service access and investment in food production or income generation. The effect of CTPs on the outcome of children already severely malnourished is not well delineated. The objective of this study was to test whether CTPs would improve the outcome of children treated for severe acute malnutrition (SAM) in the Democratic Republic of the Congo over 6 months. We conducted a cluster-randomised controlled trial in children with uncomplicated SAM who received treatment according to the national protocol and counselling, with or without a cash supplement of US$40 monthly for 6 months. Analyses were by intention to treat. The adjusted hazard ratios in the intervention group for relapse to moderate acute malnutrition (MAM) and SAM were 0.21 and 0.30, respectively.
Non-response and defaulting were lower when the households received cash. All the nutritional outcomes in the intervention group were significantly better than those in the control group. After 6 months, 80% of cash-intervened children had regained their mid-upper arm circumference measurements and weight-for-height/length Z-scores and showed evidence of catch-up. Less than 40% of the control group had a fully successful outcome, with many deteriorating after discharge. There was a significant increase in diet diversity and food consumption scores for both groups from baseline; the increase was significantly greater in the intervention group than in the control group. The hazard ratio of reaching full recovery from SAM was 35% higher in the intervention group than in the control group (95% CI = 1.10 to 1.69, P = 0.007). CTPs can increase recovery from SAM and decrease default, non-response and relapse rates during and following treatment. Household developmental support is critical in food-insecure areas to maximise the efficiency of SAM treatment programs. ClinicalTrials.gov, NCT02460848. Registered on 27 May 2015. The online version of this article (doi:10.1186/s12916-017-0848-y) contains supplementary material, which is available to authorized users. Childhood malnutrition is a significant cause of ill health and poor development worldwide. High-quality nutrition is essential in early childhood to ensure healthy growth, proper organ formation and function, a strong immune system and neurological and cognitive development. Children with severe acute malnutrition (SAM) are at high risk of morbidity and death. Although considerable progress has been made in treating SAM, there is a particular need to determine the effect of CTP strategies and their impact on vulnerable households with malnourished members among different target groups and contexts. Most underlying causes of malnutrition are a function of people's resources and social context.
What households produce, as well as the time they have to care for dependent members, is determined by a range of social, economic and political factors; these are thought to include the division of labour, gender inequality, educational opportunities and property and power relations. When households have more money, they can diversify their diets by buying or growing food of a higher quality, afford to attend the health centre and invest capital to create ongoing income-generating opportunities. Here, we hypothesised that additional cash will have a direct effect in improving the final outcome of children enrolled in a community-based management of acute malnutrition (CMAM) program. Specifically, within the household the cash would decrease intra-household sharing of the ready-to-use therapeutic food (RUTF) and improve food diversity and consumption. The cash would improve the outcome of the malnourished child by reducing death, morbidity, transfer to hospital, defaulting, relapse and other causes of failure. The child would have higher rates of mid-upper arm circumference (MUAC) and weight gain and of the derived anthropometric indices. This paper presents the findings from a cluster-randomised trial comparing the outcome of a standard Outpatient Therapeutic Program (OTP) for SAM and infant and young child feeding (IYCF) counselling with and without a monthly cash transfer over a 6-month period in the Democratic Republic of the Congo (DRC). The study took place in the city of Mbuji-Mayi (GPS 6°11'S 23°54'E) in the Kasaï-Oriental province. There are 52 functional health centres where management of SAM was established using the OTP module of the National Integrated Management of Acute Malnutrition protocol. This setting was selected for the following reasons.
First, the socioeconomic homogeneity of the whole livelihood zone was confirmed by three baseline surveys: (1) a market chain analysis, (2) a HEA analysis and (3) a further baseline survey. Two additional reasons, documented in the same baseline analyses, also supported this choice. (The present study took place in the DRC, where around two million children younger than 5 years old are affected by SAM every year.) A cluster was defined by a health centre and its catchment area. There were 52 eligible health centres, of which 20 were selected. The health centres were selected at random by sequentially drawing, in public, sealed, numbered papers from a basket in the presence of all 52 health centre representatives. A priori, in order to minimise contamination bias between clusters, if a subsequent health centre was drawn but was found to have a catchment area adjoining a health centre that had already been selected, that centre was eliminated from the draw and a further sealed paper drawn; the health centre representatives were aware of this procedure before the draw took place. When the 20 health centres that were to be included in the trial had been chosen, a second selection round was conducted, with each selected centre drawing a sealed paper with either a 1 or a 2 written on it to indicate the arm of the trial to which that centre would be entered. A third selection was made in private, when one of the arms was assigned at random to receive standard SAM management plus counselling, and the other arm standard SAM management plus counselling plus cash transfer. In order to detect an expected recovery rate of 70% after 8 weeks and an expected difference between groups of 10%, with an α-error = 5%, a β-error = 20% and an intra-class correlation of 0.001, a sample size of 1392 children was required. Assuming a study dropout of up to 15%, a total sample size of 1600 children was projected, with an average of 80 participants per cluster: 800 per arm.
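The sample-size reasoning above (two proportions, α = 5%, β = 20%, ICC = 0.001, clusters of about 80, 15% dropout) can be sketched with the standard textbook formula. Note that this simple version yields a smaller total than the 1392/1600 reported, so the investigators' exact assumptions evidently differed; the sketch only illustrates how the cluster design effect and dropout inflation enter the calculation.

```python
import math

def cluster_rct_sample_size(p1, p2, cluster_size=80, icc=0.001, dropout=0.15):
    """Per-arm sample size for comparing two proportions, inflated by the
    cluster design effect and anticipated dropout (textbook formula)."""
    z_a, z_b = 1.959964, 0.841621          # alpha = 5% (two-sided), power = 80%
    n = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2
    deff = 1 + (cluster_size - 1) * icc    # design effect for clustering
    return math.ceil(n * deff / (1 - dropout))

# Expected recovery 70% vs 80%, as in the trial's stated assumptions
per_arm = cluster_rct_sample_size(0.70, 0.80)
```

With an ICC this small, the design effect is only about 1.08, so dropout dominates the inflation.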
Because we wished to avoid any confounding from temporal variation in the centres' recruitment rate, we determined the potential case load from the admissions during the previous year, when there had not been any active screening of the population. We then conducted an exhaustive, active screening of the whole catchment area of each health centre in order to recruit the required number of children (about 4 per centre per day) over as short a time as possible. The staff in each health centre comprised the nurse in charge and two Save the Children staff dedicated to checking and collecting the data. The study's staff were supervised by Pronanut (the DRC Government nutrition agency) and UNICEF. The study teams were trained intensively for 3 weeks prior to the start of the study by the Principal Investigator, and the details of the national protocol were reviewed with the nurses in charge of the centres. Inclusion criteria for the trial required participants to be eligible for outpatient SAM treatment according to the integrated management of acute malnutrition (IMAM) national protocol, among other criteria. The objectives and procedures of the study were explained to heads of households or principal child caregivers before inclusion. An informed consent statement was read aloud in the local dialect before consenting adults signed or gave their fingerprint. It was emphasised that participation in the study was not a pre-condition for obtaining nutritional treatment and free medical services. It was clearly stated that participants were free to withdraw from the study at any time. The protocol was approved by both the National Ethical Committee of the School of Public Health of the Faculty of Medicine of the University of Kinshasa and the Ministry of Public Health.
The study was registered on ClinicalTrials.gov as NCT02460848 and was performed in accordance with Good Clinical Practice (GCP) guidelines for clinical trials and according to the tenets of the Declaration of Helsinki. In both study arms, children with SAM received treatment according to the IMAM national protocol for OTP and counselling on IYCF. After discharge, children were followed up monthly: medical history, physical examination and anthropometry were repeated at each visit until the end of the study. Observations were concluded 6 months after recruitment. Thus, those who were under treatment for longer had a shorter post-discharge follow-up; as failure to respond was defined as still being malnourished after 12 weeks of treatment, the minimum follow-up period was 14 weeks, so that individual children had either three, four or five monthly follow-up assessments. Children were defined as having relapsed to SAM if they again reached any of the three criteria defining SAM at least once during the monthly follow-up visits after being discharged as recovered. Children's relapse to moderate acute malnutrition (MAM) was defined as the development of WHZ < −2.0 and ≥ −3.0 (WHO Growth Standards 2006), or MUAC < 125 mm and ≥ 115 mm, at least once during the monthly follow-up visits, without the child developing SAM criteria during any other follow-up visit. 'Unknown' was defined as a defaulter not confirmed by a home visit or with no information for the child at the end of the trial. Withdrawal from the study was defined as participants who elected to stop the study for personal reasons. At enrolment, trained health workers recorded the socioeconomic characteristics of the household and categorised it into wealth groups according to local definitions of wealth and assets from the HEA assessment.
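The relapse definitions above amount to a simple decision rule. A minimal sketch follows, using the cut-offs as stated in the text; the function name and structure are illustrative and not taken from the trial protocol (SAM here additionally includes nutritional oedema, one of the standard admission criteria):

```python
def classify_status(whz, muac_mm, oedema=False):
    """Classify follow-up anthropometry using the cut-offs in the trial's
    relapse definitions (WHO Growth Standards 2006). SAM takes priority,
    so MAM is only assigned when no SAM criterion is met."""
    if oedema or whz < -3.0 or muac_mm < 115:
        return "SAM"
    if whz < -2.0 or muac_mm < 125:
        return "MAM"
    return "recovered"
```

For example, `classify_status(-2.5, 130)` returns `"MAM"` (WHZ criterion), while `classify_status(-1.0, 110)` returns `"SAM"` (MUAC criterion).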
The Household Dietary Diversity Scores (HDDSs) and food consumption scores were also recorded. All participating caretakers from the intervention group with one or more children with SAM received an unconditional cash transfer of US$40 value each month during treatment and follow-up, for a total of 6 months. We emphasised to the participants that they could use the funds in any way they saw fit, in a completely unrestricted way, without any conditions being imposed on how the cash was used. The cash was distributed directly to the child's caretaker for each household in the intervention group without informing or involving the health centre staff. Each month the cash was given from 10 separate administrative offices by two Save the Children financial staff and two food security supervisors attached to the study and completely independent of the health centre staff. The dispensing of the money was spread throughout the month according to the patient's admission date, so that daily attendance was minimised. This mechanism was considered the most secure way to preserve confidentiality and to avoid contamination bias; electronic and other forms of cash transfer were not available in the area at the time of this study. The amount given to each household was fixed and not adjusted by the size of the household or the number of malnourished children. This amount was calculated to provide 70% of the monthly household income for a household of seven persons characterised as very poor using HEA criteria. The cash transfer for a family of seven persons amounts to 18 cents US per person per day. As the area is isolated, without reliable ground transportation of goods, it tends to be more expensive than less isolated areas. The cash given in other CTPs in the DRC varies from US$110 to US$135 monthly (except for one pilot project which dispensed US$20.5 per month).
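As a quick check of the quoted per-capita figure, US$40 spread over a seven-person household works out to about 18 cents per person per day when a 31-day month is assumed:

```python
# Per-capita value of the transfer quoted in the text
monthly_cash = 40.0     # US$ per household per month
household_size = 7
days_in_month = 31      # a 31-day month reproduces the quoted 18 cents
per_person_per_day = monthly_cash / (household_size * days_in_month)
```

With a 30-day month the figure rounds to 19 cents instead, so the quoted value implies the longer month.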
The amount given in this study was judged by the Emergency Department of UNICEF to be a sustainable amount that could be supported by donors and other stakeholders if the program were to be scaled up; the higher amounts given in other programs were judged to be operationally unsustainable. All data were collected on standardised paper forms and double-entered into EpiData version 3.1 (EpiData Association) by staff unaware of the arm to which each health centre belonged. Any anthropometric data which fell outside the limits of biological plausibility, using WHO criteria, were eliminated from the database. Significance testing for differences between the intervention and control groups at baseline was performed using the independent-sample Student's t test for continuous variables and the chi-square test for categorical variables. We checked for possible deviation from the proportional hazards assumption of the Cox regression model by using the non-proportionality test on the basis of the Schoenfeld residuals. Differences between trial arms in child recovery (the primary outcome) and in relapse rate after discharge from therapeutic home treatment were tested by using a mixed-effects Poisson regression model, with health centre as a random effect, to estimate the incidence rate ratio (IRR). Next, we estimated hazard ratios (HRs) and 95% confidence intervals (CIs) using marginal Cox proportional hazards models adjusted for baseline values, where the outcome variable is time from recruitment to the event (recovery) and the time scale is calendar week. All 95% CIs used robust estimates of the variance to account for clustering at the health centre level, as well as a shared-frailty model as developed by Andersen.
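The Kaplan-Meier estimation underlying the survival analysis can be illustrated with a minimal pure-Python product-limit sketch on toy data (not the trial's data; in practice one would use a statistics package such as R's `survival` or Python's `lifelines`):

```python
def kaplan_meier(times, events):
    """Product-limit estimate S(t); events[i] is 1 if the endpoint
    (e.g. recovery) was observed at times[i], 0 if censored."""
    s, curve = 1.0, []
    for t in sorted({t for t, e in zip(times, events) if e}):
        at_risk = sum(1 for ti in times if ti >= t)          # still in follow-up
        d = sum(1 for ti, e in zip(times, events) if ti == t and e)
        s *= 1 - d / at_risk                                 # step down at each event
        curve.append((t, s))
    return curve

# Toy follow-up times in weeks: 1 = recovery observed, 0 = censored
toy = kaplan_meier([2, 3, 3, 5, 7], [1, 1, 0, 1, 0])
```

Censored children still count in the risk set until their censoring time, which is why the estimator handles the trial's staggered post-discharge follow-up (three to five monthly visits) naturally.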
Comparisons between arms for the secondary outcomes (time to recovery, length/height change, IDDS, HDDS, FCS and daily, weekly and monthly anthropometric changes) were made by using linear mixed-effects models for continuous outcome variables, whereas mixed-effects logistic regression models were used for proportions, with health centre as a random effect and models adjusted for baseline values. Analyses of anthropometric data which depended upon body weight excluded children with oedema; oedematous children were included in all other analyses. We produced and used Kaplan-Meier plots to estimate the probability of failure to achieve and maintain nutritional recovery up to 6 months from enrolment. Survival curves of the two groups were compared using Cox regression analyses with robust estimates of the variance to account for clustering at the health centre level, and P values were computed with the robust score test. Between 16 July 2015 and 31 July 2015, 1600 children were admitted to the centres with a diagnosis of SAM; 119 (7.4%) children did not meet the inclusion criteria and were excluded. Among those, some were admitted using the IMAM unisex weight-for-height table but were ineligible using sex-specific assessment, a few lived outside the catchment area and others were referred directly to the hospital. The trial profile is shown in the figure. Most baseline characteristics did not differ between the arms (P > 0.05). The mothers had higher school achievement in the intervention group but were otherwise not different. The HEA assessment showed that 75% of the households were classified as poor or very poor in both arms. The household size and number of children younger than 5 years were both greater in the intervention group.
However, there were fewer household members older than 5 years in the intervention group than in the control group (5.3 vs 5.5), so that the increased number of young children, and hence the higher dependency ratio, in the intervention group would tend to make this group more vulnerable to nutritional deficits. In both groups, less than 20% of the households had an acceptable diet diversity score, with 43% being in the lowest category; 53% were considered to have poor or borderline food consumption. These results are consistent with previous assessments conducted in Kasaï-Oriental. The control and intervention (cash) groups' baseline characteristics are shown and compared in the corresponding table. The results of the treatment are given in the corresponding table. During treatment, changes in weight, WHZ, weight-for-age Z-score (WAZ), body mass index Z-score (BMIZ) and IDDS were all significantly greater in the intervention group than in the control group; however, there was no difference in the rate of increase in MUAC or height. Both groups gained less height than expected when compared with the height-for-age Z-score (HAZ) standards for age; the changes were not significant. The mean length of stay among children in the therapeutic program was 6.9 weeks (±2.5). It did not differ significantly between the groups, although the median was 1 week shorter in the intervention group. Adjustment of the analysis for the significant baseline household and maternal characteristics increased the difference between the groups by a trivial amount. The mixed-effects Poisson regression analysis gave similar results. The Cox regression analyses showed that the control children deteriorated after discharge. This deterioration in the control children's status post-discharge is confirmed by the proportion of children who again developed MAM or SAM. Altogether 44% of the control children deteriorated after treatment, 11% to a level where they would be readmitted for SAM.
Most of the children who relapsed to MAM did so by both WHZ and MUAC. Furthermore, as there was no significant gain in height, the deterioration in WHZ for those who relapsed to MAM or to SAM could not be ascribed to a disproportionate gain in height relative to weight. For those who relapsed to SAM, only 8 of the 707 (1.1%) intervened children relapsed anthropometrically by the WHZ criterion and none by the MUAC criterion. The main type of relapse in the intervened children was re-development of oedema. There were slightly more children who relapsed with oedema in the control group (26 vs 16), but the difference was not significant (χ² = 2.7, P = 0.10). In contrast, 47 (7.1%) of the control children re-developed anthropometric SAM having been discharged as cured. Most of the cash was spent on food, with the remainder going to water (2.8%), bill payment (2.3%), tuition fees (1.8%), health costs (1.5%) and the remaining 5.3% to other activities and basic needs. The extra expenditure on food should translate into an improvement in the quality of the diet for the index child and the whole family. Diet diversity and food consumption scores increased significantly from baseline in both groups (P < 0.001). However, the increment in the cash-intervention group was very much greater than that in the control group; the increment ranged from 2.6 times the control group value for the index child's dietary diversity to 5.3 times for the household diet diversity score. By the end of the study, 60% of the intervention group's households had achieved a high dietary diversity, whereas only 17% of the non-intervened group had an acceptable dietary diversity at this time.
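The reported χ² = 2.7 for the oedema relapse comparison can be roughly reproduced with a Yates-corrected chi-square on the 2×2 table, assuming 707 intervention children (as stated in the text) and about 662 control children; that control denominator is inferred from "47 (7.1%)" and is an assumption, not a quoted figure.

```python
def yates_chi2(a, b, c, d):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]]
    with Yates continuity correction."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Oedema relapse: 16 of 707 intervention vs 26 of ~662 control children
# (662 is an inferred, hypothetical denominator, as noted above)
chi2 = yates_chi2(16, 707 - 16, 26, 662 - 26)
```

With these inputs the statistic comes out close to the reported 2.7; without the continuity correction it would be nearer 3.2, suggesting the authors used the corrected test.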
Their food consumption scores mirrored these data, with only 3.8% of the group receiving cash having a poor or borderline score, compared to about 25% of the control group (P < 0.001). The comparison of the children who relapsed after recovery with those who maintained their recovery is given in the Additional files. To our knowledge, this is the only study to assess the direct effect of supporting households with cash during the course of treatment and follow-up of children with SAM. The analysis by intention to treat showed that the cash supplement significantly improved all aspects of treatment. Six months after admission, 80% of the children whose families were given additional support remained within the normal range of WHZ and MUAC. In contrast, less than 40% of those whose families did not receive this additional support had a good outcome; this is not only statistically significant but also biologically highly significant. For a child to develop severe acute malnutrition shows the child, and presumably the siblings and whole household, to be at particular risk of death and the other serious consequences of being severely malnourished. Follow-up of young children in a DRC community with good medical facilities, but without specific management of SAM, shows that about 5% with a WHZ < −3Z are dead within 3 months; this increases to 15 to 20% at < −4Z and 30% for those approaching −5Z. The parents of malnourished children need to choose between attending the health centre and all their other competing activities essential to the integrity of the household. The poorer the household, the more important each individual economic activity is, because the survival of the household is fragile and continuously at risk from even minor additional stress.
If the parents consider that the treatment at the health centre is not helping, competing activities critical or the costs of attending excessive, attending will not be a priority and they will default. Thus, the defaulting rate is one measure of the quality of the service provided. Children are much less likely to recover if they default from treatment. During treatment the intervention group gained weight faster than the control group, more children recovered and fewer failed to respond to treatment. This occurred despite the same amount of RUTF being dispensed to the families and all other aspects of treatment being the same apart from the cash delivery. The question arises of how the cash delivered to the family effected this improvement in recovery from wasting. We hypothesise that it was due either to the children receiving a higher proportion of the dispensed therapeutic food, or to the child getting a higher-quality diet from the family pot, or a combination of these factors. As most of the funds were spent on food, it is unlikely that the immediate environment (water and sanitation, for example) improved over a short time or that there was significantly more health-seeking behaviour. The maximum average intake of the RUTF is around 70% of the amount dispensed; the actual intake is likely to be much lower, because some of the 120 kcal/kg taken by the child will come from the family food and not from the RUTF. Furthermore, if the RUTF were being taken exclusively, there would have been no other foods taken and the IDDS would have shown no diversity at all. As this was not the case, it confirms that the children were indeed consuming less than the computed 70% of the dispensed RUTF. Unrecognised low compliance can be a major reason why randomised controlled trials report false negative results. We collected data on the sharing of the RUTF. The data are not presented because we judged them to be very inaccurate.
The respondents from both groups reported very low levels of sharing within the family. In fact, compliance is usually poor in resource-poor settings, with extensive sharing, and there is an incentive for the families to exaggerate the amount taken by the child in order to ensure continued enrolment in the program. Dietary diversity is associated with a child's nutritional status. In agreement with our results, CTPs in African countries and programs have reported an increase in household consumption, with the majority of the additional income from the cash intervention being spent on a variety of foods and a resulting improvement in diet diversity. The most dramatic finding of our study was the difference in the relapse rate between the children of households who received ongoing support to the end of the 6-month period and those who were simply returned 'cured' to their households. Relapse after discharge was the main reason for failure of the program without the cash transfer; of those admitted initially, less than 40% were deemed a success after 6 months. These are the most vulnerable children in the community; after treatment they are returned to exactly the same environment and circumstances that they endured whilst becoming malnourished. Although after treatment they are older and healthier, and thus more likely to demand and receive their share of the household food, they are still at high risk of relapse without continued support. The relapse rate without family support after discharge varies greatly from study to study. The state of food security at discharge has a decisive effect in countries with seasonality. This is elegantly shown in a study from India: discharge when food security was low was followed by a relapse rate of 35% to MAM and 6.5% to SAM; with moderate food security this fell to 29% and 3.8%, and with high food security to 8.7% and 0.7%, respectively.
The cash converted the food-insecure households with restricted diets and high levels of malnutrition into relatively food-secure households in the same area, at the same time as the control group households remained food-insecure. The difference in outcome for the two groups is clear. The intervention children continued to catch up from −1.5Z weight-for-height and MUAC-for-age towards the median; in contrast, the control group without this household support significantly deteriorated, with a high proportion of the children relapsing. Thus, the very large differences between the relapse rates during food-insecure and food-secure times are artificially mirrored in the DRC by our study. Comparison with a similar trial in Niger is complicated by, inter alia, differences in the types of food available, cultural practices, taboos, women's roles, seasonality and climate between the DRC and Niger. This raises the question of the external validity of such studies. However, it is noteworthy that IYCF was not included in the Niger study, and it is possible that such counselling affected the choices, behaviour and disbursements of the recipient households in our study to the benefit of the children. Other forms of post-discharge support may also have a beneficial impact on the further fate of the children; for example, a quasi-controlled analysis showed a better outcome of MAM with prolonged supplementary feeding. There was no difference by group in the proportion of children developing oedema after discharge. We do not have an explanation for this observation, but it may depend upon individual nutrients, such as sulphur or particular antioxidants, being generally deficient in the foods available in this area. Our study emphasises that protocols specifically developed for short-term relief in emergency situations may not be sufficient for use in impoverished communities in a developmental context.
Having identified households with a malnourished child in the poorer sections of society, giving short-term treatment to increase the weight of the child is appropriate to prevent imminent death, but is insufficient when the 'cured' children are simply returned to their original poverty-stricken households without other interventions. In this context the 'emergency' home treatment should be combined with 'developmental' support to the family that can be sustained in the longer term and lead to an improvement in their circumstances. It is unknown what happened to the children in this study when the cash transfer program ceased, because the children were not followed for logistical reasons. They may have deteriorated subsequently in the same way that the control group deteriorated without family support. Given the dramatic findings of this study in terms of relapse, longer-term follow-up should be investigated in further studies. Although the increases in food diversity and food consumption scores were much greater in the intervention group, there was also an improvement in the control group which was significantly greater than their baseline assessments. This could be due to temporal changes in the whole community; alternatively, it could be ascribed to the IYCF counselling. Whatever the cause, it is clear that IYCF counselling did not have a dramatic effect on preventing deterioration or relapse in the control group. Knowledge about IYCF could not compensate for the effect of poverty on the household's ability to purchase higher-quality foods and, presumably, to follow the advice given about young child feeding. However, IYCF was probably critical for the changes in the intervention group, who now had the resources to implement the advice given during counselling. It is important that distribution of cash respects the autonomy of households to decide how best to meet their own requirements.
However, we expect that any significant effect on child nutritional status depends on the duration of exposure to the intervention and the amount of cash transfer received per household per month, relative to the local costs and the local availability of high-quality food. Information on program costs, although a key indicator for public health decision-makers and program managers, was not an objective of this study. Nevertheless, it usually costs less to get cash transfers to people than in-kind assistance, because aid agencies do not need to purchase, transport, store and then pay to distribute relief goods. A four-country study comparing cash transfers and food aid found that 18% more people could be assisted at no extra cost if everyone received cash instead of basic food (i.e. not including fresh foods). In the DRC, and in other large countries with a relatively scattered population and poor transport, preventive programs like general food distribution or blanket feeding are logistically too difficult and expensive and so have never been implemented country-wide. Supplementary feeding programs for treating MAM are sporadic and not functional in most of the DRC. In this context, CTPs should be considered as an alternative to in-kind assistance and services, or as a complement to more traditional interventions. Normally, national protocols for treating SAM state that children who have recovered from SAM should be enrolled in a supplementary feeding program, to be followed and to receive a fortified food ration for at least 3 months after discharge. This provision is frequently unrealistic. The results of the present study, which show that a cash supplement effectively prevents relapse and allows for continued catch-up, demonstrate CTP to be a viable and more easily implemented alternative to a supplementary feeding program.
Most programs for managing severely malnourished children do not automatically include a follow-up program and only report the excellent 'recovery' rates. Such would have been the results of both arms of our study. By including the follow-up in the assessment of our program, we have shown that many reports can be misleading in terms of the overall success of a program. We strongly recommend that such continued support and its evaluation should be routine for all CMAM/IMAM programs in relatively stable countries. This study showed that there was no catch-up in height-for-age in either group. Thus, the program had no effect on stunting. It may be that increased rates of normal growth, indicated by a height increase, are delayed beyond the study period, as observed by Heikens et al. It was not possible, of course, to blind the participants to the transfer of cash into their hands. Indeed, this intervention might have been the incentive to continue participation in the study and may have affected the results by preventing defaulting, which was then analysed by intention to treat, so that all defaulting children were included in the analyses and, where possible, followed up with home visits. Although the health centre staff were not involved in any way with the cash transfer, it is difficult on a practical level to prevent the service personnel from becoming aware of who received the intervention and who did not. As the participants were not blinded, it is possible that the intervention group's respondents were more disposed towards the study than those of the control group, which could have affected the answers they gave to the questionnaires. We do not think that this is likely to have biased the results, as both groups gave clearly erroneous reports of RUTF sharing to the same extent, and the two groups' responses were similar at baseline.
Nevertheless, it was not possible to verify the accuracy of the other questionnaire data with direct home observations. Clearly, this potential bias will not affect the anthropometric data. There is likely to have been an ascertainment bias, such that the patients were not properly representative of severe malnutrition in the community, because those who were recruited by MUAC criteria were selected from community screening, whereas those who were selected by WHZ criteria were taken from attendees at the health centre. Survey data from the DRC indicate that 32% of severely malnourished children by WHZ have a MUAC above the cut-off point for SAM. As there is no real seasonality in this location, and as the two arms were conducted concurrently, we do not consider the time of discharge to be a cause of bias. The data for the non-intervened group are consistent with the reports of other programs of outpatient treatment in many other locations, and their rates of relapse are similar to those of many other reports. We therefore consider the study to have reasonable external validity for the control group. The effect of the intervention, however, is dependent upon variables such as the availability of varied nutrient-dense products on the market, reasonable market access for the participants and reasonable price stability; therefore, the external validity of the intervention arm needs to be confirmed in other contexts with differing potential effectors. This research demonstrates the benefits of cash assistance; potential negative impacts were not considered or examined (such as the extra cash in the society increasing market prices to the detriment of the control group). However, one possible negative effect of a therapeutic treatment program is that the child could be purposefully kept malnourished in order to receive the benefit.
This is not thought to be a problem in the present study, as it was made clear to the beneficiaries of the intervention arm that they would receive the cash transfer monthly for 6 months independently of the recovery rate of the child, provided that they did not default from the program. Thus, the cash is likely to have deterred defaulting (a good outcome), but there was no incentive to maintain the child in a malnourished state to continue to benefit from the program. It is possible that such an effect was present for some children in the control arm of the study; however, the mean length of stay under treatment was not different between the groups. If this did occur, it is a further benefit of the cash transfer for the intervention group. The study does not provide evidence of a greater positive effect of providing cash assistance rather than in-kind or other forms of assistance. The study estimates the impact of cash when US$240 was delivered per household over the course of 6 months to children with SAM. The findings should not be extrapolated to different amounts or time frames. Moreover, this trial was conducted in a semi-urban area of the DRC without marked seasonal variation, which during the study had relatively poor food security and a high prevalence of wasting. Therefore, our results need to be extrapolated with care and interpreted within the given context. This study shows that giving cash in impoverished communities can be effective in improving the outcome of children treated for SAM and provides a safety net that prevents relapse and allows for continued catch-up in weight and MUAC up to 6 months from admission.
This very positive impact over a relatively short time on the children's nutritional status is most easily explained by the improved access to high-quality food, enabling households not only to meet minimal needs for survival but also to diversify their diet within a society characterised by a high level of endemic malnutrition. In the DRC, where supplementary feeding interventions are logistically difficult and expensive to implement, carefully designed cash transfers are shown to be a viable and highly effective intervention to reduce acute malnutrition and improve coverage. Such innovative programs merit further investigation in different contexts to assess their cost-effectiveness compared to other interventions. Nutritional rehabilitation programs, which often use procedures derived directly from emergency relief operations, should always consider the feasibility of incorporating developmental programs such as micro-credit, home gardening, etc. into their procedures to enable the gains from acute nutritional intervention to be sustained.

Additional file 1: Table S1. Changes in diet diversity and food consumption score between children who relapsed and recovered. (DOC 35 kb)
Additional file 2: Table S2. Changes in anthropometric indicators between children who relapsed and those who did not relapse after discharge from therapeutic home treatment for severe acute malnutrition. (DOC 51 kb)

Here we report on the observation of this so-called topological magnetoelectric effect. We use monochromatic terahertz (THz) spectroscopy of TI structures equipped with a semitransparent gate to selectively address surface states. In high external magnetic fields, we observe a universal Faraday rotation angle equal to the fine structure constant α = e²/(2ε₀hc) (in SI units) when linearly polarized THz radiation of a certain frequency passes through the two surfaces of a strained HgTe 3D TI.
These experiments give insight into the axion electrodynamics of TIs and may potentially be used for a metrological definition of the three basic physical constants.

The electrodynamics of topological insulators has been predicted to show a new magnetoelectric term, but this had not been observed. Here, Dziom et al. observe a universal Faraday rotation angle equal to the fine structure constant, evidencing the so-called topological magnetoelectric effect.

Maxwell's equations are the foundation of modern optical and electrical technologies. To apply Maxwell's equations in conventional matter, it is necessary to specify constitutive relations, describing the polarization Pc(E) and magnetization Mc(B) as functions of the applied electric and magnetic fields, respectively. The electrodynamics of topological insulators (TIs) is described by modified Maxwell's equations, which contain additional terms that couple an electric field to a magnetization and a magnetic field to a polarization of the medium, such that the coupling coefficient is quantized in odd multiples of the fine structure constant, where N is an integer and α ≈ 1/137. Soon after the theoretical prediction, it was realized that cross-linked terms Pt(B) and Mt(E) arise when time-reversal symmetry is weakly broken. When linearly polarized electromagnetic radiation passes through the top and bottom topological surfaces, the Faraday rotation is θF = −4α/(1 + nsub); that is, it depends on the refractive index of the substrate nsub and hence is not fundamental. In real samples, the TME may also be screened by nontopological contributions. Strained HgTe layers grown on CdTe, which are investigated in the present work, have been shown to be a 3D TI. Here we report on the observation of a universal Faraday rotation angle equal to the fine structure constant α (for N = 0), which comes from two spatially separated topological surfaces in a 3D TI.
This corresponds to the half-quantized Hall conductivity e²/(2h) per surface or, equivalently, to the TME occurring at each surface separately. Therefore, the observed Faraday effect 2(N + 1/2)α is intimately related to the TME, which qualitatively distinguishes our 3D TI from 2D or quasi-2D materials. There is also a quantitative difference. Even without the substrate (nsub = 1), the Faraday rotation in graphene would be quantized as 4(N + 1/2)α, including the spin and valley degeneracies; the minimum Faraday rotation angle is then 2α (for N = 0). In GaAs/AlGaAs heterostructures, the rotation is quantized as 2Nα, where the factor of 2 comes from the equal contributions of the up- and down-spin subsystems, which independently exhibit the integer QHE; this is because the Zeeman splitting is not resolved at the magnetic fields used, and the minimum rotation angle is 2α (for N = 1).

The strained HgTe film is a 58 nm thick HgTe layer embedded between two Cd0.7Hg0.3Te layers. The Cd0.7Hg0.3Te layers have a thickness of 51 nm (lower layer) and 11 nm (top/cap layer), respectively. The purpose of these layers is to provide an identical crystalline interface for the top and bottom surfaces of the HgTe film as well as to protect the HgTe from oxidization and adsorption. This leads to an increase in carrier mobility with a simultaneous decrease in carrier density compared to uncapped samples; the present sample has a carrier density of the order of 10¹¹ cm⁻² and a carrier mobility of 2.2 × 10⁵ cm² V⁻¹ s⁻¹. The optical measurements are carried out on a sample fitted with a 110 nm thick multilayer insulator of SiO2/Si3N4 and a 4 nm thick Ru film. The Ru film (oxidized in air) is used as a semitransparent top-gate electrode. The magnetooptical experiments in the THz frequency range (v < 1 THz) are carried out in a Mach–Zehnder interferometer arrangement.
Using wire grid polarizers, the complex transmission coefficient t = |t|e^{iφ} is obtained both in parallel (tp) and crossed (tc) polarizer geometries for light propagating along the magnetic field (Faraday geometry); an e^{−iωt} time dependence is assumed for all fields. As the wavelength of 856 μm for v = 0.35 THz is much larger than the HgTe layer thickness, we use the thin-film limit, and the corresponding transfer matrix is expressed through the longitudinal (σxx) and Hall (σxy) components of the conductivity tensor. For normal incidence, the fields across the conducting interface are connected by the Maxwell equations. Here Ωc is the cyclotron resonance (CR) frequency, σ0 is the dc conductivity, and τ is the scattering time; for classical conductors, the CR frequency is written as Ωc = eB/me, where me is the effective electron cyclotron mass. The total transfer matrix connects the fields on both sides of the sample and hence contains full information about the transmission and reflection coefficients of the HgTe layer. The expressions for tp and tc based on the transfer matrix formalism, as well as the exact form of the transfer matrices, are presented in the Methods section. The sharpness of the cyclotron resonance in tp and tc indicates a high purity of our HgTe layer: the scattering time is significantly longer than the inverse THz frequency, and the ac conductivity reveals a resonance-like behaviour with σxx, σxy ∝ 1/ω² at high frequencies. The magnetic field dependence of the THz transmission is dominated by a sharp CR of the surface electrons. To understand the origin of the experimentally observed resonances, we analyse the band structure of the tensile-strained film; using the semiclassical quantization condition, where E(k) is the energy dispersion, B is the magnetic field and A is the area enclosed by the wave vector k, we calculate the cyclotron mass for the topological surface state. Experimentally, a simultaneous fit of the real and imaginary parts of tp and tc allows the extraction of all transport characteristics, that is, the conductivity, charge carrier density, scattering time and CR frequency, with the mobility μ = τΩc/B.
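The classical expressions quoted here can be made concrete. Below is a minimal sketch (not the authors' fitting code) of the Drude magneto-conductivity tensor in the e^{−iωt} convention; the density, cyclotron mass, and scattering time are illustrative assumptions, not the sample's values.

```python
def drude_sigma(omega, sigma0, tau, omega_c):
    """Classical Drude magneto-conductivity components (sigma_xx, sigma_xy)
    in the e^{-i*omega*t} time convention."""
    d = (1.0 - 1j * omega * tau) ** 2 + (omega_c * tau) ** 2
    sigma_xx = sigma0 * (1.0 - 1j * omega * tau) / d
    sigma_xy = sigma0 * omega_c * tau / d
    return sigma_xx, sigma_xy

# Illustrative (hypothetical) parameters, with sigma0 = n e^2 tau / m
e_ch = 1.602176634e-19          # elementary charge, C
m_cr = 0.03 * 9.1093837015e-31  # assumed cyclotron mass, kg (illustrative)
n = 5e15                        # assumed sheet density, m^-2 (illustrative)
tau = 1.0e-12                   # assumed scattering time, s
B = 5.0                         # magnetic field, T
omega_c = e_ch * B / m_cr       # cyclotron frequency, Omega_c = eB/m
sigma0 = n * e_ch**2 * tau / m_cr
```

In the dc, high-field limit Ωcτ ≫ 1 the Hall component tends to the classical value ne/B, and at B = 0 the ordinary Drude formula σ(ω) = σ0/(1 − iωτ) is recovered.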
The surface states demonstrate high mobility, consistent with the dc transport data. Since the e-CR and s1, s2 resonances occur at different magnetic fields, their contributions to the ac transport can be clearly separated; the ac conductivity of the surface states dominates at large gate voltages, and in what follows we therefore concentrate on the surface response. From the obtained scattering time and the CR positions we model the magnetooptical spectra of σxy. The overall behaviour is provided by the high-field tail of the classical Drude model, that is, a decrease of σxy with growing magnetic field. In addition to the classical behaviour, regular oscillations in ∂σxy/∂B can be recognized, which are linear in inverse magnetic field. In magnetic fields above 5 T, the Hall conductivity clearly shows a plateau close to e²/h per surface. The background is provided by the classical curve: σxy(ω) can be approximated as σxy(ω) ≈ σ0Ωce/[(Ωce² − ω²)τ], which in the limit Ωce ≫ ω approaches the dc value σxy = ne/B, a multiple of e²/h at the plateau fields. In low magnetic fields, the CR frequency Ωce becomes comparable to the THz frequency ω, destroying the regularities in σxy(ω). Since in strained HgTe the Fermi level lies in the bulk band gap, we attribute the plateaus to the topological surface states. A further comparison is provided by the ac quantum Hall conductivity σxy(ω) calculated from the Kubo formula for both top and bottom surface states within the Dirac model. The model, however, predicts much sharper transitions between the QHE plateaus than observed in the experiment. One possible explanation is the heating of the surface carriers by the THz field, resulting in a higher effective temperature compared to that of the lattice; such heating can occur due to inefficient energy relaxation in the electronic system through the emission of LO phonons at low temperatures. Alternatively, the data can be fitted at T = 25 K with density fluctuations within 10% relative to their nominal values.
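The plateau positions implied by the classical background can be estimated by equating the classical Hall line ne/B with the quantized values 2(N + 1/2)e²/h for the two surfaces. A small sketch, using an assumed total sheet density of 10¹¹ cm⁻² (hypothetical; the sample's exact density is not reproduced here):

```python
e_ch = 1.602176634e-19  # elementary charge, C
h_pl = 6.62607015e-34   # Planck constant, J s

n_total = 1e15  # m^-2 (= 1e11 cm^-2), assumed combined density (illustrative)

def plateau_field(N, n=n_total, surfaces=2):
    """Magnetic field where the classical Hall conductivity n*e/B crosses
    the total quantized value surfaces*(N + 1/2)*e^2/h."""
    return n * h_pl / (surfaces * (N + 0.5) * e_ch)
```

With these assumed numbers, the N = 0 crossing lies at a few tesla and higher-N crossings at correspondingly lower fields, illustrating why only the lowest plateaus are visible in the field range of the experiment.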
As the fits are nearly indistinguishable, we cannot quantitatively determine the contributions of the two mechanisms leading to the smearing of the THz QHE plateaus. Our two-surface Dirac model describes well the surface carrier CR and the surface conductivity σa = σb ≈ 50 e²/h. Its large value indicates a high surface carrier mobility, ensuring that the condition for the quantum Hall regime is fulfilled for B > 1 T. Here τa,b are the scattering times of the top and bottom carriers, R0 = h/(2e²) is the resistance quantum, and vF is the Fermi velocity of the Dirac surface states.

Having established that the THz response of the topological surface states in high magnetic fields B > 5 T is determined by the conductivity quantum G0 = e²/h (Na = Nb = 0), we turn to the central result of this work, the THz Faraday effect. Owing to the TME, the electric Ex·e^{−iωt} (magnetic Hy·e^{−iωt}) field of the linearly polarized THz radiation induces in a 3D TI an oscillating magnetic αHx·e^{−iωt} (electric αEy·e^{−iωt}) field. The secondary THz radiation generated in this way is polarized perpendicular to the primary polarization, and its amplitude is α times smaller. This can be viewed as a rotation of the initial polarization by an angle α. We rigorously characterize the THz Faraday rotation ηF and the Faraday ellipticity; the observed value α = e²/(2ε₀hc) = e²μ0c/(2h) is a direct consequence of the TME, confirming the axion electrodynamics of 3D topological insulators.
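The magnitude of the effect can be checked with a back-of-the-envelope calculation: treat the two topological surfaces as a single thin conducting sheet in vacuum with total σxy = 2(N + 1/2)e²/h and negligible σxx, and compute the rotation from the circular-polarization transmission amplitudes t± = 2/(2 + Z0σ±), σ± = σxx ± iσxy. This is a simplified free-standing-film sketch, not the full Berreman treatment used in the paper.

```python
import cmath
import math

e_ch = 1.602176634e-19   # elementary charge, C
h_pl = 6.62607015e-34    # Planck constant, J s
c0 = 2.99792458e8        # speed of light, m/s
mu0 = 4e-7 * math.pi     # vacuum permeability (approx.)
Z0 = mu0 * c0            # impedance of free space, ~376.7 Ohm
alpha = mu0 * c0 * e_ch**2 / (2 * h_pl)  # fine structure constant ~1/137

def faraday_angle(sigma_xx, sigma_xy):
    """Faraday rotation of a free-standing thin conducting film, from the
    phases of the circular transmission amplitudes t_plus and t_minus."""
    t_plus = 2.0 / (2.0 + Z0 * (sigma_xx + 1j * sigma_xy))
    t_minus = 2.0 / (2.0 + Z0 * (sigma_xx - 1j * sigma_xy))
    return 0.5 * (cmath.phase(t_plus) - cmath.phase(t_minus))

# Two surfaces, each with half-quantized Hall conductivity (N = 0)
sigma_xy = 2 * 0.5 * e_ch**2 / h_pl
theta_F = faraday_angle(0.0, sigma_xy)
```

For N = 0 the magnitude of θF comes out equal to α to within the small-angle approximation, which is exactly the universal value reported above.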
We use monochromatic terahertz spectroscopy, providing complete amplitude and phase reconstruction, which can be applied to investigate topological phenomena in various systems, including graphene, 2D electron gases, layered superconductors and the recently discovered Weyl semimetals. Metrological applications based on the resistance quantum R0 = 1/G0 = h/e² and on the constants e, h and c are suggested.

Magnetooptical experiments in the THz frequency range were carried out in a Mach–Zehnder interferometer arrangement. To analyse the experimental transmission spectra, we follow the formalism described by Berreman (with Z = μ = 1), using the tangential components of the electric and magnetic fields, which may be combined in the form of a four-component vector V. The propagation of light between two points in space with field vectors V1 and V2 can be described via a 4 × 4 transfer matrix. The Berreman procedure is in general not limited to the case of normal incidence; however, in such a geometry the choice of tangential field components simplifies the treatment. Electric and magnetic fields across the interfaces are connected by the Maxwell equations, and an e^{−iωt} time dependence is assumed. Here V consists of (i) the amplitude of the linearly polarized wave (Ex) propagating in the positive direction, (ii) the amplitude of the wave with the same polarization propagating in the negative direction, and (iii) and (iv) the amplitudes of the two waves with perpendicular polarization (Ey). The full transfer matrix connects the incident wave with the transmitted (t) and reflected (r) waves, respectively.
The present experiment is described by a linearly polarized incident wave and by two components of the transmitted wave; the transmission coefficients in parallel (tp) and crossed (tc) polarizers follow from the equation connecting all waves. Here Z0 ≈ 377 Ohm is the impedance of free space, and the conductivity components σxx(ω) and σxy(ω) enter the transfer matrix of the conducting film. As has been discussed previously, in this geometry the influence of the substrate is minimized. The data that support the findings of this study are available from the corresponding authors upon reasonable request.

How to cite this article: Dziom, V. et al. Observation of the universal magnetoelectric effect in a 3D topological insulator. Nat. Commun. 8, 15197, doi: 10.1038/ncomms15197 (2017).

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

We present a novel finite-time average consensus protocol based on an event-triggered control strategy for multiagent systems. The system stability is proved. The lower bound of the interevent time is obtained to guarantee that there is no Zeno behavior. Moreover, the upper bound of the convergence time is obtained, and the relationship between the convergence time, the protocol parameter, and the initial state is analyzed. Lastly, simulations are conducted to verify the effectiveness of the results.

In recent years, many applications have required large numbers of vehicles or robots to work cooperatively to accomplish a complicated task. Given this, many researchers have devoted themselves to the study of coordination control of multiagent systems. In practical multiagent systems, each agent is usually equipped with a small embedded microprocessor and has limited energy, and thus only limited computing power and working time.
These disadvantages have driven researchers to develop event-triggered control schemes, and some important achievements have been made recently. Moreover, the convergence time is a significant performance indicator for a consensus protocol. In most works the protocols achieve state consensus only over an infinite time interval; that is, consensus is achieved only asymptotically. However, the stability or performance of multiagent systems over a finite time interval needs to be considered in several cases; finite-time stability focuses on the behavior of system responses over a finite time interval. To the best of our knowledge, few results have been reported in the literature addressing finite-time event-triggered consensus protocols for multiagent systems.

The main contributions of this paper can be summarized as follows: (1) a new finite-time consensus protocol based on the event-triggered control strategy for multiagent systems is presented, and the system stability is proved; the protocol is simpler in formulation and computation. (2) The lower bound of the interevent time is obtained to guarantee that there is no Zeno behavior. (3) The upper bound of the convergence time is obtained, and the relationship between the convergence time, the protocol parameter, and the initial state is analyzed. The rest of this paper is organized as follows. Throughout, e denotes the Euler number (approximately 2.71828).

In this subsection, we introduce some basic definitions and results of algebraic graph theory; comprehensive treatments of algebraic graph theory can be found in the literature. For an undirected graph with n vertices, the Laplacian L always has a zero eigenvalue, and 1 is the associated eigenvector.
We denote the eigenvalues of L by λ1, …, λn. A standard finite-time stability lemma is used: if a differentiable function satisfies a suitable differential inequality, then it reaches zero in finite time, with a settling time bounded in terms of its initial value. The multiagent system investigated in this study consists of n agents, and the state of agent i is denoted by xi. In the event design, we suppose that each agent can measure its own state. Between triggering events, the value of the input u is held constant; it is well known that the control algorithm is then a piecewise constant function, and the value of the input is equal to the last control update. The protocol based on event-triggered control utilized to solve the finite-time average consensus problem is given by the equation stated in the paper.

In this subsection, we study the protocol. Suppose that the communication topology of the multiagent system is undirected and connected and that the triggered function is chosen accordingly. Then the protocol solves the finite-time average consensus problem for any initial state, and the settling time T satisfies an explicit upper bound. The proof proceeds as follows: given that the topology is undirected and connected, the average state κ is time invariant. Defining the measurement error between the current state and the state at the last event, and taking a suitable Lyapunov function, differentiation and combination of the resulting inequalities with the triggered function of the theorem yield the bound on T by the finite-time stability lemma.

For each agent, an event is triggered as soon as the triggered function is satisfied. The role of the parameter μ in the triggered function is to adjust the allowable error: when μ is large, the allowable error is large, so the trigger frequency is low; at the same time, when μ is large, the convergence time is long. Note that for particular parameter settings the protocol reduces to ones studied previously; the analysis there, however, does not establish a positive lower bound τ on the interevent times.
This is proven in the following theorem. Under the event-triggered control conditions, the agents cannot exhibit Zeno behavior; namely, for any initial state, the interevent times are lower bounded. Consider the multiagent system with the consensus protocol, suppose that the communication topology of the multiagent system is undirected and connected, and let the trigger function be as given above. Then no agent can exhibit Zeno behavior; moreover, for any initial state, the interevent times are lower bounded by a positive constant. Similarly to earlier results, defining the measurement error and differentiating it bounds the growth of the error between events, which yields the lower bound on the interevent times; from the resulting expression it is easy to see that the bound depends on μ.

In this subsection, the relationship between the convergence time and other factors, including the initial state and the parameter α, is studied. Firstly, we study the relationship between convergence time and the initial states. In the consensus problem, the disagreement between states matters more than the size of the initial states themselves, and by definition, the larger the initial disagreement, the longer the convergence time. Then we study the relationship between convergence time and the parameter α; the dependence of the upper bound on α is analyzed. Convergence time is defined as the amount of time the system consumes to reach a consensus. The precise convergence time of the studied nonlinear protocol is difficult to obtain; the above conclusions were obtained based on the upper bound of the convergence time. In the next section, simulations are conducted to verify the efficiency of the conclusions.
Consider the multiagent system with the given communication topology and initial states. The trajectories of the agents under the proposed protocol show that consensus on the average of the initial states is reached in finite time, and the relationship between the convergence time and the parameter α observed in simulation agrees with the analysis. The following conclusions were obtained from the simulations: the proposed protocol can solve the finite-time average consensus problem; the lower bound of the interevent time guarantees that there is no Zeno behavior; and the larger the difference in the initial states, the longer the convergence time.

We presented a novel finite-time average consensus protocol based on an event-triggered control strategy for multiagent systems, which guarantees system stability. The upper bound of the convergence time was obtained, and the relationship between the convergence time, the protocol parameter, and the initial state was analyzed. In this paper, we considered only first-order multiagent systems. Our future work will focus on extending the conclusions to second-order or higher-order multiagent systems with switching topologies, measurement noise, time delays, and so on.

A 40-year-old male suffering from hallucinations and bizarre behavior was brought to our emergency room (ER) by the police. His drug and alcohol screens were positive for amphetamines and a blood alcohol content of 0.029 mg/dL. His past medical history was significant for alcohol use disorder, end-stage liver disease, ascites, esophageal varices, portal hypertension, and hepatic encephalopathy. He was admitted in an encephalopathic state and developed worsening hematochezia and hemodynamic instability over the course of days. Multiple investigations, including contrast-enhanced computed tomography (CT), upper and lower endoscopy, and mesenteric angiography, did not identify a clear cause of the bleeding. Eventually, his source of bleeding was found to be cecal varices.
A transjugular intrahepatic portosystemic shunt procedure and coil embolization of the right colic and ileocolic veins stabilized the patient, and he was discharged home a few days later. Cecal varices are a poorly characterized cause of gastrointestinal bleeding and are rare compared to varices in other locations in the gastrointestinal tract.

A 40-year-old male suffering from hallucinations and bizarre behavior was brought by police to our emergency room (ER). His vitals on arrival were: temperature 36.9°C, pulse 124 BPM, respirations 20 per minute, blood pressure 104/57, and pulse oximetry 95% on room air. A urine drug screen was positive for amphetamines, and his blood alcohol level was 0.029 mg/dL. His past medical history was significant for alcohol use disorder, end-stage liver disease, portal hypertension, ascites, esophageal varices, and hepatic encephalopathy. On examination, the patient was lethargic and difficult to arouse, with an ammonia level of 109.5 umol/L. He was admitted for acute treatment of hepatic encephalopathy but developed hematochezia within 24 h of admission. An esophagogastroduodenoscopy (EGD) demonstrated grade II esophageal varices, which were banded, and portal hypertensive gastropathy. This seemed to resolve the hematochezia; however, two days later he had another episode of bright red blood per rectum. Sigmoidoscopy was performed, which demonstrated nonbleeding internal hemorrhoids. Over the next 36 h the patient complained of increasing lower abdominal pain and had intermittently bloody stools; however, a computed tomography (CT) scan of the abdomen and pelvis was negative for any acute changes. He then had two large, bloody stools and developed hypotension overnight; additionally, his creatinine increased from 0.6 to 1.2 within 12 h.
Given the intermittent nature of his gastrointestinal bleeding, a Model for End-Stage Liver Disease (MELD) score of 20, and concerns that he may have been developing hepatorenal syndrome, the gastroenterologist determined colonoscopy to be too risky. Instead, a tagged red blood cell scan was ordered as a less invasive modality to seek out intermittent bleeding. It showed abnormalities in the duodenum and stomach as well as bleeding from the right colon. The patient was taken to interventional radiology for a mesenteric angiogram. No active bleeding was identified; however, the portal venous phase of the superior mesenteric arteriogram did show dilated varices within the mesentery of the right colon. Given the grave prognosis, the patient decided to transition to palliative care and became no-code status for four days. He continued to worsen during this time period, though he later decided he would like to transition off palliative care, and after much discussion, he elected to proceed with a transjugular intrahepatic portosystemic shunt (TIPS) procedure in an effort to reduce his portal hypertension in hopes of reducing his bleeding risk. Interventional radiology first recommended a triphasic CT scan to better evaluate the arterial and venous anatomy relative to the cross-sectional anatomy. The triphasic CT scan was performed and demonstrated varicosities throughout the abdomen, with a focus of varicosities in the right lower quadrant, likely the right colon (Figure). The TIPS procedure was then performed. During the first nine months of follow-up, the patient has had a complicated course related primarily to his chronic liver disease. He has suffered from intermittent abdominal pain and has been hospitalized or seen in clinic for lactic acidosis, bouts of abdominal pain, an incarcerated right inguinal hernia, significant scrotal edema, and methicillin-resistant Staphylococcus aureus bacteremia.
He has, however, attempted positive lifestyle changes, including abstaining from alcohol and illicit drugs and improving his social support. He has had neither recurrent episodes of hematochezia nor additional bouts of hepatic encephalopathy. The patient continues to be followed closely as an outpatient.

Ectopic varices are defined as varices outside the cardio-esophageal junction. Due to the paucity of data surrounding cecal varices, as well as their various underlying causes, diagnosis and best treatment strategies remain equivocal. Most literature currently points to selective mesenteric angiography as the ideal diagnostic approach, as it may allow for precise localization of hemorrhage as well as immediate therapeutic options. Nonbleeding cecal varices have even less available treatment data: one case reported successfully reduced cecal variceal size with the use of propranolol, but the patient could not tolerate the side effects of the medication. In this case, the decision to embolize the right colic and ileocolic veins post-TIPS was made in an urgent setting to stabilize a hemodynamically unstable patient, despite the known risk of potential bowel ischemia. Post-embolization, the patient showed no evidence of bowel ischemia. We postulate that upon embolization of the right colic and ileocolic veins, the presence of a mesocaval shunt provided venous return from the right colon, which the pressure gradient now favored. Some authors have previously attempted mesocaval shunt procedures as therapy for small intestinal bleeding secondary to portal hypertension. As demonstrated in this case, colonic varices can be a life-threatening and somewhat illusory cause of gastrointestinal bleeding.
The presence of a spontaneous mesocaval shunt may allow safe embolization of the colonic varices when there is concern for ischemic bowel.

Disturbance is known to affect ecosystem structure, but predicting its outcomes remains elusive. Similarly, community diversity is believed to relate to ecosystem functions, yet the underlying mechanisms are poorly understood. Here, we tested the effect of disturbance on the structure, assembly, and ecosystem function of complex microbial communities within an engineered system. We carried out a microcosm experiment where activated sludge bioreactors operated in daily cycles were subjected to eight different frequency levels of augmentation with a toxic pollutant, from never (undisturbed) to every day (press-disturbed), for 35 days. Microbial communities were assessed by combining distance-based methods, general linear multivariate models, α-diversity indices, and null model analyses on metagenomics and 16S rRNA gene amplicon data. A stronger temporal decrease in α-diversity at the extreme, undisturbed and press-disturbed, ends of the disturbance range led to a hump-backed pattern, with the highest diversity found at intermediate levels of disturbance. Undisturbed and press-disturbed levels displayed the highest community and functional similarity across replicates, suggesting that deterministic processes were dominating. The opposite was observed amongst intermediately disturbed levels, indicating stronger stochastic assembly mechanisms. Trade-offs were observed in ecosystem function between organic carbon removal and both nitrification and biomass productivity, as well as between diversity and these functions.
Hence, not every ecosystem function was favoured by higher community diversity. Our results show that the assessment of changes in diversity, along with the underlying stochastic–deterministic assembly processes, is essential to understanding the impact of disturbance on complex microbial communities.

Complex microbial communities and ecosystems are highly sensitive to disturbance, which can affect community diversity and structure, but to date the impact of disturbance remains difficult to predict. Here, Stefan Wuertz and colleagues from Nanyang Technological University in Singapore show how different disturbance frequencies affect microbial population dynamics. Analyses of microbial communities in sludge bioreactors exposed to a toxic pollutant at different rates revealed that populations at the extremes (not exposed and most exposed) showed the lowest α-diversity, whereas populations exposed at intermediate levels were most diverse. Notably, ecosystem function trade-offs were observed between organic carbon removal and both nitrification and biomass productivity, with diversity also affecting these functions. These observations highlight the importance of evaluating diversity when determining the effects of disturbance on microbial communities.

The intermediate disturbance hypothesis (IDH) has been influential in ecological theory, as well as in management and conservation, but its predictions do not always hold true. For example, in soil and freshwater bacterial communities, different patterns of diversity were observed with increasing disturbance frequency, with biomass destruction and biomass removal as the respective disturbance types. Meanwhile, the effect of varying frequencies of non-destructive disturbances on bacterial diversity remains unknown. Furthermore, the IDH predicts a pattern, but it is not a coexistence mechanism, as it was originally purported to be. Hence, its relevance is being debated, with multiple interpretations and simplicity as the main points of critique.
Disturbance is defined in ecology as an event that physically inhibits, injures, or kills some individuals in a community, creating opportunities for other individuals to grow or reproduce. To date, the mechanisms behind the observed patterns of diversity under disturbance remain to be elucidated.

The objective of this work was to test the effect of disturbance on the bacterial community structure, diversity, and ecosystem function of a complex bacterial system, with emphasis on the underlying assembly mechanisms. We employed sequencing batch bioreactors inoculated with activated sludge from an urban wastewater treatment plant, in a laboratory microcosm setup with eight different frequency levels of augmentation with toxic 3-chloroaniline (3-CA) as the disturbance. Triplicate reactors received 3-CA either never, every 7, 6, 5, 4, 3, or 2 days, or every day, for 35 days. Chloroanilines are toxic and carcinogenic compounds, and few bacteria encode the pathways to degrade 3-CA, which is also known to inhibit both organic carbon removal and nitrification in sludge reactors. Microcosm studies are useful models of natural systems, can be coupled with theory development to stimulate further research, and, by permitting easier manipulation and replication, can allow inference of causal relationships and statistically significant results.

We analysed changes in the ecosystem function over time by measuring the removal of organic carbon, ammonia, and 3-CA, as well as biomass. Changes in community structure were examined at different levels of resolution using a combination of metagenomics sequencing and 16S rRNA gene fingerprinting techniques. Such changes were assessed by employing a combination of ordination tools, diversity indices, cluster analysis, and univariate and multivariate statistical analyses. We also explored how diversity was related to function, focusing on trade-offs.
Furthermore, the role of stochasticity in community assembly was investigated by employing null model techniques from ecology. We hypothesized that time would lead to a decrease in α-diversity at the extreme sides of the disturbance range due to deterministic adaptation to the environment, while less predictable conditions at intermediate disturbance levels would lead to higher α-diversity and stochastic assembly. Consequently, replicates at intermediately disturbed levels should display higher variability in terms of both community structure and function, compared with the ones at the extreme sides of the disturbance range, where the opposite should occur. Bacterial community structure displayed temporal changes and varied between disturbance levels, as assessed by 16S rRNA gene terminal restriction fragment length polymorphism (T-RFLP); disturbance was the factor responsible for the observed clustering (P = 0.003, Supplementary Table). The undisturbed community (L0) was the only one with complete organic carbon (COD) removal and nitrate generation without nitrite residuals, while the press-disturbed community (L7) was the only one where nitrification products were never detected; it also had the lowest biomass. Initially, the main nitrification product was nitrite, but some nitrate was also produced. Biomass values on day 35 differed significantly among levels, with the highest value at L1 and the lowest at L7, and significant correlations with nitrate production were detected (ρ = −0.697 and ρ = 0.656). β-Diversity patterns observed from 16S rRNA gene amplicon T-RFLP data on day 35 were significantly similar to those from shotgun metagenomics data, as supported by Procrustes tests of comparisons within ordination methods (PCO).
A Mantel test on Bray–Curtis distance matrixes confirmed the correspondence between both datasets, which give higher weight to less abundant operational taxonomic units (OTUs); both displayed the same parabolic pattern of diversity across the disturbance range. Additionally, there were strong significant correlations between α-diversity and ecosystem function. The undisturbed (L0) and press-disturbed (L7) levels were distinct from each other as well as from the remaining intermediate levels, as supported by multivariate tests (both distance-based and GLMMs). The ordination plots and cluster analyses showed a clear separate clustering for the independent replicates of these two disturbance levels along the experiment, and in particular the constrained ordination plots displayed this with 0% misclassification error. Furthermore, the ecosystem function was clearly differentiated between L0 and L7, as well as being consistent across replicates at each level. We contend that the observed clustering is an indication that both the undisturbed and press-disturbed levels favoured deterministic assembly mechanisms, where the selective pressure due to unaltered succession (L0) or sustained toxic stress (L7) promoted species sorting, resulting in similar community structuring among biological replicates over the course of the experiment. In contrast, the formation of nitrification products, which was initially hampered when communities were still adapting to degrade 3-CA, was not the same across all equally handled independent replicates. The observed divergence across independent replicates is considered here as a strong indicator of stochasticity in community assembly. Additionally, the lower deviation for L2–L5 from expected β-diversity values estimated via null model analysis, with stochastic intensity (SI) and standard effect size (SES) values corresponding to communities less deviant from the null expectation, indicates a higher role of stochasticity at intermediate disturbance levels.
Several processes might be promoting stochastic assembly, like strong feedback processes51 that are linked to density dependence and species interactions,52 priority effects,53 and ecological drift.54 Reactors within this study were designed as closed systems, hence stochastic dispersal processes55 could not affect community assembly. Conversely, the communities from intermediately disturbed levels (L1–6) did not form distinct clusters for any particular level through the experiment. Within-treatment dissimilarity among replicates increased over time, with some replicates being more similar to those of other intermediate levels. Concurrently, ecosystem function parameters also displayed within-treatment variability for L1–6; for example, the conversion of ammonia to nitrification products varied among replicates. A previous study18 found through null model analysis that both deterministic and stochastic processes played important roles in controlling community assembly and succession, but their relative importance was time-dependent. The greater role of stochasticity we found on day 35 concurred with higher observed variability in the ecosystem function and structure among replicates for intermediately-disturbed levels. Likewise, previous work on freshwater ponds tracking changes in producers and animals49 found β-diversity (in terms of dissimilarity) increasing with stochastic processes.
These observed patterns are also in accordance with ecological studies proposing deterministic and stochastic processes balancing each other to allow coexistence,10 with communities exhibiting variations in the strength of stabilization mechanisms and the degree of fitness equivalence among species.9 Thus, it is not sufficient to ask whether communities mirror either stochastic or deterministic processes,8 but also necessary to investigate the combination of such mechanisms that in turn explains the observed community structures along a continuum.9 We argue that there were different underlying stochastic–deterministic mechanisms operating in the resulting community assembly along the disturbance range of our study, similarly to findings for groundwater microbial communities.31 We observed the highest α-diversity at intermediate disturbance levels, both in terms of composition (0D) and abundances (1D and 2D). This finding is non-trivial in two aspects. First, Svensson et al.32 have shown that most studies find support for the IDH by using species richness (0D) rather than evenness or other abundance-related indices (like 1D and 2D). They suggested that low evenness at high disturbance levels could be caused by the dominance of a few disturbance specialists. Second, the use of richness for microbial communities is not reliable48 since it is heavily constrained by the method of measurement,56 which makes it hard to compare results from different studies using this metric. Additionally, for complex communities there is often a huge difference between the abundance of rare and abundant taxa.
Hence, for microbial systems, it is reasonable to assess diversity in terms of more robust compound indices rather than richness, which is why we focused on 1D and 2D for diversity–function analyses. We observed the highest α-diversity at intermediate levels, as predicted by the IDH. At the extreme levels, 2D decreased over time in agreement with deterministically-dominated processes, probably because such levels represented the most predictable environments within our disturbance range. In contrast, intermediate levels either increased or maintained the same 2D over time, seemingly a case where niche overlap promoted stochastic assembly.8 The emergence of an IDH pattern after time is coherent with findings in previous microcosm studies using synthetic communities of protists57 and freshwater enrichment microbial communities.35 Yet, none of these studies evaluated the relative importance of the underlying assembly mechanisms for the observed diversity dynamics. Importantly, the observed pattern in α-diversity was time-dependent and resulted in an IDH pattern after 35 days. Temporal dynamics were expected since the sludge community experienced an initial perturbation in all reactors after transfer from a wastewater treatment plant to our microcosm arrangement. For the sludge inoculum, this implied changes in reactor volume, frequency of feeding (continuous to batch), type of feeding (sewage to complex synthetic media), immigration rates (open to closed system), and mean cell residence time (low to high). This was a succession scenario in which communities had to adapt to such changes along with the designed disturbance array. For L0 and L7, 1D and 2D were positively correlated with nitrification and productivity, suggesting that higher community evenness favours functionality under selective pressure,58 but were negatively correlated with organic carbon removal. Thus, we cannot affirm that more diverse communities have better functionality without considering trade-offs.
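Hill numbers of order q (qD) unify the indices discussed above: 0D is richness, 1D the exponential of Shannon entropy, and 2D the inverse Simpson index, with higher orders weighting abundant taxa more strongly. A minimal sketch of the general formula (an illustrative helper, not the authors' script):

```python
import numpy as np

def hill(abundances, q):
    """Hill diversity of order q (qD) from an abundance vector:
    q=0 richness, q=1 exp(Shannon entropy), q=2 inverse Simpson."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0]
    p = p / p.sum()                  # relative abundances
    if q == 1:                       # limit case of the general formula
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))
```

For a perfectly even community all orders agree (e.g. four equally abundant OTUs give qD = 4 for every q), while dominance by a few taxa depresses 1D and 2D below 0D, which is why the compound indices capture evenness that richness misses.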
This supports the notion that higher α-diversity does not necessarily imply a "better" or "healthier" system.56 In addition to the observed changes in OTU diversity, there was an evident variation in ecosystem function along the disturbance range studied. Deterministic processes, which are stronger at the extreme ends of the disturbance range, will favour some species over others (lower α-diversity), whilst stochastic mechanisms will produce even assemblages (higher α-diversity) at intermediately disturbed levels. We propose this idea as the intermediate stochasticity hypothesis (ISH); other complex microbial systems might display similar responses to disturbance. We argue that changes not only in diversity but also in the underlying deterministic–stochastic assembly mechanisms should be evaluated in studies of the effects of disturbance on such systems. For such an assessment, both replication and wide-enough disturbance ranges are key. Additionally, the ISH could be evaluated within open systems to include the effect of dispersal processes. This calls for more studies in microcosm settings.69 We employed sequencing batch microcosm bioreactors (20-mL working volume) inoculated with activated sludge from a full-scale plant and operated for 35 days. The daily complex synthetic feed included toxic 3-CA at varying frequencies. Eight levels of disturbance were set in triplicate independent reactors (n = 24), which received 3-CA every day (press-disturbed), every 2, 3, 4, 5, 6, or 7 days (intermediately-disturbed), or never (undisturbed). Level numbers were assigned from 0 (undisturbed) to 7 (press-disturbed). Reactor performance was monitored by measuring organic carbon, nitrogen species and 3-CA, and volatile suspended solids (VSS). On the initial day and from the second week onwards, sludge samples (2 mL) were collected weekly for DNA extraction. Community structure was analysed by T-RFLP of the 16S rRNA gene using the 530F–1050R primer set targeting the V4–V5 regions.
The PCR program included initial denaturation at 95 °C for 10 min, followed by 30 cycles of denaturation, annealing, and extension, and a final extension at 72 °C for 7 min. Purified DNA products were digested using the BsuRI (HaeIII) enzyme by incubating at 37 °C for 16 h. Enzyme inactivation was performed at 80 °C for 20 min. Digested DNA was subjected to T-RFLP on an ABI 3730XL DNA analyser. Sequence alignment files from T-RFLP runs were assessed for quality control and pre-processed using the software GeneMapper v.5 (Applied Biosystems).71 Peak areas were normalized to the total area per sample72 and de-noised using a conservative fluorescence threshold of 200 units.73 DNA extracted from all sludge samples (n = 24) was subjected to metagenomics sequencing at the SCELSE sequencing facility (Singapore). Library preparation was performed according to Illumina's TruSeq Nano DNA Sample Preparation protocol. Libraries were sequenced in one lane on an Illumina HiSeq 2500 sequencer in rapid mode at a final concentration of 11 pM and a read-length of 250 bp paired-end. Around 173 million paired-end reads were generated in total, with 7.2 ± 0.7 million paired-end reads on average per sample. Illumina adaptors, short reads, low-quality reads, and reads containing any ambiguous base were removed using cutadapt.74 Taxonomic assignment of metagenomics reads was done following the method described by Ilott et al.75 High-quality reads (99.2 ± 0.09% of the raw reads) were randomly subsampled to an even depth of 12,395,400 for each sample prior to further analysis.
They were aligned against the NCBI non-redundant (NR) protein database (March 2016) using DIAMOND (v.0.7.10.59) with default parameters.76 The lowest common ancestor approach implemented in MEGAN Community Edition v.6.5.577 was used to assign taxonomy to the NCBI-NR aligned reads with the following parameters: maxMatches = 25, minScore = 50, minSupport = 20, paired = true. On average, 48.2% of the high-quality reads were assigned to cellular organisms, from which in turn 98% were assigned to the bacterial domain. Adequacy of sequencing depth was corroborated with rarefaction curves at the genus taxonomy level. We did not include genotypic information as it was outside the scope of this study, but will do so in future investigations arising from this work. Purified genomic DNA from sludge samples on d0 (inoculum) and d35 was used for these analyses.78 Community structure was assessed by a combination of ordination methods and multivariate tests79 on Bray–Curtis dissimilarity matrixes constructed from square-root transformed normalized abundance data using PRIMER (v.7). Additionally, GLMMs, which deal with mean–variance relationships,80 were employed using the mvabund R-package,81 fitting the metagenomics dataset to a negative binomial distribution, to ensure that the observed differences among groups were due to disturbance levels and not heteroscedasticity. The 500 most abundant genera were employed to ensure random distribution of residuals fitted in the model. Significance was tested using the anova function in R with PIT-trap bootstrap resampling (n = 999).82 Hill diversity indices83 were employed to measure α-diversity as described elsewhere,84 and calculated for normalized non-transformed relative abundance data. Mantel and Procrustes tests85 were applied to compare metagenomics and T-RFLP datasets from all bioreactors on day 35.
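Several of these analyses rest on Bray–Curtis dissimilarities computed from square-root transformed abundance tables. A minimal sketch of the pairwise computation (an illustrative helper, not the PRIMER or vegan implementation):

```python
import numpy as np

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance profiles,
    applied here after the square-root transformation used in the text."""
    x = np.sqrt(np.asarray(x, dtype=float))
    y = np.sqrt(np.asarray(y, dtype=float))
    return float(np.abs(x - y).sum() / (x + y).sum())
```

The index ranges from 0 (identical profiles) to 1 (no shared taxa), and the square-root transform tempers the influence of the most abundant OTUs, consistent with the statement that the matrixes give higher weight to less abundant OTUs.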
Such an approach is valid for the questions asked in this study, since comparisons between NGS and fingerprinting techniques support the use of T-RFLP to detect meaningful community assembly patterns and correlations with environmental variables,61 and such patterns can be validated by NGS on a subset of the fingerprinting dataset.2 Mantel and Procrustes tests86 and all other statistical tests were performed using the vegan R-package. Bray–Curtis dissimilarity matrixes were computed using square-root transformed T-RFLP data and bacterial genus-level taxa tables generated using a metagenomics approach. Mantel tests were then used to determine the strength and significance of the Pearson product–moment correlation between complete dissimilarity matrices. Procrustes tests (PROTEST) were also employed as an alternative approach to Mantel tests in order to compare and visualize both matrices on PCO and NMDS ordinations. The resultant m2-value is a statistic that describes the degree of concordance between the two matrices evaluated.87 To analyse community assembly, we employed a null model approach, which assumes that species interactions are not important for community assembly,88 originally applied to woody plants50 and more recently to microbial communities.18 The model defines β-diversity as the β-partition89 and takes into account both composition and relative abundances. To adapt it to handle microbial community data, we considered species as OTUs (genus taxonomic level) and each individual count as one read within the metagenomics dataset. The model randomizes the location of each individual within the three independent reactors for each of the eight disturbance treatment levels, while maintaining the total quantity of individuals per reactor, the relative abundance of each OTU, and the γ-diversity.
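The randomization just described can be sketched in Python. The β-partition below uses a multiplicative γ/mean(α) form with order-1 Hill numbers as an illustrative assumption; the published model follows its own β-partition definition, so this is a sketch of the mechanics, not the exact formulation:

```python
import numpy as np

def hill_shannon(counts):
    """Order-1 Hill number (exponential of Shannon entropy)."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return np.exp(-np.sum(p * np.log(p)))

def beta_partition(reactor_counts):
    """beta = gamma / mean(alpha) over reactors (rows = reactors,
    columns = OTUs); an illustrative multiplicative partition."""
    gamma = hill_shannon(reactor_counts.sum(axis=0))
    alpha = np.mean([hill_shannon(row) for row in reactor_counts])
    return gamma / alpha

def beta_deviation(reactor_counts, n_null=1000, seed=0):
    """SES (beta-deviation): (beta_obs - mean(beta_null)) / sd(beta_null).
    The null shuffles each read's reactor while fixing reads per reactor,
    each OTU's total abundance, and hence gamma-diversity."""
    rng = np.random.default_rng(seed)
    reactor_counts = np.asarray(reactor_counts)
    obs = beta_partition(reactor_counts)
    totals = reactor_counts.sum(axis=1)                  # reads per reactor
    pool = np.repeat(np.arange(reactor_counts.shape[1]),
                     reactor_counts.sum(axis=0))         # one OTU label per read
    null = np.empty(n_null)
    for i in range(n_null):
        rng.shuffle(pool)
        sim = np.zeros_like(reactor_counts)
        start = 0
        for r, t in enumerate(totals):
            otus, c = np.unique(pool[start:start + t], return_counts=True)
            sim[r, otus] = c
            start += t
        null[i] = beta_partition(sim)
    return (obs - null.mean()) / null.std()
```

With two reactors holding completely disjoint, even OTU sets, the observed β is 2 while the shuffled null collapses toward 1, giving a positive β-deviation, i.e. structure more deterministic than expected by chance.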
To disentangle the roles of stochastic and deterministic processes as drivers of change in β-diversity, it is necessary to incorporate a statistical null model in the analysis. Each step of the null model calculates expected mean α-diversities for each disturbance level and then estimates an expected β-partition. After 10,000 repetitions, the mean and standard deviation of the distribution of random β-partitions for each disturbance level are calculated. The output of this model is a β-deviation or SES, which is the observed β-diversity (βobs) minus the mean of the null distribution of β-diversity values (βexp), divided by the standard deviation of that null distribution (σexp): SES = (βobs − βexp)/σexp. We applied it to the metagenomics datasets from d0 and d35. Further information on experimental design is available in the Supplementary Information and Reporting Summary.

Radiodermatitis is a painful side effect for cancer patients undergoing radiotherapy. Irradiation of the skin causes inflammation and breakdown of the epidermis, can lead to significant morbidity and mortality in severe cases, as seen in exposure from accidents or weapons such as "dirty bombs", and ultimately leads to tissue fibrosis. However, the pathogenesis of radiodermatitis is not fully understood. Using a mouse model of radiodermatitis, we showed that the Transient Receptor Potential Melastatin 2 (TRPM2) ion channel plays a significant role in the development of dermatitis following exposure to ionizing radiation. Irradiated TRPM2-deficient mice developed less inflammation, fewer severe skin lesions, and decreased fibrosis when compared to wild-type mice. The TRPM2-deficient mice also showed a faster recovery period, as seen by their increased weight gain post irradiation. Finally, TRPM2-deficient mice exhibited lower systemic inflammation, with a reduction in inflammatory cytokines present in the serum.
These findings suggest that TRPM2 may be a potential therapeutic target for reducing the severity of radiodermatitis. Radiation therapy is commonly used to treat several types of cancer. Mice were placed in an X-ray irradiator box and shielded with lead so that only the lower pelvic region was exposed to radiation for five consecutive days at a dose of 8 Gy/day. This dose regimen was chosen because it mimics the radiation therapy regimen of a patient being treated for pelvic cancers. Clotrimazole (CTZ) was purchased from Sigma-Aldrich (cat # C6019). CTZ was first solubilized in ethanol (50 mg/ml), further diluted in corn oil to reach a 1% CTZ-containing solution, and filter sterilized using membranes of 0.22 µm nominal pore size. Wild-type C57BL/6 male mice were treated topically with this solution. CTZ is clinically very well characterized as it is used as an anti-fungal agent, and its topical application is not toxic. The solution (or ethanol/corn oil as a vehicle control) was applied twice a week on affected skin areas starting as soon as lesions would appear (4 weeks after irradiation). The skin lesions were recorded and scored 12 weeks after irradiation as described in (E.J. 2012): (0) normal, (1) erythema, (2) dry desquamation, (3) wet desquamation, and (4) ulceration. For trichrome staining, sections were stained for 5 min and rinsed before differentiating 15 min in 2.5% phosphomolybdic-2.5% phosphotungstic acid. Finally, collagen was stained for 8 min in 2.5% aniline blue, rinsed, and differentiated for 1 min in 1% acetic acid. Sections were dehydrated through graded alcohols, cleared in xylenes, and mounted with Permount. Trichrome-stained sections were imaged in brightfield mode, with a 20× objective, on a Leica DM4000 B LED microscope. To measure the collagen density in the skin, each section was imaged over the length of the section, requiring ten evenly spaced fields of view.
Using ImageJ software, the region of interest was selected so that only the area containing collagen was included in the analysis. Next, thresholding was used to select only blue pixels (collagen) and exclude purple/red pixels (immune cells and keratin); white hues were excluded to eliminate holes in the tissue. The collagen density was calculated as the number of pixels representing collagen divided by the total number of pixels in the region of interest (ROI). The percent area of tissue comprised of collagen was averaged for each animal and the mean per group reported. The epidermal layer thickness was quantified using the trichrome staining images. For each image, approximately 20 equally spaced measurements were made along the length of the tissue by drawing a line from the junction of the dermis and epidermis to the edge of the epithelial layer. The pixel value was converted to microns using a factor of 3.84 pixels/micron. A mean epidermal thickness was calculated for each animal using all images containing epithelium. Serum was separated using Z-Gel microtubes (Sarstedt), clotting blood for 30 min at room temperature, with serum removed by centrifugation (8000 rpm for 5 min). Meso Scale Discovery V-Plex Proinflammatory Panel 1 (mouse) plates were used according to the manufacturer's instructions. For immunohistochemistry, tissue sections were deparaffinized in xylenes and rehydrated in graded alcohols, then rinsed in running deionized water. Antigen retrieval was performed by boiling slides in 10 mM sodium citrate buffer, pH 6.0 for 20 min, followed by a 20-min cool down and a 10-min PBS wash. Endogenous peroxidases were quenched for 5 min in 3% H2O2 in PBS, followed by a 5-min wash. Next, slides were blocked in 10% goat serum for 30 min followed immediately by 1 h incubation in primary antibody. Primary antibodies included CD68, CD3, and TRPM2. Negative stain controls were incubated in blocking buffer without primary antibody for 1 h. Following several washes in PBS, the sections were next incubated in biotinylated goat anti-rabbit secondary antibody, then washed in PBS again. Peroxidase activity was associated with the biotinylated secondary antibody using the Vector Labs ABC Kit (cat. PK-4000) by incubation for 30 min in ABC buffer. Finally, DAB substrate was applied to detect the proteins of interest for 5–7 min until the brown color was visible under a microscope. Slides were counterstained by briefly dipping in Harris hematoxylin, then dehydrated through graded alcohols, cleared in xylenes, and mounted with Permount solution. For both CD3 and CD68 in skin, six random fields of view were captured with a 20× objective, on a Leica DM4000 B LED microscope. Cells staining a deep brown color were manually counted using the "multi-point" function in ImageJ software. The average cells per field were reported and used for statistical analysis. For TRPM2, serial sections were stained for TRPM2, CD68, and CD3 to determine if TRPM2 expression co-localized in lymphocytes and macrophages. Data are expressed as mean ± SEM. One-way analysis of variance was used for multiple comparisons, and Tukey's post hoc test was applied where appropriate. Student's t test was used when only two groups were compared. Differences were considered statistically significant when p < 0.05. WT or TRPM2−/− male mice were irradiated for five sequential days with 8 Gy/day in the pelvic region; this radiation scheme was used previously in rats. While both WT and TRPM2−/− mice lost weight up to 5 weeks post-irradiation, TRPM2−/− mice recovered and gained weight steadily from 6 to 12 weeks post radiation, whereas the WT mice never recovered from their initial weight loss. We also examined skin lesions in WT and TRPM2−/− mice 4 weeks post irradiation.
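The collagen-density calculation described in the methods (blue pixels over total ROI pixels, excluding white holes) can be approximated as follows. The RGB cutoffs here are illustrative assumptions, not the authors' calibrated ImageJ threshold settings:

```python
import numpy as np

def collagen_fraction(rgb, roi_mask):
    """Fraction of ROI pixels classified as collagen (blue) in a
    trichrome-stained image. `rgb` is an (H, W, 3) array, `roi_mask`
    an (H, W) boolean mask. Thresholds are illustrative only."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    white = (r > 220) & (g > 220) & (b > 220)   # holes in tissue: excluded
    blue = (b > r + 50) & (b > g + 50)          # blue-dominant pixels = collagen
    roi = roi_mask & ~white                     # purple/red pixels stay in the ROI
    return float(blue[roi].sum() / max(roi.sum(), 1))
```

Per the methods, this per-field fraction would then be averaged over the ten fields per section and over sections per animal before group means are compared.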
In accordance with our previous observations, TRPM2−/− mice had reduced inflammatory cytokines, including IL-1β, IL-6 and KC, as compared to WT mice following irradiation. TRPM2−/− mice are thus protected from skin damage and overall weight loss associated with lower abdominal radiation exposure. Furthermore, histological analysis of skin lesions showed that TRPM2-deficiency protected the tissue from irradiation-induced damage by limiting the inflammation and the development of fibrosis in irradiated skin. Finally, we showed that TRPM2−/− mice had significantly lower circulating inflammatory cytokines and lower leukocyte recruitment, but topical inhibition of TRPM2 had no effect on radiation-induced dermatitis. Taken together, these data suggest that TRPM2 deficiency is protective against radiation-induced skin damage and helps preserve the function of this organ. In this study, we have demonstrated that TRPM2-deficiency decreases the severity of various side effects associated with radiation exposure. Specifically, we have shown that TRPM2−/− skin lesions showed less infiltration of inflammatory cells as well as decreased levels of systemic inflammatory cytokines, specifically IL-1β, IL-6 and KC. TRPM2 is known to promote inflammation and cytokine production in various situations. Several compounds have been shown to inhibit TRPM2 currents. For instance, as stated previously, we used clotrimazole to see if we could prevent radiation-induced skin injury by topically blocking TRPM2. Other compounds reported to inhibit TRPM2 include 2-aminoethoxydiphenyl borate (Togashi et al.), an anthranilic acid derivative (Kraft et al.), and a further TRPM2 inhibitor (Chen et al.). Radiodermatitis is a serious side effect of radiotherapy used to treat many types of tumors found throughout the body, which can lead to the delay of therapeutic treatments.
Furthermore, the skin is the first organ that would be affected in a nuclear accident or "dirty bomb" detonation and as such would be exposed to whole-body irradiation. However, given that our understanding of the inflammatory pathways involved in radiodermatitis is still limited, we currently do not have an effective treatment for controlling damage to the skin. Our results emphasize the importance of TRPM2 in mediating radiation-induced inflammatory responses and suggest TRPM2 as a potential target when considering therapeutic interventions for radiodermatitis.

Rheumatoid arthritis (RA) is an autoimmune disease of the joints characterized by synovial hyperplasia and chronic inflammation. Fibroblast-like synoviocytes (FLS) play a central role in RA initiation, progression, and perpetuation. Prior studies showed that sirtuin 1 (SIRT1), a deacetylase participating in a broad range of transcriptional and metabolic regulations, may impact cell proliferation and inflammatory responses. However, the role of SIRT1 in RA-FLS was unclear. Here, we explored the effects of SIRT1 on the aggressiveness and inflammatory responses of cultured RA-FLS. SIRT1 expression was significantly lower in synovial tissues and FLS from RA patients than from healthy controls. Overexpression of SIRT1 significantly inhibited RA-FLS proliferation, migration, and invasion. SIRT1 overexpression also significantly increased RA-FLS apoptosis and caspase-3 and -8 activity. Focusing on inflammatory phenotypes, we found SIRT1 significantly reduced RA-FLS secretion of TNF-α, IL-6, IL-8, and IL-1β. Mechanistic studies further revealed SIRT1 suppressed the NF-κB pathway by reducing p65 protein expression, phosphorylation, and acetylation in RA-FLS. Our results suggest SIRT1 is a key regulator in RA pathogenesis by suppressing aggressive phenotypes and the inflammatory response of FLS.
Enhancing SIRT1 expression or function in FLS could be therapeutically beneficial for RA by inhibiting synovial hyperplasia and inflammation. Rheumatoid arthritis (RA) is a chronic autoimmune disease characterized by synovial hyperplasia, inflammation, and progressive destruction of the cartilage and bone, which ultimately lead to irreversible joint deformities and functional loss. The cause of RA is unclear, although many risk factors are recognized, including genetics, environment, hormones, and lifestyle. Fibroblast-like synoviocytes (FLS) are a special type of mesenchymal-derived cells lining the internal synovium. FLS displays many markers of fibroblasts, including CD90, type IV and V collagens, and vimentin. FLS also shows characteristics distinct from other fibroblasts, including secretion of lubricin that lubricates the synovium, and expression of unique surface markers such as CD55, VCAM-1, cadherin-11, integrins, and their receptors. Studies from the past two decades established FLS as a prominent cellular participant in RA. RA-FLS directly participates in synovial hyperplasia and the production of cytokines that perpetuate local inflammation. RA-FLS also contributes to modulation of immune cells and proteolytic destruction of extracellular matrix, cartilage, and bone. Targeting RA-FLS has been recognized as a novel therapeutic approach with potentially improved clinical outcomes and less impact on systemic immunity. Sirtuin 1 (SIRT1) is an NAD-dependent deacetylase engaging in a wide range of cellular functions such as transcription, cell cycle, DNA replication and repair, metabolism, apoptosis, and autophagy. SIRT1 is ubiquitously expressed and functions as a link between extracellular signals and transcriptional regulation of target genes.
Several prior studies implicated a key role for SIRT1 in collagen-induced arthritis7 and osteoarthritis. The objective of the present study is to systematically characterize the role of SIRT1 in RA-FLS, in order to provide insight into novel disease mechanisms and potential therapeutic targets for RA. Experiments involving human subjects were carried out in accordance with the Declaration of Helsinki and were approved by the Medical Ethical Committee of Jiangsu University. Synovial tissues were obtained from 12 patients with RA during synovectomy or joint replacement surgery at Northern Jiangsu People's Hospital affiliated to Yangzhou University. Healthy synovial tissues were collected from six knee joint trauma patients during surgery and used as negative controls (NC). Signed informed consent was obtained from all participants. Synovial tissue samples were prepared as described previously, and FLS was isolated from them and maintained in a CO2-enriched atmosphere. Cells between passages three to six were used for experiments. Visual examination of cell morphology under light microscopy and fluorescence-activated cell sorting analysis of cells stained with anti-CD11b antibody confirmed that FLSs accounted for more than 95% of the cells (data not shown). The SIRT1-overexpression vector (pCDNA3.1-SIRT1) and the blank vector pCDNA3.1 were transfected into RA-FLS cells using Lipofectamine 2000 according to the manufacturer's instructions. Cells were harvested at 48 h post-transfection for mRNA and protein expression measurements. Total RNA was extracted using TRIzol reagent according to the manufacturer's instructions and reverse transcribed into cDNA using the PrimeScript RT reagent kit. RT-qPCR was performed using the SYBR Premix Ex Taq kit (Takara Bio Inc.) on a 7500 Fast Real-Time PCR System. The primers used for the RT-qPCR were as follows: SIRT1, forward 5′-TGG ACT CCA CGA CGT ACT-3′, reverse 5′-TCT CCT GGG AGG CAT AGA CC-3′; GAPDH, forward 5′-AGC CAC ATC GCT CAG ACA-3′, reverse 5′-TCT CCT GGG AGG CAT AGA CC-3′. For each sample, RT-qPCR was performed in triplicate, and the relative mRNA level was normalized to GAPDH using the 2−ΔΔCt method. Total protein was extracted using RIPA lysis buffer and concentration normalized using the BCA protein assay kit. Equal amounts of protein lysate (30 μg per lane) were resolved by 8–12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto PVDF membranes. The membranes were blocked in 5% nonfat milk in Tris-buffered saline with Tween-20 and incubated with primary antibodies against NF-κB p65 (ab16502), phospho-NF-κB p65 (Ser536) (ab76302), acetyl-NF-κB p65 (Lys310) (ab19870), SIRT1 (ab110304), and GAPDH (ab8245) overnight at 4°C, followed by incubation with peroxidase-labeled anti-rabbit or anti-mouse secondary antibodies at room temperature. The relative protein expression was detected using an enhanced chemiluminescent solution and quantified by ImageJ using GAPDH as loading control. RA-FLS was seeded on 96-well plates at a density of 2 × 10^4 cells/well and received pCDNA3.1-SIRT1 or pCDNA3.1 transfection at 100 ng/well. At 24, 48, and 72 h post-transfection, 10 μl/well Cell Counting Kit-8 solution was added and incubated for 4 h at 37°C. The absorbance at 450 nm was measured on an ELx800 Absorbance Microplate Reader. RA-FLS was detached as single-cell suspension 48 h post-transfection and fixed in 75% ethanol overnight at 4°C. Fixed cells were washed and stained with 25 mg/ml propidium iodide in PBS containing 0.1% Triton and 10 mg/ml RNase on ice for 30 min in the dark.
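The 2^−ΔΔCt normalization used for the RT-qPCR data is a standard calculation; a minimal sketch (the function name and inputs are illustrative, with Ct values taken as means of the triplicate wells):

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative mRNA level of a target gene (e.g. SIRT1) versus a
    reference gene (e.g. GAPDH), normalized to a control sample."""
    delta_ct = ct_target - ct_ref                  # dCt, treated sample
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # dCt, control sample
    return 2.0 ** -(delta_ct - delta_ct_ctrl)      # fold change vs control
```

A value above 1 indicates higher target expression than the control sample; e.g. a ΔΔCt of −1 corresponds to a 2-fold increase.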
Cell cycle was analyzed by flow cytometry on a FACSCalibur system. For the apoptosis assay, fixed and washed cells were detected using an annexin V-FITC apoptosis detection kit (BD Bioscience Pharmingen), and the apoptosis rate was measured as the combined percentage of cells that underwent early apoptosis (FITC+/PI−) and advanced apoptosis and necrosis (FITC+/PI+). Activity of caspase-3 and caspase-8 was measured using colorimetric protease assay kits.

Cell migration was determined by wound healing assay. Briefly, transfected cells were seeded on six-well plates and cultured to confluence. A wound was created by manually scraping the cell monolayer with a sterile 200 μl pipette tip. Cells were washed twice with PBS to remove floating cells and incubated in medium containing 1% FBS for 24 h. The wounds were imaged at two time points (0 and 24 h) with a phase-contrast microscope equipped with a Nikon DS-5M camera. Cell migration was calculated as the percentage of healed distance relative to the initial wound over 24 h.

Cell invasion was analyzed using Transwell invasion chambers. Briefly, 3 × 10^4 transfected cells in serum-free DMEM were added onto the upper chambers of 8-μm diameter Transwell inserts precoated with Matrigel, and 0.5 ml of DMEM with 10% FBS was added to the lower chamber. After 48 h of incubation at 37°C, cells on the bottom chamber were fixed with 70% ethanol, stained with 0.1% Crystal Violet, and photographed under an inverted microscope. The numbers of cells in five random fields per well were counted and averaged as the invading cell number.

Cytokine levels in the supernatant of RA-FLS were measured 48 h post-transfection using ELISA kits for human TNF-α, IL-6, IL-8, and IL-1β per the manufacturer's instructions.

Student's t-test was used to evaluate the significance of differences between two groups.
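The migration metric defined above (percentage of healed distance relative to the initial wound over 24 h) is a one-line calculation; the wound widths below are hypothetical values used only for illustration:

```python
def percent_wound_closure(width_0h_um, width_24h_um):
    """Migration as the percentage of the initial wound distance healed in 24 h."""
    healed = width_0h_um - width_24h_um
    return 100.0 * healed / width_0h_um

# Hypothetical mean wound widths measured from the 0 h and 24 h images.
control = percent_wound_closure(500.0, 150.0)  # empty-vector cells
sirt1 = percent_wound_closure(500.0, 320.0)    # SIRT1-overexpressing cells
print(control, sirt1)  # 70.0 36.0
```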
One-way analysis of variance followed by a post hoc Tukey's test was used to test the significance of differences among three or more groups, and P<0.05 was considered statistically significant. Results from at least three independent experiments were presented as mean ± standard deviation (SD); statistical analysis was performed using the SPSS 19 software package.

To test the potential association between SIRT1 and RA, we first measured the expression level of SIRT1 mRNA in the synovial tissues and FLS obtained from 6 negative control individuals and 12 RA patients by RT-qPCR. Compared with NC, SIRT1 mRNA was significantly down-regulated in RA synovial tissues and in RA-FLS.

In order to systematically study the role of SIRT1 in RA-FLS, primary RA-FLS cultures were established using cells isolated from RA patients. Transfection of the SIRT1 overexpression vector significantly increased SIRT1 expression in RA-FLS at the mRNA and protein levels by more than 2-fold compared with the control vector.

Because proliferation assays only reflect viable cell numbers and do not distinguish the contribution from cell death, we then assessed the effects of SIRT1 on RA-FLS apoptosis. RA-FLS transfected with pCDNA3.1 or pCDNA3.1-SIRT1 were stained for apoptosis markers and analyzed by flow cytometry. SIRT1 overexpression increased the percentage of cells at both early and late apoptosis.

An in vitro scratch wound healing assay revealed reduced migration in SIRT1-overexpressing RA-FLS as compared with empty-vector-transfected cells. The mean relative migration distance was decreased by nearly half in SIRT1-overexpressing RA-FLS compared with control cells.
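The statistical workflow described above (Student's t-test for two groups, one-way ANOVA for three or more) can be sketched with SciPy; the triplicate arrays are fabricated for illustration, and a post hoc Tukey test could follow the ANOVA (e.g. with statsmodels' pairwise_tukeyhsd):

```python
from scipy import stats

# Hypothetical triplicate measurements from three groups.
untreated = [1.00, 1.05, 0.95]
vector = [1.02, 0.98, 1.01]    # empty-vector control
sirt1 = [2.10, 2.25, 2.05]     # SIRT1-overexpressing

# Two groups: Student's t-test.
t, p_two = stats.ttest_ind(vector, sirt1)

# Three or more groups: one-way ANOVA (post hoc Tukey would follow).
f, p_anova = stats.f_oneway(untreated, vector, sirt1)

print(p_two < 0.05, p_anova < 0.05)  # both comparisons significant here
```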
In RA-FLS, SIRT1 transfection significantly reduced the expression of NF-κB p65, p-p65, and Ac-p65 compared with the control transfection. Using in vitro models derived from RA patients, we uncovered that, in addition to suppressing RA-FLS proliferation, migration and invasion, SIRT1 also substantially reduced proinflammatory cytokine production and proinflammatory NF-κB signaling in RA-FLS. Therefore, SIRT1 is a promising dual-effect target for both synovial hyperplasia and chronic inflammation in RA.

RA is a chronic autoimmune disorder characterized by synovial hyperplasia, chronic inflammation, and progressive destruction of joints. FLS are a major cellular component that maintains synovial homeostasis and, in the setting of RA, drives synovial hyperplasia and inflammation progression. Emerging evidence indicates that inhibition of RA-FLS effector molecules might be beneficial for RA therapy.

Synovial hyperplasia is characterized by increased proliferation and decreased apoptosis of RA-FLS. SIRT1 is an NAD-dependent protein deacetylase that links transcriptional regulation to a variety of metabolic signals, such as nutrient deprivation, DNA damage, and oxidative stress.

The effect of SIRT1 on RA-FLS apoptosis has been elusive. Several prior studies showed that SIRT1 down-regulated the proapoptosis protein CYR61.

Chronic synovial inflammation is a hallmark of RA. Activated RA-FLS contribute to synovial inflammation by secreting proinflammatory cytokines that serve as kindling for an immune response. These cytokines in turn recruit and activate leukocytes to the RA synovium, amplifying local inflammation and tissue damage. Cytokines not only are responsible for the inflamed joints but also have profound systemic effects. Therefore, cytokines represent novel therapeutic opportunities at both local and systemic levels.
Several biologic agents targeting TNF-α and IL-1 are already licensed for RA treatment, and others have shown promise in clinical trials. NF-κB is constitutively activated in RA and maintains a proinflammatory, proliferative, and damaging phenotype of RA-FLS.

To summarize, SIRT1 suppressed RA-FLS proliferation, migration and invasion, induced apoptosis, and reduced proinflammatory cytokine secretion from RA-FLS. We found that these protective effects were partially due to SIRT1 inactivation of the NF-κB pathway. Our findings have therapeutic significance, because the SIRT1-mediated effects on RA-FLS involve a dual mechanism targeting both synovial hyperplasia and inflammation. Novel approaches aiming at augmenting the protective effect of SIRT1 may therefore be a promising option for FLS-targeted therapy in RA.

Fibroblast-like synoviocytes (FLS) are essential cellular components in inflammatory joint diseases such as osteoarthritis (OA) and rheumatoid arthritis (RA). Despite the growing use of FLS isolated from OA and RA patients, a detailed functional and parallel comparison of FLS from these two types of arthritis has not been performed. In the present study, FLS were isolated from surgically removed synovial tissues from twenty-two patients with OA and RA to evaluate their basic cellular functions. OA FLS and RA FLS at the same passage (P2-P4) exhibited uniform fibroblast morphology and expressed a similar profile of cell surface antigens, including the fibroblast markers VCAM1 and ICAM1. RA FLS showed a more sensitive inflammatory status than OA FLS with regard to proliferation, migration, apoptosis, inflammatory gene expression and pro-inflammatory cytokine secretion.
In addition, the responses of OA FLS and RA FLS to both the pro-inflammatory cytokine tumor necrosis factor-alpha (TNF-α) and the anti-inflammatory drug methotrexate (MTX) were also evaluated here. Pure populations of FLS were isolated by a sorting strategy based on stringent marker expression (CD45−CD31−CD146−CD235a−CD90+PDPN+). The parallel comparison of OA FLS and RA FLS lays a foundation in preparation for when FLS are considered a potential therapeutic anti-inflammatory target for OA and RA.

Rheumatoid arthritis (RA) is a chronic autoimmune disease, and the inflammatory synovium is full of hyperplastic fibroblast-like synoviocytes (FLS). FLS stem from the mesenchymal lineage and exhibit aggressive and invasive cellular characteristics, secreting a range of cytokines and matrix factors that recruit other immune cells to the inflammatory synovium, eventually leading to cartilage injury and bone erosion.

The aims of this study were to comprehensively characterize and compare the cellular function and inflammatory phenotype of FLS from the synovium of OA and RA patients by using a stringent sorting method. In particular, we were interested in investigating the proliferation, migration, apoptosis, and expression and production of inflammatory markers, as well as in determining how this inflammatory phenotype is maintained over prolonged cell culture (P2-P4) and in response to pro-inflammatory cytokines and anti-inflammatory drugs.

Synovial tissues were obtained from OA and RA patients upon joint replacement or synovectomy at the First Affiliated Hospital of USTC and the First Affiliated Hospital of Anhui Medical University, Hefei, China. All RA patients (n = 23) included in this study were diagnosed with RA according to the 2010 American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) criteria, and all OA patients (n = 23) conformed to the criteria from the American College of Rheumatology (ACR).
This study was approved by the Ethics Committee of the First Affiliated Hospital of USTC and the First Affiliated Hospital of Anhui Medical University. Informed consent was signed by each of the patients or their guardians.

The method used for the isolation of FLS from synovial tissues was modified from a method previously described. Culture media were replenished every 2 days, and cells were subcultured when they reached 90% confluence.

For each test, 2 × 10^8 total synovial cells from RA and OA synovium were used; around (1.3 ± 0.4) × 10^8 and (1.0 ± 0.3) × 10^8 total counts (events) were recorded in gate P4 for RA and OA samples, respectively. Cells were stained with fluorescein isothiocyanate (FITC)-, phycoerythrin (PE)- or allophycocyanin (APC)-conjugated antibodies at 37 °C for 1 h: CD45-FITC (5 μg/test), CD31-FITC (5 μg/test), CD146-FITC (5 μg/test), CD235a-FITC (5 μg/test), PDPN-PE (2 μg/test), CD90-APC (2 μg/test), CD106 (VCAM1)-FITC (5 μg/test), and CD54 (ICAM1)-PE (2 μg/test). All flow antibodies were from Miltenyi Biotechnology Company. Isotype controls were included in the FACS detection for proteins with relatively low expression. After washing 3 times with pre-cooled PBS, the cells were analysed on a flow cytometer (BD FACSAria II) using BD FACSDiva software.

OA and RA synovial tissues were fixed with acetone for 15 min, washed twice with PBS, and then incubated for 1 h in a humid chamber with primary antibodies. The synovial tissues were then washed 3 times with PBS and incubated for an additional hour with an isotype-matched horseradish peroxidase (HRP)-conjugated secondary antibody.
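The sorting gate used in this study (CD45−CD31−CD146−CD235a−CD90+PDPN+) is a boolean conjunction over marker positivity. A minimal NumPy sketch follows, with fabricated fluorescence intensities and an assumed common positivity threshold (real gating uses per-channel cutoffs set from the isotype controls):

```python
import numpy as np

# Rows = cells, columns = marker intensities (arbitrary units, fabricated).
markers = ["CD45", "CD31", "CD146", "CD235a", "CD90", "PDPN"]
data = np.array([
    [900.0, 10.0, 12.0,  8.0, 700.0, 650.0],  # leukocyte-like: CD45+, excluded
    [ 12.0, 11.0,  9.0, 10.0, 820.0, 710.0],  # FLS-like: passes the gate
    [ 15.0,  9.0, 14.0, 12.0,  30.0, 690.0],  # CD90-negative: excluded
])
threshold = 100.0  # assumed positivity cutoff, same for every channel

positive = data > threshold
neg_idx = [markers.index(m) for m in ("CD45", "CD31", "CD146", "CD235a")]
pos_idx = [markers.index(m) for m in ("CD90", "PDPN")]

# FLS gate: negative for all lineage markers, positive for CD90 and PDPN.
fls_gate = (~positive[:, neg_idx]).all(axis=1) & positive[:, pos_idx].all(axis=1)
print(fls_gate)  # only the second cell passes the gate
```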
After 3 additional washes, the HRP reaction was developed with diaminobenzidine (DAB) per the manufacturer's instructions.

Paraffin-embedded OA and RA synovial tissues were sliced and then blocked with 5% goat serum for 30 min. The tissues were incubated in primary anti-PCNA (Santa Cruz Biotec), anti-CD90 (Santa Cruz Biotec), anti-ICAM1 or anti-VCAM1 antibodies overnight at 4 °C. IHC staining was generally performed as previously reported, and the primary antibody was omitted in negative controls. H&E staining and Masson staining were performed according to standard protocols.

The proliferation abilities of OA and RA FLS were evaluated by CCK-8 assay and Ki-67 staining (Miltenyi Biotec). All procedures were performed according to standard protocols.

Transwell (Costar) cell culture plates were used for migration and invasion assays. A total of 5 × 10^4 FLS were seeded into the upper chamber in serum-free medium, and serum-containing medium was added into the lower wells. After 24 h or 48 h, the upper chamber was washed twice with PBS and fixed with 4% paraformaldehyde for 20 min, and then 1% crystal violet was used to stain the invasive cells.

Annexin V/propidium iodide (PI) staining was performed for the detection of apoptotic cells. After the desired treatment, 1 × 10^6 cells were collected and washed twice with ice-cold PBS. The cells were then stained using the Alexa Fluor®488 Annexin V/Dead Cell Apoptosis Kit with Alexa Fluor 488 annexin V and PI (Miltenyi Biotec) for flow cytometry according to the manufacturer's guidelines. Untreated cells served as a negative control for double staining.

OA and RA FLS were lysed in SDS buffer. Protein concentrations were determined via a BCA kit. Equal amounts of cell lysates were run on a gel, transferred onto a PVDF membrane, and probed with the following primary antibodies: interleukin-1 beta (IL-1β), interleukin-6 (IL-6) and GAPDH.
GAPDH served as a loading control. After incubation with secondary antibodies, signals were visualized by enhanced chemiluminescence (GE system).

A total of 2 × 10^5 cells were seeded in a 12-well plate for 48 h. Supernatants were collected to measure the amounts of secreted IL-6 and TNF-α, which were detected via a human IL-6 ELISA kit and a human TNF-α ELISA kit (Miltenyi Biotec), respectively.

Cultures of OA and RA FLS were grown to confluence in 6-well culture plates. Cells were treated for 24 h with TNF-α (20 ng/ml) or methotrexate. Twelve hours after stimulation, whole-cell RNA was collected with TRIzol (Life Technologies) and analysed by qRT-PCR for gene expression. GAPDH was used as an endogenous control. A reverse transcription kit and qRT-PCR dye were obtained from Takara Biomedical Technology Company.

Error bars represent the mean ± standard error of the mean (SEM) of six independent experiments. *, ** and *** indicate P < 0.05, P < 0.01 and P < 0.0001, respectively (Student's t-test).

Twenty-three joint synovium biopsies from OA and RA patients were used in this experiment. H&E staining showed the typical pathological status of OA and RA hyperplastic synovial tissues. The proportion of CD90+ cells was higher in RA synovial tissues than in OA synovial tissues. TNF-α treatment resulted in a significant increase in the proliferation of both OA and RA FLS, and the inductive effect in RA FLS was greater. Pure CD45−CD31−CD146−CD235a−CD90+PDPN+ FLS were obtained after sorting, and OA FLS and RA FLS were compared and characterized based on multiple criteria to provide optimal confirmation of their origin and purity.
For inflammatory joint diseases such as OA and RA, FLS are an essential part of inflammation and joint erosion. The markers used in the FLS sorting strategy included general stromal fibroblast markers such as PDPN and CD90.

In addition to surface markers of FLS, the present study also performed side-by-side comparisons of some basic cellular features of OA FLS and RA FLS, analysing proliferation, migration, expression/secretion of inflammatory cytokines, and response to pro-inflammatory cytokines and anti-inflammatory drugs. In general, RA FLS show more aggressive cellular behaviour than OA FLS, including a more rapid proliferation rate, stronger invasive ability, and higher expression and secretion of inflammatory cytokines. These observations are consistent with our knowledge of the arthritis features exhibited by OA and RA. Moreover, higher expression of inflammatory markers, such as CCL2, IL-6, IL-1β and TNF-α, was also observed in RA FLS when compared to FLS isolated from the less inflamed OA synovium. OA FLS and RA FLS also show different responses to the pro-inflammatory cytokine TNF-α and the anti-inflammatory drug MTX. TNF-α induced proliferation/migration of both OA FLS and RA FLS, while the inductive effect on proliferation was more obvious in RA FLS. In contrast, MTX inhibited RA FLS proliferation but did not affect the proliferation of OA FLS in vitro. The different responses to MTX were further investigated with apoptosis experiments: MTX greatly induced apoptosis of RA FLS and induced less apoptosis in OA FLS. However, low concentrations of MTX (10 μM) only slightly promoted apoptosis of RA FLS and OA FLS (data not shown). High doses of MTX can induce a series of severe side effects. MTX also inhibited up-regulated inflammatory markers.
TNF-\u03b1 sAlthough these inflammatory genes and markers have been previously validated with mice FLS culture in vitro , it is uIn summary, we characterized FLS isolated by collagenase digestion of synovial tissues from OA and RA patients by using a strict sorting strategy. The main cellular features were compared between OA FLS and RA FLS here. The current report provides an isolation and characterization standard for future research on human OA and RA FLS.By using a stringent sorting strategy, we comprehensively characterized and compared the cellular function and inflammatory phenotype of FLS from the synovium of OA and RA patients in vitro. The parallel comparison of OA FLS and RA FLS lays a foundation for when FLS are considered potential therapies for anti-inflammatory treatment of OA and RA.Additional file 1: Figure S1. The isotype controls were included in the FACS detection for proteins with relatively low expression ."} {"text": "Optoacoustic imaging, based on the differences in optical contrast of blood hemoglobin and oxyhemoglobin, is uniquely suited for the detection of breast vasculature and tumor microvasculature with the inherent capability to differentiate hypoxic from the normally oxygenated tissue. We describe technological details of the clinical ultrasound (US) system with optoacoustic (OA) imaging capabilities developed specifically for diagnostic imaging of breast cancer. The combined OA/US system provides co-registered and fused images of breast morphology based upon gray scale US with the functional parameters of total hemoglobin and blood oxygen saturation in the tumor angiogenesis related microvasculature based upon OA images. 
The system component that enabled clinical utility of functional OA imaging is the hand-held probe, which utilizes a linear array of ultrasonic transducers sensitive within an ultrawide band of acoustic frequencies from 0.1 MHz to 12 MHz when loaded to the high-impedance input of the low-noise analog preamplifier. The fiberoptic light delivery system, integrated into a dual-modality probe through a patented design, allowed acquisition of OA images while minimizing the typical artefacts associated with pulsed laser illumination of the skin and of the probe components in the US detection path. We report technical advances of the OA/US imaging system that enabled its demonstrated clinical viability. The prototype system performance was validated in well-defined tissue phantoms. Then a commercial prototype system named Imagio™ was produced and tested in a multicenter clinical trial termed PIONEER. We present examples of clinical images which demonstrate that the spatio-temporal co-registration of functional and anatomical images permits radiological assessment of the vascular pattern around tumors, the microvascular density of tumors, and the relative values of total hemoglobin [tHb] and blood oxygen saturation [sO2] in tumors relative to adjacent normal breast tissues. The co-registration technology enables increased accuracy of radiologist assessment of malignancy by confirming, upgrading and/or downgrading US categorization of breast tumors according to the Breast Imaging Reporting and Data System (BI-RADS). Microscopic histologic examinations of the biopsied tissue of the imaged tumors served as a gold standard in verifying the functional and anatomic interpretations of the OA/US image feature analysis.

The American Cancer Society (ACS) estimates that 40,290 women will die from breast cancer in the United States. Diagnostic US is widely performed in the workup of abnormal mammography findings.
Advantages of US include its safety, convenience, relatively low cost, lack of ionizing radiation, and capability to visualize, with video-rate display, tumors that are radiologically occult. Targeted diagnostic breast ultrasound helps in classifying breast cancer with excellent sensitivity, but suffers from low specificity. Ultrasound diagnosis of breast cancer has been primarily based on lesion morphology (shape characteristics and ultrasound properties). Many malignant breast masses are too small to present sufficiently distinctive features on conventional ultrasound. Thus, the positive predictive value of diagnostic ultrasound imaging after mammography and diagnostic ultrasound in biopsied masses (PPV3) is under 30%.

1.1 We present a newly developed technology of combined optoacoustic plus ultrasound (OA/US) imaging, integrated in the Imagio™ breast imaging system, specifically designed for imaging of the breast and diagnosis of breast masses. This technology provides a two-fold enhancement of the overall diagnostic accuracy, combining the molecular specificity of functional imaging of hemoglobin and oxyhemoglobin with the 95-plus% sensitivity of anatomical breast ultrasound imaging.

1.2 For clinical radiologists, "seeing is believing". Therefore, optical imaging technologies are naturally suited for medical applications. However, due to optical scattering, pure optical modalities that illuminate and sense light are not able to achieve adequate depth penetration and spatial resolution through the thickness of breast tissue necessary for complete evaluation of normal breast tissue. Optoacoustic imaging addresses this limitation by combining optical contrast with ultrasonic spatial resolution.

1.3 In the early years of development, OA imaging researchers realized that B-mode gray scale ultrasound imaging, based on contrast provided by acoustic impedance, is complementary to the nature of medical information provided by functional optoacoustic images.
Furthermore, combining the two systems in one modality is acceptable to radiologists because they can readily adapt and associate functional information with morphology provided by co-registration of the optoacoustic and ultrasound images. With this understanding, a number of groups developed optoacoustic-ultrasonic dual-modality systems based on commercial ultrasound machines and commercial pulsed lasers.

1.4 A tumor can remain in the in situ stage for years before switching into a rapid growth stage, when the rate of cell growth is much greater than the rate of apoptosis and requires new capillary blood vessels. Tumor-associated neovascularization allows the tumor cells to express their critical growth advantage. There is a constant requirement for vascular supply in cancerous tumors. Experimental and clinical evidence suggests that the process of metastasis is also angiogenesis-dependent. Measurements of [tHb] and [sO2] may therefore be used for detection of tissue abnormalities such as hypoxia and angiogenesis of cancer. Imaging of vasculature, blood circulation and blood distribution in tissues, combined with measurements of [tHb] and [sO2] by methods of functional imaging, can be used for detection of the microvascular network of aggressively growing malignancies in the breast and for their differentiation from normal tissues and benign tumors. For purposes of functional biomedical diagnostics, the parameters of total hemoglobin, [tHb], and blood oxygen saturation, [sO2], can be determined from optoacoustic images acquired at multiple wavelengths; [sO2] can be measured from optoacoustic images acquired at two wavelengths. Since the optical absorption spectra of [HbO2] and [Hb] are well known, one can compute the total hemoglobin as follows:

(1) [tHb] = [Hb] + [HbO2],

with [sO2] = [HbO2]/[tHb], and map [tHb(r)] in large tissue volumes, including blood vessels and tumor microvasculature, through normalization of the optical fluence distribution as a function of depth in the breast.
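The two-wavelength estimation of [tHb] and [sO2] described above amounts to solving, per pixel, a 2×2 linear system μa(λ) = εHb(λ)·[Hb] + εHbO2(λ)·[HbO2]. The sketch below illustrates the algebra only; the extinction coefficients and the absorption values are rough, fabricated numbers chosen for illustration, not the calibration used in the Imagio system:

```python
import numpy as np

# Illustrative extinction-coefficient matrix (arbitrary consistent units).
# Rows: wavelengths (757 nm, 1064 nm); columns: (Hb, HbO2).
E = np.array([
    [1600.0,  600.0],  # 757 nm: deoxyhemoglobin dominates
    [ 300.0, 1000.0],  # 1064 nm: oxyhemoglobin dominates
])

def unmix(mua_757, mua_1064):
    """Solve for [Hb] and [HbO2] from absorption at the two wavelengths,
    then form [tHb] = [Hb] + [HbO2] and [sO2] = [HbO2] / [tHb]."""
    hb, hbo2 = np.linalg.solve(E, np.array([mua_757, mua_1064]))
    thb = hb + hbo2
    return thb, hbo2 / thb

# Fabricated per-pixel absorption values (after fluence normalization).
thb, so2 = unmix(mua_757=0.9, mua_1064=0.8)
print(round(so2, 2))  # 0.71
```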
We demonstrate below that functional imaging of tumor microvasculature and of the surrounding breast vasculature that is recruited to both feed and drain the tumor, together with depiction of [Hb] and [HbO2] concentrations, represents clinically relevant diagnostic information. In the past decade, there have been a number of optoacoustic studies in microscopy and endoscopy.

2.1 The clinical system design was based on extensive computer modeling, including Monte Carlo simulations of light propagation in the breast, of signal generation, propagation and detection by realistic ultrasound transducers, and optimization of image reconstruction algorithms for limited-view tomography. We achieved clinical viability of OA+US technology for real-time functional anatomical mapping of the breast through (i) development of ultrawide-band ultrasonic transducer arrays and their proper loading to ultra-low-noise analog preamplifiers with high input impedance; (ii) understanding and elimination of optoacoustic image artefacts associated with the hand-held probe design, which resulted in substantially increased image contrast; (iii) signal processing that inverted OA signal distortions introduced by the detection system; and (iv) image reconstruction and post-processing that enabled computation of functional images and their accurate temporal-spatial co-registration with B-mode ultrasound images. To produce OA/US images, the Imagio system required uniquely designed subsystems: a handheld duplex OA/US probe, a laser system, OA/US reception signal processing, and OA/US image formation.
Each of these subsystems is significantly different from conventional ultrasound, to enable the most challenging requirement of simultaneously providing high spatial resolution (sensitivity to high ultrasonic frequencies) and high contrast of volumetric brightness of relatively large objects (sensitivity to low ultrasonic frequencies).

2.2 Perhaps the most unique part of the combined optoacoustic-ultrasonic imaging system, compared to conventional ultrasound, is the specialized hand-held probe. This is also one of the most vital and challenging parts of the design. We integrated a fiberoptic light delivery system with a multi-element linear array ultrasound transducer in a form that is comfortable for the clinician and patient, compact, and light, while delivering the best possible imaging performance. The probe supports sufficient imaging penetration depth in the breast, while simultaneously minimizing image "clutter" artefacts associated with the proximity of optical and acoustic components.

Back-projected optoacoustic images without image-processing compensation would otherwise display gradually decreasing brightness of voxels as a function of depth. To quantify this attenuation, we determined the bulk breast tissue optical absorption coefficient through analyses of references and experimental data at 757 nm, and ∼3.5–4.5 cm−1 for 1064 nm. The optical absorption coefficient of cancerous breast tumors was found to be statistically 2- to 3-fold higher (0.130 ± 0.060 cm−1 at 757 nm) than in the surrounding breast tissue, but only slightly higher (0.154 ± 0.089 cm−1) at 1064 nm. The average tissue optical properties result in an average effective attenuation of the optical flux in the near-infrared spectral range of approximately 3 times per cm. Acoustic attenuation has been studied by Foster et al.
Analysis of the optical and acoustic properties of the breast shows that breast tissue effectively attenuates both near-infrared light and ultrasound. Therefore, the first step in the optoacoustic image processing was accounting for OA signal attenuation. We developed a method of OA image brightness normalization that resulted in equal brightness of well-characterized objects on clinical images of the breast, such as artery cross-sections. The method includes the following steps: (i) segment the brightest objects on the image (blood vessels and tumors); (ii) calculate the average pixel brightness in each horizontal row of pixels, excluding that of segmented objects; (iii) normalize each pixel brightness by dividing it by the average value at each depth. To test this method, it was applied to an optically and acoustically simulated dataset.

2.6 Imagio produces images for each laser wavelength. First, a reference region is selected. Then the two optoacoustic images are converted into three functional maps: one map of the total hemoglobin, [tHb], proportional to the density of microvasculature in breast tissues, and two different maps of blood oxygen saturation, [sO2], relative to its average background level. The color palette for display of functional images is designed to reveal the functional information to the radiologist in an easy-to-interpret color map. RGBA color maps are used for [sO2] (left) and [tHb] (right). In addition to demonstrating relative degrees of oxygenation-deoxygenation, OA can demonstrate normal and abnormal anatomy of vessels that lie within a breast mass and its surrounding tissues.
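The three normalization steps listed above map directly onto array operations. A minimal NumPy sketch on a synthetic image follows; segmenting "the brightest objects" is reduced here to a simple per-row median threshold, which is an assumption — the paper does not specify its segmentation algorithm:

```python
import numpy as np

def normalize_depth_brightness(img, seg_factor=3.0):
    """Per-depth (per-row) brightness normalization of an OA image.

    (i)   segment the brightest objects (vessels, tumors); here simply
          pixels exceeding seg_factor x the row median (an assumption),
    (ii)  average each row's pixels, excluding the segmented objects,
    (iii) divide every pixel by its row's background average.
    """
    row_median = np.median(img, axis=1, keepdims=True)
    objects = img > seg_factor * row_median                   # step (i)
    background = np.where(objects, np.nan, img)
    row_mean = np.nanmean(background, axis=1, keepdims=True)  # step (ii)
    return img / row_mean                                     # step (iii)

# Synthetic image: background attenuates roughly 3x per row ("per cm");
# two vessel cross-sections of equal relative contrast at different depths.
img = np.tile(np.array([[90.0], [30.0], [10.0], [3.3]]), (1, 6))
img[1, 2] = 300.0  # shallow vessel, 10x its local background
img[3, 4] = 33.0   # deep vessel, also 10x its local background

out = normalize_depth_brightness(img)
print(out[1, 2], out[3, 4])  # both vessels now display with equal brightness
```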
Interpreting physicians can assess the anatomy of these OA-demonstrated vessels, in addition to the degree of relative oxygenation-deoxygenation, in order to refine their overall evaluation for risk of malignancy.

2.7 The co-registered functional color OA maps are interleaved temporally and presented simultaneously in real time along with the gray scale (B-mode) ultrasound. Co-registration of optoacoustic and ultrasound images is performed at a video frame rate of 10 OA/US images per second (5 fps for each of the two laser wavelengths). Since the creation of a single complete OA frame requires a pulse from each of the 2 OA wavelengths, complete OA frame rates are 5 fps rather than 10 fps. The two lasers (see section 2.2.1), operating independently, were synchronized to emit a pair of pulses at the wavelengths of 757 nm and 1064 nm with a short delay of 5 ms between them. During such a short period of time almost no motion occurs in the course of hand-held probe scanning on the skin surface of the breast, even when the probe is moved slowly across the breast. Thus, when the duration separating data acquired from each wavelength is small, the multi-wavelength pixelwise operations for conversion of optoacoustic images into functional images do not produce motion artefact noise. There is a difference in frame update rates of gray scale ultrasound and OA: the ultrasound frame rate varies with depth of field and other technical settings, but is generally at least 3 to 4 times the update rate of the OA images. While the OA images are robust enough to tolerate slow sweeping of the transducer across the breast during scanning, movement of the transducer that is too fast can result in slight temporal and spatial misregistration of the OA images with the underlying gray scale ultrasound images.
Though a potential problem, in clinical trials this was dealt with merely by adequate training in scan techniques, and was found not to be a clinically significant problem.

The computer display shows separately an anatomical ultrasound image and two functional images of [sO2] and [tHb] superimposed on the anatomical image. All images are presented on screen simultaneously, but in clinical studies the readers evaluated and interpreted the images in the following sequence: (1) B-mode ultrasound, a conventional image of breast morphology that gives the first diagnostic indications to the radiologists; then (2) the functional image of [tHb] superimposed on ultrasound; and (3) the functional image of [sO2] superimposed on ultrasound. The commercial version of the developed imaging system displays the two optoacoustic images, plus a "combined" image of [sO2] in which only those pixels are colored that have colors on the [tHb] image. Thus, the total number of images on the display panel is six, when conventional ultrasound, OA Short (757 nm) and OA Long (1064 nm) are also included. However, for the purposes of the Imagio technology presentation here, we discuss only the main 3 images on the display that play the key role in the diagnostic imaging of breast cancer. In summary, the co-registering and temporal interleaving of OA and gray scale ultrasound images creates a real-time oxygenation-deoxygenation blood map fused with an underlying gray scale anatomic image.

3.1 The capability of Imagio™ to display functional values of [sO2] relative to the median background values of normal breast tissues was studied in a phantom made of PVCP with optical and acoustic properties replicating average breast tissue.
green color images were observed in the [sO2] range of ∼100% to ∼91%, no color in the [sO2] range from ∼90% to ∼85%, and red color images in the [sO2] range below ∼85%. Blood oxygenation was changed by gradual oxygenation, in air, of fully deoxygenated blood. The level of [sO2] in the tubes was monitored with a commercial oximeter. In this experiment, blood in the top vessel was changed from a ∼100% [sO2] level to a ∼91% level, while [sO2] in the other vessel was varied from ∼50% to 85%. This phantom experiment demonstrated that the brightness of the optoacoustic images and the colors of the functional images are displayed correctly through the entire range of depths in the phantom. A gradual decrease of image contrast was observed with increasing depth of the tubes from the imaging surface. The results obtained in the phantom were used for validation of the system's readiness for clinical studies. Not only did the system accurately present colors representing blood oxygen saturation in the major vessels, the image [sO2] color also did not switch or disappear in the case of low hematocrit, although the [tHb] signal-to-noise ratio became weaker against the background at the lowest hematocrit levels. In the clinical study of tumor angiogenesis, it has been found that 70-μm to 150-μm size vasculature can show significant deoxygenation in aggressive solid tumors.

3.2

During the device investigation, several different clinical studies were conducted on patients. Following the system validation in phantoms, a clinical feasibility study was performed on patients with breast masses suspected for cancer based on screening mammography followed by breast ultrasound.
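The [sO2] display thresholds reported in the phantom study above can be written as a small lookup; this is a minimal sketch, with the "∼" boundary values from the text treated as exact cutoffs for illustration.

```python
def so2_color(so2_percent):
    """Map a relative [sO2] value (percent) to the display scheme from the
    phantom study: green for ~91-100%, no color for ~85-90%, red below ~85%.
    Boundary placement is approximate ("~" in the text)."""
    if so2_percent >= 91:
        return "green"   # well-oxygenated vessel
    if so2_percent >= 85:
        return None      # intermediate band left uncolored
    return "red"         # deoxygenated vessel
```

In the phantom, the top tube (∼100% to ∼91% [sO2]) would therefore render green throughout, while the second tube (∼50% to 85%) would render red.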
Patients with Breast Imaging Reporting and Data System (BI-RADS) scores of 3, 4 or 5 were recommended for dual-modality functional-anatomical imaging, comparing noninvasive diagnosis from the OA and US co-registered images of Imagio™ with the gold standard of core biopsy. Based on the clinical feasibility data of an initial 79 patients, the system was calibrated and its functional imaging algorithm was trained, which means that the thresholds were determined for display of colors in the area of maximum image brightness of the total hemoglobin [tHb] distribution and its oxygen saturation [sO2]. Following the feasibility study, a multicenter clinical trial guided by the Food and Drug Administration was performed on over 2000 patients to determine the sensitivity and specificity of noninvasive diagnosis of breast masses suspected for malignancy utilizing the co-registered optoacoustic functional and ultrasound morphological images. The statistical analysis of this clinical trial has been reported elsewhere. The functional parameters provide strong information suggestive of malignancy. Therefore, optoacoustic functional images increase the confidence of radiologist interpretation compared to that based on conventional ultrasonic morphological images alone. Biopsy confirmed invasive breast carcinoma. The tumor does not have a well-developed internal angiogenesis, but both the partial marginal/boundary-zone deoxygenated blush and the presence of multiple radiating vessels within the peripheral zone have very high positive predictive values for malignancy, above 90%.
Optoacoustic imaging provided meaningful information for the radiologist to accurately diagnose malignancy.

4

The main advantages of optoacoustic imaging are its capabilities of displaying the presence or absence of tumor neovessels and their anatomy, as well as displaying functional blood distribution and oxygen saturation properties, based on the optical absorption differences of hemoglobin and oxyhemoglobin at different laser wavelengths. In summary, we developed and described a method of qualitative functional imaging and demonstrated the clinical utility of functional parameters such as [tHb] and [sO2] displayed within morphological tissue structures in the breast. Addition of these functional images to the co-registered breast ultrasound images may have a potentially transformative impact on breast cancer care in terms of accuracy of diagnosis and differentiation of malignant lesions. Other advantages of Imagio as a clinical modality, which require a separate report, include the relative independence of image contrast from patient age as well as equal system performance across racial and ethnic groups.

5

By combining optoacoustic (OA) and ultrasonic (US) imaging capability in one system (OA/US), the first clinical modality was developed for functional imaging within specific morphological structures of the breast, thereby satisfying a long-standing need for more accurate diagnostic imaging of breast cancer.
The advanced system design features that enabled the contrast and accuracy of functional images co-registered with anatomical images are: (i) the hand-held duplex probe containing a 128-element ultrawide-band linear array transducer that is protected from interference both from direct illumination by light diffusely scattered from tissue back to the probe and from light leaking from the optical fibers inside the probe housing; (ii) sensitive, low-noise electronics; (iii) the dual-wavelength laser rapidly emitting near-infrared pulses one after the other, targeting hemoglobin and oxyhemoglobin; and (iv) software with sophisticated signal processing and image reconstruction algorithms enabling real-time co-registered optoacoustic and ultrasound imaging. Clinical viability of this new technology was demonstrated initially in a feasibility study and then in multicenter clinical trials involving over 2000 patients recruited in 16 leading university hospitals and private diagnostic centers of the United States. The examples of clinical results presented in this report show that optoacoustic functional images have the potential to provide clinically valuable enhancement of the diagnostic specificity of conventional breast ultrasound by upgrading morphologically benign masses to cancer (and thus saving lives), by downgrading morphologically suspicious lesions to definitely benign masses, or simply by confirming and/or elevating radiologist confidence during interpretation of breast masses based on morphology. Further technical advances of the technology described here are envisioned in the direction of more quantitatively accurate full-view three-dimensional optoacoustic tomography systems.
These advances could help to enable automated screening in addition to improved diagnostics, especially valuable as a replacement for the sensitive but not specific detection by MRI in patients with a familial history of breast cancer and simultaneously dense heterogeneous breasts. The authors who are technology developers disclose employment or equity positions at Seno Medical Instruments, with no apparent conflict of interest; the authors who are clinical collaborators declare no conflicts of interest."} {"text": "Many viruses initiate interaction with target cells by binding to cell surface glycosaminoglycans (GAGs). Heparan sulfate (HS) appears to be particularly important in fibroblasts, epithelial cells and endothelial cells, where it represents the dominant GAG. How GAGs influence viral infectivity in HS-poor target cells such as macrophages has not been clearly defined. Here, we show that mouse cytomegalovirus (MCMV) targets HS in susceptible fibroblasts and cultured salivary gland acinar cells (SGACs), but not in macrophage cell lines and primary bone marrow-derived macrophages, where chondroitin sulfate was the dominant virus-binding GAG. MCK-2, an MCMV-encoded GAG-binding chemokine that promotes infection of macrophages as part of a gH/gL/MCK-2 entry complex, was dispensable for MCMV attachment to the cell surface and for direct infection of SGACs. Thus, MCMV tropism for target cells is markedly influenced by differential GAG expression, suggesting that the specificity of anti-GAG peptides now under development as HCMV therapeutics may need to be broadened for effective application as anti-viral agents.

Human cytomegalovirus (HCMV) is the largest member of the Herpesviridae family and chronically infects ~60–80% of adults.

HCMV vaccine strategies have focused mainly on blocking cell entry by the virus.
A generally accepted entry mechanism involves the binding of HCMV glycoprotein gH/gL complexes to cellular receptors, which triggers a conformational change in the HCMV gB protein, thereby inducing fusion of the viral envelope with the target cell membrane.

Antibody, viral vector and subunit vaccines based on gB, the trimer and the pentamer are currently under development, with a special interest in the prevention of congenital HCMV.

A third potential target for the development of vaccines or therapeutics involves cellular glycosaminoglycans (GAGs), which mediate the binding of many viruses to target cells.

BALB/cJ mice were obtained from The Jackson Laboratory. All mice were maintained under specific pathogen-free housing conditions at an American Association for the Accreditation of Laboratory Animal Care-accredited animal facility at the National Institute of Allergy and Infectious Diseases (NIAID) and housed in accordance with the procedures outlined in the Guide for the Care and Use of Laboratory Animals under protocol LMI-8E, approved on 31 December 2015 and annually renewed by the Animal Care and Use Committee of NIAID.

Cells were centrifuged (… g for 5 min) and then washed twice with PBS. SGACs obtained from 6 mice were resuspended in 10 mL of bronchial epithelial basal medium supplemented with the bronchial epithelial growth medium bullet kit and plated in 96-well plates previously coated with 150 μg/mL of Cultrex Basement Membrane Extract. Cells were washed with PBS and media was replaced at 24 and 48 h after isolation. Cells were maintained in culture for three additional days before being used in experiments.

Mouse fibroblast cell lines NIH-3T3 and M2-10B4 and the mouse macrophage cell line RAW264.7 were purchased from ATCC. NIH-3T3 and RAW264.7 cells were maintained in DMEM-Glutamax supplemented with 10% FBS. M2-10B4 cells were cultured in RPMI-Glutamax supplemented with 10% FBS.
Salivary gland- and lung-derived fibroblasts (SGFBs and LGFBs) were isolated from BALB/cJ mice as previously described.

The viruses MCMV-3D (MCK-2 knockout) and MCMV-3DR (wild type) were generously provided by Dr. Martin Messerle. These viruses were engineered to express Gaussia luciferase, and it was previously demonstrated that they establish productive infection in tissue culture and in mice. Viral stocks were prepared by centrifugation (… g for 3 h) of supernatants from 100% infected M2-10B4 cells, and then further purified by ultracentrifugation through a 15% sucrose cushion in virus standard buffer (VSB). The pellet was resuspended in 0.5–1.0 mL of VSB, aliquoted and stored at −80 °C. Viral titers were determined by plaque assays in M2-10B4 cells as previously described.

Cells seeded in a 96-well plate were infected at the indicated multiplicity of infection (moi) with MCMV-3D or MCMV-3DR in RPMI supplemented with 2% FBS. After a 2 h viral adsorption at 37 °C, the cells were washed once with PBS to remove unbound viral particles and then incubated in RPMI 10% FBS. The luciferase activity in 10 μL of supernatant was determined 18 h post infection (hpi) using the Biolux Gaussia Luciferase assay kit and a Mithras LB 940 luminometer. The relative luminescence units (RLUs) obtained in supernatants from mock-infected cells were subtracted from all samples during the analysis.

To study the role of different cellular GAGs in the infectivity of MCMV, where indicated, cells were incubated prior to infection with 1 U/mL of heparinase II (HepII) and/or chondroitinase ABC (ChABC) for 30 min at 37 °C in the corresponding reaction buffer (HepII buffer: …, 50 mM NaCl, and 0.01% BSA; ChABC buffer: 50 mM Tris-HCl pH 8.0, 60 mM sodium acetate, and 0.02% BSA). Subsequently, cells were washed once with PBS to remove the enzyme and infected with MCMV-3D or MCMV-3DR at the indicated moi in RPMI supplemented with 10% FBS. The luciferase activity in the supernatant was determined 18 hpi as explained above. Of note, we noticed that the ChABC buffer weakens the cell attachment to the plate, which may result in the loss of some cells during washes and final luciferase levels lower than in cells treated with the HepII buffer.

To evaluate the presence of soluble GAGs in the culture media of M2-10B4 cells, we analyzed the capacity of M2-10B4-conditioned supernatants to block the binding of B18, a GAG-binding protein encoded by vaccinia virus, to heparin; binding was measured in a FlexStation 3 microplate reader.

The binding of B18-His to the surface of M2-10B4, RAW264.7 and BMDM cells was analyzed by FACS. 3 × 10^5 cells were incubated on ice for 20 min with buffer alone or increasing concentrations of B18-His in PBS-staining buffer. Beforehand, cellular Fc receptors were blocked using TruStain FcX antibody solution. After washing with PBS, cell-bound protein was detected with an anti-His mAb and an anti-mouse Alexa Fluor 488 secondary antibody. In total, 20,000 events were collected in a BD LSR Fortessa analyzer, and the data were analyzed using FlowJo software.

The relative expression of four common markers of salivary acinar cells—Tjp1, Aqp5, Aqp3 and Amy1—was calculated in mouse salivary glands and cultured SGACs by qPCR. For this, the total RNA from whole submaxillary salivary glands of BALB/cJ mice and from 5-day cultures of SGACs was isolated with Trizol following the manufacturer's instructions. Genomic DNA was removed using the Turbo DNA-free kit. cDNA was synthesized from 500 ng of RNA using the SensiFast cDNA synthesis kit. Another 500 ng of RNA from each sample was mock treated without reverse transcriptase.
The Cq values for the amplification of each acinar marker and the reference gene Gapdh were obtained in a CFX96 Real-Time System using the SensiFast SYBR kit and the following primer pairs: Gapdh F (5′-aactttggcattgtggaagg-3′) and Gapdh R (5′-acacattgggggtaggaaca-3′); Amy1 F (5′-gaaaagatgtcaatgactggg-3′) and Amy1 R (5′-accatgttccttatttgacg-3′); Aqp3 F (5′-cttctttgatcagttcataggc-3′) and Aqp3 R (5′-gggttgttataagggtcaac-3′); Aqp5 F (5′-atcttgtggggatctacttc-3′) and Aqp5 R (5′-tagaagtagaggattgcagc-3′); Tjp1 F (5′-ctgatagaaaggtctaaaggc-3′) and Tjp1 R (5′-tgaaatgtcatctctttccg-3′). The relative expression for each marker was calculated as 2^(Cq_marker − Cq_Gapdh).

Subsequently, increasing viral copies (10^4–10^8) of MCMV-3D and MCMV-3DR were incubated with NIH-3T3 and BMDM cells for 30 min on ice to prevent virus internalization. Cells were washed three times with cold PBS to remove unbound virus and DNA was isolated using the QIAamp Blood kit. Cell-retained viral iE1 copies were quantified by qPCR as explained above and represented relative to the Gapdh copies calculated by interpolation in a standard curve generated with a pcDNA3.1-Gapdh plasmid. Alternatively, MCMV-3D and MCMV-3DR were incubated at moi = 0.5 with M2-10B4, NIH-3T3 or RAW264.7 cells plated in 24-well plates. After a 30 min incubation on ice, cells were profusely washed with cold PBS and collected with a cell scraper in 100 µL of RPMI 2% FBS. Cells were lysed by three cycles of freezing and thawing, and the lysates were clarified by centrifugation. M2-10B4 cells in 96-well plates were infected with increasing volumes of the input stocks or 25 µL of the lysates, and the luciferase activity in the supernatants was measured 18 hpi, as explained above.
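The relative-expression calculation used for the acinar markers can be sketched as follows. The formula mirrors the text exactly as printed, 2^(Cq_marker − Cq_Gapdh), and the Cq values in the usage example are invented for illustration only.

```python
def relative_expression(cq_marker, cq_gapdh):
    """Relative expression of a marker against the Gapdh reference,
    computed as 2**(Cq_marker - Cq_Gapdh), as stated in the text.
    (The common delta-Cq convention is 2**-(Cq_marker - Cq_ref);
    the sign here follows the formula as printed.)"""
    return 2.0 ** (cq_marker - cq_gapdh)

# Illustrative (invented) Cq values for the four acinar markers:
cqs = {"Amy1": 18.2, "Aqp5": 21.7, "Aqp3": 24.0, "Tjp1": 25.3}
cq_gapdh = 17.5
rel = {marker: relative_expression(cq, cq_gapdh) for marker, cq in cqs.items()}
```

A one-cycle difference in Cq corresponds to a two-fold change, which is why the exponent is the Cq difference between marker and reference.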
The RLUs obtained upon infection with the lysates were represented relative to the RLUs obtained per µL of input used for the infection of M2-10B4 cells.

The capacity of MCMV-3D and MCMV-3DR to bind to the surface of fibroblasts and macrophages was evaluated by qPCR or by reinfection of M2-10B4 cells with cell lysates. After purification, viral stocks used in qPCR-based assays were resuspended in PBS and treated overnight with 625 U/mL of benzonase at 4 °C to remove free DNA. Then, viral titers were determined by qPCR as the number of copies of the viral gene iE1/mL, interpolating in a standard curve generated with a pcDNA3.1-iE1 plasmid the Cq values obtained with increasing volumes of the viral stocks as template and the primers iE1 2F (5′-catctcctgtcctgcaacct-3′) and iE1 2R (5′-cttgggctgctgttgattct-3′).

MCMV is known to infect both mouse fibroblasts and macrophages. Moreover, using a pSM3fr bacmid-derived MCMV system, it has been reported that MCMV infection of macrophages, but not fibroblasts, is promoted by MCK-2. Here, we used the Gaussia luciferase reporter viruses MCMV-3D (MCK-2 knockout) and MCMV-3DR (MCK-2 wild type).

Next, we investigated whether MCK-2-dependent or -independent MCMV infectivity depended on cell surface GAGs. For this, we first tested the infectivity of both MCMV-3D and MCMV-3DR in fibroblasts and macrophages treated beforehand with the GAG lyases HepII and ChABC to deplete specific types of GAGs from the cell surface. HepII removes heparin and HS, and ChABC removes chondroitin sulfate.

To determine whether the presence of soluble GAGs in cultured M2-10B4 fibroblast supernatants might contribute to the effects of GAG lyases on MCMV infectivity, we tested the influence of GAG lyase-treated M2-10B4 cell-free supernatants on the binding of purified B18 protein to heparin.
The effect of GAG lyase treatment on MCMV infectivity in macrophages, however, fundamentally differed from our observations in fibroblasts, and also differed between the macrophage cell line RAW264.7 and primary BMDM. For RAW264.7 cells, neither HepII nor ChABC pretreatment affected the infectivity of either the MCMV-3D or the MCMV-3DR virus. Cultured SGACs expressed the acinar cell markers aquaporin-5 (Aqp5) and amylase-1 (Amy1), although a reduction in Amy1 was also observed.

Since the depletion of specific GAGs from the surface of fibroblasts, macrophages and SGACs altered MCMV infectivity in an MCK-2-independent manner, and since cellular GAGs are the first point of contact for the virus, we reasoned that MCK-2 may not be involved in MCMV cell tethering. To confirm this, we performed MCMV–target cell binding experiments with fibroblasts and macrophages, using qPCR to quantify the viral copies of MCMV-3D and MCMV-3DR retained on the cell surface.

In the present study, we have demonstrated that MCMV infection of diverse mouse target cell types is markedly and differentially affected by cell surface GAGs. In particular, infectivity in primary mouse salivary gland- and lung-derived fibroblasts was inhibited by HepII pretreatment but enhanced by ChABC pretreatment. For primary salivary gland acinar cells, infectivity was reduced by HepII pretreatment but was unaffected by ChABC pretreatment, whereas the converse was observed for infectivity in primary mouse BMDM pretreated with these two GAG lyases. Cultured cell lines revealed two additional GAG lyase susceptibility patterns for MCMV infection: neither GAG lyase affected infectivity in the cultured mouse macrophage cell line RAW264.7, whereas, as for primary fibroblasts, the two enzymes oppositely affected infectivity in the cultured fibroblast cell line M2-10B4 (HepII pretreatment decreased whereas ChABC pretreatment increased infectivity).
These results imply that HS and CS GAGs have differential target cell type-specific effects on MCMV infectivity, with HS being most important in supporting infection in primary fibroblasts and acinar cells and CS being most important in supporting infection in primary macrophages.

We have previously reported that the MCMV chemokine MCK-2, like host chemokines, can bind directly to GAGs. MCK-2-deficient MCMV reaches significantly lower viral titers specifically in the salivary gland when compared to an MCK-2-expressing virus. The cell type-dependent GAG effects on MCMV infection reported here might be explained by differential availability of GAGs in different target cells. In fact, consistent with our results, CS is usually the most abundant type of GAG in macrophages. Anti-HS peptides have been engineered to block HCMV and MCMV cell attachment."} {"text": "New technologies are being touted as solutions to many societal challenges, not least of which are ageing and health. However, the rapid development of new technologies is proceeding with little input from older adults. This presentation highlights the perceptions and attitudes of three age cohorts related to the continuous technological advancement of products intended to support active and healthy aging. Participants were 30-39 (n=639), 50-59 (n=703), and 70-79 (n=779) years old, randomly sampled from the Swedish population registry. Results showed both similarities and differences across generations. For example, 24%-35% of older adults would like to use home monitoring devices to support active and healthy aging, compared to 35%-56% of the younger groups. More than 82% of all groups highlighted the importance of involving intended users in the development process.
Results can be used to support the needs and desires of current older adults and future generations."} {"text": "The current article provides a brief summary of biopsychosocial gender differences in alcohol use disorder (AUD), then reviews existing literature on gender differences in treatment access, retention, outcomes, and longer-term recovery. Among psychotherapies for AUD, there is support for the efficacy of providing female-specific treatment, and for female-only treatment settings but only when female-specific treatment is included. However, despite mandates from the National Institutes of Health to do so, there is little work thus far that directly compares genders on outcomes of specific psychotherapies or pharmacotherapies for AUD. Although existing research has mixed findings on sex and gender differences in overall outcomes, there are more consistent findings suggesting different mechanisms of behavior change among men and women in AUD treatment and long-term recovery. Thus, more work is needed that attends to gender and sex differences, including planning studies that are structured to examine not only gender-differentiated outcomes in treatment response, but equally important, differences in treatment access and attendance as well as differences in mechanisms of change in drinking behavior. 
Clinicaltrials.gov.

Between 1994 and 2017, the National Institutes of Health (NIH) issued mandates that biomedical researchers include female participants in clinical research.

Most recent epidemiological results indicate a higher prevalence among men than women of AUD—defined by criteria of the fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5)—with past-year rates of 10% among women and 18% among men, and respective lifetime rates of 23% and 36%.

Regarding the terminology used in this article—"sex," "gender," and "recovery"—the NIH definition of sex refers to biological differences between females and males in chromosomes, sex organs, and endogenous hormones, whereas gender refers to more socially based roles and behaviors that may vary by historical and cultural contexts.10

Regarding recovery from AUD, there is currently no consensus on the definition of this term. Historically, recovery has been associated with Alcoholics Anonymous as "ongoing cognitive, emotional, behavioral, and spiritual reconstruction of the sobered alcoholic."

Lastly, the research reviewed in this paper uses diagnoses from DSM-IV and DSM-5. Whereas DSM-IV described two distinct disorders—alcohol abuse and alcohol dependence—DSM-5 combines these into a single alcohol use disorder (AUD) with mild, moderate, and severe subclassifications reflecting the number of symptoms met. The main criteria change from DSM-IV is that DSM-5 eliminates alcohol-related legal problems and adds alcohol craving as a criterion for AUD.
Lastly, although the search did not exclude international research, the majority of findings reviewed are from studies conducted and/or funded in the United States.

Alcohol is consistently shown to have more negative effects on women's health than men's, even at weight-adjusted lower levels of alcohol exposure, partly due to gender differences in the pharmacokinetics of alcohol.18

Stress plays an important role in the development and maintenance of AUD among both men and women. Sex hormones affect all body systems directly and indirectly, and for women there appears to be a reciprocal effect of alcohol on sex hormones.21

Women with AUD report higher levels of co-occurring psychiatric conditions than do men with AUD. Co-occurrences of mental health conditions with AUD were examined using data from two waves (2001–2002 and 2004–2005) of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC).26

Among individuals with AUD, women are more likely than men to experience alcohol cravings in response to daily negative emotion and stress.29

There are high rates of trauma among women receiving treatment for any substance use, and an estimated 25% to 55% of women in substance use treatment have PTSD.

Research has found gender differences in the relationship between social networks, social support, and alcohol use. For example, compared to men, women with AUD are more likely to have a family history of AUD and a spouse with a history of AUD.

Research has illuminated gender differences in the biopsychosocial factors contributing to the development of, and recovery from, AUD. The physical effects of alcohol are more pervasive for women than men, and sex-specific factors, such as sex hormones, have been associated with alcohol use. In terms of psychosocial differences, stress, trauma, and negative affect are particularly relevant contributors to alcohol use and development of AUD among women.
Relatedly, there are gender differences in rates of co-occurring mental health conditions, with rates of major depressive disorder among women with alcohol abuse being particularly high. These differences provide a context for understanding potential gender differences in AUD treatment and recovery and can be used to guide future research.

A small percentage of individuals with AUD ever receive treatment, with past-year estimates of 7% of men and 5% of women with AUD receiving treatment.40

Among individuals who do enter AUD treatment, there are gender differences in clinical presentation. Women tend to have more severe alcohol and drug use histories, lower education and income, higher unemployment and housing needs, more children living at home, and higher parental stress, and they tend to be younger in age.44

Data on gender differences in treatment retention are mixed, and most studies have been completed among samples with substance use disorder (SUD), meaning the results are not specific to AUD. For example, a review by Greenfield and colleagues reported no overall gender differences in SUD treatment retention but hypothesized that there would be different predictors and mediators of retention among men and women.45

The authors searched ClinicalTrials.gov for clinical trials on these AUD treatments, and reviewed publications from large clinical trials for AUD, to determine whether gender differences were analyzed and reported. Lastly, the authors searched for and reviewed reports of clinical trials, literature reviews, or meta-analyses on specific treatments to identify commentary or results regarding sex or gender. This was done to address the fact that analyses not yielding any significant gender differences may not have been identified using the search terms. Thus, for some treatments the authors were able to comment on null gender difference findings.
Despite the NIH mandate to include females in biomedical research, …

The following review on outcomes of psychosocial treatments for AUD focuses on empirically supported treatments identified by American Psychological Association Division 12.

Motivational enhancement therapy (MET) is a psychotherapy that helps patients resolve their ambivalence about engaging in treatment and reducing or stopping their substance use. Cognitive behavioral therapy (CBT) is an approach that focuses on the reciprocal effects of cognitions, emotions, and behaviors that maintain problem drinking. In treating SUD, CBT also focuses on identifying and resolving factors that reinforce or punish the substance use behavior, and on teaching both general coping skills and coping skills to negotiate drinking triggers. Twelve-step facilitation (TSF) treatment for AUD is based on the traditional Alcoholics Anonymous (AA) 12-step model and focuses on AA attendance, personalized spirituality, and guided introspection ("step work").

MET and CBT are among the most widely researched treatments for AUD.49

Witkiewitz, Hartzler, and Donovan tested whether matching patients' motivation level to CBT or MET was associated with better outcomes in the aftercare arm of Project MATCH.51

A meta-analysis of controlled trials of brief motivational interventions examined gender as a moderator of treatment effect.54

Couples-based approaches to the treatment of AUD are based on the assumptions that partners engage in malleable behaviors that reinforce and/or punish the client's drinking behaviors, and that enhancing intimate relationships can improve problem-solving, enhance relationship functioning, and reduce the likelihood of relapse.
Behavioral couples therapy (BCT) and Alcohol BCT (ABCT) have been shown to be effective at increasing rates of abstinence from alcohol, decreasing alcohol-related problems, and improving relationship functioning.

Several studies have tested ABCT separately among samples of men and women. An early study among men with alcohol dependence and their female partners compared three conditions: (1) ABCT, in which the spouse attended all sessions, which included both alcohol- and marital-focused treatment; (2) full spousal attendance but alcohol-focused treatment only; and (3) minimal spousal involvement in alcohol-focused individual treatment.59

ABCT also has been tested among women with AUD, and one study compared ABCT to a treatment arm in which women received individual CBT for AUD.

Three medications are currently approved by the U.S. Food and Drug Administration for the treatment of AUD: acamprosate, naltrexone, and disulfiram. There are important gender differences in their bioavailability, distribution, metabolism, and elimination.

A meta-analytic study examined acamprosate for AUD treatment separately for men and women from a total of 22 studies.

One of the first studies on naltrexone for AUD was a multicenter, placebo-controlled RCT of injectable naltrexone.69

A third study tested high-dose naltrexone in men and women with co-occurring cocaine use disorder and AUD in a double-blind placebo RCT.72

Thus, early studies suggested naltrexone for AUD was not as effective for women as for men, or that women may experience worse side effects, contributing to worse outcomes. However, more recent research has suggested that these effects may be due to study characteristics such as sample size or outcomes assessed.
Baros, Latham, and Anton used data from two RCTs comparing a naltrexone plus CBT group and a placebo plus CBT group and found effect sizes favoring naltrexone in men compared to women on some outcomes (drinks per drinking day), but not others.

A secondary analysis of COMBINE data tested treatment effects separately in men and women and found that both genders had better treatment response when they received naltrexone with either medication management or combined behavioral intervention, in comparison to placebo and any other combination of treatments.

In 2016, Agabio et al. cited the low number of women in clinical trials on disulfiram that precludes evaluation of sex differences in efficacy and safety.

Emerging digital and mobile models of treatment delivery include platforms such as telehealth sessions via videoconference and direct-access computer programs such as CBT4CBT.78

The preliminary research on access and use of AUD treatment via digital and mobile technologies suggests gender differences. For instance, a survey of members of an online social network site for women trying to resolve alcohol problems revealed that 47% of the site's members had never tried any other form of support related to their drinking.

Existing research suggests no major gender differences in terms of overall outcome in psychosocial or pharmacological treatments for AUD. However, this finding is qualified by the small number of studies that directly test gender differences and the low enrollment of women in clinical trials.
Additionally, as demonstrated by secondary analysis of Project MATCH, moderating factors such as AUD severity and motivation may be differentially associated with outcomes for men and women. Recovery is a complicated construct, ill-defined and historically confined to a mutual-care, 12-step “disease model” system that considers abstinence the only viable outcome. Alcoholics Anonymous (AA), the largest and most popular mutual help organization available, offers primarily mixed-gender meetings, but also some single-gender meeting options. However, AA meeting content is consistent across groups and does not necessarily include gender-specific content. Outcomes of single-gender versus mixed-gender AA meeting attendance have not been studied; however, studies on gender differences in treatment outcomes among attendees of mixed-gender AA have shown some significant results, including different moderators of attendance for men and women. One longitudinal study followed, for 16 years, 466 men and women who were initially untreated for problem drinking. Witbrodt and Delucchi followed participation in AA for 7 years and found that men were more likely to stop attending over the 7-year period.83 In sum, research on gender differences in outcomes of AA attendance is mixed, but the most consistent findings suggest women are more likely to stay in AA longer than men, and there may be different moderators of the efficacy of AA for men and women. In line with contemporary notions of AUD and SUD as chronic, relapsing diseases requiring a continuum of care, McKay and colleagues developed and tested stepped and continuing care interventions with various levels of intervention, including telephone counseling. In a sample of participants who used cocaine, most of whom were also alcohol dependent, McKay and colleagues found that women but not men benefited from telephone continuing care. Sliedrecht and colleagues conducted a review of 321 articles, published between 2000 and 2019, to 
examine the evidence for precipitants of relapse in AUD.91 In another review, Walitzer and Dearing indicated that rates of alcohol relapse did not differ between men and women, but evidence did indicate different predictors of relapse by gender.92 Gender differences in empirical studies on the viability of non-abstinent forms of recovery have recently been studied. Analysis of gender differences in such studies needs to attend to different thresholds for risky or heavy drinking for men and women.93 In a study of three clinical trials for AUD—including data from Project MATCH, the COMBINE study, and the United Kingdom Alcohol Treatment Trial—several baseline variables were tested as predictors of low-risk drinking; gender was not found to be predictive.95 One study examined men and women with AUD between ages 55 and 77 in a private outpatient program. Issues such as co-occurring mental health conditions, social environment, sleep, and physical health are directly affected by problem drinking and are important independent outcomes reflecting quality of life (QoL). Literature reviews have shown that heavy drinking is associated with reduced QoL, which improves with reductions in drinking. Attention to gender differences among various forms of recovery (both in the 12-step model and in the treatment outcome literature)—including examination of abstinence, reduction of drinking, and/or secondary outcomes—has yielded some interesting results, but research is sparse so far. Predictors of relapse appear to differ between men and women, with women being more likely to relapse in response to interpersonal conflict and negative affect, whereas men are more likely to relapse in response to isolation and both positive and negative affect. Also, although being married is a protective factor for men, it can act as a risk factor for relapse for women. Having at least one close friend with whom to discuss drinking is differentially helpful for women. 
Also, gender differences in treatment outcome and maintenance may depend on the outcome of interest (drinking or secondary outcomes) and the “form of recovery” studied. There are several behavioral treatments now known to be efficacious for AUD, but there is almost no examination of gender differences in the AUD psychotherapy process and mechanisms of behavior change in this research literature. For example, the authors of this paper found 49 articles published between 2000 and 2012 (26 published since 2010) studying mechanisms of change in CBT, Motivational Interviewing, or MET or examining general therapeutic alliance as a mechanism of change. Of these 49 articles, 22 were review or non-empirical papers and did not mention gender. Of the 27 empirical studies, seven (26%) provided no sample breakdown by gender, one study (4%) had an all-female sample, and 17 (63%) had mixed-gender samples. Furthermore, of these 17 mixed-gender studies, only five (29%) mentioned gender at all, typically as a statistical covariate. Since 2012, researchers have continued to examine mechanisms of change but generally have continued to ignore gender or have used single-gender samples. The Women’s Recovery Group (WRG), a treatment for women with SUD (including AUD), was used to examine mechanisms of change between men and women; WRG was compared to a traditional mixed-gender Group Drug Counseling (GDC) treatment in Stage I.102 Litt et al. 
studied Network Support Treatment (NST) for AUD, which is designed to help patients build social support networks for sobriety. Recent studies have investigated potential mechanisms of behavior change among female-only samples receiving CBT for AUD. A recent review conducted by the RAND National Defense Research Institute reviewed 24 AUD RCTs to examine gender differences in outcome and found mixed results, with little evidence for systematic gender differences in treatment effects across studies. Similar conclusions emerge from our review and those by Greenfield and colleagues. As suggested by Moyer and colleagues, it is also important to note that even among the studies that examined sex and gender differences, the sample sizes of women were often small, and analyses were likely underpowered. Given the historical differences in prevalence of AUD among men and women, this may have been justifiable in the past. However, the convergence of prevalence rates for lifetime AUD among men and women no longer justifies such small samples of women in treatment. Although studies may recruit men and women, women often comprise less than 50% of the sample, which makes it difficult to examine gender differences. If gender is considered a moderating factor, there must be enough men and women to statistically power the examination of interaction effects. Thus, in conducting clinical trials it may be important to enroll comparable numbers of men and women, with sufficient power to properly examine gender differences. Another consideration is single-gender treatment options, with female-only treatment most often a focus of research. This area of research has examined the delivery of treatment in a women-only setting, with or without female-specific content; such details should be reported in AUD treatment outcome research. Research suggests gender differences in relapse precipitants. 
Furthering our understanding of biological, social, and psychological determinants of relapse based on gender has implications for personalized or tailored relapse prevention approaches. Clinical trials are mandated to recruit men and women, as well as to analyze and report gender differences; however, the field needs to adhere more stringently to these mandates in future research. This involves consistent changes to methods, such as intentional oversampling of women, randomization based on gender, and gender-specific analyses. The research reviewed here provides ample reason to believe that men and women recover from AUD differently. It is important to test and report gender differences when studying mechanisms of change—mediators, moderators, and active therapeutic ingredients—in AUD treatments."} {"text": "Temperate phages engage in long-term associations with their hosts that may lead to mutually beneficial interactions, of which the full extent is presently unknown. Here, we describe an environmentally relevant model system with a single host, a species of the Roseobacter clade of marine bacteria, and two genetically similar phages (ɸ-A and ɸ-D). Superinfection of a ɸ-D lysogenized strain (CB-D) with ɸ-A particles resulted in a lytic infection, prophage induction, and conversion of a subset of the host population, leading to isolation of a newly ɸ-A lysogenized strain (CB-A). Phenotypic differences, predicted to result from divergent lysogenic-lytic switch mechanisms, are evident between these lysogens, with CB-A displaying a higher incidence of spontaneous induction. Doubling times of CB-D and CB-A in liquid culture are 75 and 100 min, respectively. As cell cultures enter stationary phase, CB-A viable counts are half those of CB-D. Consistent with prior evidence that cell lysis enhances biofilm formation, CB-A produces twice as much biofilm biomass as CB-D. 
As both strains are susceptible to infection by the opposing phage type, co-culture competitions were performed to test fitness effects. When grown planktonically, CB-A outcompeted CB-D three to one. Yet, during biofilm growth, CB-D outcompeted CB-A three to one. These results suggest that genetically similar phages can have divergent influence on the competitiveness of their shared hosts in distinct environmental niches, possibly due to a complex form of phage-mediated allelopathy. These findings have implications for enhanced understanding of the eco-evolutionary dynamics of host-phage interactions that are pervasive in all ecosystems. Temperate phages may engage in long-term association with their bacterial hosts that can lead to mutually beneficial interactions. It is well established that prophages can offer their hosts benefits, including resistance to superinfection by homologous phages. Prophages have frequently been referred to as “time bombs”. Lysogeny is hypothesized to be prevalent in marine environments. Sulfitobacter sp. strain CB2047 and its infecting temperate phage ɸ-A were originally isolated from a phytoplankton bloom. Cultures were monitored at two discrete time points preceding and following measurable culture lysis. Both phages integrate at the same attachment site (attB), within the 3′ end of a host tRNA-Leu gene (Table). In CB-A, ɸ-A was found at the attB site; the ɸ-D prophage was not present, indicative of a substitution. In contrast, CB-A formed more robust biofilms relative to CB-D, with an average biomass thickness of 5.13 µm, compared with 3.48 µm for CB-D. The maximum biomass thickness was also larger for CB-A than for CB-D and had a larger range (Fig. 4). Plaque assays of CB-A stationary and mid-log phase cultures yielded titers on the order of 10⁶ (±2.08 × 10⁵) PFU/ml and 5.2 × 10⁵ (±8.5 × 10⁴) PFU/ml, respectively. 
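The reported doubling times (75 min for CB-D, 100 min for CB-A) invite a quick null-model check. The sketch below is our own back-of-the-envelope illustration, not an analysis from the study: it asks what a 24 h planktonic co-culture would look like if growth rate alone determined the outcome.

```python
import math

def abundance(n0: float, doubling_time_min: float, t_min: float) -> float:
    """Simple exponential growth: N(t) = N0 * 2**(t / td)."""
    return n0 * 2 ** (t_min / doubling_time_min)

# Reported doubling times: CB-D 75 min, CB-A 100 min; 24 h of co-culture.
t = 24 * 60  # minutes
cbd = abundance(1.0, 75, t)
cba = abundance(1.0, 100, t)
print(f"growth-only CB-D:CB-A ratio after 24 h = {cbd / cba:.1f}")
```

Growth alone predicts CB-D ahead roughly 28-fold after 24 h (2^(19.2 − 14.4) ≈ 27.9), the opposite of the observed three-to-one planktonic advantage for CB-A, which is consistent with the interpretation that phage-mediated lysis, rather than growth rate, drives the planktonic outcome.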
Given the discrepancy in growth dynamics of the two strains, we next determined whether there were quantifiable differences in free-phage titers in CB-A and CB-D cultures. CB-A stationary and mid-log phase cultures yielded 1.63 × 10⁹ (±2.18 × 10⁹) copies/ml and 1.18 × 10⁹ (±0.27 × 10⁸) copies/ml, respectively. CB-D cultures of the same growth states did not yield quantifiable phage using plaque assays, indicating values below the 45 PFU/ml limit of detection for the assay. Head-to-head competition assays during growth on a surface showed an opposite response: co-culture biofilms had 29% and 55% greater biomass than CB-A and CB-D monoculture biofilms, respectively. In broth culture, the ratio of CB-A to CB-D gene copies was 3.26 (range 2.00–4.73) after 24 h of co-culture. The co-culture densities were ~90% and ~65% lower than typical densities for monocultures of CB-A and CB-D, respectively. From the same E. huxleyi bloom from which CB-D and ɸ-A were originally isolated, samples were collected immediately following collapse of the phytoplankton bloom for genetic characterization of viral particles. Of the eight size-selected viral DNA fractions sequenced, reads from two libraries (~35 and ~75 kb size fractions) mapped to the ɸ-A and ɸ-D genomes. From the ~35 kb library, 580 individual reads mapped to ɸ-A and 705 mapped to ɸ-D; reads in the other fraction that mapped to ɸ-A and ɸ-D are likely the result of incomplete separation of DNA molecules during PFGE. These data indicate that viral particles from both phage types were present, at non-equal abundances, in natural populations. Viral-mediated lysis of microbial cells in marine systems leads to quantitatively important impacts on food webs and biogeochemical cycles. 
Lysogenized bacteria are typically resistant to superinfection, that is, secondary infection by homologous phages. Infection of CB-D with ɸ-A leads to the simultaneous production of ɸ-A and ɸ-D, indicative of both lytic infection and prophage induction. While we do not yet know the proteins that mediate the lysogenic-lytic switch in this system, gene expression assays from infected cell populations support quantitative measurements of phage abundance. A putative peptidase (pepG) encoded by ɸ-A is upregulated during superinfection relative to non-superinfected CB-D controls. Similarly, upregulation of the CB-D host genes recA and lexA indicates activation of the global SOS response, which has been shown to mediate the lysogenic to lytic switch in various bacteria (e.g., Salmonella enterica, Escherichia coli, and Pseudomonas aeruginosa). Expression of the phage-encoded transcriptional regulator (xre-like) is below the limit of detection, perhaps suggesting a role for this gene’s product in suppression of phage lytic genes during the lysogenic state. As the ɸ-A and ɸ-D encoded transcriptional regulators lack conserved catalytic domains common to well-characterized phage repressors, these proteins may be valuable targets for future studies aimed at deciphering the lysogenic-lytic switch in these Sulfitobacter-phage pairs. An aspect of lytic activation of prophages in response to superinfection that has not been explored in our or other systems is whether lytic infection and prophage induction occur simultaneously in an individual cell or within distinct subpopulations of cells, one undergoing lytic infection by an exogenous phage and the other undergoing lytic activation of a previously quiescent prophage. It is possible that a subpopulation superinfected with one phage could communicate with non-superinfected counterparts, thus initiating lytic induction, a phenomenon that has only recently been reported for a Bacillus phage. 
In addition to a mixed infection resulting in the production of both ɸ-A and ɸ-D viral particles, infection of CB-D with ɸ-A also yields new lysogens in which the prophage appears to have been replaced by the superinfecting phage. The mechanism whereby this presumptive substitution occurs is not yet clear, but several possibilities exist. It could have been achieved through homologous recombination between the phage genomes, an oft-cited mechanism of viral evolution. Alternatively, some individual cells in Sulfitobacter sp. CB-D cultures may lack a prophage at the attB site (data not shown); such individual cells would have increased susceptibility to lysogeny by ɸ-A invasion. Finally, integration of both prophages in tandem could result in the establishment of transient polylysogens. Due to an intrinsic instability arising from the high degree of nucleotide identity between the two phages, such presumed polylysogenic events might be expected to readily revert to a single phage type. Regardless of the apparent replacement mechanism, the relatively high frequency with which new lysogens are recovered from superinfections suggests either that genotypic switching is prevalent in this two-phage-one-host system or that one host-phage pair displays higher fitness than the other in a given environmental context. Our data indicate a competitive interaction between the two host-phage pairs based on a fundamental difference in their lysogenic-lytic switches. One manifestation of these differences is altered frequencies of spontaneous prophage induction (SPI) that influence growth dynamics when the strains are cultivated planktonically and as biofilms. Rates of SPI are anticipated to be the combined result of stochasticity in gene expression (genetic noise) and induction of the SOS response. It has been observed that either a drop in phage repressor protein levels below a given threshold concentration or sporadic expression of integrase genes may initiate the lytic cycle. Such noise has been described in S. 
enterica, which is important for the evolution and diversity of host populations. Until recently, SPI was largely considered detrimental, as some fraction of the cells is continuously lost by phage-induced lysis. However, benefits of SPI on bacterial fitness are now recognized, and include the release of extracellular DNA, which facilitates and enhances biofilm formation. S. enterica studies demonstrate selective eradication of nonimmune hosts in mixed populations as a competing strategy by Gifsy 2 lysogenized strains. P. aeruginosa strains lysogenized by Liverpool Epidemic Strain prophages use the phages as anti-competitor weapons against phage-susceptible P. aeruginosa populations in a chronic lung infection model. Studies of E. coli MG1655 lysogenized with λ reveal the competitive nature of this type of interaction, although it is anticipated to be limited, as lysogenization of susceptible hosts ultimately diminishes nonlysogenized “competitors”. Finally, in Hydra vulgaris, one microbiome member, a Curvibacter species, possesses an inducible prophage that lytically infects another microbiome member, a Duganella strain; mathematical modeling predicts this interaction may modulate competition amongst microbiome members. Our system proposes a new element to this type of interaction: the reciprocal attack by genetically similar phages that share an integration site in a common host. Head-to-head competition experiments between CB-A and CB-D indicate different fates depending upon the mode of bacterial growth, planktonic or biofilm, two modes in which Roseobacters, in general, and Sulfitobacters, in particular, thrive in nature. 
The lab and field data presented here demonstrate that both phage types can occur in mixed populations and indicate that the competitiveness of a given host-virus pair is niche specific. Thus, the maintenance of both phage types within a population may be advantageous to a given host over multiple generations and across marine landscapes. Indeed, we might consider these discrete host-phage populations as analogous to bacterial populations that exhibit phase variation. Phase variation has been described as an interchange between physiological “states”, and is exemplified by the production of the antigenic components H1 and H2 in motile and nonmotile strains of S. enterica that allow the bacterium to rapidly adapt to shifting environmental conditions. Lysogeny is widespread in nature and has recently received considerable attention in the context of marine systems, where the focus has been on elucidating the environmental factors that drive temperate phages into either a lytic or a lysogenic state. Supplemental materials"} {"text": "Aortic arch replacement in acute type A aortic dissection patients remains the most challenging cardiovascular operation. Herein, we described our modified Y-graft technique using the Femoral Artery Bypass (FAB) and the One Minute Systemic Circulatory Arrest (OSCA) technique, and assessed the short-term outcomes of the patients. Between February 2015 and November 2017, 51 patients with acute type A aortic dissection underwent aortic arch replacement. Among them, 23 patients underwent FAB while 28 patients underwent both FAB and OSCA. The intraoperative data and postoperative follow-up data were recorded. The follow-up data of patients treated with the traditional Y-graft technique were collected from previously reported studies. In the FAB group, two patients died due to pulmonary infection, and two patients were paralyzed from the waist down. 
Hemodialysis was performed for five patients (21.7%) before hospital discharge. Fifteen patients (65.2%) received respiratory support for more than 2 days and eight patients (34.8%) for more than 5 days. These follow-up results were comparable to or better than those of patients treated with the traditional Y-graft technique. Furthermore, compared to the FAB group, the morbidity due to neurological dysfunction and acute renal failure was significantly reduced in the FAB+OSCA group. Moreover, the duration of respiratory support, the length of postoperative stay, and the ICU stay were shortened. This study clarified the feasibility of the FAB and OSCA techniques in modifying the Y-graft technique. The acute type A aortic dissection patients showed fewer surgical complications and favorable short-term outcomes after this surgery. The first successful aortic arch replacement was reported more than fifty years ago. With the development of surgical techniques and the improvement of patient care, the aortic arch can now be repaired more safely. However, total aortic arch replacement in acute type A aortic dissection patients remains the most challenging cardiovascular operation, which incurs a high risk of cerebral damage and acute renal failure and, consequently, a considerable risk of operative mortality. The 30-day mortality was as high as 18% and stroke rates were as high as 10% in 2011. In 2002, Spielvogel and colleagues developed the Y-graft technique to enable antegrade selective cerebral perfusion. Our team modified the traditional Y-graft technique using the Femoral Artery Bypass (FAB) and One Minute Systemic Circulatory Arrest (OSCA) techniques to simplify arch reconstruction, reduce embolization, and avoid cerebral ischemia. In the past 2 years, we have treated 51 cases of acute type A aortic dissection with our modified Y-graft technique for aortic arch replacement. 
The perioperative data and short-term follow-up data, such as perfusion time, 30-day survival rate, neurological dysfunction, length of respiratory support, and acute renal failure, were recorded for all patients. The follow-up data of patients treated with the traditional Y-graft technique were collected from previously reported studies. Herein, we described our modified Y-graft technique and assessed the short-term outcomes of the patients. We conducted a retrospective analysis of the data from 51 patients who underwent aortic arch replacement between February 2015 and November 2017 at our hospital. The clinical features of the patients are shown in the Table. The surgical procedure was divided into three stages. The femoral artery was exposed through a small incision in the inferior inguinal ligament. The left axillary artery was exposed through a small infra-clavicular incision. A median sternotomy was performed with extension of the incision superiorly along the medial border of the left sternocleidomastoid muscle, and the brachiocephalic vessels were exposed. Intravenous heparin was administered to achieve an activated clotting time (ACT) > 350 s. A 10 mm graft (4 cm length) was end-to-side anastomosed to the femoral artery and then connected to the arterial tubes of the cardiopulmonary bypass (CPB) machine. Another 10 mm graft (15 cm length) was end-to-side anastomosed to the left subclavian artery, and the other end of this graft was tunneled via the second intercostal space into the mediastinum on demand. The left common carotid artery was cannulated with an arterial catheter, which was connected to the arterial tubes of the CPB machine. With the CPB machine not yet running, the femoral artery-to-left common carotid artery bypass was completed. Then the left common carotid artery was transected. Another 8 mm graft was end-to-end anastomosed to the left common carotid artery. 
The previous 10 mm graft (connected to the left subclavian artery) was measured to the appropriate length and side-to-side anastomosed to this 8 mm graft (connected to the left common carotid artery). The free end of the previous 10 mm graft (connected to the left subclavian artery) was tightly connected to the arterial tubes of the CPB machine. The free end of the 8 mm graft (connected to the left common carotid artery) was clamped. In this way, both the femoral artery-to-left common carotid artery bypass and the femoral artery-to-left subclavian artery bypass were completed. As for the innominate artery, the procedures of cannulation and anastomosis were similar to those for the left common carotid artery. The innominate artery was cannulated with an arterial catheter, which was connected to the arterial tubes of the CPB machine. A femoral artery-to-innominate artery bypass was completed. Then the innominate artery was transected. A 12 mm graft was end-to-end anastomosed to the innominate artery. The free end of the 12 mm graft was clamped. Finally, without the assistance of the CPB machine, the femoral artery-to-left common carotid artery bypass, the femoral artery-to-left subclavian artery bypass, and the femoral artery-to-innominate artery bypass were all completed at room temperature, with continuous selective bilateral cerebral perfusion. The reconstruction of the three branches was also successfully completed. Every graft was carefully de-aired, and perfusion was restored to the head and upper extremities (Fig.). The pericardium was opened after cannulation of the superior and inferior vena cava; after ensuring that the CPB machine was working, cooling was started. The ascending aorta was cross-clamped at 32 °C, and cardioplegic solution was usually perfused through a coronary sinus cannulation to arrest the heart. 
After the cardioplegic cardiac arrest, aortic valve repair or replacement was performed if significant aortic valve insufficiency was identified through TEE. The ascending aorta was also replaced by a graft, which was anastomosed to the aortic sinotubular junction or the artificial valve ring. The completion of the reconstruction of the proximal aortic root usually coincided with the end of core cooling (target temperature: 32 °C). At the beginning of this stage, the patient was placed in a slight Trendelenburg position, and the head was packed in ice. The cross-clamp was moved to the distal aortic arch (between the innominate artery and the left common carotid artery), and the aortic arch was trimmed. Hypothermic circulatory arrest was started after removing the clamp. The intraoperative stent was placed into the distal aortic arch, and the aortic arch was immediately cross-clamped again after de-airing. The graft complex was then end-to-side anastomosed to the 12 mm graft (innominate artery), and the free end of the 12 mm graft (innominate artery) was end-to-side anastomosed to the ascending graft at an ideal site. In the FAB group, two patients died due to pulmonary infection. Two patients were paralyzed from the waist down; in one the paralysis was transient, while the other did not recover during the follow-up period. Hemodialysis was performed for five patients (21.7%) during the follow-up period. Fifteen patients (65.2%) received respiratory support for more than 2 days and eight patients (34.8%) for more than 5 days. As for the traditional Y-graft technique, we reviewed previous articles and obtained the corresponding data. The clinical features of the FAB+OSCA group, including male gender (78.6%), weight (80.8 ± 12.6 kg), Marfan syndrome (7.1%), aortic valve regurgitation (28.6%), smoking (past or current, 75.0%), hypertension (92.9%), renal dysfunction (7.1%), and pulmonary disease, are shown in the Table. The operative variables are also shown in the Table. The operative mortality was 7.1%, and neurological dysfunction occurred in 3.6%. 
Among the 28 patients, 10 patients (35.7%) had more than 2 days of intubation and three patients (10.7%) had more than 5 days of intubation. Only one patient (3.6%) had postoperative renal dysfunction and required temporary hemodialysis before discharge. Most operative variables differed significantly between the groups (p < 0.05), except for the aortic clamp time and the skin-to-skin time. As for the short-term outcomes, the 28 patients who underwent the modified Y-graft technique using FAB+OSCA had an operative mortality of 7.1%, and the morbidity due to neurological dysfunction was 3.6%. By this technique, we can reconstruct the brachiocephalic branches at room temperature, without cardiopulmonary bypass (CPB); the pericardium does not even need to be opened. The left subclavian artery is difficult to expose and anastomose, and nerve injury, especially to the recurrent laryngeal nerve, can easily occur. Notably, reconstruction of the brachiocephalic branches is easier to perform with our modified technique. Besides, it is easy to repair a leak immediately after anastomoses of the supra-aortic arteries because of the optimal mobility and slight wall tension of the anastomoses. Twenty-three patients underwent this modified technique. Subsequently, we modified the Y-graft technique using FAB and the One Minute Systemic Circulatory Arrest (OSCA) technique. The latter technique greatly decreases the systemic circulatory arrest time, thus reducing the ischemic time of the spinal cord and kidneys. Moreover, the lowest nasopharyngeal/rectal temperature can be maintained at 32 °C. To summarize, our modified technique almost eliminated the systemic circulatory arrest time and reduced the CPB time and other operative variables. Comparison of the traditional Y-graft technique with the modified Y-graft technique using FAB is shown in the Table, as is comparison of the FAB technique with the FAB+OSCA technique. This study clarified the feasibility of the FAB+OSCA technique in modifying the Y-graft technique. 
The acute type A aortic dissection patients showed fewer surgical complications and favorable short-term outcomes after this surgery."} {"text": "The correlation of the in vitro dissolution of a drug with the pharmacokinetics of one of its metabolites was recently proposed by the authors of this article as an additional or alternative analysis to the usual in vitro–in vivo correlations, mainly in the case of fast-absorbing drugs that have metabolites with a significant therapeutic effect. The model proposed by the authors considers that amiodarone has slow dissolution, rapid absorption, and rapid metabolism, and that, before it returns to the blood from other compartments, its pharmacokinetics is determined mainly by the kinetics of release in the intestine from the pharmaceutical formulation. Under these conditions, the rate of apparition of desethylamiodarone in the blood is a metric of the release of amiodarone in the intestinal fluid. Furthermore, it has been shown that such an estimated in vivo dissolution is similar, after time scaling, to the dissolution measured experimentally in vitro. Dissolution data of amiodarone and the pharmacokinetic data of its active metabolite desethylamiodarone were obtained in a bioequivalence study of 24 healthy volunteers. The elimination constant of the metabolite from plasma was estimated as the slope of the linear regression of logarithmically transformed data on the tail of plasma levels. 
Because the elimination of desethylamiodarone was shown to follow a monoexponential model, a Nelson–Wagner-type mass equilibrium model could be applied to calculate the time course of the “plasma metabolite fraction.” After Levi-type time scaling for imposing the in vitro–in vivo correlation, the problem became that of the correlation between in vitro dissolution time and in vivo dissolution time, which was proven to follow a square root model. To validate the model, evaluations were performed for the reference drug and the test drug separately. In both cases, the scaled time for in vivo dissolution, t*, depended approximately linearly on the square root of the in vitro dissolution time t, with the two regression lines being practically parallel. Due to its very low water solubility and complex pharmacokinetics, a reliable point-to-point correlation of amiodarone's in vitro dissolution with its in vivo absorption is difficult to obtain. Amiodarone (AMD) has been shown to have variable oral bioavailability (20–80%). After absorption, AMD undergoes extensive metabolism, is distributed in the blood, lipids, and deep compartments, and undergoes enterohepatic circulation. Concentrations in the myocardium have been shown to be 35 times higher than in the plasma. In vitro–in vivo correlations (IVIVCs) are correlations between in vitro dissolution data and in vivo release kinetics, estimated by the deconvolution of pharmacokinetic data. IVIVCs were constantly recommended by regulatory authorities in the last decades when developing extended-release formulations. Dissolution tests were performed at 100 rpm; the dissolution medium was sodium lauryl sulfate 10 g/L in ultrapure water. Samples of 5 ± 0.1 ml were collected at 5, 15, 30, 45, and 60 min and subsequently replaced with an equal volume of medium. 
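The estimation chain sketched in the abstract can be illustrated numerically. Everything below uses invented numbers (a hypothetical one-compartment metabolite profile and made-up in vitro/in vivo time pairs), purely to show the three steps: log-linear regression on the tail of the profile for the elimination constant, a Nelson–Wagner-type mass balance for the plasma metabolite fraction, and a least-squares fit of the square-root time scaling t* = a + b·√t.

```python
import math

def linreg(x, y):
    """Ordinary least squares y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - b * mx, b

def trapz(t, c):
    """Cumulative trapezoidal AUC at each sampling time."""
    auc, out = 0.0, [0.0]
    for i in range(1, len(t)):
        auc += 0.5 * (c[i] + c[i - 1]) * (t[i] - t[i - 1])
        out.append(auc)
    return out

# Hypothetical metabolite profile (one-compartment, oral) -- not the study's data.
ka, kel_true, A = 1.2, 0.02, 60.0                 # 1/h, 1/h, ng/ml
times = [1, 2, 4, 6, 8, 12, 24, 48, 72, 96, 120]  # h
conc = [A * (math.exp(-kel_true * t) - math.exp(-ka * t)) for t in times]

# 1) kel as the (negative) slope of ln C vs t on the tail of the profile.
tail = [(t, math.log(c)) for t, c in zip(times, conc) if t >= 24]
_, slope = linreg([t for t, _ in tail], [lc for _, lc in tail])
kel = -slope

# 2) Mass balance: fraction of the metabolite dose appeared in plasma over time.
auc = trapz(times, conc)
auc_inf = auc[-1] + conc[-1] / kel                # tail extrapolated as C_last/kel
frac = [(c + kel * a) / (kel * auc_inf) for c, a in zip(conc, auc)]

# 3) Time scaling: in vivo time t* regressed on sqrt(in vitro time t).
t_vitro = [5, 15, 30, 45, 60]                     # min (the study's sampling grid)
t_vivo = [1.1, 1.9, 2.7, 3.3, 3.8]                # h, hypothetical matched times
a0, b0 = linreg([math.sqrt(t) for t in t_vitro], t_vivo)
print(f"kel ~ {kel:.4f} 1/h; fraction at 12 h ~ {frac[times.index(12)]:.2f}; "
      f"t* ~ {a0:.2f} + {b0:.2f} sqrt(t)")
```

With a truly monoexponential tail, the recovered kel matches the simulated value, the fraction rises monotonically toward 1, and the scaling fit returns the line whose near-parallelism between test and reference products is what the validation step checks.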
AMD concentrations were determined at 242 nm on a V-530 UV-VIS spectrophotometer.

In vivo data were obtained in a bioequivalence study by comparing a tested formulation (T) with the reference (R), Cordarone 200 mg, Sanofi Synthelabo. The study was approved by the Romanian National Medicines Agency and the Ethics Committee of the Army Center for Medical Research. Venous blood samples (5 ml) were collected into heparinized tubes through a catheter inserted in the antecubital vein before (time 0) and at 1, 1.5, 2, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 9, 10, 12, 24, 48, 72, 96, and 120 h. Blood samples were centrifuged at 5 °C for 6 min at ∼3,000 rpm. Plasma was immediately frozen and stored at −30 °C until analysis.

Plasma samples were transferred to 10 ml disposable polypropylene tubes, to which 50 µL internal standard (IS) solution (20 µg/ml fenofibrate in methanol), 500 µL pH 4.5 phosphate buffer, and 4 ml methyl tert-butyl ether were added. The tubes were vortex-mixed for 10 min and then centrifuged for 10 min at 4,000 rpm. Three ml of the organic layer were transferred and evaporated to dryness at 40 °C under a gentle nitrogen stream. The residue was reconstituted in 200 μL of mobile phase, and 100 µL of each sample were injected into the chromatographic column.

The chromatographic analyses were performed on a Waters liquid chromatographic system consisting of a 600E quaternary gradient system, an AF model in-line degasser, a 486 UV-VIS tunable absorbance detector, and a 717 plus autosampler. Empower Pro software was used to control the system and to acquire and process data. The UV detector was set at 242 nm. A 15 cm × 4.6 mm i.d. Microsorb-MV C18 column and a guard column packed with C18 were used for separation. The mobile phase consisted of a phosphate buffer solution containing 7 mM Na2HPO4 and 11 mM KH2PO4, adjusted to pH 4.5 (Solvent A), and a 1:1 (v:v) acetonitrile:methanol mixture (Solvent B), delivered in a 20:80 (v:v) ratio. The mobile phase was prepared daily, filtered, and degassed before use. The flow rate was 1.0 ml/min, and all work was carried out at 40 °C.

The method was validated in accordance with the bioanalytical method validation guidelines of the FDA, including linearity, limits of quantification, selectivity, accuracy, precision, recovery, dilution effects, and stability. The specificity was evaluated with respect to interferences from the endogenous matrix components of drug-free plasma samples of six different origins. The calibration curves of AMD and DAMD were constructed in the range 20–1,000 ng/ml for both analytes, by plotting the ratios between their peak areas and the IS peak area vs. concentration (ng/ml), using data obtained from triplicate analysis of the calibration standard solutions. The lower limit of quantification (LLOQ) was set as the lowest concentration on the calibration curve. Within-run and between-run precision and accuracy were estimated by analyzing five replicates of the LLOQ and quality control (QC) samples in a single analytical run and on five consecutive days, respectively. The absolute recovery, determined using five replicates of the three QC concentration levels, was 74% for AMD and 97% for DAMD.
Benchtop, extract, stock solution, freeze-and-thaw, long-term, and post-preparative stability studies were also performed to evaluate the stability of both analytes.

The pharmacokinetic parameters (AUC, Cmax) were considered as random variables with the following structure: Yijk = μ + Sik + Pj + F(j,k) + C(j−1,k) + eijk, where μ = the overall mean; Sik = the random effect of the ith subject (i = 1, …, n) in the kth sequence; Pj = the fixed effect of the jth period; F(j,k) = the direct fixed effect of the formulation in the kth sequence which is administered at the jth period; C(j−1,k) = the residual (carryover) effect of the formulation administered in the (j−1)th period, where C(0,k) = 0 and ΣC = 0; and eijk = the within-subject random error in observing Yijk. All parameters were evaluated by analyses of variance to determine statistically significant (α = 0.05) differences between the drug formulations using the program Kinetica, version 4.2 (InnaPhase Corporation). To demonstrate bioequivalence, the 90% confidence intervals for the AMD (DAMD) test/reference ratios of AUC0–τ and AUC0–∞ were shown to lie within the 80–125% interval.

In vitro dissolution data were modeled using a square root law and a power law model, used in linear forms, as previously described. r(t) is the ratio of the cumulated released substance at the moment t. It should be noted that r(t) is sometimes written in the form r(t) = M(t)/M∞, where M∞ is the amount released at infinity; however, in all cases, this is not the total amount of diffusing component. In the case of nanosystems, for example, the release most frequently involves only a part of the active substance, which we can consider as the "available fraction for release," with another part of it remaining sequestered. Whatever the case, in practice, in most cases, the experimentally determined quantity tends to reach a saturation value.
If this value remains constant for a sufficient period of time, it is reasonable to consider it as M∞.

The square root law can result from a phenomenological model that involves the diffusion of the drug into the solvent that penetrates the matrix of the pharmaceutical formulation (the Higuchi model) or from a model that considers release from the pharmaceutical formulation as an infinite reservoir across the interfaces with the solvent in a long diffusion path: r(t) = k·t^(1/2). The power law (Korsmeyer–Peppas law) is an empirical law that combines two release kinetics, resulting from the diffusion and the erosion of a matrix, r(t) = k·t^n, and is linearized in the form ln r(t) = ln k + n·ln t.

Analysis of the time evolution of plasma levels of AMD and DAMD and estimation of the pharmacokinetic parameters were performed by both non-compartmental and compartmental methods, based on the data obtained in the 0–120 h time interval. Partial and cumulated areas under the curves were estimated. It was tested whether, after logarithmic transformation, a good regression line on the tail of the curve was obtained, in order to define an elimination constant. Mono- and bicompartmental modeling was tested for AMD and DAMD pharmacokinetics.

Amiodarone, a lipophilic drug (logP = 7.24), undergoes substantial metabolism, being classified as a BDDCS (biopharmaceutics drug disposition classification system) Class 2 compound. The hypothesis of this article, presented previously by the authors, was that the fraction of metabolite appearing in plasma, FRAp(ti), could be considered an estimation of the absorption of the parent drug from the intestine, FRA(ti). Based on this hypothesis, a correlation between in vitro dissolution and the in vivo pharmacokinetics of metabolites would be expected, which was indeed found in the case of diltiazem. Because the pharmacokinetics was measured after a single dose, the return from the "deep compartment," where accumulation occurs over time, was neglected.
Furthermore, because metabolites occur at the same time as plasma AMD, metabolism is considered a rapid process. Consequently, the slowest, rate-determining step in the chain of kinetics leading to the appearance of the metabolite in plasma remains the release kinetics of the parent drug in the gastrointestinal tract. Again, because AMD is lipophilic, the rate of transfer from the blood to the lipid compartment is higher than that of reverse transport; the return of AMD to the blood may be neglected, and the transfer from blood to lipids becomes a component of the elimination of the parent drug. Consequently, in a simplified one-compartment model for DAMD, only two processes were considered, corresponding to the appearance of the metabolite in the blood and its total elimination.

In the kinetic scheme, → represents a slow process and →→ a rapid process; cAMD and cdAMD are, respectively, the concentrations of the parent drug and the metabolite in the blood compartment; FRAAMD is the absorption fraction of AMD, FRD is the dissolution fraction, and "correl" denotes correlation. A modified, Wagner–Nelson-type equation was applied to calculate the fraction of appearance of the metabolite (FRAp): FRApDAMD(ti) = [cdAMD(ti) + kel·AUC0–ti] / (kel·AUC0–∞), where FRApDAMD(ti) is the fraction of the appearance of the metabolized drug at time ti, cdAMD(ti) is the plasma concentration of the metabolite at time ti, and kel is the elimination rate constant of the metabolite. The elimination rate constant was estimated as the slope of the linear regression of the last points of the logarithmically transformed data. Integrals were approximated by areas under the plasma levels of DAMD. The model could actually be much more general.
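The mass-balance calculation of this Wagner–Nelson type, with the integrals approximated by trapezoidal areas as in the text, can be sketched numerically. The concentration profile and rate constants below are invented (a one-compartment curve with first-order appearance and elimination), purely to exercise the formula:

```python
import math

def trapezoid_auc(times, concs):
    """Cumulative AUC at each sampling time by the trapezoidal rule."""
    auc, out = 0.0, [0.0]
    for i in range(1, len(times)):
        auc += 0.5 * (concs[i] + concs[i - 1]) * (times[i] - times[i - 1])
        out.append(auc)
    return out

def wagner_nelson_fractions(times, concs, kel, auc_inf):
    """FRAp(ti) = [c(ti) + kel * AUC(0-ti)] / (kel * AUC(0-inf))."""
    aucs = trapezoid_auc(times, concs)
    return [(c + kel * a) / (kel * auc_inf) for c, a in zip(concs, aucs)]

# Hypothetical metabolite profile (appearance 1.0 /h, elimination 0.1 /h):
times = [0, 1, 2, 4, 8, 12, 24]
concs = [10 * (math.exp(-0.1 * t) - math.exp(-1.0 * t)) for t in times]
kel = 0.1
# AUC to infinity: observed AUC plus the usual c_last / kel tail extrapolation.
auc_inf = trapezoid_auc(times, concs)[-1] + concs[-1] / kel
frap = wagner_nelson_fractions(times, concs, kel, auc_inf)
```

For this profile the computed fractions rise monotonically toward 1, as the absorbed (here, appeared) fraction should.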
In the case of compounds subject to extensive metabolism (BDDCS Class 1 and 2 compounds), because the rates of absorption and metabolism are usually high, the rate of occurrence of metabolites in the plasma is determined by the rate and extent of the parent drug's release from the pharmaceutical formulation.

Since amiodarone is lipophilic (logP 7.24) [Amiodarone, DrugBank], its dissolution in water is very low, meaning that it is necessary to add surface-active agents to the dissolution medium. The FDA recommends sodium lauryl sulfate (SLS) 1% or Tween 80 1% (accessdata.fda). Under these conditions, the dissolution of AMD was rapid, being complete within 1 h in all cases. The mean amiodarone dissolution profiles are presented in the figure. Dissolution is forced by the addition of a high concentration of surfactant in the release medium, which is a good test for quality control, but dissolution in the presence of high concentrations of surface-active agents is not biorelevant.

The modeling of release kinetics was performed using both the square root and the power law model. Both models worked well enough: the correlation coefficient was higher in the case of the power law, but the number of points approximated by the square root law was greater. Fittings with the square root law for the tested and reference drugs are presented in the figure.

Individual pharmacokinetic curves for AMD and mean curves for AMD and DAMD, for the reference (R) and tested (T) formulations, are presented in the figures. There is great variability of concentrations between subjects from 12 h onward, but it should be noted that the tails of the curves are approximately parallel, suggesting a common elimination pattern in all subjects. AMD has unpredictable absorption and therefore unpredictable bioavailability. In the first phase, a rapid decrease in plasma levels appeared, with lipids and deep compartments becoming depots for both AMD and DAMD.
Later, both of them return to the central compartment, and a long and variable terminal elimination half-life appears. A naked-eye analysis suggests that the formulations are bioequivalent. Mean pharmacokinetic parameters and 90% confidence intervals for the mean ratios are presented in the table. As the formulations proved to be bioequivalent in spite of their high variability, starting from AUC and Cmax, a first analysis was performed on the entire set of data in the study.

To apply the Wagner–Nelson-type mass balance in the calculation of the fraction of drug absorbed and, in our case, the fraction of AMD dissolved in vivo, the elimination constants for AMD and DAMD were estimated. The half-life was not well defined in the case of AMD, with the result depending on the interval selected on the tail of the plasma level curves. Three very different values were obtained: 7 h in the 7–12 h interval, 23 h in the 12–48 h interval, and 77 h in the 48–120 h interval. In the label of the AMD reference drug, a half-life of 53 days is reported. This evolution is a result of the distribution in lipids and enterohepatic circulation, as well as of the return of AMD to the central compartment from the accumulations in the lipid and deep compartments.

Comparative in vitro and in vivo evaluations of three tablet formulations of amiodarone in healthy subjects were previously reported, correlating in vitro dissolution (120 min) with in vivo time points of up to 18 h. The author applied a time scale, following the FDA recommendation: "Time scaling may be used as long as the time scaling factor is the same for all formulations." His conclusion was that "a point-to-point acceptable and reliable correlation was not achieved" and that "dissolution data could be used only for routine and in-process quality control of amiodarone tablet formulations."

In the case of DAMD, as can be seen in the figure, a well-defined elimination constant was obtained. By introducing this value into the proposed deconvolution formula and performing the calculation, the fraction of appearance of the metabolite was estimated. Under the circumstances of the model, the appearance of the metabolite in plasma is correlated with the in vivo dissolution of the parent drug. As the pharmacokinetic model supposes that the appearance of DAMD in plasma equals the release of AMD in vivo, an FRAp dependence on time similar to the model of dissolution kinetics in vitro could be expected. A naked-eye examination suggests a linear model. A good fit of FRA as a function of the square root of time was also obtained. The linear correlation is just slightly better, but a small lag time appeared on the square-root-of-time scale; this was a good result, since absorption and metabolism are not instantaneous.

Following the low solubility of AMD and the small volume of gastrointestinal liquids, dissolution had reason to be slow and limited; release is also influenced by the secretion of bile salts and lecithin, and in vivo release is much slower. In order to correlate the in vitro dissolution fraction with the in vivo appearance of the metabolite, time scaling was performed: time in the interval 0–60 min, corresponding to in vitro dissolution, was transformed into time t* in the interval 0–7 h. An exponential dependence of the FRA on FRD is difficult to justify; instead, a good correlation with the scaled in vivo dissolution time was obtained, as can be seen in the figure.
In the case of lipophilic drugs, due to slow dissolution, rapid absorption, and rapid metabolism, the pharmacokinetics of both the parent drug and its metabolites, before the return of the drug from other compartments into the blood, is mainly determined by the kinetics of release in the intestine from the pharmaceutical formulation. For long-life lipophilic drugs, as shown for DAMD, it is possible to estimate the absorption fraction of the parent drug from the simpler pharmacokinetics of the metabolite, in which case it is possible to calculate an elimination constant. The similarity between the in vitro dissolution and the in vivo estimated dissolution models, as well as the similar dependence of scaled time on in vitro time in the case of bioequivalent formulations, can be considered a validation of the metabolite approach to the in vitro–in vivo correlation model.

Although numerous studies have presented potential mechanisms underlying its pathogenesis, the understanding of α-synuclein-mediated neurodegeneration remains far from complete. Here, we show that overexpression of α-synuclein leads to impaired DNA repair and cellular senescence. Transcriptome analysis showed that α-synuclein overexpression led to cellular senescence with activation of the p53 pathway and DNA damage responses (DDRs). Chromatin immunoprecipitation analyses using p53 and γH2AX, chromosomal markers of DNA damage, revealed that these proteins bind to promoters and regulate the expression of DDR and cellular senescence genes. Cellular marker analyses confirmed cellular senescence and the accumulation of DNA double-strand breaks. The non-homologous end joining (NHEJ) DNA repair pathway was activated in α-synuclein-overexpressing cells. However, the expression of MRE11, a key component of the DSB repair system, was reduced, suggesting that the repair pathway induction was incomplete.
Neuropathological examination of α-synuclein transgenic mice showed increased levels of phospho-α-synuclein and DNA double-strand breaks, as well as markers of cellular senescence, at an early, presymptomatic stage. These results suggest that the accumulation of DNA double-strand breaks (DSBs) and cellular senescence are intermediaries of α-synuclein-induced pathogenesis in PD.

Excess levels of a protein involved in Parkinson's disease can impair the brain's capacity to repair DNA damage, leading to a state of cellular aging that accelerates neuronal death. When aggregated, the α-synuclein protein plays a major role in Parkinson's disease and other neurodegenerative disorders. A team from South Korea, led by He-Jin Lee of Konkuk University, Seoul, and Seung-Jae Lee of Seoul National University College of Medicine, showed that human neuronal cells and mouse models with elevated expression of α-synuclein develop double-stranded breaks in their genomes as a consequence of deficient quality control mechanisms. The accumulated DNA damage spurs the cells to enter a state in which they show canonical signs of cellular aging but remain metabolically active in ways that fuel neurodegeneration. Therapies that target these processes could help prevent or treat α-synuclein-linked diseases.

α-Synuclein is an abundant neuronal protein with an intrinsically disordered structure3. This protein is abnormally folded and aggregated in several neurodegenerative diseases, referred to as α-synucleinopathies, such as dementia with Lewy bodies, multiple system atrophy, and Parkinson's disease (PD)2. These diseases affect millions of people worldwide, with the distinct characteristics of intracytoplasmic protein aggregates and a gradual increase in neuronal death in particular areas of the brain. The central nervous system undergoes considerable changes with age.
Aging is a serious risk factor for PD and related neurodegenerative diseases3. Brain autopsy studies of aged people without a PD diagnosis have reported brain and spinal cord atrophy; decreases in the volume of gray matter; accumulation of pathological protein aggregates, such as amyloid plaques, neurofibrillary tangles, and Lewy bodies; and inclusions of TAR DNA-binding protein 43 and senescent cells5.

Cellular senescence is defined by irreversible cell cycle arrest and resistance to apoptotic death, often accompanied by changes in cell metabolism, such as changes in protein synthesis, glycolysis, fatty acid oxidation, reactive oxygen species (ROS) generation, and the acquisition of senescence-specific phenotypes. In addition, senescent cells undergo morphological alterations to become larger and irregular due to cytoskeletal rearrangements and changes in cell membrane composition7. Senescence is often triggered by irreparable DNA damage, which accumulates with aging. This accumulation may be responsible for the pathogenesis of age-related diseases, such as neurodegenerative diseases.

DNA double-strand breaks (DSBs) arise infrequently, on the order of 10–50 per cell per day20. However, they are highly toxic and mutagenic, as chromosomal breakage may result in loss of genetic integrity9. Two major pathways are involved in repairing DSBs: homologous recombination (HR) and non-homologous end joining (NHEJ). HR is the predominant DSB repair pathway during embryonic development, meiotic recombination, replication fork stabilization, one-ended DSB repair, and two-ended DSB repair of the late S/G2 phase of the cell cycle12. The NHEJ pathway is the main pathway in mammalian cells acting on DSBs through all cell cycle phases, including the G1 interphase12. The primary DSB repair pathway for postmitotic neurons is NHEJ, although HR is important for other proliferating cells in the brain16. The initiation of DSB repair involves the formation of the MRE11-RAD50-NBS1 (MRN) complex, a central DSB sensor18. MRE11 plays an important role in the regulation of the choice of DSB repair pathway19. DNA end resection of DSBs leads to the loading of the RPA complex and RAD51 for HR repair, whereas blocking resection leads to NHEJ repair19. The binding of the Ku70/Ku80 heterodimer to DSB sites blocks resection, which leads to NHEJ repair.

DNA damage activates a series of cellular pathways called DNA damage responses (DDRs). DSBs are the strongest triggers for such reactions. In addition to the direct restoration of DNA integrity, DDRs also activate several cellular processes, such as cell cycle checkpoints, gene expression, and protein turnover. p53/p21 and p16INK4a/pRB are the two primary regulators of these responses. Poor execution of DDRs may trigger cellular senescence; persistent DDRs may lead to age-related neurodegenerative diseases.

This study aimed to assess the effects of α-synuclein on DDRs and their connections to cellular senescence. Our results showed that α-synuclein in human neuronal cells increased DSBs with impaired DNA repair. α-Synuclein-induced impairment of DSB repair leads to increased levels of senescence markers. These results suggest that α-synuclein induces cellular senescence with DSB accumulation via impaired DDRs.

The following primary antibodies were used: α-synuclein monoclonal antibody (BD Biosciences); γH2AX, phospho-ATM, ATM, p53, H3, poly(ADP-ribose) polymerase (PARP), ERCC1, XRCC1, MRE11, Rad51, Ku80, DDB2, p16, p73, and H3K9me3 antibodies; H2AX, phospho-p53, p21, and Ku70 antibodies; α-tubulin and β-actin antibodies; and 53BP1 antibody. Retinoic acid, poly-L-lysine, and glutathione were obtained from Sigma–Aldrich. NE-PER™ Nuclear and Cytoplasmic Extraction Reagents were obtained from Thermo Fisher Scientific.
X-gal was obtained from Duchefa Biochemie.

Total RNA was amplified and purified using the TargetAmp-Nano Labeling Kit for Illumina Expression BeadChip. Detection of the array signal was carried out using Amersham Fluorolink Cy3 Streptavidin according to the bead array manual. The arrays were scanned using a bead array reader confocal scanner. The quality of the hybridization and overall chip performance were monitored manually by visual inspection of internal quality control checks and raw scanned data. Raw data were extracted using the software provided by Illumina (Genome Studio v2011.1 and Gene Expression Module v1.9.0). Array probes were logarithm transformed and normalized using the quantile method. Filtered reads were aligned to the human reference genome (hg38 assembly) using the STAR mapper. The mapped reads were counted and converted to TPM values using RSEM. For differentially expressed gene analysis, fold change and statistical significance were calculated using DESeq. Gene set enrichment analysis (GSEA) was performed in preranked mode. This dataset was deposited in the National Center for Biotechnology Information (NCBI) database (accession no. GSE149559).

DAVID, Metascape, and Enrichr were used to infer the biological functions of the genes associated with the peaks. Default parameters were used. To evaluate global gene expression profiles, the RRHO test was performed on two sets of gene expression comparisons (10.3390/ijms20236098). In this algorithm, genes were ranked according to their differential expression between two sample groups, and then these ranked gene expression profiles were iteratively assessed for overlap.

ChIP assays were performed according to the instructions provided by Upstate Biotechnology. For each assay, 50 μg of DNA was sheared by sonication (DNA fragment size 200–500 bp) and precleared with protein A magnetic beads (Upstate Biotechnology #16-661).
Then, 50 μg of DNA was precipitated using γH2AX and p53 antibodies. After immunoprecipitation (IP), the recovered chromatin fragments were subjected to sequencing. The library was constructed using the NEBNext® Ultra™ DNA Library Prep Kit. Briefly, the ChIPed DNA was ligated to adaptors. After purification, PCR was performed on the adaptor-ligated DNA with an index primer for multiplex sequencing. The library was purified using magnetic beads to remove all the reaction components. The size of the library was assessed using an Agilent 2100 Bioanalyzer. High-throughput 100 bp paired-end sequencing was performed using a HiSeq 2500 system. The dataset was submitted to the NCBI Gene Expression Omnibus database (accession no. GSE149558).

The sequenced reads were trimmed using BBMap (BBDuk) and aligned to the human reference genome (hg38 assembly) using Bowtie 2. HOMER (findPeaks) was used to identify p53 binding sites (peaks) or γH2AX-enriched sites compared with the corresponding input samples in control and SNCA-overexpressing cells, with a false discovery rate-adjusted cutoff value of 0.001. The identified peaks were annotated using a known gene database (RefSeq). The annotated peaks were categorized into two groups (promoters and enhancers). Peaks located between −1 and +0.1 kb from the transcription start site were defined as promoter peaks, while the remaining peaks were defined as enhancer peaks. Superenhancer regions were also identified using HOMER (findPeaks with the "super" option). The read coverage tracks for visualization were constructed using HOMER (UCSC file) with default options. Motif analysis of γH2AX-bound sequences depending on genomic location (promoter and enhancer) was performed using HOMER (findMotifsGenome.pl) with the default option.
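The promoter/enhancer split used here (−1 kb to +0.1 kb around the TSS) is a simple interval test. A sketch of the classification rule, with hypothetical peak coordinates and strand handling simplified to plus-strand genes:

```python
def classify_peak(peak_center, tss, upstream=1000, downstream=100):
    """Label a peak 'promoter' if its center lies between 1 kb upstream
    and 0.1 kb downstream of the transcription start site (plus strand);
    otherwise label it 'enhancer'."""
    offset = peak_center - tss
    return "promoter" if -upstream <= offset <= downstream else "enhancer"

# Hypothetical peaks around a gene with its TSS at position 1,000,000:
tss = 1_000_000
labels = [classify_peak(p, tss) for p in (999_500, 1_000_050, 1_005_000)]
# -> the first two fall in the promoter window, the third is an enhancer
```

In the actual pipeline this annotation is done by HOMER against RefSeq gene models, which also handles minus-strand genes and overlapping transcripts.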
The human alpha-synuclein (A53T) transgenic line G2-3 was described previously. B6.Cg-tg(Prnp-SNCA*A53T)23Mkle/J hemizygous mice overexpress mutant α-synuclein in the brain at levels approximately sixfold higher than the level of endogenous mouse α-synuclein. Three- and 8.5-month-old mice expressing A53T and control C57BL/6J mice were purchased from Jackson Laboratory. All mice were housed in pathogen-free facilities under 12-h light/12-h dark cycles with ad libitum access to food and water. All experimental animals were handled in accordance with the animal care guidelines of Konkuk University (IACUC KU16067-2).

The human neuroblastoma cell line SH-SY5Y (ATCC CRL-2266) was maintained and differentiated as described previously22. Differentiated SH-SY5Y cells were transduced with recombinant adenoviral vectors, as previously described23.

Cell extracts were obtained as described previously24. Briefly, cells were rinsed with ice-cold phosphate-buffered saline (PBS), and ice-cold extraction buffer (PBS/1% Triton X-100/protease inhibitor cocktail/phosphatase inhibitor cocktail) was added. After incubating on ice for 10 min, the cell extracts were centrifuged at 16,000 × g for 10 min, and the supernatants and pellets were reserved separately for further analysis.

Nuclear extracts were prepared using the NE-PER Nuclear and Cytoplasmic Extraction Reagent Kit according to the manufacturer's instructions. Briefly, cells were rinsed with ice-cold PBS, and cytoplasmic extraction buffer was added. Cells were collected using a cell scraper. After incubating on ice for 10 min, the cell extract was centrifuged at 16,000 × g for 10 min. The supernatant fraction (cytoplasmic extract) was transferred to a prechilled tube. The insoluble pellet fraction was resuspended in nuclear extraction reagent and vortexed vigorously. The nuclear and cytoplasmic extracts were stored at −80 °C until further analysis.
Western blotting was performed as described previously. Chemiluminescence detection was performed using a FUJIFILM Luminescent Image Analyzer LAS-3000 and a GE Healthcare Amersham Imager 680. Images were analyzed using the Multi Gauge (v3.0) software.

The cells were trypsinized and divided into two tubes. Accustain solution T, containing detergent and propidium iodide (PI), was added to one tube to determine the total number of cells. Accustain solution N, containing PI but no detergent, was added to the other tube to label the damaged cells. The cells from both tubes were counted using an ADAM cell counter.

Total RNA was extracted using the RNeasy Mini Kit. The RNA extract was reverse transcribed using the High Capacity cDNA Reverse Transcription Kit. Quantitative real-time PCR was performed on a LightCycler 480 II using LightCycler 480 SYBR Green I Master Mix, as recommended. The primers used were as follows: DDB2 forward: 5′-AAACCCAGAAGACCTCCGAG-3′, DDB2 reverse: 5′-ACATCTTCTGCTAGGACCGG-3′, BTG2 forward: 5′-AGGGAACCGACATGCTCC-3′, BTG2 reverse: 5′-GGGAAACCAGTGGTGTTTGT-3′, RPS27L forward: 5′-ACTACATCCGTCCTTGGAAGAG-3′, RPS27L reverse: 5′-GCTGAAAACCGTGGTGATCT-3′, p21 forward: 5′-CACCGAGACACCACTGGAGG-3′, p21 reverse: 5′-GAGAAGATCAGCCGGCGTTT-3′, GAPDH forward: 5′-GAGTCAACGGATTTGGTCGT-3′, and GAPDH reverse: 5′-TGGAAGATGGTGATGGGATT-3′; (tissue extract) p16 forward: 5′-TTCTTGGTGAAGTTCGTGCG-3′, p16 reverse: 5′-GCACCGTAGTTGAGCAGAAG-3′, p21_2 forward: 5′-ACAAGAGGCCCAGTACTTCC-3′, p21_2 reverse: 5′-GTTTTCGGCCCTGAGATGTT-3′, p53 forward: 5′-TGCTCACCCTGGCTAAAGTT-3′, and p53 reverse: 5′-AATGTCTCCTGGCTCAGAGG-3′.
The cells were rinsed with ice-cold PBS and fixed in 4% paraformaldehyde (PFA) in PBS. The cells or tissues were then incubated overnight at 37 °C (without CO2) in a freshly prepared X-gal staining solution. The stained samples were rinsed with ice-cold methanol, air-dried, and then imaged using a digital camera.

The cell staining procedure was described previously25. Primary antibodies were diluted in blocking solution and added to the cells. After washing in PBS, the cells were incubated with Alexa488, Cy2, rhodamine red-X, or Alexa647 fluorescent dye-conjugated secondary antibodies and then washed again in PBS. The nuclei were stained with TO-PRO-3 dye or Hoechst 33342 (Sigma) and then mounted under coverslips using ProLong™ Gold Antifade reagent (Thermo Fisher Scientific). The stained cells were observed under an Olympus FV1000 confocal laser-scanning microscope or a Zeiss LSM 900 with Airyscan 2.

Mouse brain tissue was kept in 4% PFA in cold 0.1 M phosphate buffer (pH 7.4) for 2 days, followed by incubation in 30% sucrose solution. For immunostaining, 40-μm coronal sections were cut on a sliding microvibratome. The details of the immunohistochemistry procedures have been described elsewhere26. Briefly, 40-µm-thick floating brain sections were quenched with 0.3% H2O2 and then blocked with 4% BSA in PBST (0.1% Triton X-100). The samples were incubated overnight at 4 °C with primary antibodies against mouse anti-phospho-α-synuclein and rabbit anti-γH2AX. After washing with PBST, the brain sections were incubated with biotinylated secondary antibodies and treated with avidin–biotin peroxidase complex. Then, 3,3′-diaminobenzidine (DAB)-developed sections were observed under a ZEISS AX10 microscope. All samples were evaluated by optical density analysis using the ImageJ program (NIH) with correction for background signal levels.
Details of the immunofluorescence procedures are provided elsewhere. Briefly, 40-µm-thick floating brain sections were quenched with 0.3% H2O2 and then blocked with 4% BSA in PBST (0.1% Triton X-100). Samples were incubated overnight at 4 °C with primary antibodies against mouse anti-phospho-α-synuclein, rabbit anti-γH2AX, and mouse anti-NeuN. After washing in PBST, the brain sections were incubated with Alexa488- or rhodamine red-X-conjugated secondary antibodies and then washed again with PBST. Stained sections were mounted in a fluorescence mounting medium containing DAPI. The stained samples were observed under a Carl Zeiss LSM 700 confocal laser-scanning microscope.

SH-SY5Y cells were transduced with adeno/α-syn or adeno/lacZ for 3 days. The sections were prepared as described previously27.

All experiments were repeated at least three times. The values are expressed as the mean ± S.E.M. Null hypotheses of no difference were rejected if P values were <0.05. The data were analyzed using repeated-measures one-way analysis of variance with posttests using the Prism 9 software (*P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001).

Downregulated genes were enriched in meiosis and cytokinesis functions (P < 0.05). In contrast, γH2AX-enriched genes were slightly more often downregulated (five upregulated genes and eight downregulated genes) (Fig.). To verify whether p53 expression increased in α-synuclein-expressing cells, recombinant α-synuclein viral vector (adeno/α-syn) or control vector (adeno/lacZ) was transduced into differentiated SH-SY5Y cells (Fig.). The p53/p21 pathway is involved in several cellular activities that lead to cell death or survival. These include apoptosis, cell cycle arrest, repair, and senescence.
SH-SY5Y cells were transduced with adeno/α-syn or adeno/lacZ for 3 days and stained for H3K9me3 (Fig. ) [33]. Transmission electron microscopy showed that the number of abnormal mitochondria was increased in α-synuclein-expressing cells (red arrows); control and lacZ-expressing cells with normal mitochondria are indicated by blue arrows (Fig. 5) [37]. The number of 53BP1 nuclear foci was also significantly increased in α-synuclein-expressing cells, and DSBs were upregulated 24 h after α-synuclein expression but slowly decreased thereafter, DDB2, and BTG2. Cellular senescence and DNA DSBs were observed in cells overexpressing α-synuclein. Specifically, the NHEJ DNA repair pathway was partially activated, with a critical component of the pathway being reduced, resulting in a faulty DNA repair system. Histopathological analyses of α-synuclein transgenic mice showed increases in the levels of phospho-α-synuclein and DNA DSBs, as well as markers of cellular senescence, at the presymptomatic stage, suggesting that cellular senescence and DNA damage are early events in synucleinopathy. The accumulation of DNA damage has been previously reported in α-synuclein-overexpressing cells and in several synucleinopathy mouse models [41]. These cells and tissues retained unrepaired DNA DSBs that were associated with DDR markers. Therefore, persistent DNA damage is probably a consequence of incomplete DDRs. Consistent with this idea, the levels of the DNA repair proteins Ku70 and Mre11 were decreased in aging human lymphocytes [42]. Neurons with prolonged DDRs showed typical characteristics of cellular senescence, such as mitochondrial dysfunction, production of ROS, and metabolic abnormalities. Neurons are particularly vulnerable to DNA damage because they are postmitotic and highly metabolically active. A large proportion of neurons in several brain areas of aged mice showed severe DNA damage.
Interestingly, the senescent-like phenotypes of a mouse model of premature senescence were rescued by deleting the p21 gene, a key signal transducer in cellular senescence [43]. These results suggest that senescence-like phenotypes can be induced by severe DNA damage in mature neurons. The mechanism by which DNA damage causes cellular senescence remains unclear. However, accumulating evidence suggests that these two phenomena are strongly associated. DNA damage accumulates with aging in both senescent cells and aged mammalian tissues [45]. Impairment of the base excision repair (BER) pathway, especially of the mitochondrial BER pathway, can cause various neurodegenerative disorders, such as Alzheimer's disease [46]. Ataxia-oculomotor apraxia-1 (AOA1) and spinocerebellar ataxia with axonal neuropathy-1 (SCAN1) are associated with SSB repair defects [48]. AOA1 is associated with mutations in a novel human gene, aprataxin [50], which encodes nucleotide hydrolases/transferases [51]. The encoded protein may play a role in single-stranded DNA repair through its nucleotide-binding activity and its diadenosine polyphosphate hydrolase activity. A mutation in the DNA repair protein tyrosyl-DNA phosphodiesterase 1 (TDP1), which can repair abortive SSBs created by topo1, is associated with SCAN1 [52]. Likewise, defects in DSB repair genes due to genetic mutations are also associated with neurological diseases, such as ataxia-telangiectasia (AT), AT-like disorder (MRE11 gene mutations), and Nijmegen breakage syndrome (NBS1 gene mutations) [9]. These genes are essential for the recognition of DSBs at the initial stage of DSB repair. These findings suggest that the human nervous system is vulnerable to DNA damage and that impairment of DNA repair pathways can cause neurological diseases. Prolonged or irreparable DNA damage leads to various human diseases that are characterized by genomic instability, including neurodegenerative diseases [53].
The precise mechanism by which α-synuclein overexpression causes defects in DDRs remains unknown. The levels of MRE11 were decreased in cells overexpressing α-synuclein. Mre11 is the nuclease component of MRN/X, one of the primary complexes responsible for recognizing and repairing DSBs as well as transducing DSB signals in eukaryotes. Elucidation of the pathway leading to the reduction of Mre11 would reveal the mechanism by which DNA damage accumulates during synucleinopathy. Our study showed that the overexpression of α-synuclein leads cells to initiate DDRs. However, in this case, the DDRs are not fully functional, which leads to the accumulation of DNA damage, particularly DSBs. Lewy body-containing neurons in the brain tissues of human patients showed increased levels of DNA DSBs [53]. These authors also suggested that α-synuclein binds double-stranded DNA and facilitates NHEJ repair. Interestingly, Schaser et al. recently showed that the removal of α-synuclein in human cells and mice resulted in increased levels of DNA DSBs after bleomycin treatment and a reduced ability to repair DNA damage [54]. Taken together, these findings indicate that both overexpression and deficiency of α-synuclein cause impaired DNA repair and accumulation of DNA damage. It remains unclear how these seemingly contradictory observations can be explained: either too much or too little α-synuclein leads to increased DNA damage. To maintain functional DNA repair systems, it may be essential to keep α-synuclein levels within a specific range. Alternatively, these results suggest that DNA damage may not solely be the result of the cell-autonomous actions of α-synuclein but may also be induced by a non-cell-autonomous mechanism. α-Synuclein and its aggregate form can be secreted by neurons [55]. These secreted forms of α-synuclein may be the culprits behind the non-cell-autonomous actions of this protein, affecting neighboring neurons and glia [57].
Neurons that were exposed to extracellular α-synuclein showed signs of apoptosis [58], while glial cells treated with α-synuclein showed inflammatory responses [57]. The latter is particularly interesting because DNA damage can induce inflammatory responses in innate immune cells [60]. Our study paves the way for further studies on the mechanism of non-cell-autonomous neuronal degeneration and glial inflammation triggered by extracellular α-synuclein. In our study, DNA DSBs accumulated in both phospho-α-syn-positive and phospho-α-syn-negative cells in α-synuclein tg mice. Moreover, DNA DSBs have been found in both nonneuronal cells and neurons. These observations may indicate the occurrence of senescence-induced senescence, in which senescent phenotypes spread to neighboring cells. In conclusion, we propose that the accumulation of DNA damage and cellular senescence may be key components in the pathogenesis of PD and other α-synuclein-related neurological diseases. DNA damage appears to accumulate due to incomplete and hence impaired activation of DNA repair pathways. How α-synuclein impairs the DNA repair system is a critical question arising from the current study. Other important questions include how DNA damage leads to cellular senescence in neurons and what the consequences of senescence processes are in terms of neuronal function and viability. Pursuing these questions could ultimately lead to an understanding of the pathogenic mechanism of synucleinopathies. Supplementary Figures"} {"text": "Since the advent of TAVR, the transapical surgical approach has been affirmed as a safe and effective alternative access for patients with unsuitable peripheral arteries. With the improvement of devices for the transfemoral approach and the development of other alternative accesses, the number of transapical procedures has decreased significantly worldwide.
The left ventricular apex, however, has proved to be a safe and valid alternative access for various other structural heart procedures such as mitral valve repair, mitral valve-in-valve or valve-in-ring replacement, transcatheter mitral valve replacement (TMVR), transcatheter mitral paravalvular leak repair, and thoracic endovascular aortic repair (TEVAR). We review the literature and our experience of various hybrid transcatheter structural heart procedures using the transapical surgical approach and discuss the pros and cons. Transcatheter Aortic Valve Replacement (TAVR) was introduced for the treatment of aortic valve stenosis in 2002 by Cribier. In recent years, unfavourable outcomes have been reported, and the reduction of sheath calibres for the transfemoral approach, together with the development of other alternative endovascular procedures, has led to a fall in the number of transapical TAVR procedures performed worldwide. Despite this, some researchers have developed consistent experience with left ventricular apex manipulation and reported satisfactory results in large series of transapical TAVR. The left ventricular apex proved to allow safe access and has great advantages: a short distance to, good accessibility, and good coaxiality with multiple heart structures, and thus good stability of the devices during implantation. For all these reasons, the transapical approach can thus also be a useful tool for multiple transcatheter structural heart procedures other than TAVR: transcatheter mitral valve-in-valve and valve-in-ring replacement, mitral valvuloplasty, transcatheter mitral valve replacement (TMVR), paravalvular leak occlusion, and thoracic endovascular aortic repair (TEVAR).
Left ventricular outflow tract pseudo-aneurysm occlusion is also described. Transapical TAVR was introduced into clinical practice to treat patients deemed at high surgical risk for standard surgical aortic valve replacement and also at high risk of vascular complications for the transfemoral approach due to peripheral artery occlusive disease, calcifications, or excessive tortuosity. Transapical TAVR is a hybrid procedure in which arterial access for transcatheter aortic valve implantation is obtained by surgical isolation of the left ventricular apex through a standard left anterolateral 5–7-cm-long thoracotomy in the fifth or sixth intercostal space. After opening the chest, the correct location of the apex is identified with the "finger test": under transoesophageal echocardiography (TEE) guidance, the left ventricular apex is pushed with the finger to determine, in the four-chamber view, the ideal place to insert the sheath in order to obtain the best possible coaxiality. We place two orthogonal full-thickness U-shaped 2–0 polypropylene stitches on the muscular portion of the myocardium to assure correct haemostasis after sheath removal. Since the early days of TAVR, the transfemoral approach has been considered the treatment of choice, and patients undergoing transapical TAVR have concomitant severe peripheral artery occlusive disease and tend to be sicker, with a higher surgical risk profile due to significant comorbidities. The "transfemoral first" approach has become more widespread over the years, and several randomised trials reported a consistent gap in terms of survival and adverse outcomes in favour of transfemoral over transapical TAVR [7,8]. Considering that patients submitted to transapical TAVR are usually affected by significant peripheral artery occlusive disease, they tend also to be sicker due to other significant comorbidities.
Over time, however, many centres gained valuable experience in managing the left ventricular apex and, thanks to improvements in materials, were able to report satisfactory results for the transapical approach in patients unsuitable for the transfemoral procedure. In the early days, larger sheaths were used, and access-site complications were reported. In 2011, Bleiziffer et al. reported a rate of 7% of severe apical bleeding and 1% (2 patients) of apical pseudo-aneurysm, in one case requiring surgical revision, in a series of 143 patients undergoing transapical TAVR. Our TAVR programme with a "transfemoral first" approach was started in 2009, and to date the multidisciplinary team has remained the same. Our preliminary results were collected in the Italian Registry of Transapical Aortic Valve Implantation and published by D'Onofrio and colleagues [4]. According to the AHA/ACC focused update of the 2014 guidelines, TAVR should be reserved for inoperable/high-risk patients (Class I recommendation) and intermediate-risk patients (Class IIa recommendation) as defined by the STS score. The guidelines also state that a multidisciplinary heart team approach is mandatory to define the risk profile of each patient and subsequently select the most appropriate procedure. Transcatheter mitral procedures can be performed with a venous transseptal endovascular antegrade approach, or with a direct retrograde approach through the left ventricular apex isolated with a standard anterolateral minithoracotomy. As described for TAVR, in the transapical approach, the distance from the left ventricular apex to the mitral annulus is very short.
Together with the coaxiality of the catheter with the mitral annulus, this guarantees easy access and high stability during implantation. After initial experience with transapical TAVR, surgeons started to report transcatheter transapical mitral valve replacement for degenerated mitral bioprostheses and failed mitral valve repairs in which an annuloplasty prosthetic ring was used to reinforce the repair during the first intervention [22,23]. Some authors report similar early and one-year clinical results among patients undergoing transseptal and transapical mitral transcatheter ViV procedures and open surgical redo mitral valve replacement. There have been attempts to treat mitral stenosis according to the same principles and with the same devices as TAVR. Systematic reviews of results reported in published studies and international conference presentations conclude that TMVR is a feasible alternative to open mitral replacement in high-risk patients [31]. Promising results were reported by Muller et al. in the global feasibility trial of the Tendyne Mitral Valve System. Further studies with larger series of patients and other devices are necessary to better evaluate and validate the procedure. Several devices for transcatheter mitral valve repair are nowadays commercially available. Most of them use the transvenous transseptal approach. NeoChord DS1000 is a device that allows artificial chord implantation on prolapsing mitral valve leaflets using the transapical approach on a beating heart without cardiopulmonary bypass. The surgical technique is described step by step by Colli et al. Ahmed et al. systematically reviewed the published literature on transapical beating-heart mitral valve repair with NeoChord. Transcatheter endovascular paravalvular leak (PVL) repair has become a reasonable alternative to open surgical reintervention for high-risk patients.
The procedure is performed with either arterial or venous femoral access, but in any case, it is technically demanding. The transapical approach offers a very easy alternative, although it involves surgical minithoracotomy and general anaesthesia. The transapical route guarantees direct access to the mitral annulus and easy engagement of PVLs, even those in the postero-medial position, for which transvenous and transarterial approaches are particularly challenging. Some surgeons find that transapical access is associated with lower procedural and fluoroscopy times than other approaches. In 2014, Taramasso et al. reported the results of 139 patients undergoing open surgical (122 patients) and hybrid transapical transcatheter (17 patients) mitral paravalvular leak occlusion. Zorinas et al. reviewed their single-centre experience of 19 patients undergoing transapical mitral PVL closure with the novel, specifically designed Occlutech PLD Occluder device. In 2020, Onorato et al. reported midterm results of 136 patients undergoing aortic or mitral PVL transcatheter closure at 21 sites in 9 countries with the same device (Occlutech PLD Occluder). TEVAR is nowadays the treatment of choice for many acute aortic syndromes and elective aortic pathologies. Deployment of stent grafts is usually retrograde, through the femoral and iliac arteries. Delivery sheath and catheter calibres range from 14 to 24 French and require large minimal inner arterial diameters. Thoracic pathologies and acute aortic syndromes often affect elderly patients with peripheral arterial occlusive disease, which precludes standard retrograde endovascular treatment. Alternative approaches, like the common iliac arteries or the infrarenal abdominal aorta, are often required.
After reporting the feasibility of the technique in a pig model, MacDona Our transapical TAVR program started in 2010, and we described our first case of a successful transapical TEVAR in a patient admitted for an acute aortic syndrome in 2012. Antegrade delivery of the thoracic stent graft through a left anterolateral minithoracotomy was performed. A Dry-Seal 22-French sheath was inserted into the left ventricular apex. The aortic valve was first inspected by transthoracic and transoesophageal echocardiography and presented no stenosis or regurgitation. Exclusion of the lesion was achieved, and the patient was discharged on postoperative day 15. In 2018, we then reported a series of five patients affected by acute aortic syndromes undergoing transapical TEVAR: two patients presented with a post-traumatic aortic injury with signs of impending rupture, two patients with contained aneurysmatic aortic rupture, and one patient with symptomatic PAU with a large pseudo-aneurysm. We also performed an elective transapical TEVAR as completion of a previous complete aortic arch replacement using the Elephant Trunk (ET) technique in a patient with extended aneurysmatic disease of the aortic arch and descending aorta, peripheral artery occlusive disease, and severe aortic tortuosity. Other researchers have described percutaneous transapical access to create a rail-wire support for a very complex retrograde TEVAR. The transapical antegrade approach proved to be a feasible option, for experienced multidisciplinary teams, in patients with complex thoracic aorta pathologies deemed unsuitable for standard retrograde TEVAR due to concomitant occlusive disease of the peripheral vessels.
Considering the large size of the sheaths used for stent graft delivery, preoperative aortic valve assessment with echocardiography is mandatory to exclude calcific aortic disease that could determine an acute impairment of haemodynamic conditions during the procedure. Transapical closure of left ventricular outflow tract (LVOT) pseudoaneurysm is also described in the literature [49]. Although in recent years the number of transapical procedures has decreased worldwide thanks to the reduction of sheath calibres in the standard transfemoral approach, the transapical approach continues to be a valid alternative option for procedures normally performed using transfemoral access, such as TAVR and TEVAR, in elderly patients with unsuitable peripheral arteries who are deemed to be at high surgical risk for standard surgical procedures. The close vicinity and coaxiality of the left ventricular apex with several heart structures, such as the mitral valve and the LVOT, make the transapical approach a safe and easy option that guarantees easy access and great stability and accuracy for several structural procedures. Possible left anterior descending coronary artery damage, apical bleeding, and late left ventricular apical pseudoaneurysm formation are potentially serious complications of transapical procedures. For this reason, considerable experience in ventricular apex manipulation as well as in complication management, and a multidisciplinary heart team evaluation, are mandatory to properly evaluate the patient's risk profile, select the appropriate approach, and safely perform a transapical procedure. Further improvements in devices and randomised controlled trials comparing TF vs. TA and TA vs.
standard surgical procedures are necessary to standardise and validate the transapical approach in the setting of different structural heart procedures. We consider the transapical approach a valid and essential tool in the portfolio of a modern heart valve centre."} {"text": "Objective: Since its outbreak, the rapid spread of COrona VIrus Disease 2019 (COVID-19) across the globe has pushed the health care system in many countries to the verge of collapse. Therefore, it is imperative to correctly identify COVID-19-positive patients and isolate them as soon as possible to contain the spread of the disease and reduce the ongoing burden on the healthcare system. The primary COVID-19 screening test, RT-PCR, although accurate and reliable, has a long turnaround time. In the recent past, several researchers have demonstrated the use of Deep Learning (DL) methods on chest radiography (such as X-ray and CT) for COVID-19 detection. However, existing CNN-based DL methods fail to capture the global context due to their inherent image-specific inductive bias. Methods: Motivated by this, in this work, we propose the use of vision transformers for COVID-19 screening using X-ray and CT images. We employ a multi-stage transfer learning technique to address the issue of data scarcity. Furthermore, we show that the features learned by our transformer networks are explainable. Results: We demonstrate that our method not only quantitatively outperforms the recent benchmarks but also focuses on meaningful regions in the images for detection (as confirmed by radiologists), aiding not only in accurate diagnosis of COVID-19 but also in localization of the infected area. The code for our implementation can be found here - https://github.com/arnabkmondal/xViTCOS. Conclusion: The proposed method will help in timely identification of COVID-19 and efficient utilization of limited resources.
I.A. The novel COronaVIrus Disease 2019 (COVID-19) is a viral respiratory disease caused by Severe Acute Respiratory Syndrome COronaVirus 2 (SARS-CoV2). The World Health Organization (WHO) declared COVID-19 a pandemic on 11 March 2020.
B. Motivated by the success of Deep Learning in diagnosing respiratory disorders
While there has been a large body of literature on the use of Deep Learning for COVID-19 detection, most of it is based on Convolutional Neural Networks (CNNs). The main contributions of this work are:
1) We propose a vision transformer based deep neural classifier, xViTCOS, for screening of COVID-19 from chest radiography.
2) We provide explainability-driven, clinically interpretable visualizations in which the patches responsible for the model's prediction are highlighted on the input image.
3) We employ a multi-stage transfer learning approach to address the need for large-scale data.
4) We demonstrate the efficacy of the proposed framework in distinguishing COVID-19-positive cases from non-COVID-19 pneumonia and normal controls using both chest CT scan and X-ray modalities, through several experiments on benchmark datasets.
II.A. Chest Computed Tomography (CT) imaging has been proposed as an alternative screening tool for COVID-19 infection
The work in
B. Although chest CT has higher sensitivity as compared to RT-PCR
In
C. Images can be naively represented using a sequence of pixels for analysis using transformers, but that would lead to huge computational expenses with a quadratic increase in costs. This has led to a number of approximations. For example,
III. Unlike the existing methods that incorporate CNNs, we propose a vision transformer (ViT)
A. A Vision Transformer
B. Unlike CNN-based models that impose inherent biases such as translation invariance and a local receptive field, the vision transformer (ViT)
C. A domain and a task are the two main components of a typical learning problem.
For the specific case of a supervised classification problem, the domain,
Given a source domain,
In the current problem, the target domain consists of chest radiography image data, i.e., for xViTCOS-CXR, the target data is the COVID-19 CXR dataset, and for the xViTCOS-CT model, the target data consists of the COVIDx-CT-2A dataset.
The first source domain
The underlying distribution of clinical radiographic images is vastly different from an unconnected set of natural images like those in ImageNet, and the distributional divergence between the two domains is very high. Hence, in cases where the target dataset is of insufficient capacity, the pre-trained ViT model might find it highly difficult to bridge the domain shift between the learned source domain and the unseen target domain. However, with a sufficient number of training examples available from the target domain, the ViT model can overcome the gap between these two domains. Keeping this in mind, an intermediate stage of knowledge transfer is used in this paper to train our proposed model, depending on the size of the target-domain training data. The primary goal of this stage of transfer learning is to help the ViT model, pre-trained on a generic image domain
With the COVIDx-CT-2A dataset
D. A number of Vision Transformer architectures have been proposed in the literature. In this paper, we have tested our algorithm on architectures proposed in
1 https://github.com/faustomorales/vit-keras
While training xViTCOS-CXR, for the intermediate finetuning step using CheXpert
IV.A. Some of the existing works validate their methods using private datasets
1) To demonstrate the efficacy of xViTCOS-CT, we use the COVIDx CT-2A dataset
2) To benchmark xViTCOS-CXR against other deep learning based methods for COVID-19 detection using CXR images, we construct a custom dataset consisting of three cases: Normal, Pneumonia, and COVID-19.
Like in COVIDx-CXR-2
B.1) The COVIDx CT-2A dataset
2) In the compiled dataset, the chest X-ray images are of various sizes. To fix this issue, all the images were resized to a fixed size (refer to ). In addi
C. To quantify and benchmark the performance of xViTCOS, we compute and report Accuracy, Precision, Recall (Sensitivity), F1 score, Specificity, and Negative Predictive Value (NPV), as defined and compared in the standard literature.
1) The prowess of the proposed model can be further understood by examining the confusion matrix. The pro
2) The observations regarding the performance of xViTCOS-CXR compared to its contemporaries are on the same lines as those of xViTCOS-CT, if not better. In terms of classification accuracy, xViTCOS-CXR achieves an accuracy of 96%, outperforming the baseline methods by a considerable margin, as can be seen from
Analysing
D.1) To visually analyze how clustered the feature space is, we perform a t-SNE visualization of the penultimate layer's features for both models using the test splits. As can be seen from
2) For qualitative evaluation of xViTCOS, we present samples of CXR images and CT scans along with their ground truth labels and corresponding saliency maps along with the predictions in
Report corresponding to
V. In this study, we introduce a novel vision transformer based method, xViTCOS, for COVID-19 screening using chest radiography. We have empirically demonstrated the efficacy of the proposed method over CNN-based SOTA methods as measured by various metrics such as precision, recall, and F1 score. Additionally, we examine the predictive performance of xViTCOS utilizing explainability-driven heatmap plots to highlight the important factors in the predictive decisions it makes. These interpretable visual cues are not only a step towards explainable AI but might also aid practicing radiologists in diagnosis. We also analyzed the failure cases of our method.
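All of the metrics listed above can be derived from the counts of a binary confusion matrix. The following self-contained sketch shows the standard definitions; the counts are hypothetical, and this is an illustration of the formulas rather than the paper's evaluation code.

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)            # positive predictive value
    recall = tp / (tp + fn)               # sensitivity
    specificity = tn / (tn + fp)          # true negative rate
    npv = tn / (tn + fn)                  # negative predictive value
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "npv": npv, "f1": f1}

# Hypothetical counts for the COVID-19-positive class.
m = classification_metrics(tp=90, fp=10, fn=10, tn=90)
print(round(m["accuracy"], 2), round(m["f1"], 2))  # 0.9 0.9
```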
Thus, to enhance the effectiveness of diagnosis we suggest that xViTCOS be used to complement RT-PCR testing. In the next phase of this project, we aim to extend this work to automate the analysis of the severity of infection using vision transformers."} {"text": "Many living tissues achieve functions through architected constituents with strong adhesion. An Achilles tendon, for example, transmits force, elastically and repeatedly, from a muscle to a bone through staggered alignment of stiff collagen fibrils in a soft proteoglycan matrix. The collagen fibrils align orderly and adhere to the proteoglycan strongly. However, synthesizing architected materials with strong adhesion has been challenging. Here we fabricate architected polymer networks by sequential polymerization and photolithography, and attain adherent interface by topological entanglement. We fabricate tendon-inspired hydrogels by embedding hard blocks in topological entanglement with a soft matrix. The staggered architecture and strong adhesion enable high elastic limit strain and high toughness simultaneously. This combination of attributes is commonly desired in applications, but rarely achieved in synthetic materials. We further demonstrate architected polymer networks of various geometric patterns and material combinations to show the potential for expanding the space of material properties. Synthesizing architected materials with strong adhesion has been challenging. Here the authors fabricate architected polymer networks by sequential polymerization and photolithography, and attain adherent interface by topological entanglement. Many applications require soft materials to deform reversibly (high elasticity) and resist fracture (high toughness). High elasticity and high toughness, however, are often conflicting requirements in materials development. 
A highly elastic material loads and unloads without dissipating much energy, whereas a highly tough material resists the growth of a crack by dissipating a large amount of energy. Polymer networks have been synthesized to achieve either high elasticity or high toughness [10], but rarely both. The difficulty in simultaneously achieving elasticity and toughness is evident on the plane of elastic limit strain and toughness, and highly tough materials have low elastic limit strain (εe < 100%) [23]. A large area in the top right of the plane is empty. This negative correlation originates from the commonly used toughening strategy: sacrificial bonds [25]. When a crack advances in such a material, the polymer network transmits high stress from the crack front to the bulk of the material, breaking sacrificial bonds in the bulk, which toughens the material. The sacrificial bonds, however, lower the elastic limit strain. Soft polymer materials, such as elastomers and gels, are under intense development to enable emerging fields of biointegration and bioinspiration, including tissue engineering (Fig. ) [26]. An Achilles tendon has many parallel fascicles, and each fascicle consists of staggered collagen fibrils in a proteoglycan matrix [27]. The work of stretching to the critical strain partitions into dissipated and elastic parts, W(εc) = WD(εc) + WE(εc). For the soft gel, the hysteresis is small, WD(εc)/W(εc) = 1%, and the toughness comes from the rupture of the long polymer chains. For the hard gel, the hysteresis is pronounced, WD(εc)/W(εc) = 51%, and the toughness comes from both the dissipated work and the elastic work. The former mainly comes from the rupture of the short-chain network, and the latter mainly comes from the rupture of the long-chain network. The synergy amplifies the toughness of the hard gel relative to the soft gel.
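The hysteresis fractions WD(εc)/W(εc) quoted here can be estimated from a loading–unloading cycle: the area under the loading curve gives the total work W, the area under the unloading curve gives the recovered elastic work WE, and their difference gives the dissipated work WD. A minimal trapezoid-rule sketch, with made-up curve samples (not data from the study):

```python
def trapezoid_area(strain, stress):
    """Work per unit volume: area under a stress-strain curve."""
    return sum((stress[i] + stress[i + 1]) / 2 * (strain[i + 1] - strain[i])
               for i in range(len(strain) - 1))

def hysteresis_fraction(strain, load_stress, unload_stress):
    """WD/W: dissipated fraction of the total work of loading."""
    w_total = trapezoid_area(strain, load_stress)      # W = WD + WE
    w_elastic = trapezoid_area(strain, unload_stress)  # WE (recovered)
    return (w_total - w_elastic) / w_total             # WD / W

# Hypothetical cycle: unloading recovers half the stress at each strain.
strain = [0.0, 0.5, 1.0]
loading = [0.0, 2.0, 4.0]
unloading = [0.0, 1.0, 2.0]
print(hysteresis_fraction(strain, loading, unloading))  # 0.5
```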
For the TPN gel, the hysteresis is also pronounced, WD(εc)/W(εc) = 53%, and the toughness comes from both the dissipated work and the elastic work. At the crack tip, the soft phase deconcentrates stress over the hard phase. This stress deconcentration further amplifies the toughness of the TPN gel relative to the hard gel. We record the stress–strain curves of the three gels with and without a precut crack (Fig. ) [33]. The TPN gel demonstrates simultaneous improvement in elastic limit strain (seven times) and in toughness (three times) compared with the hard gel. The successful integration of high elasticity and high toughness needs to fulfill the following four requirements.
(i) The soft phase needs to have high strength. The soft and hard segments in the stripe-patterned TPNs alternate in series and are subjected to equal stress. The soft segment is strong enough to make the hard segment fracture preferentially.
(ii) The soft and hard phases must adhere strongly. Interfacial adhesion is the critical challenge for the design of composite materials. We attain strong interfacial adhesion by topological entanglement of polymer networks. The strong adhesion helps to smoothly transfer stress between the phases. In all the mechanical measurements of TPNs, we have not observed any interfacial fracture.
(iii) The aspect ratio r of the hard phase must be intermediate. When r = 0, the TPN will regress to the soft gel and lose high toughness. When r = ∞, the hard phase becomes continuous fibers, making the TPN lose high elastic limit strain. Therefore, the TPNs must have an intermediate aspect ratio to reconcile the elastic limit strain and toughness. With the increase in r, the elastic limit strain decreases, whereas the toughness first increases to a peak value at r = 4 and then decreases slowly (Γ = 4202 J/m2).
The decrease of εe is attributed to the decreased volume fraction ϕs of the soft segment in the composite column when r increases. The toughness Γ is mainly determined by the energy dissipation in the process zone, which is localized in the hard block at the crack front and is tuned via the crosslinker concentration CMBAA of the short-chain network. We prepare a series of TPNs with different modulus ratios Eh/Es of the hard phase to the soft phase, and find that they have similar critical strains εc; the low εe is attributed to the large swelling of the hard segment and thus the decreased volume fraction ϕs of the soft segment. These features greatly expand the space of material properties. The TPNs integrate diverse polymers and patterns by sequential polymerization, photolithography, and stacking. The TPNs resolve a longstanding challenge in fabricating architected materials with strong adhesion through topological entanglement. As a demonstration, we fabricate tendon-inspired TPNs that simultaneously achieve high elastic limit strain and high toughness. We further fabricate TPNs of various geometric patterns, material combinations, and multilayer stacks. We have fabricated TPNs using mask photolithography, but TPNs can also be fabricated using other methods, such as stereolithography. Unless otherwise mentioned, all chemicals were purchased from Aladdin and used without further purification. Monomers: acrylamide (AAm), 2-acrylamido-2-methylpropanesulfonic acid (AMPS), 2-hydroxyethyl acrylate (HEA), and N,N-dimethylamino ethylacrylate methyl chloride quaternary (DMAEA-Q). Crosslinker: N,N′-methylenebis(acrylamide) (MBAA). Initiators: 2-oxoglutaric acid (OA), 2-hydroxy-2-methylpropiophenone (HMPP), and 2-hydroxy-4′-(2-hydroxyethoxy)-2-methylpropiophenone (Irgacure 2959). Dyes: Alcian blue and Amaranth. Patterned photomasks were designed by us and fabricated by GX Photomask (Shenzhen) Co. Ltd. The first PAAm networks were polymerized under UV irradiation in a nitrogen atmosphere (N2) for 8 h.
The as-prepared PAAm gels were leached in deionized water, and were used as scaffolds. We submerged the scaffolds in an aqueous precursor solution containing 1 M AMPS, 4 mol% MBAA, and 1 mol% HMPP to an equilibrium state. We sandwiched the precursor-containing scaffolds between glass and polyethylene terephthalate (PET) release film. The PET film was covered by patterned masks and subjected to UV irradiation for several seconds in the air. The as-prepared PAAm/PAMPS gels were leached in deionized water to obtain patterned scaffolds. The patterned scaffolds were further immersed in the aqueous precursor solution containing 4 M AAm, 0.001 mol% MBAA, and 0.01 mol% OA to an equilibrium state, and then covered by two parallel glass plates. After the UV irradiation for 8 h in a nitrogen atmosphere, the TPN gels were leached in deionized water, and were ready for various characterization tests. TPNs were thus prepared through three-step sequential polymerization and photolithography (Fig.). The patterned TPN gels consisted of a soft phase (PAAm/PAAm) and a hard phase (PAAm/PAMPS/PAAm). The modulus ratio of the hard phase to the soft phase was tuned by changing the MBAA concentration in the second precursor. As control cases for the TPN gels, the soft gels were prepared without the second precursor, and the hard gels were prepared without photomasks. PHEA-based and PDMAEA-Q-based TPN gels were synthesized by replacing the PAMPS precursor. Note that the UV initiator Irgacure 2959 was used in the third precursor for PDMAEA-Q-based TPN gels. Bilayer TPN gels were obtained by stacking two layers of patterned scaffolds and then polymerizing the third PAAm network. To visualize the hard phase in TPN gels, the charged hard phase was selectively dyed by adsorbing dye molecules with the opposite charges. In the elastic tensile region, the strain ratio εs/εh was estimated through in situ observation of deformation.
Fully swollen stripe-patterned TPN gels were cut into a dumbbell shape by a standardized cutter with a width of 2 mm and a height of 12 mm. Both ends of the dumbbell-shaped samples were clamped and stretched at a constant velocity of 100 mm/min, by which the stress–strain curves were recorded. The tensile process of the stripe-patterned gels obeys the series model (isostress model), so the modulus ratio of the hard phase to the soft phase equals the inverse ratio of their strains. For fracture tests, samples with a 20 mm one-edge crack were used. The sample was clamped and stretched along the H direction at a constant velocity of 30 mm/min, by which the stress–strain curve was recorded. The strain at which the precut crack started to propagate was defined as the critical strain εc, and the integrated area under the stress–strain curve from 0 to εc was defined as the critical work of extension W(εc). The toughness was given by Γ = W(εc)H. Note that the stress was corrected by the initial effective sample width of 30 mm. With this correction, the stress–strain curve of the precut sample coincided with that of the uncut sample with the same geometry in the range of 0 ≤ ε ≤ εc. The sample included seven columns of hard phase (width 1.5 mm and height 6 mm) (Supplementary Fig.). Supplementary Information: Peer Review File; Description of Additional Supplementary Files; Supplementary Movies 1–5"} {"text": "Non-Hodgkin lymphoma is one of the most frequently occurring hematologic diseases in the world. Current drugs and therapies have improved outcomes for patients with lymphoma, but there is still a need to identify novel medications for treatment-resistant cases.
The aim of this review is to gather the latest findings on non-Hodgkin lymphoma in children, including genetic approaches, the application of therapy, the available treatment options, and resistance to medications. One of the most common malignancies is non-Hodgkin lymphoma, whose incidence is nearly 3% of all 36 cancers combined. It is the fourth highest cancer occurrence in children and accounts for 7% of cancers in patients under 20 years of age. Today, survival of individuals diagnosed with non-Hodgkin lymphoma is approximately 70%. Chemotherapy, radiation, stem cell transplantation, and immunotherapy have been the main methods of treatment, which have improved outcomes for many oncological patients. However, there is still a need to develop novel medications for those who are treatment resistant. Additionally, more effective drugs are necessary. This review gathers the latest findings on non-Hodgkin lymphoma treatment options for pediatric patients. Attention will be focused on the most prominent therapies such as monoclonal antibodies, antibody–drug conjugates, chimeric antigen receptor T cell therapy and others. Non-Hodgkin lymphoma (NHL) is one of the most frequently occurring hematologic diseases in the world. Overall survival (OS) rates in children, adolescents and young adults diagnosed with NHL increased to 80–90% during the last 30 years, giving the opportunity to investigate the long-term effects of prior chemo- and radiotherapy (RT). Children and adolescent NHL survivors are at significant risk of late mortality from secondary neoplasms, recurrent/progressive disease and chronic health conditions, and of late morbidity of multiple organ systems and poor health-related quality of life. These risks are similar to the long-term risks faced by childhood and adolescent acute lymphoblastic leukaemia (ALL), Wilms tumor and Hodgkin lymphoma (HL) survivors.
Novel approaches are required to reduce the burden of late morbidity and mortality in childhood and adolescent NHL survivors, and to identify patients at significantly increased risk of these complications. Nowadays, several therapeutic substances that work in various ways can be distinguished. These include immunomodulatory drugs, monoclonal antibodies (mAbs), immune checkpoint inhibitors (ICIs), antibody–drug conjugates (ADCs) and genetically modified chimeric antigen receptor (CAR) T cells. More detailed treatment of the different types of NHL in pediatric patients is summarized in the accompanying table. Monoclonal antibodies target specific markers on cancer cells and activate the patient's immune system. In this way, the therapy avoids widespread non-specific cytotoxic effects. Some mAbs, such as obinutuzumab and ofatumumab, did not improve outcomes in NHL patients. The following generation of mAbs are bispecific antibodies (BiAbs), which are derived from mAbs. They consist of two single-chain variable fragments (scFv) that target tumor-associated antigens. BiAbs engage the cells of the immune system to attack the indicated tumor cells. ADCs comprise a mAb connected to a small cytotoxic molecule. When attached to the cell-surface antigen of cancer cells, the ADC is internalized; next, the cytotoxin is released, causing cell cycle termination and cell apoptosis. The drug can also kill adjacent cells by "bystander killing" [40]. Polatuzumab vedotin, whose clinical trial (NCT02257567) ended in FDA approval for treatment of R/R DLBCL in adults, is the combination of a CD79b antibody and MMAE. CAR-T cell therapy uses T lymphocytes, which are engineered with synthetic chimeric antigen receptors (CAR). The CAR-T cell recognizes and then eliminates specific cancer cells, independently of major histocompatibility complex molecules.
In a recent study by Juan Du, an eight-year-old child suffering from R/R BL showed no clear response after being treated with CD19-specific CAR-T cells. After the attempt to treat the malignancy with CD22-specific CAR-T cells, the disease recurred. Subsequently, CD20 CAR-T cell treatment was applied, and that action resulted in the achievement of CR. What is more, CAR-T cell therapy targeting CD23 and the tyrosine kinase-like orphan receptor brought about promising results in the improvement of R/R NHL treatment in children. DNA methyltransferases (DNMTs) belong to a family of enzymes that catalyze the methylation of DNA. The conclusions of Han Weidong's study on children under 16 years of age inform us that decitabine-primed CAR-T cells can recognize and kill CD19-negative malignant cells, which leads to the death of lymphoma tumor cells. Decitabine increases tumor antigen and human leukocyte antigen expression, enhances antigen processing, promotes T cell infiltration and boosts effector T cell function; therefore, it can be used in DLBCL, high-grade B-cell lymphoma and other aggressive B-cell lymphomas in pediatric patients. For many years, the modulation of epigenetic mechanisms has been the subject of trials and studies. At the time of this article's publication, there are two ongoing pediatric clinical trials involving vorinostat in NHL. They cover combination with chemotherapy before donor stem cell transplant (NCT04220008) and a potential graft-versus-host disease (GVHD) incidence reducing drug (NCT03842696) [68]. In the case of panobinostat, the results of only one clinical trial (NCT01321346) were published by Goldberg et al. Among 22 pediatric patients, only one was diagnosed with NHL. The researchers observed several adverse effects, predominantly regarding the gastrointestinal tract.
Moreover, the clinical activity of panobinostat was unsatisfactory, and more efficient therapies are now available. The following examples of HDACIs were registered in trials in patients at least 16 years old. With a positive outcome of these studies, we may assume the age of enrollment to these trials will be lowered in the future. Chidamide is being tested in R/R peripheral T cell lymphoma with encouraging results and a satisfying safety profile. HDACIs are undeniably under rapid development. Certainly, we will observe more clinical trials of HDACIs either as single agents or in combined therapies with novel drugs. ICIs have become a great milestone in cancer treatment. Pediatric malignancies, such as leukemias and solid tumors, also benefit from ICIs, with their expansion in sight [76]. Nivolumab is a fully human IgG4 mAb targeting the PD-1 receptor. In terms of other checkpoint inhibitors, the fully human IgG1 PD-L1 antibody, atezolizumab, has become the object of clinical trials. There are several active clinical trials, which will determine the usage of the mentioned ICIs, along with others, such as anti-CTLA-4. The National Cancer Institute is conducting studies (NCT02304458) on combined nivolumab with ipilimumab (which is the anti-CTLA-4 antibody) in patients with R/R NHL. EZH2 is a histone methyltransferase belonging to a family of epigenetic regulators that inhibit transcription. When an EZH2 mutation occurs, the suppression of GC output genes and checkpoints persists, which in turn leads to hyperplasia. Since 2012, several specific EZH2 inhibitors have been investigated. Their task was to inhibit H3K27 methylation, reactivate silenced PRC2 target genes, or inhibit the survival of B-cell lymphoma cells (GC-derived and containing the EZH2 activating mutation).
EZH2 inhibitors have the potential to be useful in the treatment of lymphomas, but there is still insufficient research to draw specific conclusions. Compounds from this group may in the future be used in combination therapy, e.g., with chemotherapy, which would result in improved patient outcomes. However, for this to happen, a lot of research has yet to be conducted. IDH is an enzyme that catalyzes the conversion of isocitrate to α-ketoglutarate (αKG). Enasidenib and ivosidenib are orally available, selective inhibitors of mutant IDH, with enasidenib blocking IDH2 and ivosidenib blocking IDH1. Both of these drugs have already been approved by the FDA for the treatment of acute myeloid leukemia (AML) [107,108]. Further clinical trials are currently underway to test these compounds in other diseases as well. One of these studies, on ivosidenib, is currently being conducted in the age group 12 months to 21 years. It focuses on patients with R/R NHL, advanced solid tumors, or histiocytic disorders that have IDH1 genetic alterations. Expression of anti-apoptotic proteins from the BCL-2 family can be disrupted by several different mechanisms, including gene amplification, chromosomal translocation, increased gene transcription, or altered post-translational processing. Currently, several compounds that are inhibitors of BCL-2 can be distinguished. One of them, ABT-263 (navitoclax), initially showed good efficacy. However, its use has been limited due to its main toxicity: thrombocytopenia. Despite this, research can still be found in the context of NHL and navitoclax. In one study, although each patient had at least one treatment-related adverse event, the safety profile of navitoclax in R/R FL was assessed as acceptable. In the context of the future, VOB560 (65487) and MIK665 (S64315) may also be significant.
Both of these compounds were designed to potently and selectively block BCL-2 and MCL-1, respectively. Their combination in preclinical studies has shown strong anti-cancer properties. Undoubtedly, the discovery of inhibitors of the BCL-2 family opened a new path in the targeted therapy of many cancers. There is a need for new clinical trials comparing the effectiveness of these substances in the same and similar diseases. Their toxicity profile is also extremely important. As already proven, the side effects of individual BCL-2 inhibitors are not limited to the abovementioned thrombocytopenia. In addition, with venetoclax, diarrhea, upper respiratory tract infections, neutropenia and tumor lysis syndrome may occur [126,127]. Approximately 90% of pediatric ALK+ ALCL is due to the t(2;5) chromosomal translocation, which entails the emergence of the oncogenic nucleophosmin fusion protein activating several proliferation and survival pathways. Crizotinib is a first-generation ALK inhibitor. Ceritinib is a second-generation ALK inhibitor that binds to ALK with a higher affinity than crizotinib. BTK is a protein kinase involved in the regulation of B-cell signaling. Several generations can be distinguished among these drugs. The first-generation BTK inhibitors include ibrutinib, which is administered orally. It binds irreversibly to a cysteine residue (C481) in the active site of BTK, thereby inhibiting B-cell receptor signaling [143]. Studies to date have shown that ibrutinib is well tolerated in many B-cell tumors, including DLBCL [146]. It is worth mentioning that, as in the case of other drugs, the possibility of combining BTK inhibitors with, e.g., BCL2 inhibitors, is being considered. In one such study, Tam et al. proved that the combination of ibrutinib + venetoclax was highly effective in patients with MCL. They reported an OS rate of 79% at 12 months and 74% at 18 months.
Bortezomib (PS-341) is a dipeptide boronic acid derivative. It is the first potent, selective, and reversible inhibitor of the 26S proteasome in its class [161]. It inhibits the ubiquitin–proteasome pathway by binding directly to the active sites of the proteasome, which in turn disrupts targeted protein proteolysis [161,162]. Although most studies on bortezomib in NHL patients concern adults, for several years there have also been studies in which the target group is children. One of them was conducted by Horton et al. and related to the use of combination therapy of bortezomib with ifosfamide/vinorelbine (IVB) in pediatric NHL patients. Although few patients achieved the primary goal, the CR after two cycles was 83% [173]. Temsirolimus is an inhibitor of the mammalian target of rapamycin (mTOR), which, as an ester derivative of sirolimus, exhibits antifungal, immunosuppressive, and antitumor properties. Currently, temsirolimus is approved in the European Union for the treatment of R/R MCL, but not in the USA [178,179]. For this reason, combinations of temsirolimus with other agents, both cytotoxic drugs and other targeted inhibitors, are currently at the forefront of NHL treatment. As can be seen, despite the approval of temsirolimus for treatment in MCL, there is still room for improvement in its effectiveness in this and other types of NHL. It is possible to achieve this by developing better biomarkers, which would allow for better stratification of the patient prior to drug administration. Moreover, such biomarkers could help to elucidate the mechanisms of resistance and to develop new therapeutic combinations, which would certainly improve the effectiveness of treatment. Our knowledge of refractory NHL treatment has grown significantly.
Trial results for novel drugs give hope to patients suffering from these hematologic diseases, who make up a large percentage of the population. There are many approved drugs for NHL therapy, e.g., temsirolimus for MCL or blinatumomab for R/R B-NHL in children and adults. The application of second-generation CD19 CAR-T cells has shown significant positive outcomes in the treatment of FL, PMBCL, DLBCL, MCL, and splenic MZL. Moreover, CD20 CAR-T cell treatment resulted in CR in BL. EZH2 inhibitors in combination with chemotherapy will probably improve patients' outcomes, if examined more closely. When the effective doses have been improved and the side effects suppressed, BCL-2 inhibitors will be a promising class of drugs to fight NHL. Additionally, the combination of ibrutinib and venetoclax is considered highly effective in treating MCL. Lastly, chidamide showed encouraging results regarding effectiveness and safety profile in T cell lymphoma treatment; for that reason, we can expect many clinical trials validating this method. There is a necessity to develop better biomarkers, which could help with elucidation of resistance mechanisms and stratification of the patient before administering the drug. It is likely that the majority of the compounds described in this review will be used in widespread NHL therapies; however, much more research needs to be conducted and many years have to pass to establish personalized treatment options. As always, future pediatric treatment regimens will arrive after clinical trials take place in adults. Thus, we expect rapid progress in this branch of medicine. All figures presented in this article have been created with Biorender.com."} {"text": "Anti-neutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) is a systemic vasculitis, most frequently presenting as microscopic polyangiitis (MPA) or granulomatosis with polyangiitis (GPA).
Pathogenic ANCAs trigger a deleterious immune response resulting in pauci-immune necrotizing and crescentic glomerulonephritis (GN). Standard therapeutic regimens include aggressive immunosuppressive therapy. Since some patients require renal replacement therapy (RRT) despite intensive immunosuppressive therapy, additional therapeutic plasma exchange (PEX) to deplete pathogenic ANCAs has been recommended, but its value has recently been questioned. Because therapeutic decision making is crucial in these critically ill patients, we here aimed to identify inflammatory lesions associated with the consideration of PEX in a retrospective study from a single-center tertiary hospital in a real-world population of 46 patients with severe AAV requiring intensive care treatment. The decision to consider PEX was more likely in patients with need for intensive care treatment and severe renal dysfunction. In contrast, short-term outcomes did not depend on clinical or laboratory characteristics assessed at admission. Histopathological analysis confirmed active disease, reflected by increased glomerular necrosis and crescents, but these histopathological findings did not associate with short-term outcome either. Interestingly, only increased global glomerular sclerosis in renal biopsies associated with a detrimental short-term outcome. In conclusion, our study investigated determinants for the consideration of therapeutic PEX in patients with severe AAV requiring intensive care treatment. This aspect underscores the need for renal biopsy and requires further investigation in a prospective controlled setting for therapeutic decision making in patients with severe AAV requiring intensive care treatment, which is especially important for treating intensivists. Anti-neutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) is a systemic vasculitis, which most frequently presents as microscopic polyangiitis (MPA) or granulomatosis with polyangiitis (GPA).
A total of 46 patients with biopsy-proven AAV at the University Medical Center Göttingen were retrospectively included between 2015 and 2020; the patient cohort has, in part, previously been described [13–15]. RRT was initiated based on kidney function even in the absence of hyperkalemia, heart failure, edema or uremic encephalopathy. Pulmonary hemorrhage was mild in all cases without requirement for mechanical ventilation. At admission, the Birmingham Vasculitis Activity Score (BVAS) version 3 was calculated as described previously. Two renal pathologists (PS and SH) independently evaluated kidney biopsies and were blinded to data analysis. Within a renal biopsy specimen, each glomerulus was scored separately for the presence of necrosis, crescents and global sclerosis. Consequently, the percentage of glomeruli with any of these features was calculated as a fraction of the total number of glomeruli in each renal biopsy. Apart from these categories, the degree of interstitial fibrosis/tubular atrophy (IF/TA) was quantified. Based on these scorings, histopathological subgrouping according to Berden et al. and ARRS according to Brix et al. were performed [19, 20]. RTX was given weekly and was not administered within 48 h before PEX treatment. As per our practice, PEX treatment was scheduled at least 48 h after RTX administration to avoid interference with the rapid immunosuppressive effects of RTX on circulating CD19-positive/CD20-positive lymphocytes, as described previously. Combination therapy consisted of weekly RTX infusions and two intravenous doses of 15 mg/kg CYC during the first and third RTX infusion. Prophylaxis to prevent Pneumocystis jirovecii (formerly carinii) infection was administered according to local practice. PEX was administered during the induction period at the discretion of the treating physicians. Glucocorticoids (GCs) were administered either as intravenous pulse therapy or orally with a tapering schedule.
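The per-glomerulus scoring described above reduces to simple fractions: each glomerulus is scored for necrosis, crescents and global sclerosis, and each lesion is then reported as a percentage of all glomeruli in that biopsy. A minimal sketch with a hypothetical data layout (not the authors' code):

```python
def lesion_percentages(glomeruli):
    """Percentage of glomeruli showing each lesion within one biopsy.
    `glomeruli` is a list of per-glomerulus score dicts (illustrative format)."""
    n = len(glomeruli)
    lesions = ("necrosis", "crescent", "global_sclerosis")
    return {lesion: 100.0 * sum(g[lesion] for g in glomeruli) / n
            for lesion in lesions}

# Four scored glomeruli from one hypothetical biopsy specimen
biopsy = [
    {"necrosis": True,  "crescent": True,  "global_sclerosis": False},
    {"necrosis": False, "crescent": True,  "global_sclerosis": False},
    {"necrosis": False, "crescent": False, "global_sclerosis": True},
    {"necrosis": False, "crescent": False, "global_sclerosis": False},
]
print(lesion_percentages(biopsy))
# -> {'necrosis': 25.0, 'crescent': 50.0, 'global_sclerosis': 25.0}
```

These per-biopsy percentages are exactly the quantities that feed the Berden classification and the ARRS scoring referenced above.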
Choice of further remission induction therapy depended on previous regimens and individual patients, with a preference for cyclophosphamide (CYC) in patients with severe deterioration of kidney function, and a higher likelihood of choosing rituximab (RTX) in younger patients, with CYC toxicity being the main reason for this choice [24]. Variables were tested for normal distribution using the Shapiro–Wilk test. Non-normally distributed continuous variables are expressed as median and interquartile range (IQR); categorical variables are presented as frequency and percentage. Statistical comparisons were not formally powered or prespecified. For group comparisons, the Mann–Whitney U-test was used to determine differences in medians. Non-parametric between-group comparisons of categorical variables were performed with Pearson's Chi-square test. Data analyses were performed with GraphPad Prism. At the discretion of the treating physicians, 18/46 (39.1%) patients with severe AAV received PEX (Figure 1). According to current recommendations, therapeutic PEX should be considered for AAV patients with severe deterioration of kidney function (serum creatinine levels >500 µmol/L due to rapidly progressive GN in new onset or relapse of disease) and for the treatment of severe diffuse alveolar hemorrhage. The main limitations of our study are its retrospective design, different regimens of remission induction, the small patient number, the influence of therapeutic PEX on histopathological findings (because PEX treatment was initiated before renal biopsy in most cases), and limited data on long-term renal survival rates. Nevertheless, the number of critically ill patients with AAV at our center is considerable, and our study identified determinants for the consideration of therapeutic PEX in patients with severe AAV requiring intensive care treatment despite the negative results of the PEXIVAS trial.
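The statistical workflow described in the Methods (Shapiro–Wilk normality check, Mann–Whitney U for continuous variables, Pearson's chi-square for categorical ones) can be reproduced outside GraphPad Prism, e.g. with SciPy. The data below are purely illustrative, not the study's values:

```python
from scipy import stats

# Hypothetical continuous variable (e.g. serum creatinine, umol/L) per group
pex    = [412, 530, 610, 389, 702, 455, 588]
no_pex = [250, 310, 280, 340, 295, 260, 330]

# 1) Shapiro-Wilk: a small p-value suggests non-normality,
#    motivating the non-parametric tests below
_, p_normal = stats.shapiro(pex + no_pex)

# 2) Mann-Whitney U test for a between-group difference in medians
_, p_mwu = stats.mannwhitneyu(pex, no_pex, alternative="two-sided")

# 3) Pearson's chi-square on a 2x2 contingency table
#    (e.g. requirement of RRT yes/no per group; made-up counts)
table = [[12, 6],
         [8, 20]]
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

print(p_normal, p_mwu, p_chi2)
```

Since the two toy groups do not overlap at all, the Mann–Whitney p-value comes out very small; with real cohort data the same three calls give the quantities reported in the tables.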
It has to be kept in mind, however, that the patients included in the PEXIVAS trial differed from the real-world population regularly admitted to our ICU. This new aspect underscores the need for renal biopsy and requires further investigation in a prospective controlled setting for therapeutic decision making and short-term care in patients with severe AAV requiring intensive care treatment, which is especially important for treating intensivists [30–32]. Renal involvement is a common and severe complication of AAV, as it can lead to a requirement for RRT, ESRD or death. This study identifies determinants for PEX consideration in patients with severe AAV requiring intensive care treatment and underscores the need for renal biopsy in this population. The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author. The studies involving human participants were reviewed and approved by the Institutional Review Board of the University Medical Center Göttingen, Germany (no. 22/2/14 and 28/9/17). The patients/participants provided their written informed consent to participate in this study. SH and BT conceived the study, collected and analyzed data and co-wrote the first draft. DT collected and analyzed data. PS and SH evaluated histopathological findings. PK analyzed data and edited the manuscript. SH and BT contributed equally as senior authors. All authors contributed to the article and approved the submitted version. BT was supported by the Research program, University Medical Center, University of Göttingen (1403720).
The funding sources had no involvement in the design, collection, analysis, interpretation, writing or decision to submit the article. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "TRPS is caused by mutations in the TRPS1 gene coding for the TRPS1 transcription factor. Considering Trps1 expression in odontoblasts, where Trps1 supports expression of multiple mineralization-related genes, we focused on determining the consequences of odontoblast-specific Trps1 deficiency on the quality of dental tissues. We generated a conditional Trps1Col1a1 knockout mouse, in which Trps1 is deleted in differentiated odontoblasts using the 2.3kbCol1a1-CreERT2 driver. Mandibular first molars of 4wk old male and female mice were analyzed by micro-computed tomography (μCT) and histology. Mechanical properties of dentin and enamel were analyzed by Vickers microhardness test. The susceptibility to acid demineralization was compared between WT and Trps1Col1a1cKO molars using an ex vivo artificial caries procedure. μCT analyses demonstrated that odontoblast-specific deletion of Trps1 results in decreased dentin volume in male and female mice, while no significant differences were detected in dentin mineral density. However, histology revealed a wider predentin layer and the presence of globular dentin, which are indicative of disturbed mineralization. A secondary effect on enamel was also detected, with both dentin and enamel of Trps1Col1a1cKO mice being more susceptible to demineralization than WT tissues. The quality of dental tissues was particularly impaired in molar pits, which are sites highly susceptible to dental caries in human teeth. Interestingly, Trps1Col1a1cKO males demonstrated a stronger phenotype than females, which calls for attention to genetically-driven sex differences in predisposition to dental caries.
In conclusion, the analyses of Trps1Col1a1cKO mice suggest that compromised quality of dental tissues contributes to the high prevalence of dental caries in TRPS patients. Furthermore, our results suggest that TRPS patients will benefit particularly from improved dental caries prevention strategies tailored for individuals genetically predisposed due to developmental defects in tooth mineralization. Dental caries is the most common chronic disease in children and adults worldwide. The complex etiology of dental caries includes environmental factors as well as host genetics, which together contribute to inter-individual variation in susceptibility. The goal of this study was to provide insights into the molecular pathology underlying increased predisposition to dental caries in trichorhinophalangeal syndrome (TRPS). This rare inherited skeletal dysplasia is caused by mutations in the TRPS1 gene. Dental caries remains the most prevalent chronic disease, affecting 60–90% of children and adolescents (5–17 years of age) and nearly every adult among all populations. Many environmental, endogenous, and behavioral risk factors have been identified as contributors to the development of dental caries [11, 12]. Mineralization further depends on the availability of calcium (Ca2+) and phosphate (Pi) ions, pH, the presence of mineralization inhibitors, and post-translational modifications of ECM proteins. Missense mutations located exclusively in exons 6 and 7 of the TRPS1 gene result in functional modification of the DNA binding domain of the TRPS1 transcription factor. TRPS dental phenotypes reported in the literature encompass defects in tooth number, size, shape, and mineralization. The most common TRPS dental abnormalities include supernumerary teeth, microdontia, malocclusion, and delay in root and crown development.
Cases of hypodontia, abnormal tooth morphogenesis, impaired dentin mineralization, large dental pulp chambers, and extensive dental caries have also been reported, underscoring the clinical importance of dental defects in TRPS. Analyses of Trps1 expression during mouse tooth organ development demonstrated that Trps1 is highly and specifically expressed in the dental mesenchyme. Mice harboring an allele with the in-frame deletion of the exon coding for the GATA-type DNA binding domain of Trps1 were used. To generate Trps1 cKO mice, the Trps1 cKO (Trps1fl) allele was generated by inserting two LoxP sites flanking the first coding exon of Trps1 by homologous recombination, with a selection cassette flanked by FRT sites for positive selection of recombinant embryonic stem (ES) cells. Following injection of the Trps1 cKO construct into C57BL/6 ES cells, neomycin selection was performed, and the resistant ES clones were screened to verify the recombination. Mice carrying the recombinant allele were subsequently obtained via the generation of germline chimeras. The Trps1fl allele was generated after breeding with germline deleter Flp mice to remove the selection cassette. Trps1Col1a1cKO mice were generated by breeding Trps1fl mice with 2.3kbCol1a1-CreERT2 mice expressing Cre recombinase under the control of the 2.3-kb fragment of the Col1a1 promoter. Tamoxifen was delivered via intraperitoneal injection (0.1 mg/g body weight) at postnatal days (P)1, P2, P9, P16 and P23 to assure efficient deletion of Trps1 in odontoblasts, and as a control in WT mice. Mice were euthanized by CO2 inhalation. All analyses were performed on tissues collected post-mortem. All animal studies were conducted in accordance with a protocol approved by the University of Pittsburgh Institutional Animal Care and Use Committee (IACUC protocol # 19095648), complying with the Federal Animal Welfare Act and all NIH policies regarding vertebrate animals in research.
Hemimandibles of WT and Trps1Col1a1cKO male and female mice (N = 3/genotype/sex) were dissected under a stereo microscope (Leica S9D) to remove soft tissues, and fixed with 10% formalin overnight. Samples were decalcified in 10% ethylenediaminetetraacetic acid (EDTA) solution (pH 7.4) for 14 days prior to paraffin embedding. Serial 7 μm sagittal sections were placed on Fisherbrand™ Superfrost™ Plus microscope slides and deparaffinized for hematoxylin and eosin (H&E) staining following standard protocols. Hemimandibles of P7 and 4 wk old 2.3kbCol1a1-CreERT2;mTmG reporter mice were harvested and processed as described above. Decalcified samples were cryoprotected in 30% sucrose/PBS overnight at 4 °C, embedded in OCT compound and stored at −80 °C until sectioned. Samples were cryosectioned at 7 μm, protected from the light. To validate Cre recombinase activation in odontoblasts of mandibular first molars of reporter mice, cryosections were counterstained with DAPI and mounted with Immu-Mount for microscopic analyses of GFP and RFP signals. Whole teeth images were captured on a Leica M165FC dissecting microscope using a DFC 450 camera and Leica LAS software. Histological images were captured on a Zeiss AXIO microscope with an AxioCam MRc 35 camera and Zen software. Microindentation images were captured using a Buehler® IndentaMet™ 1100 Series microindentation hardness tester adapted to a uEye camera and Buehler Omnimet MHT software. Hemimandibles of 4 wk old WT and Trps1Col1a1cKO mice (N = 5/genotype/sex) were imaged in 70% ethanol by the Scanco μCT 50 system. The following parameters were set for the scans: 6-μm voxel size, 55 kVp, 0.36 degrees rotation step (180 degrees angular range) and a 1,500 ms exposure per view. After 3D reconstruction, volumes were segmented using a global threshold of 0.6 g HA/cc. Mineral density (TMD), thickness (Th), and volumes (BV) were measured for enamel and dentin separately.
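The global-threshold segmentation step can be illustrated in code. This is a toy sketch under stated assumptions (a plain nested-list volume, the 0.6 g HA/cc cutoff quoted above, and a hypothetical helper name), not the actual Scanco evaluation scripts:

```python
# Illustrative sketch (not the authors' Scanco pipeline): segment a reconstructed
# muCT volume with a global density threshold and report the mineralized fraction.

def mineralized_fraction(volume, threshold=0.6):
    """Fraction of voxels at or above `threshold` (g HA/cc), i.e. BV/TV
    if `volume` is restricted to the region of interest."""
    voxels = [v for plane in volume for row in plane for v in row]
    mineralized = sum(1 for v in voxels if v >= threshold)
    return mineralized / len(voxels)

# Toy 2x2x2 "volume" of densities in g HA/cc (hypothetical values).
toy = [[[0.2, 0.8], [0.9, 0.1]],
       [[0.7, 0.3], [1.2, 0.5]]]
print(mineralized_fraction(toy))  # 4 of 8 voxels >= 0.6, so 0.5
```

In a real pipeline the threshold would be applied to calibrated attenuation values within a masked crown region, but the BV/TV arithmetic is the same.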
Additionally, dentin and enamel tissue fraction (BV/TV) in the total tooth crown volume (TV) was calculated as described before. For microhardness testing, samples were polished as previously described, and hardness was calculated from the applied load and d2, the area of the indentation (measured in mm2). For the artificial caries procedure, molars of WT and Trps1Col1a1cKO mice (N = 5/genotype/sex) were analyzed using the protocol described by Vieira et al. Experiments performed in this study used five mice per genotype per sex unless otherwise stated. Males and females were analyzed separately. Values are expressed as mean ± standard deviation (SD). Statistically significant differences were determined using the Student's t-test. Trps1 knockout was initiated with tamoxifen injections at postnatal day 1 (P1) and P2, followed by 3 more injections 7 days apart. Gross analyses of Trps1Col1a1cKO mice demonstrated that Trps1Col1a1cKO males are significantly smaller than WT littermates at P7, P21 and 4 wks of age. This difference was not found in females, suggesting that Trps1 deficiency has a stronger effect in males than in females. Mineralization defects were detected in enamel and dentin, specifically in the circumpulpal dentin, pit dentin and outer enamel of Trps1Col1a1cKO males. Reported risk factors for dental caries include behavioral features, socioeconomic and demographic factors, and environmental exposures, along with host genetics. The dentin layer of Trps1Col1a1cKO molars is thinner than in WT mice. This deficiency was particularly prominent in pits of the tooth crown, which are the sites most susceptible to dental caries initiation in humans. Teeth with thinner dentin may be prone to more severe caries, as once a lesion is initiated, it can reach the pulp faster than in teeth with a thicker dentin layer. On radiographic images of teeth, features such as enlarged pulp chambers and prominent pulp horns suggest a thinner dentin layer.
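Vickers microhardness numbers of the kind produced by such testers are conventionally computed as HV = 1.8544·F/d², with F the applied load in kgf and d the mean indentation diagonal in mm. The sketch below assumes that standard textbook formula and made-up indent values, not the instrument's internal calibration:

```python
def vickers_hardness(load_kgf, mean_diagonal_mm):
    """Standard Vickers microhardness: HV = 1.8544 * F / d**2,
    where F is the load in kgf and d is the mean of the two
    indentation diagonals in mm."""
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

# Hypothetical indent: 100 gf (0.1 kgf) load, 30 um (0.03 mm) mean diagonal.
print(round(vickers_hardness(0.1, 0.03), 1))  # 206.0
```

The constant 1.8544 comes from the geometry of the 136° diamond pyramid (2·sin(68°)), which converts the measured diagonal into the sloped contact area of the indent.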
These radiographic features were visible on 2D μCT images of Trps1Col1a1cKO molars, and importantly, similar features were recently reported in permanent molars of a 16-year-old male patient with a TRPS1 mutation. The dentin of both Trps1Col1a1cKO males and females is significantly softer than in the WT mice, while the inner enamel (IE) hardness of Trps1Col1a1cKO females becomes significantly lower after the artificial caries procedure as compared to WT. Notably, the detection of localized enamel mineralization defects in pits contributes to our understanding of the dental caries lesion distribution pattern in TRPS. We agree with others that the effects of dental caries risk factors may be surface-specific, as shown in our previous in vitro studies. Interestingly, males present with a stronger dental phenotype compared to females, a sex difference also described for other mineralization disorders such as rickets and potentially involving the sex hormones estrogen and androgen. Among other dental findings in Trps1Col1a1cKO mice relevant to the predisposition to dental caries is malocclusion. Dental caries is a common complication of malocclusion, and this abnormality was detected in Trps1Col1a1cKO males and females. This and severe root exposure detected in Trps1Col1a1cKO mice suggest impaired formation of the alveolar bone, which is caused most likely by deficiency of Trps1 in osteoblasts, as osteoblasts are another cell type expressing Cre from the 2.3kbCol1a1 promoter. These findings underline the role of Trps1 not only in formation of sound mineralized dental tissues, but also in the dento-alveolar complex. Altogether, the phenotype of Trps1Col1a1cKO mice suggests that TRPS patients are genetically predisposed to dental caries.
In summary, the compromised quality of the tooth mineralized tissues, expressed as softer and less acid-resistant enamel and dentin, together with a decreased dentin layer, tooth misalignment, and localized tooth mineralization defects in occlusal and buccal pits detected in Trps1Col1a1cKO mice, indicates an increased susceptibility to dental caries. Hence, TRPS patients may benefit from more assertive prevention strategies and early interventions to mitigate dental caries risk and improve oral health. Supplementary Materials"} {"text": "CNNM2 is primarily expressed in the brain and distal convoluted tubule (DCT) of the kidney. Mutations in CNNM2 have been reported to cause hypomagnesemia, seizure, and intellectual disability (HSMR) syndrome. However, the clinical and functional effect of CNNM2 mutations remains incompletely understood. We report our clinical encounter with a 1-year-old infant with HSMR features. Mutation screening for this trio family was performed using next-generation sequencing (NGS)-based whole exome sequencing (WES), with the identified mutation verified by Sanger sequencing. We identified a de novo heterozygous mutation c.G1439T (R480L) in the essential cystathionine β-synthase (CBS) domain of CNNM2 encoding CNNM2 (cyclin M2), without any other gene mutations related to hypomagnesemia. The amino acid involved in this missense mutation is conserved in different species. It was also found to be pathogenic based on the different software prediction models and ACMG criteria. In vitro studies revealed a higher expression of the CNNM2-R480L mutant protein compared to that of the wild-type CNNM2. Like the CNNM2-wild type, proper localization of CNNM2-R480L was shown on immunocytochemistry images. The Mg2+ efflux assay in murine DCT (mDCT) cells revealed a significant increase in intracellular Mg2+ Green in CNNM2-R480L compared to that in CNNM2-WT. By using a simulation model, we illustrate that the R480L mutation impaired the interaction between CNNM2 and ATP-Mg2+.
We propose that this novel R480L mutation in the CNNM2 gene led to impaired binding between Mg2+-ATP and CNNM2 and diminished Mg2+ efflux, manifesting clinically as refractory hypomagnesemia. Defective Mg2+ reabsorption in the DCT inevitably causes renal hypomagnesemia, because there is no Mg2+ reabsorption downstream of the DCT. Gene mutations related to the regulation of Mg2+ transport in the DCT can cause renal hypomagnesemia. These genes include SLC12A3 encoding the thiazide-sensitive NCC, TRPM6 encoding the apical TRPM6 channel, HNF1B encoding HNF1β, PCBD1 encoding PCBD1, EGF encoding EGF, EGFR encoding EGFR, KCNJ10 encoding Kir4.1 (EAST syndrome), KCNA1 encoding Kv1.1, FXYD2 encoding the γ-subunit of Na+-K+ ATPase, and CNNM2 encoding CNNM2 (cyclin M2). CNNM2 contains transmembrane domains and a large cytosolic region with cystathionine-β-synthase (CBS) domains that bind Mg2+-ATP, and a cytosolic cyclic-nucleotide-binding homology domain (CNBH). To date, only a few families with CNNM2 mutations have been reported, with dominant (majority) or recessive (minority) renal hypomagnesemia with seizure and intellectual disability (HSMR). The proband presented with renal Mg2+ wasting, seizure, and intellectual disability, consistent with the clinical description of HSMR syndrome. In this study, we aimed to identify the genetic mutation for her phenotype and to assess the functional impact of the identified mutation. Our results indicated that a de novo heterozygous mutation c.G1439T (R480L) in the CBS domain of the CNNM2 gene was identified in this proband. This missense CNNM2 R480L mutation was found to be pathogenic. In vitro studies revealed higher CNNM2-R480L protein expression with proper cellular localization on immunocytochemistry images. The impaired Mg2+ efflux with a significant increase in intracellular Mg2+ suggests that the CNNM2-R480L mutation blocks Mg2+ efflux.
In our simulation model, this R480L mutation leads to an attenuated interaction between CNNM2 and ATP-Mg2+. We have encountered an infant with refractory hypomagnesemia and excessive renal Mg2+ wasting. This study was approved by the ethics committee on human studies at Tri-Service General Hospital in Taiwan (IRB2-105-05-136). A trio family including the proband and her parents was enrolled. Written informed consent was obtained from the participants. The patient showed increased urine Mg2+ excretion (FEMg 6.5%), normokalemia, normocalcemia, and normocalciuria. Variants were checked against dbSNP (http://www.ncbi.nlm.nih.gov/projects/SNP/), HapMap, the 1000 Genomes Project (http://www.1000genomes.org), the Exome Aggregation Consortium (ExAC) database, and the Genome Aggregation Database. Direct Sanger sequencing was performed for all patients and their parents to verify the genetic variants detected by WES. The data that support the findings of this study are available from the corresponding author upon reasonable request. Genomic DNA was isolated from a peripheral venous blood sample. We performed exome capture using the Agilent SureSelect v6 and massively parallel sequencing using the HiSeq 4000 platform as previously reported. Human CNNM2 cDNA (NM_017649.5) was cloned into the pcDNA3.1 vector. The disease-causing mutation was obtained by a QuikChange™ Site-Directed Mutagenesis Kit.
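The FEMg of 6.5% cited above is conventionally computed with the standard clinical fractional-excretion formula; the sketch below assumes that formula and hypothetical lab values, since the raw measurements are not listed in the text:

```python
def fe_mg_percent(urine_mg, serum_mg, urine_cr, serum_cr):
    """Fractional excretion of magnesium (%), standard clinical formula:
    FEMg = (U_Mg * S_Cr) / (0.7 * S_Mg * U_Cr) * 100.
    The 0.7 factor reflects that only ~70% of serum Mg2+ is ultrafilterable
    (not protein-bound). Paired inputs must share units (e.g. mg/dL)."""
    return (urine_mg * serum_cr) / (0.7 * serum_mg * urine_cr) * 100

# Hypothetical values (mg/dL) chosen to reproduce a FEMg of ~6.5%:
print(round(fe_mg_percent(5.0, 1.1, 100.0, 1.0), 1))  # 6.5
```

An FEMg above roughly 2–4% in a hypomagnesemic patient points to renal magnesium wasting, which is the interpretation the case report makes.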
The primers used to introduce the mutations were R480L: forward primer 5′-GAGCGGCTACACCCTCATTCCAGTGTTTG-3′, reverse primer 5′-CAAACACTGGAATGAGGGTGTAGCCGCTC-3′; R480K: forward primer 5′-GGAGAGCGGCTACACCAAGATTCCAGTGTTTGAAGG-3′, reverse primer 5′-CCTTCAAACACTGGAATCTTGGTGTAGCCGCTCTCC-3′; V548M: forward primer 5′-CTCACCTGGCTATCATGCAGCGGGTAAACAATG-3′, reverse primer 5′-CATTGTTTACCCGCTGCATGATAGCCAGGTGAG-3′; T568I: forward primer 5′-GAAGTTCTGGGAATCATCTTAGAAGATGTGATTG-3′, reverse primer 5′-CAATCACATCTTCTAAGATGACGATTCCCAGAACTTC-3′. The mDCT cells seeded in a six-well plate at 70–90% confluence were transfected with the indicated amount of plasmid DNA using Lipofectamine 3000 Reagent (Thermo Fisher Scientific). The reported R480K, V548M, and T568I constructs were selected as controls: R480K, constructed in CBS1, was selected as a charge-unchanged control, and two previously reported mutant constructs (V548M and T568I) in CBS2 were also used. Murine distal convoluted tubular (mDCT) cells were cultured in a 1:1 mixture of Dulbecco's modified Eagle's medium with 1 g/L glucose, 1 mM sodium pyruvate, and Ham's F-12 Nutrient Mix. Finally, 5% (v/v) fetal bovine serum, 100 U/ml penicillin, and 0.1 mg/ml streptomycin were added to the growth medium.
The cells were incubated at 37°C in a humidified 5% CO2 incubator. The mDCT cells were harvested 24 h after transfection. The cell lysates were prepared in a RIPA lysis buffer with a protease inhibitor cocktail (Roche). Following the separation by centrifugation, the remaining protein lysates were denatured in an SDS sample reagent with 100 mM DTT for 30 min at 37°C and then analyzed by polyacrylamide-SDS mini-gels. The mDCT cells were transfected with a pcDNA3.1 empty vector or the disease-causing mutation construct and harvested after 24 h. The cell membrane fraction was subjected to the ProteoExtract native membrane protein extraction kit (Merck-Millipore), following the manufacturer's description, and then analyzed by semiquantitative immunoblotting. The mDCT cells were seeded on a chamber slide (Millicell EZ slide) and transiently transfected with 0.5 μg of plasmid DNA. After 24 h, the cells were washed with PBS and fixed with 4% paraformaldehyde for 15 min. After PBS rinses, the cells were incubated for 1 h with 0.1% Triton X-100 in PBS and then blocked with 1% BSA in PBST for another 30 min. A specific CNNM2 antibody (×200, Cusabio) was used for cell staining. After PBST rinses, cells were incubated with Alexa Fluor 488-conjugated goat anti-rabbit antibody (×200, Invitrogen) for 1 h and stained with DAPI (5 μg/ml) for 5 min. The images were captured with a Leica DM2500 microscope. Mg2+ imaging of transfected cells was analyzed with Magnesium Green™ (Molecular Probes) as described previously. The cells were incubated in Mg2+ loading buffer including 2 μM Magnesium Green (Molecular Probes) at 37°C for 60 min. Then, the buffer was changed to buffer without Mg2+ (MgCl2 was replaced with 60 mM NaCl) and the fluorescence was recorded at 1-min intervals.
The cell images were detected by ImageXpress Micro XLS (Molecular Devices) and fluorescence was measured using MetaXpress high-content image acquisition and analysis software (Molecular Devices). The cell fluorescence was analyzed by the software setting for Cell Scoring. The mDCT cells were cultured on a 96-well black, clear-bottom tissue culture plate (Corning) after transfection (24 h). The resolved structures of the human CNNM-PRL complex (PDB code: 5LXQ) were used for the simulation model. The results were presented as mean ± standard deviation (SD) for continuous variables. Student's t-test was used to compare differences between groups. When comparing the ratio of differences between groups, we used a ratio paired t-test with the Holm–Šídák method. A p-value less than 0.05 was considered statistically significant. Whole exome sequencing identified a de novo heterozygous mutation in the CNNM2 gene located in the cystathionine-β-synthase (CBS) domain of CNNM2. Sanger sequencing for the patient and her parents confirmed this mutation in the CNNM2 gene. R480 is conserved in different species. The total and membrane expressions of wild-type CNNM2 (amino acids 1–875) and mutant CNNM2 proteins were examined after transient transfection of CNNM2 in mDCT cell lines. The immunocytochemistry images of mDCT cells with anti-nuclei (blue) and anti-CNNM2 (green) staining demonstrated that CNNM2-wild type, CNNM2-R480L, and the negative controls (CNNM2-V548M and CNNM2-T568I) were properly localized adjacent to the cell membrane. We next examined Mg2+ efflux in a cellular assay with wild-type and mutant CNNM2. The mDCT cells were transfected with the indicated constructs, treated with Mg2+ Green and then subjected to Mg2+ depletion. The relative intensity of Mg2+ Green was significantly higher in CNNM2-R480L compared to wild type and other CNNM2 mutants, with p-values < 0.05 from min 1 to min 5.
This finding indicates that the R480L mutation located in the CBS1 domain of CNNM2 blocks Mg2+ efflux activity. To evaluate the impact of R480L on the function of CNNM2, we examined CNNM2-dependent Mg2+ efflux and the binding of Mg2+-ATP. The simulation showed that the R480L mutation reduced the binding energy of Mg2+-ATP in the CBS module of CNNM2 by approximately 352 kcal/mol, which suggested that this mutation may cause significant impairment in the binding ability with Mg2+-ATP. The interaction of the side chain of Arg480 with the γ-phosphate of ATP was lost after being replaced by leucine. In this study, we have identified a novel de novo heterozygous mutation c.G1439T (R480L) in the CBS domain of the CNNM2 gene in a trio family with typical HSMR. In vitro studies showed that this CNNM2-R480L had a higher expression level than the CNNM2-wild type and proper localization to the plasma membrane. The Mg2+ efflux assay in mDCT cells revealed the blockade of intracellular Mg2+ efflux under Mg2+ depletion. The simulation model also predicted the attenuated interaction of this mutant protein with Mg2+-ATP. Several mutations in CNNM2 have been reported responsible for HSMR, with epilepsy and intellectual impairments. We performed in vitro studies for this R480L mutant. A higher CNNM2-R480L protein expression with proper cellular localization on immunocytochemistry images was found. In contrast to the previous findings, the membranous expressions of mutant CNNM2-R480L were higher than those of the CNNM2-wild type. CNNM2 contains transmembrane domains and a large cytosolic region including a cystathionine-β-synthase (CBS) domain and a putative cyclic nucleotide-binding homology (CNBH) domain. We then conducted a cellular Mg2+ efflux assay to evaluate the effect of R480L on CNNM2 transport activity. Consistent with previous studies, R480L impaired the Mg2+ efflux activity in the presence of significantly high intracellular Mg2+ under Mg2+ depletion.
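The Holm–Šídák correction named in the statistics section follows a standard step-down formula (for the i-th smallest of m p-values, adjusted p = 1 − (1 − p)^(m−i) with 0-based rank i, then made monotone). A minimal sketch of that textbook procedure, not the authors' software:

```python
def holm_sidak(p_values):
    """Step-down Holm-Sidak adjustment: sort p ascending, adjust the
    rank-i value (0-based) as 1 - (1 - p)**(m - i), then enforce
    monotonicity so adjusted p-values never decrease down the list."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = 1.0 - (1.0 - p_values[idx]) ** (m - rank)
        running_max = max(running_max, min(adj, 1.0))
        adjusted[idx] = running_max
    return adjusted

# Three hypothetical raw p-values from repeated group comparisons:
print([round(p, 4) for p in holm_sidak([0.04, 0.01, 0.03])])
# [0.0591, 0.0297, 0.0591]
```

Compared with plain Bonferroni, the step-down Šídák form is slightly less conservative while still controlling the family-wise error rate, which is why it is a common default for repeated-timepoint comparisons like the per-minute fluorescence tests above.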
It has been shown that Mg2+-ATP binding is required for cellular Mg2+ efflux, since mutations that abolish Mg2+-ATP binding prevent Mg2+ efflux. This is consistent with the reduced binding of Mg2+-ATP in the CBS1 of CNNM2 caused by the R480L mutation. However, we did not find a significant impairment of Mg2+ efflux in CNNM2 T568I in CBS2, unlike a previous report. We measured the relative intensity of Mg2+ Green by calculating the mean fluorescence of all cells over 300 gray levels, whereas Hirata et al. presented the relative intensity as the mean fluorescence of 10 cells. In fact, we found that the relative intensity of CNNM2 T568I on the Mg2+ efflux assay was higher than the wild type, although the difference was not statistically significant. Altogether, our study demonstrated that this R480L mutation resulted in the diminished binding with Mg2+-ATP and consequently led to impairment of Mg2+ efflux. This defective Mg2+ efflux might account for the hypomagnesemia in this patient with the CNNM2 R480L mutation. There were a few limitations in our study. First, we did not provide direct evidence from a crystal structure that the R480L mutation causes impairment of CBS dimerization. Second, isothermal titration calorimetry was not performed to confirm the diminishment of the interaction of Mg2+-ATP and the CBS-pair. However, the simulation model showed the reduction of the binding energy of ATP-Mg2+ in the CBS module of CNNM2 by the R480L mutation. We identified a novel de novo heterozygous R480L mutation in the CBS domain of the CNNM2 gene in a trio family with severe HSMR. This highly expressed CNNM2-R480L properly localizes to the plasma membrane but impairs Mg2+ efflux, likely through the attenuated interaction with Mg2+-ATP, resulting in the clinical manifestation of refractory hypomagnesemia."} {"text": "The most common concomitant PCHS combination was Kathon CG® + MI. Most patients (32.4%) belonged to the age group of 21–30, and skin symptoms affected mostly the limbs and face.
The most common other concomitant allergens were nickel, lanolin alcohol and balsam of Peru. Preservatives are important contact allergens in adult AD, mostly among young women. The rate of AD in the PCHS group and the rate of PCHS in the AD group is remarkable; thus, the role of PCHS should be highlighted in topical therapy and in the prevention of possible AD exacerbations. Atopic dermatitis (AD) is a chronic inflammatory disease characterised by an impaired skin barrier. The prolonged use of topical preparations containing medications, emollients, fragrances and preservatives may increase the risk of contact hypersensitivity (CHS). In the Allergy Outpatient Unit of the Department of Dermatology, Venereology and Dermatooncology of Semmelweis University, 5790 adult patients were patch tested between 2007 and 2021 with the European Environmental Baseline Series according to international standards. Among all the tested adult patients, 723 had preservative CHS (PCHS) and 639 had AD. Among the 723 PCHS patients, 68 (9.4%) had AD; the female to male ratio was 3:1 in this group. Out of 639 AD patients, 68 had PCHS (10.6%). In the AD-PCHS group, 83.8% had CHS to methylisothiazolinone (MI) (tested from 2014) and 36.8% to Kathon CG®. Atopic dermatitis (AD) is a common, relapsing, chronic inflammatory skin disease with scaly, pruritic, erythematous skin symptoms. It is characterised by skin barrier impairment in both lesional and non-lesional skin regions [2,3,4]. Patients with AD are treated locally, mostly with emollients, moisturisers, topical corticosteroids or calcineurin inhibitors. Several factors can modify the effectiveness of the therapy: stress, infections, lack of compliance and contact allergen exposures may trigger exacerbations.
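The overlap percentages reported for these cohorts follow directly from the stated counts; a quick arithmetic sanity check:

```python
# Counts reported in the text: 5790 patients tested, 723 with preservative
# CHS (PCHS), 639 with AD, and 68 patients in the overlap of both groups.
tested, pchs, ad, overlap = 5790, 723, 639, 68

print(round(overlap / pchs * 100, 1))  # 9.4  -> share of PCHS patients with AD
print(round(overlap / ad * 100, 1))    # 10.6 -> share of AD patients with PCHS
print(round(pchs / tested * 100, 1))   # 12.5 -> share of tested patients with PCHS
```

All three values match the percentages quoted in the text, which confirms that the 9.4% and 10.6% figures describe the same 68 overlapping patients viewed from the two different denominators.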
Because of the damaged skin barrier and the long-term local therapy, there can be an increased risk of developing contact hypersensitivity in AD patients [5,6,7,8]. Plenty of cosmetic and dermatological products contain not only ingredients which are helpful in caring for and treating AD, but fragrances and preservatives as well. An observation of a higher risk of CHS to preservatives among AD patients has been published, but the number of publications regarding preservative CHS and adult AD is very limited [9,10,11]. During our examination (2007–2021), 5790 consecutive adult (≥18 years) patients were patch tested with the European Environmental Baseline Series (EEBS). Out of them, 723 adult patients had PCHS and 639 had AD. Among them, data of adult PCHS AD patients (n = 68) were analysed. Patch testing took place in the Allergy Outpatient Unit and Laboratory of the Department of Dermatology, Venereology and Dermatooncology of Semmelweis University between 2007 and 2021 on adult AD patients. The EEBS allergens produced by Brial allergEAZE GmbH were used. Kathon CG® (methylchloroisothiazolinone/methylisothiazolinone [MCI/MI] 3:1) and methylisothiazolinone (MI) were used in the aqueous phase. Allergens of the EEBS are regularly revised and new ones can be included in the series [17,18,19]. Data of adult AD patients who had a positive patch test reaction to at least one of the following seven EEBS preservative allergens were evaluated: paraben, Kathon CG®, MI, formaldehyde, Quaternium-15, para-tert-butylphenol (PTBP) formaldehyde resin, and methyldibromo-glutaronitrile (MDBGN). Out of the total tested adult patient population (n = 5790), 639 patients (11.03%) had AD. Among the adult AD patients who had at least one CHS in the EEBS, 68 (17.4%) had at least one positivity to preservatives. Among the tested 639 adult AD patients, 68 had at least one positivity to preservatives (10.6%). From the 5790 tested patients, 723 (12.5%) had CHS to at least one preservative.
Out of them, 68 had AD (9.4% of the preservative CHS population). 1. Gender distribution: In this study, 75.0% of patients were female and 25.0% were male. The female to male ratio was 3:1. 2. PCHS and polysensitivity: In the AD-PCHS group, 83.8% were hypersensitive to MI (tested from 2014; n = 37), 36.8% to Kathon CG®, 16.2% to MDBGN, 11.8% to paraben, 7.4% to formaldehyde, 4.4% to PTBP-formaldehyde resin and 1.5% to Quaternium-15. Most patients had a single PCHS, whereas 17.65% had two PCHSs (most common combination: Kathon CG® + MI) and 2.94% had three PCHSs (Kathon CG® + MI + MDBGN/paraben). 3. Age distribution: Most patients (32.4%) belonged to the age group of 21–30, while CHS to Kathon CG®, MI and PTBP formaldehyde resin also occurred in the group of 61–70. In the age groups of 51–60 and after 71, PCHS was not typical. 4. Localisation of skin symptoms: The limbs were mostly affected in MI and MDBGN hypersensitive patients. Widespread skin symptoms were typical in MDBGN, MI and Kathon CG® CHS patients. 5. Other concomitant allergens: The most common were nickel, lanolin alcohol (13 p), balsam of Peru (11 p), propylenglycol and thiomersal (10 p), wood tar (9 p), fragrance mix II and thiuram mix (7 p), and mercury-chloride, cobalt and fragrance mix I (6 p). AD is a chronic inflammatory skin disease which affects about 1–10% of adults and about 15–20% of children worldwide. AD has a multifactorial background. Skin barrier dysfunction, immune system dysregulation, the disbalance of the skin bacterial microbiome and genetic factors are also included in the complex pathogenesis. Endogenous and exogenous components can also modify the prognosis of the disease. One of the most remarkable predisposing factors is a family history of atopic diseases. Mutations or impaired expression of the filaggrin gene, which contributes to the skin barrier, have also been reported. Lipid metabolism is also impaired, with decreased ceramide production. Trans-epidermal water loss is increased. All these factors weaken the proper skin barrier functions, leading to inflammation of the skin.
In this process, initially the type-2 T-helper cells (Th2) are crucial, with a subsequent Th2-Th1 cell switch in the chronic phase [21,22,23]. Not only are the background and the provoking factors of AD rather complex, but also the clinical characteristics show a dynamically changing tendency over time. In the past, it was believed that AD always begins at a young age (infancy or childhood) and that the skin symptoms then disappear with time. According to the recent concepts, AD is considered to be a life-long condition even if the patient has no actual active, inflamed skin lesions at all. It is reported that about 60–80% of the AD population has a very early-onset type of AD. Among them, it is estimated that about 60% of patients have a complete remission before two years of age. The other group of these patients, and those who develop AD between 2 and 6 years of age, have a higher risk of chronic and persistent AD. Data on adolescent-onset AD are limited. Most adult AD patients have flare-ups after a long symptom-free period or have had persistent AD since childhood. About 2% of AD patients have real adult-onset AD, and they are mostly women. The number of elderly patients with active inflammatory AD symptoms is low, although dry and sensitive skin stays lifelong. A real elderly-onset type of AD is very unusual but possible [5,6]. Although the initial onset of the disease can be different among AD patients, the clinical picture of AD is well characterised according to the actual age of the patients. Regarding the connection between age and skin symptom features, infantile, childhood, adolescent/adult and elderly AD types can be defined. The first lesions of infantile AD usually appear some months after birth and are characterised by acute, exudative skin lesions. AD most commonly affects the scalp and facial regions of newborns, and oozing and crusting are quite typical. The extensor surfaces of the limbs are also predilectional parts for AD in infants.
Childhood AD has a clinical feature of acute and chronic lesions as well. Xerosis and lichenification may also appear. Eczematous skin symptoms usually occur in the bends and in the perioral and periorbital regions. Involvement of the hands and the wrists should also be mentioned. Adolescents and adults often have AD skin symptoms on the head-neck region, and the flexural localisations and hands are the most typical places for the clinical distribution of the skin symptoms. Cosmetics and inadequate skin care may exacerbate the skin condition. Hand eczema is a leading problem for adult AD patients and is a great burden on the quality of life. Irritants and environmental contact allergens are typical provoking factors of it. After 60 years of age, widespread AD skin symptoms are uncommon but possible, and even erythroderma may occur. Certain other dermatologic diseases need to be excluded in order to diagnose AD properly in adults [5,6]. Having vulnerable skin due to the endogenous factors mentioned before, certain exogenous factors may trigger an exacerbation of AD. Among environmental exposures, irritants, aero/food/contact allergens, stress, certain microbes (Staphylococcus aureus, Malassezia and Trichophyton species) and pollutants can be mentioned. Studies on the adult AD population attribute a significant role to different kinds of contact and aeroallergens as important provoking factors underlying a sudden flare-up of the symptoms or therapy resistance. Allergic contact dermatitis (ACD) is a cell-mediated, delayed, type IV hypersensitivity reaction. During the sensitization phase, the person comes into contact with the allergen for the first time. Allergens are low-molecular-weight substances connected to a larger carrier (haptens). Haptens are engulfed by antigen-presenting cells migrating to the local lymph nodes, where the activation and proliferation of naive T-cells begins.
The formerly naive T-cells become allergen-specific T-cells which have a key role during the elicitation phase, since re-exposure to the allergen activates them, provoking inflammation and the clinical picture of ACD. The prevalence of CHS is reported to be up to 20%, and the incidence of ACD seems to be rising. In Europe, 27% of the general population has at least one CHS [20,24,25]. Regarding human data, compared to control skin, the absorption through AD skin is increased not only in case of lesional, but non-lesional skin as well. However, there are regional differences in the increased skin absorption. Forehead and genital skin is reported to be more vulnerable than forearm skin, for example. The severity of the AD and the presence of the filaggrin mutation are also factors which should be kept in mind when discussing skin absorption. The more severe and widespread the AD is, the more increased the skin absorption will be. The filaggrin mutation contributes to the higher risk of having a skin barrier impairment in non-lesional skin, too. In conclusion, according to the literature, patients with AD have nearly a twofold increase in skin absorption of different chemicals, including irritants and contact allergens as well. The most common contact allergens in AD are metals, fragrances, emollients, vehicles, dyes, antibiotics, topical antiseptics and preservatives [28,29,30]. The number of allergic contact reactions to different cosmetic products is reported to be increasing. The preservatives are the most common cosmetic contact allergens after fragrances, but emulsifiers, vehicle components, sunscreen agents and nail resins can also provoke CHS. In the general European population, 6.2% have PCHS. This fact highlights the importance of these allergens, since AD patients regularly use numerous personal care products besides topical medications.
These products may contain contact allergens [20,25,30]. Cosmetics, hygiene products and local therapeutics with high water content need chemical preservation. Seven of the most common preservatives are part of the EEBS: paraben, Kathon CG®, MI, formaldehyde, Quaternium-15, PTBP-formaldehyde resin and MDBGN [16,19,30].

Parabens are among the most commonly used preservatives around the world. Contact allergy to parabens has been reported since 1940. Nearly 35 variants of parabens are known, but methyl-, ethyl-, propyl- and butylparabens have become the most widely applied ones. Parabens have an antimicrobial spectrum covering gram-positive bacteria and fungi. The different variants are often combined with each other or with other preservatives. Foods, medications and cosmetics also contain these substances. Paraben is a rather common cosmetic ingredient in skin care products and can also be found in makeup products, powders, foundations, eye contour pencils, mascaras, lipsticks, lip glosses, hair dyes, nail cosmetics, toothpastes and mouthwashes [31].

Kathon CG® is a 3:1 mixture of methylchloroisothiazolinone (MCI) and methylisothiazolinone (MI). It has become a quite popular preservative because of its potent antimicrobial effect, which covers gram-positive bacteria, gram-negative bacteria, yeasts and moulds. Despite its advantages as a broad-spectrum antimicrobial agent, its contact sensitisation-provoking effect has also been published. Kathon CG® was introduced in the early 1980s as an industrial, household and cosmetic preservative. However, an increasing tendency in CHS rates was reported, and the first cosmetic-related case of Kathon CG® contact dermatitis was reported in 1985. MCI and MI have also been published as provoking factors of allergic contact dermatitis in humans, and animal studies showed that MCI was the main sensitiser of the mixture. A large variety of cosmetic formulations contain this chemical.
It is common in intimate hygiene cosmetics, hair care products and facial cleansers. Shower gels, shampoos, makeup products, moisturisers, body lotions, creams and hair cosmetics are also sources of Kathon CG® exposure [16,30,32].

MI was previously used for preservation only as a component of Kathon CG®. It was believed to be a less potent sensitising contact allergen than MCI. In the 2000s, it was allowed to be used as an individual preservative: first in industrial products, and later in household products and cosmetics. MI CHS was reported increasingly, and occupational cases and cosmetic-related studies were also published. Among occupational sources of exposure, cutting oils, glues, inks, paints and lacquers are remarkable. Among household products, glass cleaners, wood cleaners, laundry detergents, dishwashing liquids and fabric softeners are common products containing MI. MI is an important preservative in personal care products, hair cosmetics, soaps, deodorants, make-up products, nail cosmetics, aftershaves, moisturisers, self-tanning products, sunscreens and intimate hygiene products [16,30,33].

Formaldehyde is a gas, which is called formalin when in aqueous solution. It has biocide, preservative and denaturant functions in cosmetics. Aqueous formaldehyde solutions are known irritants, and their CHS-provoking effect has been published in both occupational and non-occupational cases. Cosmetic exposures to formaldehyde include shampoos, hair conditioners, hair dye products, soaps, detergents, bath oils, bath salts, personal care products, shaving creams, moisturisers, face masks, face wraps and nail cosmetics. In Europe, different kinds of regulations exist regarding the concentration of formaldehyde.
It is permitted for use as a preservative at a concentration of 0.2% in cosmetics and 0.1% in oral hygiene products, and products must be labelled “contains formaldehyde” if they contain more than 0.05% of this chemical [34,35].

Quaternium-15 is a formaldehyde-releaser preservative, which was first introduced as a part of the EEBS in 1984. It is a potent antimicrobial agent, effective even at low concentrations. The sources of Quaternium-15 exposure are quite variable, since both non-cosmetic and cosmetic products can contain it. Among non-cosmetic products, detergents, polishes, inks, paints, textile finishing products, joint cements and metalworking fluids can be mentioned. Cosmetic sources of exposure are baby shampoos, body lotions, soaps, detergents, bath salts, eyeliners, eyeshadows, eyeshadow removers, perfumes, hair conditioners, shampoos, hair sprays, hair dyes, face powders, lipsticks, primers, nail cosmetics, deodorants, face creams, face wraps and self-tanners [36,37].

The PTBP-formaldehyde resin is also a formaldehyde-releaser preservative. Contact allergies provoked by this chemical have been known for decades; the first case of contact dermatitis to PTBP-formaldehyde resin, published in the late 1950s, was caused by a shoe glue. Both non-occupational and occupational sources of exposure can be mentioned. Occupational CHS is reported to be less frequent; workers in the car industry and shoe manufacturing are affected. Most cases of CHS to PTBP-formaldehyde resin are non-occupational: mainly domestic glues, amputation prostheses, leather watch straps and neoprene orthopaedic knee braces belong to this group. However, PTBP-formaldehyde resin as a preservative can also be found in nail cosmetics and deodorants [39].

Methyldibromo-glutaronitrile (MDBGN) is a preservative and known contact allergen with mainly cosmetic sources of exposure.
Although MDBGN was banned in Europe not only from leave-on products in 2005 but also from rinse-off products, CHS to this preservative is still reported nowadays. Among cosmetics, MDBGN is used for preservation in shampoos, soaps, cleansers, body lotions, make-up products and make-up-removing wet wipes [40,41].

Due to the long-term usage of a large number of topical preparations (cosmetics and medications), AD patients are reported to be more likely to develop CHS not only to fragrances but to preservatives as well. However, this topic has not been researched in more detail, and data on PCHS in adult AD patients are very limited [9,10,11].

In our 15-year (2007–2021) retrospective study, we examined the clinical features of PCHS in adult AD patients. The rate of adult PCHS AD patients is remarkable in the total PCHS population (9.4%), in our overall tested adult AD population (10.6%) and in the AD population with at least one CHS (17.4%). According to our observations in adult AD patients, the most common preservatives were MI, Kathon CG® and MDBGN, despite the fact that MI was patch tested only from 2014. Among concomitant PCHS, the most common combination was Kathon CG® + MI. The most affected adult PCHS AD patients belonged to the age group of 21–30, and most skin symptoms were localised to the limbs and the face-neck region. According to our data, besides metals, the most common other concomitant EEBS allergens in the PCHS adult AD group were cosmetic-therapeutic ones. To the best of our knowledge, this is the first study focusing on the clinical characteristics of PCHS in the adult AD group. In conclusion, PCHS is important among adult AD patients. This finding highlights that adult AD patients are worth patch testing in cases of therapy resistance or worsening skin symptoms due to topical medications and/or personal care products.
Our results underline the importance of regular and detailed medical counselling about conscious skin care and about applying not only fragrance-free but also preservative-free products in this population.

Atopic dermatitis (AD), the most common inflammatory skin disorder, is a multifactorial disease characterized by a genetic predisposition, epidermal barrier disruption, a strong T helper (Th) type 2 immune reaction to environmental antigens and an altered cutaneous microbiome. Staphylococcus aureus (S. aureus) has been shown to exacerbate AD. In recent years, in vitro models of AD have been developed, but none of them reproduce all of the pathophysiological features. To better mimic AD, we developed a reconstructed human epidermis (RHE) exposed to a Th2 pro-inflammatory cytokine cocktail and S. aureus. This model reproduced well some of the vicious loops involved in AD, with alterations at the physical, microbial and immune levels. Our results strongly suggest that S. aureus acquired a higher virulence potential when the epidermis was challenged with inflammatory cytokines, thus later contributing to the chronic inflammatory status. Furthermore, a topical application of a Castanea sativa extract was shown to prevent the apparition of the AD-like phenotype. It increased filaggrin, claudin-1 and loricrin expression and controlled S. aureus by impairing its biofilm formation, enzymatic activities and inflammatory potential.
Atopic dermatitis (AD) is a chronic, highly prevalent skin inflammatory disease, with a microbial dysbiosis characterized by the prevalence of S. aureus. In healthy epidermis, the barrier function has several purposes: to protect against ultra-violet radiation; to maintain good hydration of the upper cell layers and, at the same time, limit body fluid and water loss; to provide an immune defense system against microbial infection; and, finally, to physically and chemically control what goes inside the skin. To ensure these functions, in the stratum corneum, the outermost layer of the epidermis, anucleated corneocytes are stacked and embedded in a lipid-enriched extracellular matrix.

Various studies have demonstrated that AD is characterized by a defect in the epidermal barrier [8]. Two signals have been described: the inside-out signal, where chronic inflammation, driven by T helper (Th)2 cytokines, including interleukin (IL)-4 and IL-13, secondarily alters keratinocyte differentiation and reduces the expression of several epidermal barrier proteins; and the outside-in signal, where epidermal barrier disruption allows for the penetration of allergens and microbes and triggers immunological imbalance [11,12]. In addition, IL-31 is described as being responsible for pruritus, but it is not directly linked to inflammation and pain; it is also involved in changes in stratum corneum extracellular lipids and, therefore, in the permeability properties of this layer.

As a consequence of both epidermal barrier impairments and changes in the (bio)chemical properties of the stratum corneum, the homeostatic microbiota balance of the skin evolves and does not prevent the growth of pathogens such as Staphylococcus aureus (S. aureus) [23]. Thus, S. aureus expresses superantigens, which are allergens and bind antigen-presenting cells and T cell receptors.

Over the years, various models of AD have been developed. Some have applied S. aureus onto reconstructed human epidermis (RHE). In the present work, we challenged such a model with S. aureus. We demonstrated that S.
aureus adhesion and proliferation were enhanced in the presence of the cytokines, with the bacteria inducing a strong inflammatory response, as evidenced by increased IL-8 release. We also used the model to evaluate the protective effect of an active ingredient from the leaves of Castanea sativa. This active ingredient was preselected from a library of thousands of potential ingredients by performing a preliminary evaluation of anti-inflammatory properties (quantification of IL-6 and IL-8 by ELISA) on monolayer cultures of normal human keratinocytes stressed by poly I:C, TNF alpha and interferon gamma. Among the anti-inflammatory hits, the Castanea sativa extract was selected for its capacity to inhibit IL-6 and IL-8 production in keratinocytes challenged by S. aureus ATCC35556, to inhibit their lipase activity and, in parallel, to stimulate filaggrin in differentiated keratinocytes.

In the present study, we developed a new 3D model of RHE reproducing AD by adding different concentrations of both Th2 cytokines and S. aureus. We used TNFα at 5 ng/mL and the Th2-related cytokines IL-4, IL-13 and IL-31 at two concentrations, 5 ng/mL each (C1) and 20 ng/mL each (C2), in order to attempt to reproduce a mild-to-moderate phenotype. We generated two in vitro models of AD using RHEs treated with either a cocktail of inflammatory cytokines in the culture medium or both the cocktail and topically applied S. aureus. Control RHEs showed a well-organized stratum corneum, a good organization of the stratum spinosum and granulosum, and a nice basal layer with well-shaped pavement cells. S. aureus adhesion and proliferation were measured over time on cytokine-treated RHEs; compared to the control cultures, inhibition became significant after 8 h of culture and lasted up to 24 h. We then measured the effect of various concentrations (from 0.015% to 5.5% (v/v)) of the Castanea sativa extract on the enzymatic activities released by S. aureus. The plant extract significantly inhibited the three enzymatic activities with various efficacies.
We observed a high inhibition of lipase activity (IC50 = 0.33%) and of plasminogen activation (IC50 = 0.035%), and a moderate inhibition of hyaluronidase activity (20% inhibition when the extract was present at 1.5%).

We also evaluated the extract on S. aureus-infected human keratinocytes and macrophages. A strong increase in IL-6 and IL-8 production by keratinocytes (HaCaT human cell line), and of IL-8 by macrophages, was induced by the bacteria, and this production was decreased when the cells were treated with the Castanea sativa extract. With the achievement of this key step, we evaluated the efficacy of the plant extract at 2% on a cohort of 22 volunteers with mild-to-moderate AD. The significant improvement in the barrier function and EASI score (decreased by 39% and 49%, respectively) confirmed the relevance of our model for evaluating ingredients that can be used to improve atopic-prone skin. The activity of the Castanea sativa extract may be explained by its flavonoid profile, particularly flavonoid glucosides, among them the major compounds miquelianin (quercetin 3-O-glucuronide) and astragalin (kaempferol 3-O-glucoside), as these two compounds are known as NF-κB pathway modulators [53,54,55].

CaVa is an aqueous extract of the leaves of the Castanea sativa tree. The organically certified leaves are harvested as a byproduct of chestnut production. Raw materials were ground, followed by aqueous extraction and filtration. The extraction process was optimized to standardize flavonol glycosides: the amount of astragalin, miquelianin and equivalents.

Keratinocytes (10^5 cells/cm²) were seeded on the membrane and grown in Epilife medium containing 1.5 mM CaCl2, HKGS and antibiotics by immersion for 2 days at 37 °C with 5% CO2.
The inserts were then elevated to the air–liquid interface, and the culture was continued for 12 days with supplementation with 50 µg/mL ascorbic acid and 10 ng/mL KGF. The fully supplemented Epilife medium was changed every 2 days. When necessary, RHEs were treated, from day 5, with an inflammatory cocktail composed of 5 ng/mL of TNFα and either 5 ng/mL (C1) or 20 ng/mL (C2) of each of the following cytokines: IL-4, IL-13 and IL-31. The RHEs were also treated with a Castanea sativa leaf extract at 0.04% from day 5 to 11 in a systemic way. Tridimensional RHEs were produced on a polycarbonate insert, as previously described.

The RHEs treated with C1/C1 + CaVa were challenged in parallel by S. aureus. For these experiments, a S. aureus strain obtained from a lesion on the thigh of a 44-year-old patient with AD was used; antibiotic treatment was stopped at day 10, and 130 µL of PBS containing 5 × 10² bacteria was seeded on top of the RHEs at day 11. After 1 h of contact, bacteria were washed twice with PBS. The RHEs were harvested immediately or 24 h later. Bacteria were counted using the tryptic soy agar plate method. Each condition was carried out in triplicate.

RHEs were formaldehyde-fixed and paraffin-embedded, and hematoxylin–eosin staining was performed on 5 µm-thick sections using a standard histochemistry technique. Immunofluorescence staining for the detection of filaggrin, claudin-1 and loricrin was carried out on 5 µm-thick sections using anti-filaggrin, anti-claudin-1 and anti-loricrin primary antibodies, followed by secondary antibodies conjugated to either Alexa 488 or Alexa 555. After several washes and counterstaining with 4′,6-diamidino-2-phenylindole (DAPI, Vector), sections were mounted with an anti-fading medium.
Observations were made using bright-field or confocal microscopy.

As described by Reynier et al., the RHE tissues were fixed with 2.5% glutaraldehyde and 2% paraformaldehyde in 0.1 M cacodylate buffer, pH 7.2, for 24 h at 4 °C, and post-fixed at 4 °C with 1% OsO4 and 1.5% K3Fe(CN)6 in the same buffer.

After treatments, RHEs were lysed with a specific lysis buffer to evaluate filaggrin, loricrin and claudin-1 expression. Protein concentration was determined by a BCA assay, and lysates were then kept frozen at −80 °C until use. All samples were adjusted to the same protein concentration, and an equal quantity of protein was loaded into each capillary. Target proteins were identified by a capillary electrophoresis-based protein analysis system using primary antibodies, and they were immunoprobed using a horseradish peroxidase-conjugated secondary antibody and a chemiluminescent substrate. The capillaries, containing a proprietary UV-activated chemical-linked coating, were obtained from ProteinSimple. All samples and reagents were prepared according to the manufacturer's recommended instructions. The resulting chemiluminescent signal was detected and quantified using Compass Software version 2.7.1, followed by a statistical analysis.

IL-8 was quantified using an AlphaLISA immunoassay kit according to the manufacturer's recommendations.

S. aureus (ATCC® 35556™) was seeded at 0.1 million/mL in tryptic soy broth (Biomérieux® TSB-F, index 42614) enriched with 2% glucose and incubated for 24 h at 37 °C. The Castanea sativa extract was applied from the beginning to the end of the incubations. Biofilm formation was observed using crystal violet staining and a real-time, label-free technique based on impedance recording using an xCELLigence® device, as described. For lipase and hyaluronidase activities, various concentrations, from 0.165% (v/v) up to 5.5% and from 0.5% up to 5%, of the Castanea sativa extract were added to the culture broth with S. aureus.
After 6 and 24 h, the number of bacteria was determined by densitometry at 600 nm, while supernatants were recovered by centrifugation of the conditioned broth at 1000× g for 5 min and then used for enzymatic evaluation. Lipase activity was measured by recording the specific fluorescence of a synthetic substrate. Hyaluronidase activity was evaluated through the quantification of residual hyaluronic acid using a turbidimetric method. For plasminogen activation, the Castanea sativa extract was incubated with S. aureus-conditioned broth at 0.015% to 0.15%; plasminogen activation into plasmin was then measured by recording the specific fluorescence of a protease substrate.

Keratinocytes (human HaCaT cell line) were seeded in a standard medium with fetal calf serum and incubated for 3 days at 37 °C. The growth medium was replaced by a standard medium containing the Castanea sativa extract for 24 h. The treatment medium was then exchanged for a medium containing a defined quantity of heat-killed S. aureus bacteria and incubated for 24 h at 37 °C. Cell viability was measured using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) test, while the release of IL-8 was measured in the cell culture supernatant using an ELISA method. The active ingredient was applied at 0.003% up to 0.03%. Human macrophages (U937 cell line) were activated by phorbol myristate acetate for 48 h and then treated with the extract; cell viability and cytokine release in the supernatant medium were assessed with the MTT test and ELISA methods, respectively.

A statistical analysis was performed using SigmaPlot software with Student's t-test, the Mann–Whitney test or a one-way ANOVA test. A p value < 0.05 was considered significant.

In recent years, in vitro models reproducing some features of AD have been developed by challenging epidermis with either interleukin cocktails or S. aureus extracts, or by silencing the expression of pivotal genes encoding epidermal barrier proteins. However, none of them reproduced all of the pathophysiological AD features.
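The group comparisons above were run in SigmaPlot. As a rough illustration of one of them, a two-sided Mann–Whitney comparison can be sketched with the Python standard library alone; the data below are invented (not the study's measurements), and the large-sample normal approximation stands in for the exact small-sample test:

```python
from statistics import NormalDist

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test using the large-sample normal
    approximation (illustrative only; for small samples an exact test,
    as implemented in dedicated statistical software, is preferable)."""
    n1, n2 = len(a), len(b)
    # U = number of pairs (x, y) with x > y, counting ties as 1/2
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mean = n1 * n2 / 2
    sd = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mean) / sd
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u, p

# Invented example: IL-8 release (arbitrary units) in control vs.
# extract-treated cultures; not data from the study.
control = [10.2, 11.5, 9.8, 10.9, 11.1, 10.4]
treated = [8.1, 7.9, 8.6, 8.4, 7.7, 8.9]
u, p = mann_whitney_u(control, treated)
print(u, p < 0.05)
```

The sketch only illustrates the computation; for the small group sizes typical of such assays, an exact test should be preferred.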
In this paper, we developed an AD-like model consisting of RHEs exposed to both a Th2 pro-inflammatory cytokine cocktail and S. aureus. This model mimics the impairments in the skin barrier observed in AD at the physical, molecular and immune levels. Moreover, our results strongly suggest that S. aureus acquired a higher virulence potential when the epidermis was challenged with inflammatory cytokines, thus later contributing to disease exacerbation. The relevance of this model was confirmed using an extract of Castanea sativa, which significantly improved components of the physical, microbial and immune epidermal barriers.

Thin, hair-like lichens (Alectoria, Bryoria, Usnea) form conspicuous epiphyte communities across the boreal biome. These poikilohydric organisms provide important ecosystem functions and are useful indicators of global change. We analyse how environmental drivers influence changes in occurrence and length of these lichens on Norway spruce (Picea abies) over 10 years in managed forests in Sweden, using data from >6000 trees. Alectoria and Usnea showed strong declines in southern-central regions, whereas Bryoria declined in northern regions. Overall, relative loss rates across the country ranged from 1.7% per year in Alectoria to 0.5% in Bryoria. These losses contrasted with increased length of Bryoria and Usnea in some regions. Occurrence trajectories on remeasured trees correlated best with temperature, rain, nitrogen deposition, and stand age in multinomial logistic regression models. Our analysis strongly suggests that industrial forestry, in combination with nitrogen, is the main driver of lichen declines. Logging of forests with long continuity of tree cover, short rotation cycles, substrate limitation and low light in dense forests are harmful for lichens.
Nitrogen deposition has decreased but is apparently still sufficiently high to prevent recovery. Warming correlated with occurrence trajectories of Alectoria and Bryoria, likely by altering hydration regimes and increasing respiration during autumn/winter. The large-scale lichen decline on an important host has cascading effects on biodiversity and function of boreal forest canopies. Forest management must apply a broad spectrum of methods, including uneven-aged continuous cover forestry and retention of large patches, to secure the ecosystem functions of these important canopy components under future climates. Our findings highlight interactions among drivers of lichen decline, functional traits, and population processes (extinction/colonization).

Thin, hair-like lichens have important ecosystem functions in boreal forest canopies. We analyse how environmental drivers influence changes in occurrence and length of these lichens in the lower canopy of Norway spruce (Picea abies) over 10 years in managed forests in Sweden, using data from >6000 trees in the National Forest Inventory. Alectoria and Usnea showed strong declines in southern-central regions, whereas Bryoria declined in northern regions. Our analysis strongly suggests that the decline was mainly driven by industrial forestry, in combination with nitrogen deposition. Our study highlights interactions among drivers of lichen decline, functional traits and population processes.

Our analysis is based on >6000 trees surveyed 1993–2012 in the National Forest Inventory (NFI). We focus on changes in habitat quality by monitoring lichens when the host is present. This allowed us to examine how global change drivers interact and affect extinction and colonization processes. We hypothesized the following: (1) that lichen occurrence declines due to industrial forestry, driven by short logging cycles and an unfavourable microclimate (i.e.
low light) in dense forests; and (2) that lichen occurrence and thallus length (the length of the vegetative body) increase and recover in southern-central regions in response to reduced anthropogenic N deposition since 1980.

The study area covers the whole of Sweden and spans latitudes 55–69°N (c. 1500 km). The hemiboreal zone, a transition between the boreal and temperate zones, covers most of southern Sweden. The temperate zone forms a narrow belt in the south and southwest; it has broad-leaved, deciduous trees but also hosts conifers. Sweden has 279,000 km² of forest land. Picea dominates by volume (41%), followed by P. sylvestris (39%) and Betula spp. (12%). The growing stock has increased by 106% since the 1920s, before which it had decreased during the late 19th century. This trend mainly results from efficient, production-oriented forestry, including cutting of unproductive stands, ditching, thinning, N fertilization and planting of seedlings on clear-cuts, followed by agriculture (8%; SLU). SO2 pollution has substantially decreased since 1970.

The NFI plots are located around the tract perimeter (the length of a tract side varies from 300 to 1200 m among regions). About 200 forest, vegetation and site variables are recorded on each plot. The NFI has a systematic program for quality assurance, including training, calibration, and control inventory from plots in productive forest land (see above). Formally protected forests were excluded as they were not measured in IP1. Hence, our sample consists of managed forests with active forestry, but includes a small proportion of voluntarily set-aside areas. Bryoria is dominated by a species with a southern tendency ((Ach.) Brodo & D. Hawksw.) and B. fuscescens (Gyeln.) Brodo & D. Hawksw. (northern tendency), while Usnea is dominated by U. dasopoga (Ach.) Nyl. (widespread), followed by U. subfloridana Stirt. We calculated mean annual temperature (TEMP) and mean total rain per year (RAIN) for each IP in all NFI plots.
RAIN was defined as the sum of precipitation on days with mean temperature ≥0°C, during which lichens can be active. We also extracted deposition of atmospheric inorganic N from gridded data (20 × 20 km) based on the Match model. Mean annual N deposition was calculated for 1998–2002 (no data available for 1993–1997) and 2003–2012. The variables are henceforth referred to by their abbreviations (Table …). Gridded climate data (4 × 4 km) were obtained from the Swedish Meteorological and Hydrological Institute. We first estimated the total number of live trees; differences were considered significant if the CIs do not overlap zero.

2.5.2 We calculated summary statistics for the explanatory variables (Table …) across all NFI plots.

2.5.3 The trees that were remeasured (~50%) allowed us to examine how changes in lichen occurrence correlate with changes in variables over time in the plots. A lichen is either present (P) or absent (A) on a sample tree in each IP, and thus there are four occurrence trajectories (outcomes): persistence (PP), absence (AA), colonization (AP) and extinction (PA). We used Chi-square tests to examine the association between trajectories and regions for each lichen. Extinction and colonization rates were calculated following Yalcin and Leroux. We computed correlation coefficients (r) between all variable pairs to identify potential associations among the variables. We then used multinomial logistic regression. The odds ratio OR_j for trajectory j versus Y = 0 for a one-unit increase in the explanatory variable is then given by

OR_j = [Pr(Y = j | x + 1)/Pr(Y = 0 | x + 1)] / [Pr(Y = j | x)/Pr(Y = 0 | x)],

where Pr(Y = j | x)/Pr(Y = 0 | x) is the odds at x that the trajectory is j, given that it is either j or 0. If there are other explanatory variables than x present in the model, these are kept fixed when computing OR_j.
The odds ratio OR_j is a measure of how much more likely or unlikely (in terms of odds) it is for occurrence trajectory j to be present among trees with a one-unit increment in an explanatory variable x as compared to those with no increment in this variable, while holding the other explanatory variables fixed. The odds ratio is significantly different from 1 when the 95% confidence band does not overlap 1. The analyses were done with R version 4.0.3.

Occurrence of Alectoria decreased from … to 0.173 ± 0.018 across the country, Usnea from 0.388 ± 0.020 to 0.342 ± 0.021 and Bryoria from 0.532 ± 0.016 to 0.506 ± 0.019 (Table …). A large share of Alectoria occurrences was lost in regions 3 (51.6%) and 4 (30.1%), whereas Bryoria decreased in regions 1 (13.6%) and 2 (6.3%). The geographic distribution of Alectoria was substantially reduced in region 3. Remeasured trees had higher lichen occurrence than new trees in IP2. Extinction and colonization rates varied among regions for all lichens, showing that trajectories varied by region.

3.3 Thallus length of Alectoria in IP1 (mean 18.1 cm across regions) was twice as high as that of Usnea (8.6 cm), with Bryoria intermediate (13.6 cm). Length of Bryoria increased slightly over time in regions 3–5 and Usnea in regions 2–4, whereas Alectoria did not change. MAT decreased towards the south, reflecting a higher degree of forest fragmentation. The individual NFI plots spanned a large range in TEMP (−1.7 to 8.6°C) and RAIN (335–1290 mm). All variables showed clear latitudinal trends from north (region 1) to south. The single-variable logistic models had highly significant (p < .001) slope coefficients for most variable transformations. Some models explained more variation in the occurrence trajectories than those for Alectoria (R² = .218) and Usnea.
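The odds-ratio interpretation above reduces to exponentiating the fitted multinomial-regression slope for each trajectory. A minimal sketch in Python (the coefficient values are invented for illustration and are not estimates from this study):

```python
import math

# Hypothetical fitted slope coefficients (beta_j) from a multinomial
# logistic model, one per occurrence trajectory, with Y = 0 as the
# reference outcome. Values are invented for illustration.
beta = {"persistence": 0.40, "colonization": -0.15, "extinction": 0.90}

def odds_ratio(beta_j, delta=1.0):
    """OR_j for a delta-unit increase in one explanatory variable,
    holding the other explanatory variables fixed: OR_j = exp(beta_j * delta)."""
    return math.exp(beta_j * delta)

for trajectory, b in beta.items():
    print(trajectory, round(odds_ratio(b), 3))
```

An OR_j above 1 (here extinction: exp(0.90) ≈ 2.46) means the odds of that trajectory against the reference increase with the variable; below 1, they decrease.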
Our key finding of significant declines in Sweden over 10 years is alarming, as these lichens had already experienced large-scale declines before the 1990s. The decline of Alectoria is remarkable, as the distribution of this lichen had once been centred in this region of Fennoscandia. (1) Declines were observed in all regions, and forestry is the only driver with significant impact across the country. (2) The estimated annual loss rates of the lichens (0.5%–1.7%) are comparable to the ~1% of the forest area that is logged each year, mainly by clearcutting of older forests. Moreover, the proportion of logged forests older than 120 years has increased since 2000.

Bryoria, with dark cortical pigments, suffers more from low-light conditions than Alectoria and Usnea, with pale pigments, as the former has a higher light compensation point for photosynthesis. N deposition appears to have contributed to the decline as well; high rainfall with a sufficient, but not too high, N concentration boosts growth of this lichen. Because Bryoria has dark pigments, it likely suffers more from respiration losses in low-light conditions during warmer autumns and winters than Alectoria and Usnea, contributing to its decline in dense forests and northern regions. The occurrence trajectories correlated with TEMP1 in the models, reflecting the lichens' macroclimate preferences along the latitudinal gradient. Ground lichens are unavailable under thick and/or hard snow and ice; hair lichens have historically been important for reindeer husbandry in Fennoscandia and for forest certification (Gustafsson & Perhans).

5 Alectoria and Usnea decreased in southern-central regions, where forests are more productive and denser, have shorter logging cycles and are subjected to higher N deposition than in northern regions, where only Bryoria decreased. Logging of forests with long continuity of tree cover, together with dispersal limitation, contributed to the steep decline of the old-growth-associated Alectoria.
Warming correlated with occurrence trajectories of Alectoria and Bryoria, emphasizing that poikilohydric canopy-living organisms respond directly to changes in microclimate as driven by climate change (De Frenne et al.). This is the first study of drivers and changes in dominant canopy lichens based on a large probability sample from a latitudinal gradient across the boreal biome. The rapid decline in only 10 years is a stark warning that hair lichens are gradually losing their ecological functions in managed boreal forests, with cascading effects on trophic interactions and ecosystem function. Our analysis strongly suggests that industrial forestry, in combination with N deposition, is the main driver of this decline.

The authors declare no conflict of interest.

P-AE designed the study and wrote the manuscript with help from all co-authors. BW extracted the NFI data and P-AE the other data, AG estimated lichen occurrence and length, ME performed the logistic regressions and P-AE performed the other analyses. ME, AG, GS and BW contributed to statistical aspects, while BGJ and KP contributed to ecological aspects. All authors contributed critically to the drafts and gave final approval for publication.

Supporting information: Figures S1–S2; Tables S1–S5; Appendix S1; Appendix S2.

Acute-on-Chronic liver failure (ACLF) is a clinical syndrome with high short-term mortality. Alcoholic ACLF is prevalent in European and American countries, while hepatitis B virus (HBV)-related ACLF is more common in the Asia-Pacific region. There is still a lack of a unified definition standard for ACLF, due to various etiologies and pathogeneses on different continents. Currently, liver transplantation (LT) is the most effective treatment for liver failure.
However, the shortage of donor livers is still a global problem, which seriously limits the clinical application of LT. Premature LT aggravates the shortage of liver resources to a certain extent, while excessive delay significantly increases the risk of complications and death. Therefore, this study reviews the current literature on LT in the treatment of ACLF and further discusses the challenges for ACLF patients, the timing of LT for ACLF, and the choice of the patient population.

Acute-on-chronic liver failure is characterized by extreme fatigue, rapidly deepening jaundice, coagulation disorder, and decompensated ascites, with or without hepatic encephalopathy. It progresses rapidly with a poor prognosis, and the short-term (28-day) mortality can reach as much as 23–74%. In Eastern countries, hepatitis B virus (HBV) reactivation and alcoholic hepatitis are common predisposing factors for ACLF, while in Western countries, infection and alcoholic hepatitis are the main causes. The definitions proposed by the European Association for the Study of the Liver (EASL) and the Asian Pacific Association for the Study of the Liver (APASL) differ.

So far, multiple prognostic scores have been used, including the Child–Turcotte–Pugh (CTP) score, the Model for End-stage Liver Disease (MELD) score, the MELD sodium (MELD-Na) score, the APASL ACLF Research Consortium (AARC) score, and the Chronic Liver Failure–Sequential Organ Failure Assessment (CLIF-SOFA) score. The CTP model, first proposed in 1964, is a commonly used classification standard for the quantitative assessment of liver function in patients with liver cirrhosis. The CTP score is a classical parameter widely used to evaluate liver reserve function and to assess the condition and prognosis of patients with liver cirrhosis.
However, a major drawback of the CTP score is that it is subjective, especially for ascites and hepatic encephalopathy, which makes it difficult to convert into objective grading. Moreover, because of its narrow grading window, it sometimes cannot accurately reflect the severity of the patient's condition. The MELD score is used to predict the short-term mortality of patients with chronic liver disease who are awaiting transplantation, and it also serves as the main tool for donor liver allocation in Eastern and Western countries. However, the MELD score draws only on creatinine, bilirubin, prothrombin time, and INR, which limits its practical clinical application in LT. Among MELD-derived models, the MELD-Na scoring model has been supported by a large number of studies: compared with creatinine, serum sodium can predict renal function injury earlier and more sensitively, and in a prospective study, Biggins et al. showed that serum sodium independently predicts mortality in patients awaiting transplantation.

In conclusion, the MELD, CLIF-SOFA, and AARC scores have been reported as suitable for predicting mortality in patients with ACLF. However, owing to inconsistent definitions of ACLF and the various etiologies between the East and the West, these models do not have stable and reliable predictive power. Whether these models can accurately reflect the clinical severity of ACLF requires further study, and the American Association for the Study of Liver Diseases (AASLD) also advises against relying solely on currently available prognostic scoring systems to predict outcomes and identify candidates for LT. Therefore, there is still no clear risk prediction system for judging the timing of liver transplantation in patients with advanced liver failure: if transplantation is performed too early, a donor liver is wasted, and if too late, the window for surgery is lost or the prognosis is poor.
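To illustrate how lean the MELD inputs are, here is a simplified sketch of the standard UNOS MELD formula and the 2016 OPTN MELD-Na adjustment (input clamps and caps follow the published allocation policy; this is an illustration only, not for clinical use, and current policy should be verified before any real application):

```python
import math

def meld(creatinine_mg_dl: float, bilirubin_mg_dl: float, inr: float) -> int:
    """Simplified UNOS MELD: 9.57*ln(Cr) + 3.78*ln(Bili) + 11.2*ln(INR) + 6.43."""
    # Per policy, lab values below 1.0 are set to 1.0 and creatinine is capped at 4.0.
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    score = 9.57 * math.log(cr) + 3.78 * math.log(bili) + 11.2 * math.log(inr) + 6.43
    return min(round(score), 40)  # score is capped at 40

def meld_na(meld_score: int, sodium_mmol_l: float) -> int:
    """OPTN 2016 sodium adjustment, applied only when MELD > 11."""
    na = min(max(sodium_mmol_l, 125.0), 137.0)  # sodium clamped to [125, 137]
    if meld_score > 11:
        adj = meld_score + 1.32 * (137 - na) - 0.033 * meld_score * (137 - na)
        return min(round(adj), 40)
    return meld_score
```

The sketch makes the review's point concrete: only three laboratory values (plus sodium in the MELD-Na variant) feed the score, with no term for extrahepatic organ failure or systemic inflammation.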
As the course of ACLF changes rapidly and is associated with high short-term mortality, it is important to identify patients for LT before the onset of multiple organ failure (MOF). At present, there is no optimal allocation system for liver transplantation, and the choice among standard medical therapy (SMT), LT, and palliative treatment is still an urgent problem to be solved. More prospective studies are needed to formulate the best prognostic criteria in the future.

The latest research based on the CANONIC database showed that most patients with ACLF grade 2 or 3 at days 3–7 died in the first month of follow-up.

Artificial liver support system (ALSS) treatment combined with LT in patients with HBV–ACLF improved short-term survival; ALSS treatment pre-LT is an independent protective factor for the 4-week survival rate after LT.

In 2019, the APASL updated the definition of ACLF, pointing out that one of the key characteristics of ACLF is reversibility. Over time, liver injury, fibrosis, and portal pressure gradually decrease and liver reserve improves in some patients. The update emphasizes that early judgment of death risk and timely clinical intervention can improve the prognosis of patients. This is also the golden period for bridging therapy before transplantation and for promoting the liver's capacity for regeneration and repair. In patients with no improvement at the time of transplantation, the ACLF score, CLIF-ACLF score, CLIF-OF (organ failure) score, and MELD score were significantly lower (P < 0.0001).

Liver transplantation includes living donor liver transplantation (LDLT) and deceased-donor liver transplantation (DDLT). LT may be the basic treatment for ACLF to reverse extrahepatic MOF.
The median transplant-free survival in patients with ACLF was 48 days, and previous guidelines have recommended LT if the expected 5-year survival exceeded 50%. As ACLF is a heterogeneous condition and follows a dynamic course, the decision to undergo LT should be individualized. The survival rate after liver transplantation is good, with a 5-year survival rate of 74–90%.

In conclusion, decisions on whether to undergo LT should be individualized. To evaluate ACLF effectively, the required model must be not only dynamic but also comprehensive. Prospective studies are still needed to further evaluate and determine the optimal timing and selection criteria for transplantation in ACLF.

The transplantation window of ACLF patients is very short, the short-term mortality is high, and the occurrence of organ failure is also very important for prognostic prediction. Monitoring inflammatory and regenerative markers during the "Golden Window" of ACLF progression will also help to judge the prognosis of patients. In future studies, we should pay attention to the impact of the inflammatory response on the survival and prognosis of patients. In addition, the process of ACLF is potentially reversible, so whether liver function can be compensated is very important for patients with liver failure. Early diagnosis and timely intervention in ACLF can protect surviving hepatocytes from further damage as much as possible, and thereby create favorable conditions for liver regeneration.

Generally speaking, outcomes of ACLF patients after LT are worse than those of patients with chronic liver disease alone. This may be due to the severity of the underlying liver disease, which is manifested as a higher MELD score, organ dysfunction, and systemic inflammatory response syndrome, as Zhang et al. propose.

Acute-on-chronic liver failure is a common liver disease syndrome, and treatment is mainly based on the prevention of organ failure and its associated complications. LT has a good effect in ACLF grade 1–2 patients, while for ACLF grade 3 without LT, the prognosis is poor. Given the rapid changes in the ACLF process, it is important to carry out comprehensive medical treatment and anti-infective preventive measures in time, and to enter the LT evaluation process as soon as possible in order to screen the most suitable LT recipients and determine the golden time for surgery. Based on the existing evidence and prospects, some improvements can be made in the ACLF field that may help to improve the management and prognosis of patients. At present, LT can improve the prognosis of patients with different degrees of ACLF. In the future, it is necessary to establish an LT risk assessment model and screen high-risk groups, so that patients can be managed more scientifically at admission and the correct intervention and monitoring measures carried out as soon as possible to obtain the best treatment effect, thereby improving the prognosis of patients with ACLF and the effectiveness and efficiency of LT. It is expected that, in the future, large-sample, multi-center prospective clinical research will guide the formulation of more accurate clinical diagnostic criteria and a prognostic scoring system to determine the need for and appropriate timing of LT in this patient population. The accumulation of long-term follow-up data will also contribute to the formulation of future clinical guidelines.

XL performed literature searches and prepared the initial draft. LZ, CP, and ST reviewed and helped to finalize the manuscript.
All of the authors read and approved the manuscript."} {"text": "This research aims to understand the influence of bodily practices, especially gymnastics, on the construction of representations of a healthy body conveyed in a Brazilian women's magazine in the 1940s and 1950s. We use records from the Jornal das Moças magazine for an analysis based on the theoretical and methodological assumptions of cultural history. The results show that gymnastics for women was linked to body maintenance and used as a tool for establishing a body standard, thus disciplining and shaping the construction of women's health concepts, determined by the aesthetic bias of that period: a slim body as an ideal standard of beauty and health.

This study aims to understand the influence of bodily practices, especially gymnastics, on the construction of representations of a healthy body conveyed in a Brazilian women's magazine in the 1940s and 1950s. The records from this magazine are analyzed in view of theories of the body and the methodological assumptions of cultural history, following Chartier. Bodily practices have approached the meanings of human movement actions beyond those conceived in their biological sense of adaptation, and we aim to advance in another direction while considering the current discussions about the complexity of conceptualizing them.

Currently, the greater sharing of the space for the production of the discourse that guides the role of each individual in society, previously monopolized by old-school, military and religious pedagogical forms, provides a course for change, expanding the customs and institutions that lead to social rules of behavior. According to Vigarello and Bauman, appearance, behavior and silhouette are increasingly incorporated into personalities and personal particularities, so that there is a more intimate relationship of existence between what I am and what my body is.
Within this process, the changes of being a woman in contemporary times are involved with the changes that cross her body, as Vigarello observed.

The conceptions of women's bodies and their representations within bodily practices can be presented as a set of elements that expose the influences of cultural formation, in which social concepts are enabled to build standards of the fundamental structures of the organization of life. Thinking about health in a context of transformation enables us to understand how it can reflect the intentions of a social situation and its modes of production. On the other hand, it also brings evidence of variations when considering power relations that stratify and hierarchize certain groups, discriminating them within the same space on the basis of gender, race, economic conditions, religion and other aspects.

Throughout the structure of the social system of the 20th century, in the West, health was observed preponderantly through the bias of positivist rationality, thus denouncing incorrect techniques for the use of the body, composed essentially of biological matter in which the muscles were its driving machine. At the same time, in Brazil, political projects on women's education via body control were formulated, which had the support of the media press. During the period called Estado Novo (1937–1945), the Brazilian national project ruled by President Getúlio Vargas had women as an essential and central element for the betterment of the Brazilian people.

The increase in the working mass in precarious conditions of hygiene, health and housing, resulting from industrialization and urbanization, made women's health a source of concern in several countries due to the accelerated growth of the world population. However, the emphasis remained on reproductive aspects.
The woman that the state considered deserving of attention was the one in her fertile period.

It is possible to find in the Brazilian press (newspapers and magazines) signs of the construction of an ideology of that time, including aspects related to modern civility and a new standard of aesthetics. Among the bodily practices assigned to women, gymnastics stands out compared to others. We can observe a representation of gymnastics different from those known today. Some of these representations are linked to the concept of body practices of the period, modernly called "gymnastics", which, when directed to women, had specific objectives and constructions. That is, in that period, it was common to call "gymnastics" the body practices and/or physical exercises that were done daily, such as outdoor walks and morning stretching practices, among others.

During the 19th and 20th centuries, gymnastics developed in the West as an integral element of the modernization project, thus indicating a reorganization of the interpretation of reality. In this social dynamic, conceptions of the human body were reconfigured by a distinct understanding of the meaning of relationships in the world.

There is diversity in the way content is analyzed based on the discourses in the documents gathered for this research, as well as in the variety of formats in which the information contained in them was recorded. Establishing perspectives that lead to a methodology is fundamental in order to focus on reasons and results consistent with the theoretical framework.
The analyses of the sources accessed for this historical research were submitted to a process of identifying whether there was correspondence between the information; that is, whether the narrated facts communicate with extratextual aspects of the period, in order to situate the events amid the interrelationships with the social phenomena formed in the modern conjuncture.

One of the main sources used by historians to identify traces of the past is periodicals, specifically newspapers and magazines, according to Barros. The reports of a single newspaper are not enough to explain the complex network that underlies the emergence of bodily practices linked to the health of the female body, but, as the theory of cultural history demonstrates, they can constitute a historical source presenting evidence of a particular interpretation of the phenomenon in question.

Jornal das Moças, a magazine published every two weeks in Rio de Janeiro that circulated in Brazil from 1914 to 1965, was, according to its founders, designed to: "Cultivate, by illustrating, and at the same time delight in the charming spirit of the Brazilian women, to whom this magazine is dedicated, will be its, if not sole scope, at least its most lively and ardent concern". The archive with all the issues of Jornal das Moças is available in the Digital Newspaper Library of the National Library of Brazil. However, only the issues published between 1945 and 1950 were consulted, since this was the magazine's best-selling period, that is, the period with the widest audience reach, thus establishing the time frame of the research.

Using the search system available in the Digital Newspaper Library of the National Library of Brazil with the term ginástica (gymnastics), we obtained 52 occurrences of this bodily practice within the chosen period. After this procedure of collecting the articles, in order to extract the information from the referred issues of the magazine, the articles were organized and stored as files in folders divided by common themes and periods. The files were named by author, title of the report, date, issue of publication and page number, among other complementary information important for the elaboration of the source reference.

Of this total (52), we found that 29 occurrences were linked to health, either implicitly or explicitly. When implicit, the word "gymnastics" was not registered literally in the text, but the narrative referred between the lines to the health theme. In the explicit form, the terms "gymnastics" and "health" were registered directly in the same sentence. After this step, we considered that six reports met the criteria established for the selection of information: the report had to contain aspects related to the use of some body technique in order to give meaning to human movement. With this, the heuristic construction began, using two procedures approached by comparative history.

Observing political life in light of the articulation of modern states, power came to be characterized by the consolidation of the promotion of social welfare through investment in life, and no longer through death as presented by the traditional sovereignty regime, forming what Foucault (p. 131) calls biopower, with "the administration of bodies through the calculating management of life". These movements would promote new breakdowns of the different spheres of human social life, thus accentuating more demanding and precise rules of body appearance, making body fat, for example, also a reflection of personality and even associated with ways of ordering one's thoughts.

Among the devices that bring society closer to a new world of modernities and new social ruptures, in this research we focus on women's magazines. Such magazines began to be published at the end of the 19th century with the intention of catering to readers' new perceptions of the world and, consequently, new representations of the health of women's bodies. It is worth recalling the statement by Barbosa et al. (p. 24): in the mid-1940s, the entire world was undergoing a radical transformation, including the reorganization of the center of economic power among nations with the end of the Second World War. The United States of America and the Soviet Union would form a bipolar order of command over government systems, through which structures would be set up to shape advantages that could favor their models of life management. It was during this period that the United Nations (UN), the United Nations Children's Fund (UNICEF), the Food and Agriculture Organization of the United Nations (FAO/UN), the United Nations Educational, Scientific and Cultural Organization (UNESCO), the International Labour Organization (ILO) and the World Health Organization (WHO) were created, which would serve as legal, political and ideological instruments for an internationalism necessary to the interests of free-investment capitalism, enforcing a certain type of planning for postwar life.

Distinct cultures are propagated among countries through processes of transference and reception of body standards. An example of this cultural dynamic was evidenced in the article from the Jornal das Moças entitled 3 Meses de Ginástica em Quatro Páginas (Three Months of Gymnastics in Four Pages).
Moreover, in the excerpt mentioned above, we can identify a relationship between the meanings proposed for physical culture and the manifestation of the concept of culture built within the European context during the 18th and 19th centuries. In that period, the term was configured as a representation linked to human development, which sought alignment with the aspect of civilization through a historical reconstruction that denoted shades of humanity's advance toward greater freedom from the bonds of irrationality.

In the Jornal das Moças magazine, the expression "physical culture" appears in the texts from the 1930s and maintains its incidence until the beginning of 1960, referring to the idea of the mind–body dichotomy and indicating the importance of understanding that, in order to achieve proper mental development, one must exercise the body. Thus, exercise through gymnastics would be a way to improve the power of psychic control over movement, which, when performed correctly, would generate alignment with the expected aesthetic standard; the correct posture of the body is perceived through the symmetry of its parts. Furthermore, in the case of women, exercise through gymnastics should express greater control over the body (understanding that the psyche is part of that body) and not the demonstration of some ability.

Physical culture is related to a specific knowledge of how to control the body by directing its movements to the designs of modern society, and gymnastics is one of its composing elements. These notions cross over into the conceptions of bodily practices aimed at women, presenting themselves as a tool for maintaining beauty, which can be conquered through willpower, rigor and discipline.

However, as part of the culture, the construction of images of what it is to "be feminine" appears in different spaces and times under different forms, strategies and discourses.
The concern with a healthy appearance was bound up with having a youthful appearance, which could be maintained, acquired and corrected through gymnastics, with the added promise of a rejuvenated body, as can be seen in this publication on gymnastics for the face: "Five minutes a day of this highly beneficial gymnastics, in every way, completes this wonderful new treatment, which makes the face fresher, younger, giving it hardness and normalizing the secretions of the skin, which gives the features and muscles the necessary elasticity to perform more graceful gestures".

History identifies that the traditional requirement of beauty refers to the description of a slender, perfect body, and new artifices can be used to correct its flaws. Sometimes the representations of health and beauty appear as synonyms in the reports, which leads women to a total dissatisfaction with their bodies. Yoga, for example, is represented by the magazine as a practice that leads to the "conservation of health and beauty" (p. 46).

Throughout the 19th century, Western medicine sought its organization in the face of modernity through the precepts of the paradigms of a positivist science, in which knowledge about the body emerged from clinicopathological anatomy. Thus, the human body came to be constituted as if by right of biological matter, being a space of origin and distribution of disease. It is therefore noticeable that there was a growing concern with women's thinness as a health component, but also as an aesthetic element revealed by fashion magazines that addressed getting fat as one of women's greatest fears. The female body was required to occupy public spaces by being endowed with mobility through a slim silhouette, which took into account thin, muscular, fat-free limbs with tapered lines.
Bodily practices became the main tool for achieving an ideal healthy body for women in Brazil, with the encouragement of the press. In light of this, paradigms about women's bodies shifted in the 1940s and 1950s. Performing bodily practices was a requirement for those considered modern high-class women entering life. One example is an article entitled "Código da Linha" (Code of the Line) published in Jornal das Moças.

In this scenario, the applied concept of "female gymnastics" comes from the medical-hygienist perspective. In this sense, gymnastics has as its guiding principle the strengthening of women's bodies, because it was considered that strengthened women would have male children endowed with strength, who would be responsible for building and protecting the country. However, this discourse on body standards, present in Jornal das Moças, socially builds the idea that being overweight means accepting the failure of evolution, thus pushing women away from the idea of being in progress in life, prevented from experiencing the world because of a silhouette that goes against the dominant culture, and thus making them strangers even to themselves.

Since its beginning, the 20th century has been marked by many discussions about a possible universal concept of health. Influenced by the organizational policies of nations that sought, through greater state intervention, a way to systematize control over the bodies of the population, with the aim of constructing and preparing knowledge for the insertion of individuals into the new dynamism of modern life, the political scenario of the 1940s experienced the effervescence of these ideas.
This is because at the end of that decade the World Health Organization (1948) was established, which would define health as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity", thus raising health's scope in the social structure to a level not yet grasped by many nations. This concept certainly promoted a reorganization of the discipline over bodies, by placing health at an even more vigilant, or even unattainable, level, in which bodily practices are the living manifestation of the transformations.

The representation of a healthy female body that the bodily practices were intended to create would not be historical if it were not noticed as an action of deviation of meaning, which only becomes possible when articulated with a social place and a scientific operation related to cultural models or contemporary theorists. Therefore, the writing that articulates the facts of the past is given by the presence, in the present, of problems that lead us to question the reasons why reality is being juxtaposed.

The evidence found in Jornal das Moças suggests that gymnastics in the analyzed period had different representations, but when aimed at women it was linked to the maintenance of bodies and beauty. In addition, it was used as a tool to achieve a body standard, aiming to regulate, standardize and discipline women's behaviors. Therefore, we consider that gymnastics played an important role in the construction of women's health concepts in Brazil in the 1940s and 1950s.

Every narrative comes from a context, seeks bases in ideas and elements that pursue specific views and perspectives of a time, builds symbolic connections that are convenient as a way of interpreting reality, and seeks meaning from the prior belief that the past was exactly what is presented in the present, because no human discourse arises out of nowhere and all discourse is surrounded by intentions that circulate subjectivities.
In light of the above, it is possible to understand how bodily practices, in particular gymnastics, created representations of healthy bodies in the magazine Jornal das Moças through the aesthetic bias of that period: a slim and beautiful body. Conceptions of healthy bodies were built through the narratives of Jornal das Moças in the country. After all, when the women's press was established in Brazilian society, it became an important vehicle of communication and of injunctions that disciplined and educated women both culturally and socially.

A limitation of this study could be its limited ability to pursue the particularities of the phenomenon studied within the Brazilian context, because for this it would be important to select other sources that could present the modifications and preferences formed through a dialogue with the local culture. This is a gap that this research leaves, but one that does not disqualify the contribution made by this study to research on women, gymnastics, health and body practices, since it is in line with the rigor of historical research as described by Veyne.

Among the gender issues that we can address in future studies is the recognition that, when analyzing the representations of the Jornal das Moças magazine, we are talking about the reality of hegemonically white, literate, upper-class Brazilian women, because these women were the magazine's target audience.
The phenomena that permeate the issues of black and lower-middle-class women, for example, are different and may be presented differently in reports, constituting an interesting aspect for analysis and deeper investigation.

The inductions in the disseminated texts proposed an understanding of the achievement of the ideal body and of the ways in which women could reach this "model," placing discipline, the will to achieve this goal, and the responsibility of maintaining beauty and health as key elements. We can conclude that the notions of the body formed from the characteristics linked to gymnastics in the cultural-historical perspective of the period were mixed and went through a diversity of scenarios, in which both the rhetoric and the narratives acted directly on women's bodies and helped to construct gender stereotypes."} {"text": "Improved breastfeeding practices have the potential to save the lives of over 823,000 children under 5 years old globally every year. The Baby-Friendly Hospital Initiative (BFHI) is a global campaign by the World Health Organization and the United Nations Children's Fund, which promotes best practice to support breastfeeding in maternity services. The Baby-Friendly Community Initiative (BFCI) grew out of step 10, with a focus on community-based implementation. The aim of this scoping review is to map and examine the evidence relating to the implementation of the BFHI and BFCI globally.

This scoping review was conducted according to the Joanna Briggs Institute methodology for scoping reviews. Inclusion criteria followed the Population, Concepts, Contexts approach. All articles were screened by two reviewers, using Covidence software. Data were charted according to: country, study design, setting, study population, BFHI steps, study aim and objectives, description of intervention, summary of results, barriers and enablers to implementation, evidence gaps, and recommendations.
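The charting fields listed above amount to a simple record schema. The sketch below is purely illustrative (the field names and the example entry are ours, not an actual extraction form from the review):

```python
from dataclasses import dataclass, field

@dataclass
class ChartedStudy:
    """Illustrative data-charting record mirroring the fields listed above."""
    country: str
    study_design: str
    setting: str
    study_population: str
    bfhi_steps: list[int] = field(default_factory=list)  # which of the Ten Steps are addressed
    aim: str = ""
    intervention: str = ""
    results_summary: str = ""
    barriers: list[str] = field(default_factory=list)
    enablers: list[str] = field(default_factory=list)
    evidence_gaps: str = ""
    recommendations: str = ""

# Hypothetical example entry for a community-based BFCI study:
study = ChartedStudy(
    country="Kenya",
    study_design="cluster RCT",
    setting="community",
    study_population="mother-infant pairs",
    bfhi_steps=[10],
)
```

Structuring the charting form this way keeps every included article comparable across the same dimensions, which is what makes the later descriptive synthesis possible.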
Qualitative and quantitative descriptive analyses were undertaken.

A total of 278 articles were included in the review. Patterns identified were: i) national policy and health systems: effective and visible national leadership is needed, demonstrated through legislation, funding and policy; ii) hospital policy is crucial, especially for maternity and neonatal care settings becoming breastfeeding friendly; iii) implementation of specific steps; iv) the BFCI is implemented in only a few countries, and government resources are needed to scale it; v) health worker breastfeeding knowledge and training need strengthening to ensure long-term changes in practice; vi) educational programmes for pregnant and postpartum women are essential for sustained exclusive breastfeeding. Evidence gaps include study design issues and the need to improve the quality of breastfeeding data and to perform prevalence and longitudinal studies.

At a national level, political support for BFHI implementation supports the expansion of Baby-Friendly Hospitals. Ongoing quality assurance is essential, as is systematic (re)assessment of BFHI-designated hospitals. Baby-Friendly Hospitals should provide breastfeeding support that favours long-term healthcare relationships across the perinatal period. These results can help to support and further enable the effective implementation of the BFHI and BFCI globally.

The online version contains supplementary material available at 10.1186/s13006-023-00556-2.

Globally, improved breastfeeding practices have the potential to save the lives of over 823,000 children under 5 years old every year. One aim of the Global Strategy for Infant and Young Child Feeding is to ensure that every maternity facility practices the BFHI's 'Ten Steps to Successful Breastfeeding'.
Hospitals or maternity facilities can be designated "Baby-Friendly" if they pass an external examination that verifies that they comply with the Ten Steps to Successful Breastfeeding and with the 'International Code of Marketing of Breastmilk Substitutes' and subsequent relevant World Health Assembly resolutions (the Code) (see Table ). The Baby-Friendly Hospital Initiative (BFHI), launched by WHO and the United Nations Children's Fund (UNICEF) in 1991, has been implemented globally in over 150 countries and is a pillar of the WHO/UNICEF Global Strategy for Infant and Young Child Feeding. The Baby-Friendly Community Initiative (BFCI) is an extension of the 10th step of the Ten Steps to Successful Breastfeeding and of the BFHI overall. Its focus is on community-based implementation; the 10th step and associated separate initiatives are often critical to support breastfeeding mothers beyond the initial days after giving birth. While almost all countries in the world have implemented the BFHI at some point in time, the BFCI appears to have been taken up far less widely. There have been a number of attempts to review the literature on the BFHI. This scoping review asks the question: what is known about the implementation of the BFHI and the BFCI globally? The aim is to map and examine the evidence relating to the implementation of BFHI and BFCI globally. Review objectives include: to provide an overview of interventions and/or approaches to implement the BFHI/BFCI; to identify barriers and enablers to implementation of the BFHI/BFCI; and to identify knowledge gaps in relation to research on the BFHI/BFCI. Scoping reviews map the range of evidence on a particular topic, identify gaps in the knowledge base, clarify concepts, and document research that informs and addresses practice.
A pilot search of the literature and a scoping exercise were undertaken by our research team to examine empirical studies that have focused on the implementation of the BFHI in Africa. A three-step search strategy, as documented in the JBI manual, was followed. Step one was a limited search for peer-reviewed, published papers on the PubMed and CINAHL databases. An academic research librarian was consulted, and an analysis of the words contained in the titles, abstracts and index terms generated a list of keywords. Search terms were then piloted to assess the appropriateness of databases and keywords. The second step, conducted with the librarian, involved refining the search terms. The third step was to examine the references of key articles identified for full-text review that met the inclusion criteria. Draft inclusion and exclusion criteria were tested on a sample of 15 articles to check the criteria's suitability. The following databases were selected in consultation with the academic librarian: PubMed, Embase, Web of Science, Global Health and CINAHL. The timeframe for the search was from when the first article was published in a given database, which was 1993, to September 2022. Inclusion criteria were guided by the Population, Concepts, Contexts approach, as shown in Table . All research designs were included: qualitative, quantitative and mixed-methods studies. Quantitative studies included both experimental and observational study designs. Qualitative studies included designs such as grounded theory, ethnography, phenomenology, action research and qualitative descriptive design. In addition, all types of reviews of empirical research were included. Grey literature was not included, due to the large number of results that were obtained.
A full list of search terms is detailed in Additional file . Inclusion criteria: articles that: describe the implementation of the BFHI and/or BFCI; evaluate the BFHI (any of the 10 steps) and/or the BFCI; focus on experiences of accessing/delivering supports and services through the BFHI and/or BFCI; focus on breastfeeding outcomes as a result of the BFHI and/or BFCI; focus on any country or group of countries; are in the peer-reviewed literature; are empirical studies; or are literature reviews of any type. Exclusion criteria: articles that: focus on breastfeeding initiatives, supports or interventions in the hospital and/or community other than the BFHI/BFCI; are set in a baby-friendly hospital but with study aims/objectives not focused on the implementation of the BFHI/BFCI; are published in a language other than English; or are commentaries, opinion pieces, editorials, evaluations, theses, book chapters or conference proceedings. The screening process consisted of two phases: i) title and abstract screening; ii) full-text screening. In stage i), all titles and abstracts were screened by two reviewers in pairs. Screening was undertaken in Covidence and duplicates were removed. Where there was disagreement between reviewers as to whether an article should be included or excluded, a third reviewer arbitrated. At the full-text screening stage, the same process was undertaken. The original search was undertaken in 2020, an updated search was undertaken in 2022, and the overall results are shown in Fig. . A data charting form was developed, piloted by all members of the team on five articles, amended, and applied to all the included articles, according to the JBI framework [17]. It was originally planned to use the PAGER methodological framework to analyse the data. Findings from the review will be prepared for stakeholders who have expertise in relation to the BFHI and the BFCI.
These will include researchers, practitioners and policy makers at the global level and at WHO regional levels. The majority of studies (n = 210) focused on the BFHI overall/all steps, nine focused on the BFCI, and 25 focused on becoming BFHI/pre-BFHI (see Table ). Most studies (n = 266) were set below the national level, with five focused at the national level. In terms of study design, 46 were qualitative, 139 quantitative, six mixed methods, 28 reviews, and 39 intervention studies (see Table ). Many studies focused on mothers and women (n = 144), with 60 studies focused on health professionals of various kinds. Frequently examined steps included the creation of a breastfeeding-friendly environment and the removal of formula advertising from the hospital and the neonatal intensive care unit (NICU) (step 6), and the creation of a support system after the mother's discharge (step 10). Studies that related specifically to infant formula marketing at hospital level were from Canada and the UK. Overall, successful BFHI implementation was associated with higher rates of initiation and continuation of breastfeeding across studies [76-78]. Studies often measured which of the Ten Steps were fulfilled and concluded that the more Baby-Friendly hospital practices mothers met, the better the breastfeeding outcomes. This was found in Malawi and Hong Kong. Some studies highlighted the importance of specific steps. Step 1 was found to be an important factor for exclusive breastfeeding duration in Turkey. Step 4, specifically skin-to-skin contact, was the focus of several studies, such as in Brazil. Hospital lactation policies, high rates of surgical deliveries and nurses having limited education in breastfeeding-initiation best practices were noted as barriers to best practices related to step 5, breastfeeding initiation, in Colorado State, USA. A longer length of stay in hospital was seen as important to breastfeeding in Japan.
Where studies measured support for the 10 steps, there was considerable variation; for example, least support (28%) for step 1 and greatest support (93%) for step 3. Inconsistencies in implementation of the other steps were common in a study in the USA. There was limited evidence about the Baby-Friendly Community Initiative (BFCI), with just nine studies in total, focusing on Kenya, Italy and Turkey. In Italy, counselling or education being provided concurrently in various settings was seen to be most effective. Many different sorts of educational and training interventions were covered in the research. These were focused on improving knowledge of breastfeeding, increasing support for the BFHI and improving attitudes towards breastfeeding [108]. Knowledge of the BFHI varied across professional groups. The least understood steps among medical and nursing students were steps 1, 3, 8, and 10 in India. Time pressures, out-of-date practices and a lack of commitment to the BFHI by experienced midwives were found to have a major impact on newly graduated midwives seeking to develop their breastfeeding support skills, in an Australia-based study. Enablers: Many studies emphasised the need to monitor ongoing learning. Some studies of interventions focused on single professions, for example nurses in Singapore. Continuing education and lay breastfeeding supporters were also highlighted. Barriers: Overall, a lack of health professionals' education was found to be a barrier to BFHI implementation across many studies. Inadequate training of health staff and a high volume of patients was a barrier in Pakistan.
Key enablers for BFHI implementation were identified in studies from the USA, Canada and Taiwan; these included networks of support and lactation consultants. Various barriers, such as the medicalisation of childbirth and inter-professional struggles, were highlighted as hindering inter-professional teamwork and collaboration and, therefore, the implementation of the BFHI and its integration into practice in Austria. Many educational interventions for pregnant and lactating women were highlighted. Some interventions were step specific, such as breastfeeding education in the prenatal setting (step 3). Other enablers included: providing sufficient information for mothers and the public about the BFHI, the benefits of breastfeeding, the disadvantages of not breastfeeding, and the benefits of going to accredited facilities; mothers who gave colostrum as the first food had more frequently received lactation counselling support than mothers who gave prelacteal foods (Turkey); a client-focused practice development approach was found to be effective in Australia; viewing short videos increased breastfeeding knowledge, particularly about hand expression, and increased confidence in both skills (UK and China); and the greatest improvements in breastfeeding were seen when counselling or education were provided concurrently in various settings. Barriers: A study in Cyprus found that a large proportion of pregnant mothers received limited information and/or education on the benefits of and ways to achieve exclusive breastfeeding. Cultural beliefs of mothers, their family members and others were seen as important across studies, as a wider context for BFHI implementation.
Many studies identified gaps in the existing evidence. Patterns in study design and data collection were as follows. Study design issues: the importance of conducting studies with a control group and the need to carry out more experimental studies were highlighted multiple times, including by Ducharme, Mäkelä, and Shing and colleagues [145-151]. Data issues: there was a stated need to improve the quality and validity of the collation of breastfeeding data and to perform prevalence studies. The limited size of samples was noted several times [155, 157-160]. Within many studies, recommendations for practice and policy were highlighted, across different levels and settings. At a national level, political enforcement of, and support for, BFHI implementation can assist in expanding the designation process of Baby-Friendly Hospitals [76, 162]. Advocacy for additional government resources is needed to support scale-up of the BFCI. Ongoing quality assurance is crucial, as is systematic (re)assessment of BFHI-designated hospitals [169-174]. At the hospital level, many recommendations were highlighted. Hospital administrators should establish and monitor breastfeeding policies [179]. Hospitals with Baby-Friendly status should consider models of breastfeeding support that favour long-term healthcare relationships across the perinatal period. There is a need for a continuous healthcare model. In this scoping review, we sought to identify and map what is known about the implementation of the BFHI and the BFCI globally. We have included evidence from a wider range of sources than before, across all settings. A limitation of the review is the lack of critical appraisal of included studies, which may have resulted in studies of low quality being included.
Studies that were published in a language other than English were excluded. Evidence from over 48 countries globally, gathered from many different stakeholders (women, health professionals and policy makers), is presented and organised here to highlight six key patterns associated with implementation of the initiatives. These patterns mapped, to some extent, onto the Ten Steps for Successful Breastfeeding themselves, and range from national health-system-level to community-level interventions between women, health professionals and others supporting breastfeeding. The BFHI has been revitalised in many countries. It seems that the potential of the BFCI has not been realised in settings beyond the initial countries in which it was implemented. Evidence gaps highlighted the need for longer-term follow-up outcome data and for experimental designs where appropriate. These results can help to support and further enable the effective implementation of BFHI and BFCI globally. Researchers can build on this evidence base to plan and carry out higher-quality studies to advance understanding and improve future implementation of the BFHI and BFCI. Additional file 1. Search terms. Additional file 2. Overview of studies. Additional file 3. Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Checklist."} {"text": "The road transportation sector is a dominant and growing energy consumer. Although investigations to quantify the road infrastructure's impact on energy consumption have been carried out, there are currently no standard methods to measure or label the energy efficiency of road networks. Consequently, road agencies and operators are limited to restricted types of data when managing the road network. Moreover, initiatives meant to reduce energy consumption cannot be measured and quantified.
This work is, therefore, motivated by the desire to provide road agencies with a road energy efficiency monitoring concept that can provide frequent measurements over large areas across all weather conditions. The proposed system is based on measurements from in-vehicle sensors. The measurements are collected onboard with an Internet-of-Things (IoT) device, then transmitted periodically before being processed, normalized, and saved in a database. The normalization procedure involves modeling the vehicle's primary driving resistances in the driving direction. It is hypothesized that the energy remaining after normalization holds information about wind conditions, vehicle-related inefficiencies, and the physical condition of the road. The new method was first validated utilizing a limited dataset of vehicles driving at a constant speed on a short highway section. Next, the method was applied to data obtained from ten nominally identical electric cars driven over highways and urban roads. The normalized energy was compared with road roughness measurements collected by a standard road profilometer. The average measured energy consumption was 1.55 Wh per 10 m. The average normalized energy consumption was 0.13 and 0.37 Wh per 10 m for highways and urban roads, respectively. A correlation analysis showed that normalized energy consumption was positively correlated to road roughness. The average Pearson correlation coefficient was 0.88 for aggregated data and 0.32 and 0.39 for 1000-m road sections on highways and urban roads, respectively. An increase in IRI of 1 m/km resulted in a 3.4% increase in normalized energy consumption. The results show that the normalized energy holds information about the road roughness. Thus, considering the emergence of connected vehicle technologies, the method seems promising and can potentially be used as a platform for future large-scale road energy efficiency monitoring. The vehicle speed time series was first resampled at 50 Hz.
Then, the speed data were integrated w.r.t. time to provide cumulative distance, and the traction energy was calculated from Equation . Average vehicle speed and measured traction energy from the GM cars traveling over the same road are visualized in the corresponding figure. IRI10 is the 10 m moving average of the IRI reported by the P79 profilometer. In order to visualize the influence of the normalization technique and validate the proposed method, the road slope versus the measured and normalized energy data are plotted. For road sections where the car drives at a constant speed, the energy consumption must vary according to the longitudinal slope, since all other driving resistances are constant. At constant speed, the Pearson correlation coefficient r between slope and energy is 0.68-0.83; since it is impossible to drive at an exact speed, there is some noise in the data caused by small changes in longitudinal acceleration. For the other comparisons, r is 0.93-0.96 and 0.91-0.97, respectively. Spikes appear in the data around a slope of zero. Combining the model equations yields an estimate of the vehicle mass: with g = 9.81 m/s2 (earth's gravity acceleration), the estimated mass, m, is 1706 kg, which falls within the expected range of 1500-1900 kg. An ANOVA test indicates that the analysis results are significant. In order to test the hypothesis, the normalized energy data are aggregated for 15 km of road and then divided into three groups. Group no. 1 contains data from both the highway road and the urban road, group no. 2 contains data from the highway road, and group no. 3 contains the data from the urban road. The data are divided into five categories within each group, each representing roughness properties from very smooth to very rough. Each category contains 20 percent of the data, i.e., from the 0-20th percentile to the 80th-100th percentile for 'very smooth' to 'very rough', respectively.
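The resampling, integration, and resistance-model normalization steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model follows the standard longitudinal vehicle dynamics terms (rolling, aerodynamic, grade, and inertial resistance), and the coefficients `c_r` and `cda` are assumed placeholder values; only the mass of 1706 kg comes from the text.

```python
import numpy as np

G = 9.81    # gravitational acceleration, m/s^2
RHO = 1.2   # air density, kg/m^3 (assumed)

def normalize_energy(t, v, slope, e_meas,
                     m=1706.0,   # vehicle mass, kg (estimated in the study)
                     c_r=0.01,   # rolling-resistance coefficient (assumed)
                     cda=0.7):   # drag area Cd*A, m^2 (assumed)
    """Subtract modeled driving resistances from measured traction energy.

    t: time stamps [s]; v: speed [m/s]; slope: road grade [rad];
    e_meas: cumulative measured traction energy [J].
    Returns the residual ("normalized") energy per segment [J].
    """
    dt = np.diff(t)
    ds = v[:-1] * dt                            # distance increments
    a = np.diff(v) / dt                         # longitudinal acceleration
    f_roll = c_r * m * G * np.cos(slope[:-1])   # rolling resistance
    f_aero = 0.5 * RHO * cda * v[:-1] ** 2      # aerodynamic drag
    f_grade = m * G * np.sin(slope[:-1])        # grade resistance
    f_inert = m * a                             # inertial force
    e_model = (f_roll + f_aero + f_grade + f_inert) * ds
    return np.diff(e_meas) - e_model
```

Per the hypothesis stated in the abstract, the residual returned here would carry the information about wind conditions, vehicle-related inefficiencies, and road condition.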
Next, the linear correlation between the category means in each group is calculated. The ANOVA test results for group no. 1 are visualized in a box plot; a p-value of zero is obtained for all three groups, indicating that differences between column means are significant. The ANOVA table shows the between-category variation ('columns') and the within-category variation ('error'). The F-statistic is the ratio of the mean squared errors, and the p-value is the probability that the test statistic can take a value greater than the computed test statistic. The results are summarized in the corresponding table. Data from individual car passes are evaluated in windows of 100, 250, 500, 1000, and 2500 m to test the method's reproducibility and performance for shorter road sections. The choice of window size is guided by the following: (i) standard road condition parameters are typically reported in 10 to 100-m intervals; (ii) the measured energy consumption is constant over more extended road sections compared to traditional standard road condition parameters; and (iii) the need to ensure overlap between time series data collected from different sources at different speeds. For individual passes, the p-value is 0.07-0.39. The p-values are obtained from a manufactured dataset. First, a synthetic set of normalized energy consumption data was generated for each car pass, assuming a normal distribution of data with a mean and standard deviation equal to those of the corresponding real car pass. Next, the manufactured dataset was utilized to produce a set of synthetic correlation coefficients. Finally, the p-values were calculated from an ANOVA test; the null hypothesis, in this case, is that the real and synthetic correlation coefficients do not differ. The results of the ANOVA test for a window size of 1000 m are visualized in a box plot. The measurements were collected onboard with an Internet-of-Things (IoT) device. The data were transmitted periodically before being processed, normalized, and saved in a database.
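The aggregation procedure described above, binning normalized energy into five roughness categories of 20% each and correlating the category means, can be sketched as follows. The helper functions and the synthetic data in the test are illustrative assumptions, not the study's code.

```python
import numpy as np

def quintile_means(iri, energy):
    """Bin observations into five roughness categories (20% of the data
    each, 'very smooth' to 'very rough') and return the mean IRI and the
    mean normalized energy per category."""
    edges = np.percentile(iri, [0, 20, 40, 60, 80, 100])
    # searchsorted assigns each IRI value to its percentile bin (0..4)
    idx = np.clip(np.searchsorted(edges, iri, side="right") - 1, 0, 4)
    iri_means = np.array([iri[idx == k].mean() for k in range(5)])
    e_means = np.array([energy[idx == k].mean() for k in range(5)])
    return iri_means, e_means

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```

With `iri` and `energy` arrays for one road group, `pearson_r(*quintile_means(iri, energy))` gives the category-level correlation; a routine such as `scipy.stats.f_oneway` could then supply the ANOVA step.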
The normalized energy was then linked to the road pavement roughness and visualized on a map. This is the first time such a concept has been utilized to quantify changes in vehicle energy consumption caused by pavement roughness on highways and urban roads. The approach has the advantage that it enables analysis of data across vehicle types in a real-scale setting and is, therefore, superior to other methods. The new method was first validated utilizing a limited dataset of vehicles driving on a highway road. The results from this verification effort show that energy data can be normalized and that the proposed physical models apply to the problem. Experimental data from ten nominally identical electric cars driven over 25 km of highways and urban roads were utilized to investigate the relationship between normalized energy consumption and road pavement roughness. As a first step, the normalized energy data were aggregated into three road groups/classes. Within each group, the data were further subdivided into five road roughness categories, each containing 20% of the group data. The results showed a strong linear relationship between the normalized energy consumption and road roughness. Then, the relationship between normalized energy and road roughness for individual passes, and the method's reproducibility, were assessed. In this context, the data were analyzed in 100- to 2500-m windows. It was found that the normalized energy consumption collected from individual cars resembled the measured road roughness. The results also showed a low-to-moderate positive linear relationship for all car passes and window sizes on both highways and urban roads. Analysis of Variance (ANOVA) tests showed that the results obtained are statistically significant for both aggregated data and data from individual passes.
Thus, it is concluded that the normalized energy consumption holds some information about the physical condition of the road. The method enables road-network mapping of energy data. Such energy consumption maps give users and operators a quick overview of the total energy consumption on the road network and help identify critical areas of the road infrastructure. The method also enables analysis of energy data in terms of physical phenomena and sources, as well as how these parameters evolve. The main findings from this study can be summarized as follows: (i) a new method for estimating road energy efficiency was proposed and successfully utilized to analyze energy data on highways and urban roads; (ii) the average normalized energy consumption is 0.13 and 0.37 Wh per 10 m for highways and urban roads, respectively; (iii) the normalized energy consumption is positively correlated to surface roughness: the average correlation coefficient is 0.88 for aggregated data and 0.32 and 0.39 for 1000-m road sections on highways and urban roads, respectively; and (iv) an increase in IRI of 1 m/km results in a 3.4% increase in normalized energy consumption. Considering the emergence of connected vehicle technologies, the method seems promising and can potentially be used as a platform for future large-scale road energy efficiency monitoring. The normalized energy consumption is a new pavement condition indicator that supports decision-making and may contribute to improved pavement management. Additionally, the normalized energy can be applied to initiatives to create labeling systems for road infrastructure similar to those used in consumer sectors. In this context, it is envisioned that the proposed concept will be combined with other important factors affecting vehicle energy consumption, such as road pavement type, road classification, road geometry, traffic information, and weather conditions.
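As a worked example of finding (iv), the reported 3.4% increase per 1 m/km of additional IRI can be applied as a simple linear scaling. The function below is an illustrative reading of that single reported figure, not part of the study's tooling, and the baseline chosen in the usage note is simply the urban average from the results.

```python
def normalized_energy_with_iri(e_base_wh_per_10m, delta_iri_m_per_km,
                               pct_per_iri=0.034):
    """Scale a baseline normalized energy consumption by the reported
    3.4% increase per 1 m/km of additional IRI (linear approximation)."""
    return e_base_wh_per_10m * (1.0 + pct_per_iri * delta_iri_m_per_km)
```

For the urban baseline of 0.37 Wh per 10 m, an extra 2 m/km of IRI would give roughly 0.37 × 1.068 ≈ 0.395 Wh per 10 m under this approximation.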
Hence, this issue is of critical importance and requires further research."} {"text": "Due to the essential role of cyclin D1 in regulating the transition from G1 to S phase in the cell cycle, aberrant cyclin D1 expression is a major oncogenic event in many types of cancers. In particular, the dysregulation of ubiquitination-dependent degradation of cyclin D1 contributes not only to the pathogenesis of malignancies but also to refractoriness to cancer treatment regimens with CDK4/6 inhibitors. Here we show that in colorectal and gastric cancer patients, MG53 is downregulated in more than 80% of tumors compared to the normal gastrointestinal tissues from the same patient, and the reduced MG53 expression is correlated with increased cyclin D1 abundance and inferior survival. Mechanistically, MG53 catalyzes the K48-linked ubiquitination and subsequent degradation of cyclin D1. Thus, increased expression of MG53 leads to cell cycle arrest at G1, and thereby markedly suppresses cancer cell proliferation in vitro as well as tumor growth in mice with xenograft tumors or AOM/DSS-induced colorectal cancer. Consistently, MG53 deficiency results in accumulation of cyclin D1 protein and accelerates cancer cell growth both in culture and in animal models. These findings define MG53 as a tumor suppressor via facilitating cyclin D1 degradation, highlighting the therapeutic potential of targeting MG53 in treating cancers with dysregulated cyclin D1 turnover. The expression of cyclin D1 is induced by mitogenic stimulation. Cyclin D1 then enters the nucleus and couples with CDK4/6 to inactivate the retinoblastoma tumor suppressor (Rb) by phosphorylation, which enables the transcription of E2F-dependent genes. When cells enter S phase, cyclin D1 is phosphorylated and transported back to the cytosol, where it is ubiquitinated and then degraded via the proteasome.
In addition, various stress conditions, such as radiation, DNA damage, drug treatment, or starvation, may also induce cyclin D1 degradation and subsequent cell cycle exit [13]. Cyclin D1 plays multiple roles in tumorigenesis, including enabling cell cycle progression, promoting cell migration, facilitating DNA damage repair, and driving chromosome instability [16]. Thus, innate insensitivity or acquired resistance to CDK4/6 inhibitors (CDK4/6is) constitutes a major hindrance in the clinical application of these drugs. To date, two large E3 ligase complexes, the S-phase kinase-associated protein 1-Cullin 1-F-box complex (SCF) and the anaphase-promoting complex/cyclosome (APC/C), have been reported to mediate the degradation-related polyubiquitination of cyclin D1. SCF E3 ubiquitin ligase complexes contain different substrate-interacting proteins. Among them, SCF-FBX4/αB-crystallin is a well-validated E3 ligase complex of cyclin D1. The mRNA levels of αB-crystallin and FBX4 are downregulated in prostate, thyroid, and breast adenocarcinomas, as well as lymphomas [11]. In addition, mutations of FBX4 were found in esophageal tumors [17]. SCF-FBXW8 has been identified to facilitate cyclin D1 ubiquitination in HCT116 and SW480 colon cancer cells and T98G glioblastoma cells, although there is no direct evidence linking SCF-FBXW8 to tumorigenesis in vivo [18]. β-transducin repeat-containing protein (β-TrCP1) is another adaptor that interacts with cyclin D1, in response to treatment with the peroxisome proliferator-activated receptor γ agonist STG28 [19]. Intriguingly, β-TrCP1 induces cyclin D1 degradation when the prostate cancer cell line LNCaP is challenged with glucose starvation, while other E3 ligases do not seem to be involved under this condition.
Another E3 ubiquitin ligase complex, APC/C, mediates irradiation-induced degradation of cyclin D1 via its subunit Cdc27/APC3 [21]. It is noteworthy that the expression of Cdc27/APC3 is repressed in many types of cancer cells, but whether Cdc27/APC3 dysregulation promotes tumorigenesis via cyclin D1 has not been reported. It is not uncommon for a protein to have several E3 ligases, but the functions of the E3 ligases of cyclin D1 might be cancer-type and context dependent. Furthermore, it has been shown that ablation or knockdown of the components of either the SCF or the APC/C E3 ligase complex does not reduce cyclin D1 abundance in vivo or in vitro [22]. Therefore, it is important to identify other mechanisms that contribute to the dysregulation of cyclin D1 protein turnover in the context of different cancer types. Emerging evidence has pinpointed a pivotal role of dysfunctional E3 ubiquitin ligases in cancer-associated cyclin D1 accumulation. Moreover, recent studies have implicated dysfunction in the ubiquitination-dependent degradation of cyclin D1 as a key mechanism underlying insensitivity towards CDK4/6 inhibitors in treating certain types of cancers [24]. On the other hand, MG53 facilitates membrane repair [26] and activates survival signaling pathways [28], thereby playing a protective role against acute damage of multiple organs [31]. An unbiased screening has identified MG53 as one of 4 essential regulators that limit the proliferation and metastasis of non-small cell lung cancer [32]; however, how MG53 functions as a tumor suppressor is unclear. To delineate the underlying mechanism, we examined the function of MG53 in different cancer cell lines and searched for cell cycle regulators that interacted with MG53 by proteomic analysis. We further validated our findings using MG53 transgenic and knockout mice, as well as samples from patients with colorectal and gastric cancers. MG53 is an E3 ligase of the insulin receptor and insulin receptor substrate 1.
Thus, increases in MG53 expression contribute to metabolic disorders by impairing insulin signaling and glucose metabolism. Specifically, we have demonstrated that the expression of MG53 is overtly decreased in more than 80% of the tumors examined relative to the normal tissues from the same patient. The downregulation of MG53 is strongly correlated with reduced overall survival of patients with colorectal or gastric cancers. Mechanistically, MG53 functions as an E3 ligase of cyclin D1, thereby suppressing tumor growth in mouse models with xenograft tumors as well as carcinogen-induced colorectal cancer. In contrast, depletion of MG53 has the opposite effects of accelerating cancer cell proliferation and exacerbating tumorigenesis in mice. Furthermore, adenoviral delivery of an MG53 expression vector into xenograft tumors significantly retards tumor growth. These results reveal the mechanism of action of MG53 in suppressing tumor growth and demonstrate the potential of targeting MG53 in cancer therapy. To gain mechanistic insight into the function of MG53 in cancer, we performed Kaplan-Meier survival analysis of two cohorts of colorectal cancer patients. The results showed that patients defined as having "high expression" of MG53 had significantly prolonged survival (Fig. ). To reveal the mechanism underlying the tumor-suppressive function of MG53, we first examined MG53 expression in several cancer cell lines and found that the colorectal cancer cell line HCT116 had endogenous MG53 expression.
Next, we overexpressed MG53 in HCT116 cells and found that upregulation of MG53 markedly attenuated cell proliferation. An interaction assay revealed a strong interaction between MG53 and cyclin D1 in HEK293 cells when both proteins were overexpressed; a mutation abolished the ubiquitination by MG53, substantiating that MG53 is an E3 ligase of cyclin D1. We also performed AOM/DSS treatment using MG53-overexpressing (MG53-TG) or MG53-deficient (MG53-KO) mice. MG53 has also been reported to destabilize a target via catalyzing its ubiquitination in hepatocellular carcinoma cells [46]. In this study, we have shown that MG53 represses tumor growth via destabilizing cyclin D1. Thus, there was a negative correlation between the amounts of MG53 and cyclin D1 in cultured cancer cells and in tumors derived from animal models as well as from patients with gastric or colorectal cancers. Moreover, MG53 induces cell cycle arrest at G1, and cyclin D1 is the key G1/S regulator that interacts directly with MG53. Most importantly, supplementation of cyclin D1 can restore the cell proliferation attenuated by MG53 overexpression. Although we cannot fully exclude other potential mechanisms, our results strongly suggest that the anti-cancer effect of MG53 is largely mediated by facilitating cyclin D1's ubiquitination and subsequent degradation in multiple cancer cell types that depend on cyclin D1 for cell proliferation. Therefore, understanding the detailed mechanism(s) of action of MG53 in each particular cancer type is fundamental for the implementation of precision medicine in cancer treatment. Several studies have demonstrated that MG53 is a tumor suppressor [26], which has inspired the application of recombinant MG53 protein (rhMG53) in treating acute injury of a variety of organs [49]. In a recent study using a mouse xenograft model of the colorectal cancer cell line SW620/AD300, injection of rhMG53 inhibited tumor growth and displayed a synergistic antitumor effect with doxorubicin [42]. We showed significant therapeutic effects of adenoviral delivery of MG53 in delaying HCT116 tumor progression in vivo.
Moreover, as dysfunction of cyclin D1 E3 ligases is involved in the desensitization of cancer cells to CDK4/6 inhibitors (CDK4/6is) in clinical practice [16], we used MG53 in combination with palbociclib and successfully enhanced the sensitivity of cancer cells to palbociclib and further repressed tumor growth. These proof-of-concept tests demonstrate possible application scenarios for MG53 in cancer therapy. MG53 was originally reported to play an important role in membrane repair […].

In summary, we have shown that MG53 is a tumor suppressor that targets cyclin D1 for ubiquitination-dependent degradation. Upregulation of MG53 is sufficient to suppress tumor growth. Most importantly, increased expression of MG53 not only inhibits tumor growth in animal models, but is also associated with markedly improved survival probability of cancer patients. These findings highlight the therapeutic potential of MG53 in treating cancers with high cyclin D1 abundance and in improving the efficacy of CDK4/6is.

Patient samples were obtained from Beijing Cancer Hospital and Institute, Beijing, China, and Wuhan Union Hospital, Hubei, China. The animals were maintained in the AAALAC-accredited Laboratory Animal Center at Peking University, Beijing, China. Male mice were randomly assigned to experimental groups for treatments. MG53-overexpressing transgenic mice (MG53-TG) and MG53 knockout mice (MG53-KO) were generated as described previously [6]. HCT116 cells expressing either MG53-GFP or MG53-specific shRNA (shMG53) were transplanted into one axilla, and cells expressing GFP or GFP-specific shRNA (shGFP) were transplanted into the axilla on the other side of the same animal as a control. Once palpable tumors were established, a caliper was used to measure their sizes every 2 days. The formula (A × B²)/2 was used to calculate the volume of a tumor, where A and B are the tumor length and width, respectively. After the indicated time course, mice were imaged. The xenograft tumors were dissected and weighed.
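The caliper formula above is straightforward to apply; a minimal sketch in Python (the function name is ours, not from the paper):

```python
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Caliper estimate of xenograft tumor volume as (A x B^2) / 2,
    with A the longer and B the shorter measurement in mm."""
    a, b = max(length_mm, width_mm), min(length_mm, width_mm)
    return a * b * b / 2.0

# e.g. a 10 mm x 6 mm tumor -> (10 * 6 * 6) / 2 = 180 mm^3
```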
For tests of palbociclib treatment, tumors were monitored until the average size of the MG53-overexpressing xenograft tumors reached 100 mm³, at which point mice were randomly assigned to receive either vehicle or palbociclib (40 mg/kg) via oral gavage for 7 consecutive days. For evaluating the therapeutic potential of MG53, 3 × 10⁶ HCT116 cells were xenoplanted into male 6-week-old BALB/c nude mice. Once the average tumor size was around 200 mm³, the mice were randomized into two groups and injected with adenovirus expressing either β-gal or MYC-tagged MG53. The adenovirus was injected very slowly into the tumor at two different sites once every 2 days. Mice were sacrificed after they had received five doses of adenovirus.

For the xenograft mouse model, male BALB/c nude mice at 6 weeks of age were injected with 3 × 10 […] (1.2 mM CaCl₂, 25 mM NaHCO₃, 2.4 mM K₂HPO₄, 0.4 mM KH₂PO₄, and 2% PMSF, pH 7.4). Colons were slit open longitudinally and the number of tumors was counted. The snap-frozen small intestine, colon, and tumors were stored at −80 °C for further analysis.

For AOM/DSS-induced colorectal carcinogenesis, mice were injected with AOM intraperitoneally. Seven days after AOM injection, 2.5% DSS was administered via drinking water for 7 consecutive days and then replaced with normal drinking water for another 14 days. After this treatment scheme had been repeated three times (10 weeks in total), the mice were sacrificed. Colons were collected, and feces were washed off with Ringer's solution. […] was from Raybiotech. Palbociclib (#S1116) was from Selleck Chemicals. Cycloheximide was from APExBio. Unless specifically indicated, all chemicals used were from Sigma-Aldrich. The antibodies used in this study are listed in Supplementary Table . The expression vector of HA-Ubiquitin was from Addgene (RRID: Addgene_18712).
The full-length MG53 cDNA was amplified from a human ORFeome v8.1 Entry library by PCR, and the expression vector was generated using the pcDNA4/TO/myc-HisB expression vector. Cyclin D-FLAG expression vectors were also constructed by inserting PCR products from the human ORFeome library into the vector c-FLAG pcDNA3 (RRID: Addgene_20011). The constructs expressing MG53 ΔRING and truncated cyclin D1 were generated from the corresponding wild-type full-length expression vectors. All mutant and truncation expression vectors were generated using a Q5 Site-Directed Mutagenesis Kit according to the manual.

All cells, including HEK293, HepG2, H460, H1975, A549, H1299, AGS, Hs746T, SW480, and HCT116 cells, were from ATCC. HEK293, HCT116, HepG2, and Hs746T cells were cultured in Dulbecco's modified Eagle's medium. H1975, A549, H1299, and AGS cells were maintained in RPMI1640; all of these media were supplemented with 10% fetal bovine serum. SW480 cells were cultured in Leibovitz's L-15 medium with 15% fetal bovine serum. Cell number was determined with a Cellometer Auto T4 (Nexcelom). When cells reached 90% confluency, gene transfer was performed by adenoviral or lentiviral infection (Suzhou GenePharma) or by plasmid transfection using Lipofectamine 3000™. The sequences of the MG53-specific shRNAs are as follows:

shMG53#1: 5′-GACTGAGTTCCTCATGAAATA-3′;
shMG53#2: 5′-GGGTTGAAGCTTAGGTCTCCT-3′.

Cells were seeded at 4 × 10⁴ per well and infected with the indicated adenoviruses. Cell proliferation was assessed with the MTT assay. Specifically, cells were incubated with MTT at 37 °C for 4 h. The medium was then replaced with dimethyl sulfoxide and the cells were solubilized. MTT cleavage was quantified using a spectrophotometer to measure absorption at 490 nm, with absorption at 630 nm subtracted as background. The CellTiter-Glo® Luminescent Cell Viability Assay was used to assess cell viability.
For real-time monitoring of cell proliferation, cells were seeded in 24-well plates at 4 × 10⁴ cells per well and subjected to measurement on an impedance-based real-time instrument system.

Cells were synchronized using transient serum starvation and infected with the indicated adenovirus. Twenty-four hours after infection, cells were fixed at 4 °C overnight in 75% ethanol, then washed in PBS before being stained in 1 mL of PI staining solution containing RNase A, in the dark at room temperature for 30 min. For each sample, around 10,000 PI-stained cells were evaluated with a FACScan flow cytometer (Becton Dickinson).

Total RNA was extracted for reverse transcription and real-time PCR reactions [23]. The relative mRNA level was determined by normalizing to the level of 18S rRNA. Each real-time PCR experiment was performed in triplicate, and the data are presented as the average of at least three independent experiments. Primers for real-time PCR are:

18S forward: 5′-GTAACCCGTTGAACCCCATT-3′;
18S reverse: 5′-CCATCCAATCGGTAGTAGCG-3′;
Human cyclin D1 forward: 5′-TGCATGTTCGTGGCCTCTAA-3′;
Human cyclin D1 reverse: 5′-GAACTTCACATCTGTGGCAC-3′.

After transfection with the indicated plasmids or infection with the indicated adenovirus, cells were stained with the indicated antibodies, and immunofluorescent images were acquired using a confocal microscope (A1RSi+).

Tissue or cell lysate was homogenized in lysis buffer and lysed on ice for 10 min. The supernatant was collected after centrifugation for 10 min at 13,000 rpm and used for western blotting and co-immunoprecipitation. All original and uncropped images of western blots are provided in the […].

Cells were infected with adenovirus expressing C-terminally MYC-tagged MG53. Upon harvest, cell lysate was prepared by centrifugation at 13,000 rpm for 10 min.
MG53 was immunoprecipitated with anti-MYC antibody or IgG on Protein A Sepharose 4 Fast Flow. Non-specific binding was removed with ice-cold lysis buffer, and loading buffer (BIO-RAD #1610737) was then added to elute the immunoprecipitated protein complex. The eluent was resolved by SDS–polyacrylamide gel electrophoresis (BIO-RAD #1610183) and stained with Coomassie blue. The specific bands were cut out for mass spectrometry analysis, and the IgG control lane was used to subtract background hits in the MS.

For LC-MS/MS analysis, peptides were separated using a Thermo-Dionex Ultimate 3000 HPLC system with a 120 min gradient elution at a flow rate of 0.300 μL/min. The analytical column was homemade by packing C-18 resin into a fused silica capillary column. Mobile phases A and B were 0.1% formic acid, and 100% acetonitrile with 0.1% formic acid, respectively. Xcalibur 3.0 software was used to control the instrument and process data obtained from an Orbitrap Fusion mass spectrometer operating in data-dependent acquisition mode. A single Orbitrap scan across the full mass spectrum (m/z […], 120,000 resolution) was followed by 3 s of data-dependent MS/MS scans in an Ion Routing Multipole at 30% normalized collision energy (HCD). The MS/MS spectra from each LC-MS/MS run were identified using the Proteome Discoverer search algorithm (version 1.4) against the selected database.

Cells were incubated with MG132 for 12 h and then harvested in ice-cold PBS. The cells were then re-suspended in RIPA buffer and lysed on ice for 10 min. The ubiquitinated proteins were immunoprecipitated with the indicated antibodies from the cell lysate and resolved on SDS-PAGE for western blotting. The in vitro ubiquitination assay of cyclin D1 was carried out using a ubiquitination kit from Enzo Life Sciences (cat. no. BML-UW9920-0001) following the manufacturer's protocol, and UBE2H was used in the ubiquitination reactions.

Survival data were analyzed with the R2: Genomics Analysis and Visualization Platform (http://r2platform.com).
Patients were ranked based on the expression levels of MG53 in their tumors obtained from the TCGA RNA-seq and GEO databases. Kaplan–Meier survival analyses and log-rank significance tests were used to compare survival outcomes.

Statistical analyses were performed using GraphPad Prism 8. Data are presented as mean ± s.e.m. The statistical significance of differences between groups was examined using two-tailed unpaired t tests or paired t tests. P < 0.05 was considered statistically significant. Reproducibility and statistics are described in detail in the figure legends.

Supplementary Materials
Supplementary Table S1

Filamentous fungi possess an array of secreted enzymes to depolymerize the structural polysaccharide components of plant biomass. Sugar transporters play an essential role in nutrient uptake and in sensing extracellular signal molecules that inhibit or trigger the induction of lignocellulolytic enzymes. However, the identities and functions of transceptors associated with the induction of hemicellulase genes remain elusive.

The l-arabinose transporter MtLat-1 is associated with repression of hemicellulase gene expression in the filamentous fungus Myceliophthora thermophila. The absence of Mtlat-1 caused a decrease in l-arabinose uptake and consumption rates. However, mycelium growth, protein production, and hemicellulolytic activities were markedly increased in a ΔMtlat-1 mutant compared with the wild-type (WT) when grown on arabinan. Comparative transcriptomic analysis showed a different expression profile in the ΔMtlat-1 strain from that in the WT in response to arabinan, and demonstrated that MtLat-1 is involved in the repression of the main hemicellulase-encoding genes. A point mutation that abolished the l-arabinose transport activity of MtLat-1 did not affect the repression of hemicellulase gene expression when the mutant protein was expressed in the ΔMtlat-1 strain.
Thus, the involvement of MtLat-1 in the expression of hemicellulase genes is independent of its transport activity. The data suggest that MtLat-1 is a transceptor that senses and transduces the molecular signal, resulting in downstream repression of hemicellulolytic gene expression. The MtAra-1 protein directly regulates the expression of Mtlat-1 by binding to its promoter region. Transcriptomic profiling indicated that the transcription factor MtAra-1 also plays an important role in the expression of arabinanolytic enzyme genes and in l-arabinose catabolism.

In this study, we reveal that M. thermophila MtLat-1 functions as a transceptor involved in l-arabinose transport and in signal transduction associated with suppression of the expression of hemicellulolytic enzyme-encoding genes. The data presented in this study add to the models of the regulation of hemicellulases in filamentous fungi.

The online version contains supplementary material available at 10.1186/s13068-023-02305-3.

Non-edible plant biomass is recognized as a potential sustainable source for the production of biofuels and commodity chemicals in biorefinery processes. Several filamentous fungi, such as Trichoderma reesei, Myceliophthora thermophila, and Penicillium oxalicum, have been developed as platforms to produce cellulolytic enzymes for industrial applications […–5].

In fungi, lignocellulolytic enzyme-encoding genes are regulated by complex signaling networks. Sugar transporters play an essential role in substrate uptake and in sensing extracellular signal molecules to inhibit or trigger enzyme induction […–8]. In Neurospora crassa, the cellodextrin transporters CDT-1 and CDT-2 and the cellobionic acid transporter CLP1 are involved in the induction of the expression and secretion of cellulases. The function of CDT-1 and CDT-2 in signal transduction for the activation of cellulolytic gene expression does not depend on their transport activities. In T. reesei, the dual cellobiose/glucose transporter Stp1 is involved in the carbon catabolite repression (CCR) response and represses the induction of cellulase and hemicellulase genes. Similar […] in N. crassa. Some proteins […], such as Saccharomyces cerevisiae Snf3 and Rgt2 and N. crassa RCO3 […, 13]. However, […].

Fungal cellulase and hemicellulase gene expression is tightly regulated at the transcriptional level upon appropriate nutrient sensing. Several […] in N. crassa, Aspergillus nidulans, and P. oxalicum […–20]. […] in T. reesei and Aspergillus niger […–23]. In N. crassa, M. thermophila, and Fusarium graminearum, XlnR homologs regulate xylanase expression and d-xylose utilization […, 24, 25]. In Fusarium oxysporum, a strain carrying a deletion of xlnR lacked transcriptional activation of structural xylanase genes and exhibited dramatically reduced extracellular xylanase activity. CreA/CRE-1 […] in N. crassa and Aspergillus spp. […, 28]. In N. crassa, Magnaporthe oryzae, and T. reesei […], while […] in A. niger.

Myceliophthora thermophila is a thermophilic fungus known for its capability to efficiently secrete a complete set of thermostable carbohydrate-active enzymes […, 32], […] [33–35], […] [36, 37]. Recently, […] the l-arabinose transporter MtLat-1 and demonstrated that MtLat-1 exhibits specific transport activity for l-arabinose. On the basis of comparative transcriptomic analysis and assays of a non-transporting MtLat-1 mutant, we found that MtLat-1 functions as a transceptor involved in l-arabinose transport, sensing, and downstream signaling cascades in M. thermophila. In addition, we demonstrated that Mtlat-1 expression is directly regulated by the transcription factor MtAra-1, which is involved in l-arabinose release and metabolism.
Our previous study revealed that the sugar transporter MtLat-1 (Mycth_95427) shows high uptake activity and specificity for l-arabinose. To determine the role of Mtlat-1 on various carbon sources, we conducted comparative transcriptomic analysis of M. thermophila grown on plant biomass-derived monosaccharides, including d-glucose, d-xylose, and l-arabinose. In response to l-arabinose, 26 putative sugar transporter genes showed high expression levels (RPKM > 20), including five putative glucose transporter genes, four putative glucose and pentose transporter genes, and two putative cellodextrin transporter genes (Mycth_114107 and Mycth_43941). […] of l-arabinose uptake. Therefore, we were interested in the role of MtLat-1 in the growth of M. thermophila on plant biomass-derived monosaccharides.

A Mtlat-1-null strain was constructed via homologous replacement with a neomycin resistance gene (neo) cassette using the CRISPR/Cas9 system. When grown on d-glucose or the three most relevant hemicellulose side-chain sugars, the consumption rates of d-glucose, d-xylose, and d-galactose in the ΔMtlat-1 mutant were similar to those of the wild-type (WT) strain. […] indicated that l-arabinose uptake was reduced by approximately 15% in the ΔMtlat-1 mutant compared with that in the WT (15.4 nmol/min/mg_DCW). […] at a high l-arabinose level, but led to a growth defect at a low l-arabinose level.

Hemicellulose-derived pentoses (d-xylose and l-arabinose) can act as inducers of the expression of genes encoding plant biomass-degrading enzymes. We therefore hypothesized that MtLat-1, with its l-arabinose-specific transport activity, participates in sensing and transducing an l-arabinose signal involved in the induction of hemicellulolytic enzymes in M. thermophila. To test this hypothesis, the physiological phenotypes of strain ΔMtlat-1 were assayed when grown on arabinan or xylan.
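Specific uptake rates such as the 15.4 nmol/min/mg_DCW value above are a simple normalization of sugar consumed per unit time per unit dry cell weight; a minimal sketch under assumed units (function and parameter names are ours, not from the paper):

```python
def uptake_rate_nmol_per_min_per_mg(c0_uM: float, c1_uM: float,
                                    volume_mL: float, minutes: float,
                                    dry_weight_mg: float) -> float:
    """Specific sugar uptake rate in nmol/min/mg dry cell weight.

    Amount consumed in nmol is (c0 - c1) uM * volume mL,
    since 1 uM * 1 mL = 1 nmol.
    """
    consumed_nmol = (c0_uM - c1_uM) * volume_mL
    return consumed_nmol / minutes / dry_weight_mg
```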
Unexpectedly, the strain ΔMtlat-1 exhibited an approximately 14% increase in dry cell weight compared with the WT when grown on arabinan for 4 days. […] exhibited markedly up-regulated expression levels in ΔMtlat-1 compared with the WT, which is a prerequisite for efficient sugar utilization and mycelium growth. Analysis of the transcriptional profiles of carbohydrate-active enzyme (CAZyme) genes indicated that genes encoding hemicellulolytic enzymes were dramatically induced in strain ΔMtlat-1 relative to the WT; […] and three xylanolytic enzyme genes showed significantly increased expression levels in the ΔMtlat-1 mutant compared with the WT. In addition, genes associated with pentose catabolism were significantly up-regulated in ΔMtlat-1, including xylose reductase, l-arabinitol 4-dehydrogenase, xylitol dehydrogenase, l-xylulose reductase, and d-xylulose kinase, which might be attributed to increased l-arabinose release resulting from the enhanced hemicellulolytic enzyme activity of ΔMtlat-1.

To further investigate the effect of MtLat-1 on hemicellulase production, comparative transcriptomic profiles of the WT and ΔMtlat-1 strains were determined […]. The S. cerevisiae strain BSW5AP, which carries a heterologous arabinose metabolic pathway and has its native l-arabinose transporter gene deleted, was used to assay transport of l-arabinose, showing that the mutation abolished the transport function of MtLat-1 (Fig. S5). The initial l-arabinose uptake rate of strain Ptef-Mtlat-1(R136K) was accordingly lower than that of mutant Ptef-Mtlat-1. The corresponding mutation, located in a cytoplasmic loop just preceding the fifth transmembrane domain of several hexose sensors, leads to loss of substrate uptake activity but constitutive glucose signaling […, 45, 46]. The mutant allele was then expressed in the ΔMtlat-1 strain.
Previous studies have shown that the C-terminal cytoplasmic sequence extensions of several transceptors related to triggering CCR or to the induction of cellulases are required for their interaction with a component of the molecular signal transduction pathway […, 48]. To test whether this applies to MtLat-1, a strain expressing a C-terminally truncated MtLat-1 (Ptef-Mtlat-1M) was constructed. When grown on arabinan, the resultant strain Ptef-Mtlat-1M showed mycelium yield, secreted protein content, and hemicellulolytic activities similar to those of strain Ptef-Mtlat-1(R136K), revealing that the C-terminal cytosolic region of MtLat-1 is dispensable for the function of the protein in transmitting the l-arabinose signal.

To search for transcription factors essential for the expression of Mtlat-1, we determined the transcriptomic profiles of all transcription factor genes in M. thermophila grown on monosaccharides or under starvation (no-carbon) conditions. As shown in Fig. , induction of Mtlat-1 was specific to the presence of l-arabinose, so we considered the four genes that were highly induced only by l-arabinose as candidate regulators of Mtlat-1. Next, single-gene mutants of these genes were constructed, and the expression level of Mtlat-1 in response to l-arabinose was detected by real-time quantitative PCR (RT-qPCR). Three of the mutants showed no significant change in Mtlat-1 expression compared with the WT strain. However, the ΔMycth_2121737 strain showed dramatically decreased expression of Mtlat-1. Homologs of this gene, such as T. reesei Ara1, are related to the utilization of arabinan, l-arabinose, and d-galactose […, 27].

To determine whether MtAra-1 directly regulates Mtlat-1 expression, we conducted electrophoretic mobility shift assays (EMSAs) using the DNA-binding domain (DBD) of MtAra-1 and the promoter region of Mtlat-1. A glutathione S-transferase (GST)-fused MtAra-1-DBD was expressed in and purified from Escherichia coli. Two probe fragments of the promoter region of Mtlat-1 were amplified by PCR. In the EMSAs, recombinant MtAra-1 bound the Mtlat-1 promoter regions in a typical protein concentration-dependent manner.
Retardation occurred upon addition of 10 nM recombinant MtAra-1 protein. […] were also dramatically downregulated in ΔMtara-1 grown on l-arabinose. […], which also catalyzes the third reaction of the oxidoreductive catabolism of d-galactose. Consequently, we also found that the ΔMtara-1 strain showed a severely decreased rate of d-galactose consumption.

Our previous study revealed that the sugar transporter MtLat-1 is a specific transporter of l-arabinose. Several fungal l-arabinose transporters have been characterized, including LAT-1 from N. crassa, LAT-1 from Ambrosiozyma monospora, and transporters from Pichia stipitis and Penicillium chrysogenum. Although they are all l-arabinose transporters, their effects on physiological phenotypes differ among fungi grown on l-arabinose and arabinan. In T. reesei, Lat-1 is a high-affinity symporter and shows high specificity for l-arabinose; the absence of Trire2_104072 did not result in an altered growth phenotype or different total protein secretion during growth at a high (1%) concentration of l-arabinose, lactose, or spent grain extract with a high content of arabinoxylan. However, […] of the N. crassa lat-1 mutant was observed on arabinan. Moreover, in N. crassa, lat-1 disruption caused a remarkable reduction in l-arabinose uptake, which abolished the induction of ard-1 (the gene encoding l-arabinitol 4-dehydrogenase) at low (2 μM) l-arabinose concentrations. Herein, Mtlat-1 was specifically induced by l-arabinose, and its absence resulted in decreased l-arabinose uptake and consumption rates, but not those of d-glucose, d-xylose, or d-galactose, consistent with the specific l-arabinose transport activity of MtLat-1. However, […] in the Mtlat-1 mutant when grown on arabinan. Comparative transcriptomic analysis revealed that deletion of Mtlat-1 enhanced the expression of genes encoding xylanolytic and arabinanolytic enzymes. These results suggest that the l-arabinose transporter MtLat-1 represses the expression of hemicellulolytic enzymes on arabinan but not on xylan.

Furthermore, a point mutant of MtLat-1 in which the transport activity was abolished exhibited effects similar to those of wild-type MtLat-1 on protein secretion and hemicellulolytic activities on arabinan, clearly demonstrating that the involvement of MtLat-1 in M. thermophila hemicellulase gene expression is independent of its transport activity. These experiments help to separate nutrient sensing from substrate transport, and indicate that the MtLat-1 protein serves as a transceptor that might sense and transduce the molecular signal of l-arabinose, one of the end products of hemicellulose degradation, to repress the induction of hemicellulase genes. The role of transceptors in the induction of genes involved in plant biomass degradation has also been demonstrated for CDT-1/2 in N. crassa and Crt1 in T. reesei […, 52].

The C-terminal cytoplasmic fragments of several transporters and sensors have been reported to be essential for signal transduction. Truncation of their C-terminal amino acids did not alter their transport activities but abolished their signaling function […, 52]. In T. reesei, a Crt1 mutant retaining only the first five amino acids of the C-terminus still had lactose transport activity but lost cellulase-induction activity. In M. thermophila, MtLat-1 has a C-terminal extension of 63 amino acids, which is markedly shorter than those of Snf3 (303 amino acids) and Rgt2 (220 amino acids), but longer than that of Crt1 (44 amino acids). […] C-terminus restored the growth phenotype and hemicellulolytic activities of strain ΔMtlat-1 to levels similar to those of the WT on arabinan, showing that the C-terminal cytosolic region of MtLat-1 is not required for its signaling function. Similarly, in Pichia pastoris, deletion of the C-terminal fragment (150 amino acids) of the glucose sensor Gss1 slightly affected glucose catabolite repression and pexophagy, but the signaling function of the protein was maintained. In T. reesei, internalization of Crt1 was observed on cellulose, although it also occurred on glucose or glycerol and may not be correlated with the induction of cellulases.

Transcription factors play an essential role in the regulation of the expression of CAZyme genes and of genes involved in sugar transport and intracellular catabolism. Several essential transcription factors associated with hemicellulose deconstruction and utilization have been studied in fungi, including CreA/CRE-1, Xyr1/XlnR, and Ara-1. CreA/CRE-1 is a major regulator of CCR, a process through which the expression of genes involved in the utilization of non-preferred carbon sources, including hemicellulose/cellulose, is inhibited […, 55]. In T. reesei, XlnR/XYR1 regulates both cellulase and hemicellulase genes, while it is a major positive activator of hemicellulase genes in N. crassa and F. graminearum […, 21, 24]. In M. thermophila, Xlr1 controls the expression of xylanolytic genes and of genes involved in pentose transport and catabolism, but it has a smaller impact on l-arabinose catabolism. In several fungi, Ara-1 regulates d-galactose- and l-arabinose-releasing and catabolic genes. In N. crassa and T. reesei, deletion of ara-1 dramatically downregulated the expression of the l-arabinose transporter Lat-1 […, 42]. In M. thermophila, induced expression of Mtara-1 was observed only on l-arabinose, as was also observed for Mtlat-1. Transcriptional analysis and EMSAs demonstrated that MtAra-1 directly regulates Mtlat-1 expression by binding to its promoter region. Consistent with observations in N. crassa […, 27], Mtara-1 disruption abolished the growth of M. thermophila on l-arabinose and d-galactose, but no effect on the growth of strain ΔMtara-1 was observed on d-xylose.
Moreover, the ΔMtara-1 mutant exhibited a remarkable reduction in secreted protein production and hemicellulolytic activities when grown on arabinan. Our transcriptomic data revealed significantly downregulated expression of all arabinanolytic enzyme-encoding genes in the ΔMtara-1 mutant grown on arabinan. These results demonstrate that MtAra-1 is mainly involved in regulating the expression of arabinanolytic genes and of the genes associated with l-arabinose and d-galactose catabolism in M. thermophila.

In this study, we demonstrated that MtLat-1 is involved in l-arabinose transport and in signal transduction associated with the suppression of expression of hemicellulolytic enzyme-encoding genes in M. thermophila. The absence of Mtlat-1 caused decreases in l-arabinose uptake and consumption rates, but led to increases in mycelium growth, secreted protein production, and hemicellulolytic enzyme activities on arabinan. Furthermore, point mutation of MtLat-1 revealed that the involvement of MtLat-1 in the expression of hemicellulase genes is independent of its transport activity. Moreover, the transcription factor MtAra-1 plays an important role in the induction of arabinanolytic genes and l-arabinose catabolism, and directly regulates Mtlat-1 expression by binding to its promoter region.

M. thermophila ATCC 42464 was obtained from the American Type Culture Collection (ATCC). This WT strain and its mutants were grown on Vogel's minimal medium (VMM) with 2% (w/v) glucose at 35 °C to obtain mature conidia. Antibiotics were added when needed to screen for transformants. For media-shift experiments, M. thermophila strains were precultured in 100 mL of 1 × VMM with 2% glucose for 16 h.
Subsequently, the mycelia were collected, washed three times with 1 × VMM, and transferred to 100 mL of fresh 1 × VMM with 2% l-arabinose (Sigma) or arabinan (Megazyme) for a further 2 h of incubation. For fungal growth assays, mature conidia were inoculated into 100 mL of 1 × VMM with 2% carbon source at a concentration of 2.5 × 10⁵ conidia/mL in 250-mL Erlenmeyer flasks. The cultures were incubated at 45 °C with rotary shaking at 150 rpm.

S. cerevisiae BSW5AP, carrying a heterologous arabinose metabolic pathway, was cultured in l-arabinose medium in 250-mL Erlenmeyer flasks at an initial OD600 of 1.0. Cells were cultivated in an orbital shaker at 200 rpm and 30 °C, and samples were taken at intervals.

E. coli Mach-T1 cells were used for vector manipulation and propagation, and were cultivated in Luria–Bertani medium with 100 µg/mL ampicillin or 50 µg/mL kanamycin for plasmid selection.

The primer sequences used in this study are listed in Additional file . The sgRNA target sites of the genes (Mtlat-1, Mycth_95427; Mtara-1, Mycth_2121737; MtPdr-1, Mycth_53224; MtPdr-2, Mycth_46266; and Mycth_2300935) were identified using the sgRNACas9 tool. Fragments containing the M. thermophila U6 promoter sequence, the synthetic gRNA scaffold sequence, and the target DNA sequence were constructed via overlapping PCR and cloned into the pJET1.2/blunt cloning vector to generate the corresponding plasmids U6p-Mtlat-1-sgRNA, U6p-Mtara-1-sgRNA, U6p-MtPdr-1-sgRNA, U6p-MtPdr-2-sgRNA, and U6p-Mycth_2300935-sgRNA. To construct donor DNA sequences, the 5′- and 3′-flanking fragments of Mtlat-1, Mtara-1, MtPdr-1, MtPdr-2, and Mycth_2300935 were separately amplified by PCR from M. thermophila genomic DNA, fused with the selectable marker cassette PtrpC-neo from plasmid p0380-neo, and cloned […].

To generate the complementation strain, a DNA fragment containing the upstream region (1485 bp), the downstream region (853 bp), and the full-length Mtlat-1 gene (1934 bp) was amplified from the M. thermophila genome and ligated between the BglII and BamHI sites of pAN52-PgpdA-bar. […] was inserted at the SpeI and BamHI sites, under the control of the strong constitutive tef1 (Mycth_2298136) promoter of M. thermophila, to generate the corresponding recombinant vectors. Polyethylene glycol (PEG)-mediated protoplast transformation for gene disruption or overexpression in M. thermophila was performed as described previously.

For expression of transporters in S. cerevisiae strain BSW5AP, the genes encoding MtLat-1, MtLat-1–GFP, MtLat-1(R136K), and MtLat-1(R136K)–GFP were separately inserted into the shuttle plasmid p426kanmx4. S. cerevisiae transformations were carried out as described previously.

To detect the subcellular localization of MtLat-1 and its mutants in M. thermophila, mycelia were pre-grown in VMM with 2% arabinose for 16 h at 45 °C, and the cells were then observed using an Olympus BX51 fluorescence microscopy system. Recombinant S. cerevisiae BSW5AP strains expressing GFP-tagged sugar transporters were inoculated into YPD medium and grown to exponential phase (OD600 ~ 1.0). The cells were collected, washed twice with sterile water, and resuspended in sterile water. Next, 10 μL of culture was spotted on a cover glass for confocal microscopy. Images were processed using ImageJ software.

For sugar uptake assays, M. thermophila strains were precultured in 1 × VMM containing 2% glucose for 18 h. The mycelia were then washed three times in 1 × Vogel's salts without any carbon source and shifted into 0.5% arabinose medium for an additional 4 h of induction. After that, the mycelia were harvested, washed again as above, and resuspended in uptake buffer for 20 min.
The residual sugar in the supernatant was determined by HPLC on an e2695 instrument, and the fungal biomass was completely dried to determine the dry weight for data normalization. The M. thermophila strains were inoculated into 100 mL of 1 × VMM with 2% carbon source at a concentration of 2.5 × 10^5 conidia/mL in 250-mL Erlenmeyer flasks and incubated at 45 °C with rotary shaking at 150 rpm. Samples were taken at the indicated times for assays of secreted protein and enzyme activity. The total extracellular protein in culture supernatants was measured using a Bio-Rad protein assay kit. Endo-1,5-l-arabinanase, endo-1,4-xylanase, and endo-glucanase activities in the culture supernatants of M. thermophila strains were measured using an endo-1,5-l-arabinanase assay kit (Megazyme), an Azo-Xylan kit (Megazyme), and an Azo-CM-Cellulose assay kit (Megazyme), respectively. For RNA extraction, M. thermophila mycelia were shifted into 1 × VMM with 2% carbon source for 4 h of induction, harvested by vacuum filtration, and immediately ground to a fine powder under liquid nitrogen with a pestle and mortar for the subsequent extraction of total RNA. For transcriptomic analysis of the Mtlat-1 deletion mutant grown on arabinan, the M. thermophila wild-type strain and the Mtlat-1 deletion mutant were cultured in 1 × VMM with 2% arabinan (sampled at 2 and 4 days). RNA integrity and concentration were determined using agarose gel electrophoresis and a NanoDrop spectrophotometer. Purified RNA samples with an RNA integrity number > 8.0, determined using an Agilent 2100 Bioanalyzer (Agilent Technologies), were sequenced by Novogene Corporation on the Illumina NovaSeq 6000 platform to generate 150-bp paired-end reads.
Clean reads were mapped to the M. thermophila ATCC42446 genome sequence. Genes with |log2 fold-change| ≥ 1, RPKM ≥ 20 (in at least one sample), and DESeq P-adj value < 0.05 were considered significantly differentially expressed between two samples. The detailed data are shown in Additional file 1. For RT-qPCR analysis, precultured mycelia of M. thermophila strains were shifted into induction medium containing 2% arabinose as the carbon source and incubated for 4 h, then harvested for the subsequent extraction of total RNA using the method described previously. Quantitative expression levels were calculated by the 2^−ΔΔCt method with the actin-encoding gene (Mycth_2314852) as the internal control. The DNA sequence encoding the DNA-binding domain (DBD) of transcription factor MtAra-1 was amplified from M. thermophila cDNA and inserted between the BamHI and XhoI sites of pGEX-4T-1 to form a GST-tagged protein expression plasmid. The recombinant plasmid was introduced into E. coli BL21 (DE3) cells for protein expression and purification as previously described. Fragments of the Mtlat-1 promoter region were amplified from M. thermophila genomic DNA using primers shown in Additional file 1, and EMSAs were performed as described previously. Additional file 1. Table S1: List of PCR primers used in this study. Table S2: Profiles of RNA-Seq reads mapped to the genome of M. thermophila. Table S3: Transcriptomic profiles of 26 sugar transporters with robust expression levels (RPKM > 20) in at least one tested condition. Table S4: Genes showing significantly different transcriptional levels in strain ΔMtlat-1 compared with the WT when grown on 1 × VMM with 2% arabinan for 4 days. Table S5: Transcriptomic profiles of genes encoding major hemicellulases from RNA-Seq data when grown on 1 × VMM with 2% arabinan.
Table S6: Gene ontology (GO) analysis of up-regulated genes in strain ΔMtlat-1 compared with the WT when grown on 1 × VMM with 2% arabinan for 4 days. Table S7: Transcriptomic profiles of transcription factor genes with significantly upregulated expression levels in WT M. thermophila grown on l-arabinose, d-xylose, or d-glucose, compared with that under no carbon. Table S8: Genes showing significantly different transcriptional levels in strain ΔMtara-1 compared with the WT strain when grown on 2% arabinan for 4 h of induction. Additional file 2. Fig. S1: Sugar consumption by M. thermophila strains WT, ΔMtlat-1, and ΔMtara-1, when grown in 1 × VMM with 2% d-glucose (A), 2% d-xylose (B), or 2% d-galactose (C). Error bars indicate the SD from at least three biological replicates. Fig. S2: Growth phenotypes of the M. thermophila ΔMtlat-1 mutant under xylan condition. A Cell dry weight of M. thermophila strains WT and ΔMtlat-1 after growth on 2% xylan for 2 d. B Protein concentrations, C xylanase activity, and D arabinanase activity of the culture supernatants for the M. thermophila strains grown in 2% xylan medium. Error bars indicate the SD from at least three biological replicates. Fig. S3: Growth phenotypes of the complementation strain of the ΔMtlat-1 mutant. A l-Arabinose transport rates of mycelia from the complementation strain Pn-Mtlat-1. B Cell dry weight of the M. thermophila complementation strain after growth on 2% arabinan for 4 days. C Protein concentrations, D arabinanase activity, and E xylanase activity of the culture supernatants for M. thermophila grown in 2% arabinan medium. Error bars indicate the SD from at least three biological replicates. Fig. S4: Comparative transcriptomic analysis of the WT and ΔMtlat-1 M. thermophila strains grown in arabinan medium for 2 d. A Total expression of genes encoding major hemicellulases from RNA-Seq data.
B Transcriptional profiles of genes encoding arabinanolytic enzymes in the ΔMtlat-1 and WT strains when grown on 2% arabinan for 4 days. Fig. S5: Cell dry weight of M. thermophila strains after growth on 2% arabinan for 4 days. Fig. S6: Heatmap analysis of expression profiles for putative sugar transporter genes with statistically significant differences in transcript levels between ΔMtara-1 and the WT under l-arabinose condition. Log-transformed expression values are color-coded. Fig. S7: Protein concentrations and hemicellulase/cellulase activity of the culture supernatants for M. thermophila strain ΔMtara-1 grown in 1 × VMM with 2% xylan (A) or 2% Avicel (B) for 4 days.

Chimeric antigen receptor T-cell (CAR-T) immunotherapy has been a breakthrough treatment for various hematological malignancies. However, cardiotoxicities such as new-onset heart failure, arrhythmia, acute coronary syndrome, and cardiovascular death occur in 10–15% of patients treated with CAR-T. This study aims to investigate the changes in cardiac and inflammatory biomarkers in CAR-T therapy to determine the role of pro-inflammatory cytokines. In this observational study, ninety consecutive patients treated with CAR-T underwent baseline cardiac investigation with electrocardiogram (ECG), transthoracic echocardiogram (TTE), troponin-I, and B-type natriuretic peptide (BNP). Follow-up ECG, troponin-I, and BNP were obtained five days post-CAR-T. In a subset of patients (N = 53), the serum inflammatory cytokines interleukin (IL)-2, IL-6, IL-15, interferon (IFN)-γ, tumor necrosis factor (TNF)-α, granulocyte-macrophage colony-stimulating factor (GM-CSF), and angiopoietins 1 and 2 were tested serially, at baseline and daily during hospitalization. Adverse cardiac events were defined as new-onset cardiomyopathy/heart failure, acute coronary syndrome, arrhythmia, and cardiovascular death.
Eleven patients (12%) had adverse cardiac events. Adverse cardiac events appeared to occur among patients with advanced age, higher baseline creatinine, and higher left atrial volume index. Day 5 BNP levels, but not troponin-I, were higher in patients with adverse cardiac events compared to those without. The maximum levels of IL-6, IFN-γ, and IL-15 were also higher in the adverse cardiac events group. However, cardiac and inflammatory biomarker levels were not associated with cardiac events. Patients who developed cardiac events did not exhibit worse survival compared with patients without cardiac events (log-rank p = 0.200). The changes in serial inflammatory cytokines after CAR-T in the setting of adverse cardiac events suggest pro-inflammation as a pathophysiology and require further investigation. CAR-T related cardiotoxicity has elevated cardiac and inflammatory biomarkers. #CARTCell #CardioOnc #CardioImmunology. The online version contains supplementary material available at 10.1186/s40959-023-00170-5. Chimeric antigen receptor T-cell (CAR-T) immunotherapy has been a breakthrough for various CD19+ hematological malignancies, including lymphoma. The pathophysiology of these cardiovascular toxicities is thought to be secondary to cytokine release syndrome (CRS) and potentiated by both cardiac and non-cardiac risk factors. The purpose of this study was to investigate the changes of cardiac biomarkers and inflammatory cytokines that occur in the setting of adverse cardiac events after CAR-T therapy. This was an observational study in a single National Cancer Institute (NCI)-designated academic center (H. Lee Moffitt Cancer Center). This study was approved by the IRB of the University of South Florida (Pro00029257 and Pro00021733). Participating patients provided informed consent for the quantification of inflammatory cytokines.
Inclusion criteria included patients diagnosed with B-cell lymphoma who were treated with four types of CAR-T therapy. As a collaboration between our cardio-oncology and CAR-T teams, we set up standard clinical practice guidelines at our institution beginning October 2020. Consecutive patients undergoing CAR-T therapy from October 2020 until October 2021 underwent baseline and follow-up cardiac biomarker testing. This study included a retrospective review of clinical data, as summarized in the section below. In addition, of the consecutive patients reviewed, a subset of patients were enrolled in a prospective observational study investigating the validity of a daily cytokine monitoring system for CRS. Baseline cardiac investigation at the time of evaluation for CAR-T therapy included electrocardiogram, transthoracic echocardiogram, troponin-I, and B-type natriuretic peptide (BNP). Follow-up ECG, troponin-I, and BNP levels were obtained five days after CAR-T infusion and at the development of CRS grade ≥2 (fever with hypotension or hypoxia). The CRS grading was based on published guidelines by the American Society for Transplantation and Cellular Therapy (ASTCT). Separate from this study, a subset of patients were enrolled in a prospective study on inflammatory biomarkers in subjects who underwent CAR-T (USF IRB Pro00021733). The study was open to all patients who underwent commercial CAR-T, and all subjects who provided written informed consent were included, as previously described. Clinical data were extracted retrospectively from electronic medical records, including demographics, baseline cardiac and oncologic risk factors, and baseline laboratory and transthoracic echocardiogram results. Cardio-oncologists reviewed all electrocardiograms and transthoracic echocardiograms. Echocardiogram data were extracted from clinical reports. LV dimensions were calculated from the parasternal long axis.
LV ejection fraction was calculated using the method-of-discs (MOD). LA volume was calculated using the MOD method, indexed to body surface area. Data were collected and managed using the REDCap electronic data management system hosted at Moffitt Cancer Center. Continuous variables were presented as mean ± standard deviation or median with interquartile range, depending on the normality of the data assessed with the Shapiro–Wilk test. Continuous variables were compared between groups using Student's t-test or a nonparametric comparison (Mann–Whitney test), depending on the normality of the data. Categorical data were compared using a chi-square test. Univariate logistic regression analyses were performed to report odds ratios and 95% confidence intervals (95% CI) of factors associated with cardiac events. Kaplan–Meier survival analysis and a log-rank test were performed to determine differences in median overall survival of patients with and without cardiac events. A two-tailed p-value ≤0.05 was considered statistically significant. Statistical analysis was performed using R software version 4.0.4 and GraphPad Prism (version 9). Ninety patients were included in this study. The cohort's median age was 68 years, with 61.1% being male (Table 1). Eleven patients developed adverse cardiac events (12.2%). One patient developed cardiomyopathy with reduced left ventricular ejection fraction and was later diagnosed with acute myocarditis based on cardiac MRI. Ten patients (11.1%) developed atrial fibrillation, one of whom had a history of atrial fibrillation prior to CAR-T therapy. A total of 26 patients (28.9%) underwent follow-up TTE at a median of 10 days [interquartile range 6–23 days] after CAR-T. Patients in the cardiac events group were older, had higher baseline creatinine, and had a larger indexed left atrial volume compared with those who did not develop cardiac events (Table 1).
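The study's analyses were run in R and GraphPad Prism; purely as an illustrative sketch of the nonparametric two-group comparison described above, the Mann–Whitney U test with a normal approximation for the p-value can be written in a few lines of Python (the data here are synthetic, not from the study):

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test with normal approximation.

    Returns (U, p) for sample x versus sample y. Ties get midranks;
    no tie correction in the variance, so this is a sketch for
    moderate sample sizes, not a replacement for R's wilcox.test.
    """
    n1, n2 = len(x), len(y)
    combined = sorted([(v, "x") for v in x] + [(v, "y") for v in y],
                      key=lambda t: t[0])
    rank_sum_x = 0.0
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0        # average of ranks i+1 .. j
        rank_sum_x += midrank * sum(1 for k in range(i, j)
                                    if combined[k][1] == "x")
        i = j
    u = rank_sum_x - n1 * (n1 + 1) / 2.0   # U statistic for sample x
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

# Synthetic day-5 biomarker values for two hypothetical groups:
events = [220.0, 340.0, 510.0, 610.0]
no_events = [90.0, 110.0, 130.0, 150.0, 170.0]
u, p = mann_whitney_u(events, no_events)
```

Because every value in the first synthetic group exceeds every value in the second, U equals its maximum (n1·n2 = 20) and the approximate p-value falls below 0.05.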
Baseline cardiac comorbidities, including prior heart failure, coronary artery disease, atrial arrhythmia, and chronic kidney disease, were not significantly different in the cardiac events group (Table 1). After CAR-T therapy, the incidence of grade 2 or above CRS was similar in both groups (Table 3). There was no difference in the rate of tocilizumab treatment between those who did or did not develop cardiac events. There were eleven deaths (12.2%) one year post-CAR-T. Of those, nine deaths were due to cancer progression, one due to CRS, and one due to pneumonia. The patients with cardiac events did not exhibit worse survival compared with patients without cardiac events. The baseline troponin-I and BNP were similar between the two groups (Table 3). After CAR-T therapy, the cardiac events group had a higher BNP level on day 5, whereas day 5 troponin-I levels during hospitalization were not different between the two groups. There was no association between cardiac biomarkers performed at baseline or day 5 post-CAR-T and cardiac events in univariate logistic regression analysis. Of the 31 patients who developed CRS grade 2 or above, post-CAR-T TTE was performed in 17 patients (54.8%). A total of fifty-three patients had serum testing of inflammatory cytokines at baseline and daily during hospitalization (Supplementary Tables 1–3); inflammatory cytokine levels are summarized in Fig. Of the total eleven patients who developed cardiac events, nine had inflammatory cytokines tested. Baseline clinical characteristics, post-CAR-T clinical events, and laboratory values were similar between patients who did and did not have inflammatory cytokine data, with 11.1% of patients experiencing atrial arrhythmia and one developing cardiomyopathy (1.1%). We identified that mainly clinical factors were associated with cardiac events.
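The survival comparison reported above (no difference between groups, log-rank p = 0.200) uses the standard two-group log-rank statistic, which has a simple closed form. The sketch below implements it on synthetic follow-up data (times in days, event = 1 for an observed death, 0 for censoring); it is an illustration, not the study's code:

```python
import math

def logrank(times1, events1, times2, events2):
    """Two-group log-rank test; returns (chi2, signed z).

    chi2 is compared against a chi-square distribution with 1 df;
    z is sqrt(chi2) signed by the excess of observed over expected
    deaths in group 1.
    """
    data = ([(t, e, 1) for t, e in zip(times1, events1)] +
            [(t, e, 2) for t, e in zip(times2, events2)])
    event_times = sorted({t for t, e, g in data if e == 1})
    o_minus_e = 0.0   # observed minus expected deaths in group 1
    var = 0.0
    for t in event_times:
        n1 = sum(1 for tt, e, g in data if g == 1 and tt >= t)
        n2 = sum(1 for tt, e, g in data if g == 2 and tt >= t)
        d1 = sum(1 for tt, e, g in data if g == 1 and e == 1 and tt == t)
        d2 = sum(1 for tt, e, g in data if g == 2 and e == 1 and tt == t)
        n, d = n1 + n2, d1 + d2
        if n < 2:
            continue
        o_minus_e += d1 - d * n1 / n
        var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    if var == 0.0:
        return 0.0, 0.0
    chi2 = o_minus_e ** 2 / var
    return chi2, math.copysign(math.sqrt(chi2), o_minus_e)

# Synthetic example: two small hypothetical cohorts.
chi2, z = logrank([30, 120, 200, 360, 400], [1, 1, 0, 1, 0],
                  [90, 150, 250, 365, 420], [1, 0, 0, 0, 0])
```

With identical inputs for both groups the statistic is exactly zero, which is a convenient sanity check on the risk-set bookkeeping.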
However, cardiac (troponin-I and BNP) and inflammatory cytokine levels were not associated with cardiac events after CAR-T. Notably, our cohort had differences in the types of cardiac events compared to prior studies. In our study, baseline clinical factors such as age and baseline creatinine were associated with adverse cardiac events, similar to prior literature. We investigated cardiac biomarkers five days after CAR-T to assess their role in cardiotoxicities. Day 5 troponin-I was not elevated in those who developed cardiac events; only one in ten patients who developed atrial fibrillation had elevated day 5 troponin-I. Troponin-I is a traditional biomarker associated with new-onset cardiomyopathy, myocarditis, arrhythmia, and cardiovascular death in cancer patients. In contrast, there was only a tendency toward an association between day 5 BNP levels and cardiac events. The accentuated BNP level after CAR-T in the setting of cardiac events may reflect baseline comorbidities, although this was not statistically significant because of the low number of events. This may lead to sensitization to hemodynamic stress in cardiac events such as atrial fibrillation. Prior observational studies have shown inconsistent results regarding BNP levels and their association with cardiotoxicity. Of note, the current cohort had a lower incidence of high-grade CRS. Only two patients (2.2%) developed high-grade CRS (one grade 3 and one grade 5), with no CRS grade 4. This compares to an 8–22% incidence of high-grade CRS (grade 3 or above) in clinical trials and real-world commercial CAR-T use. Contrary to prior literature, adverse cardiac events did not occur more frequently with higher-grade CRS. To that end, we performed a comprehensive analysis of inflammatory cytokines to see whether pro-inflammatory status plays a role in the pathophysiology of cardiac events. Several pro-inflammatory cytokines were higher in the cardiac events group.
This is in line with prior studies showing elevated IL-6 and IL-15 in patients with atrial fibrillation. There are several limitations to the current study. First, the study was a retrospective review of cardiac events. Detecting arrhythmia by abnormal heart rate or symptoms may underestimate the incidence of atrial fibrillation; however, the incidence of AF is comparable to that in other studies. To our knowledge, no published study has performed continuous telemetry monitoring for all CAR-T patients. For other cardiac events, a close collaboration between the cardio-oncology and cellular immunotherapy teams was developed to evaluate for cardiotoxicities; therefore, the chances of missing cardiotoxicity after CAR-T were low. Second, we did not perform daily measurements of cardiac biomarkers, which could have missed pre-clinical cardiac injury. Next, the current exploratory study had a small sample size and a low incidence of adverse cardiac events, which did not allow for multivariate analysis. The strengths of our study include the measurement of inflammatory cytokines in the setting of CAR-T related cardiac events. Until now, the association between CRS and cardiac events was the only evidence for inflammation-induced cardiac events. Also, our study population reflects the relatively recent changes in CRS management, with earlier and more aggressive use of tocilizumab. Although speculative, this may have changed the incidence of more severe cardiac events such as cardiomyopathy/heart failure. Further studies with larger cohorts are needed to test this hypothesis. In conclusion, adverse cardiac events, predominantly atrial fibrillation, occur commonly after CAR-T. The changes in serial inflammatory cytokines after CAR-T in the setting of adverse cardiac events suggest pro-inflammation as a pathophysiology and warrant further investigation. Below is the link to the electronic supplementary material. Supplementary Material 1