diff --git "a/cluster/248.jsonl" "b/cluster/248.jsonl"
new file mode 100644
--- /dev/null
+++ "b/cluster/248.jsonl"
@@ -0,0 +1,53 @@
+{"text": "Primary human tissues are an invaluable, widely used tool for the discovery of gene expression patterns that characterize disease states. Tissue processing methods remain unstandardized, leaving open questions about how best to store collected tissues and maintain reproducibility between laboratories. We subdivided uterine myometrial tissue specimens and stored split aliquots using the most common tissue processing methods before comparing quantitative RNA expression profiles on the Affymetrix U133 human expression array. Split samples and inclusion of duplicates within each processing group allowed us to undertake a formal genome-wide analysis comparing the magnitude of result variation contributed by sample source (different patients), processing protocol (fresh vs. frozen vs. 24 or 72 hours in RNALater), and random background (duplicates). The dataset was randomly permuted to define a baseline pattern of ANOVA test statistic values against which the observed results could be interpreted. 14,639 of 22,283 genes were expressed in at least one sample. Patient subjects provided the greatest source of variation in the mixed model ANOVA, with replicates and processing method the least. The magnitude of variation conferred by processing method (24 hours RNALater vs. 72 hours RNALater vs. fresh vs. frozen) was similar to the variability seen within replicates. Subset analysis of the test statistic according to gene functional class showed that the frequency of \"outlier\" ANOVA results within each functional class was overall no greater than expected by chance. Ambient storage of tissues for 24 or 72 hours in RNALater did not contribute any systematic shift in quantitative RNA expression results relative to the alternatives of fresh or frozen tissue.
This nontoxic preservative enables decentralized tissue collection for expression array analysis without a requirement for specialized equipment. Many of the hopes for achieving the clinical benefits of genomic medicine hinge on the ability to develop an efficient specimen conduit from clinic to laboratory. Quantitative gene expression studies have created unprecedented tissue collection and handling challenges. In particular, the rapid degradation of RNA and possible perturbation of expression following excision place a high premium on prompt stabilization of tissue samples intended for expression analysis. This can be accomplished by sending a dedicated, trained technologist outfitted with the necessary specialized materials, such as liquid nitrogen, into the clinical environment. Alternatively, clinicians can be enabled to process specimens directly in the course of patient care and send them in some stable form by unrushed and routine means for centralized processing. The latter is greatly preferred when patients are physically dispersed, and becomes essential in a multi-institutional setting. High throughput quantification of RNA expression in solid tissues has become a commonplace modality for genome-wide discovery of mechanisms of disease. Typically, groups of samples classified into comparison groups are used as a training set for expression pattern discovery, followed by validation in a fresh challenge set of annotated cases. The likelihood of success is highly dependent on the accuracy of classification within the training set and the ability to control random variables introduced during tissue processing and analytical measurement of RNA abundance. Efforts to standardize RNA quantification include sharing of information regarding probe design and use. Flash freezing, either by immersion in liquid nitrogen or on dry ice, is the most common means of stabilizing tissue samples intended for RNA analysis.
Local access to the necessary materials and the expense of cold shipping and/or storage limit these collection capabilities in most clinical settings. An additional disadvantage of frozen storage is that homogenization of frozen tissue must be accomplished rapidly to avoid the rapid RNA degradation that occurs during thawing of a previously frozen sample. Room temperature immersion of fresh tissue samples in aqueous sulfate salt solutions (such as ammonium sulfate) at controlled pH precipitates degradative RNases and other proteins. We studied the effects of differences between storage conditions on gene expression as measured by expression array. Duplicate uterine myometrial tissue samples from three women were processed under each of 4 fixed storage conditions \u2013 fresh, frozen, 24 hours in RNALater and 72 hours in RNALater. The 24 labeled cRNA samples were hybridized to expression arrays. We found no systematic bias in the measured quantitative level of gene expression by processing method, indicating that short term storage in RNALater is a valid alternative to traditional frozen storage. All (4!)^3 possible ways of permuting the 4 pairs of replicate samples within each subject were considered. For each of these, the F statistics were computed for each gene. To control the overall error rate, the distribution of the maximum F statistic over the genes was used. That is, for each gene, the p-value is the proportion of permutations with the maximum F statistic over all genes greater than or equal to the observed value for that gene. A test declaring as significant any genes with p < 0.05 then guarantees that the chance of any false positives being selected is < 5%. Similar analyses were performed replacing the distribution of the maximum F statistic with the distribution of the F statistics at the 95th percentile and then at the 90th percentile. After closer examination of the 387 genes in the 5% tail, we noted that most exhibited expression values below 100 for all 24 samples.
In fact, within a storage condition, 2 of the 3 patients exhibited null expression while the third patient showed expression values above null but below 100 for at least one replicate. Therefore, as an additional analysis, any expression value less than 100 was recoded as 100. Genes that showed expression levels of 100 across all 24 samples, and therefore lacked variability, were then removed from the analysis. This resulted in 7,853 genes for which at least one of the 24 samples had an expression level greater than 100. Of the 22,283 genes, 14,639 did not have null expression across all 24 samples. We fit the mixed model ANOVA to their log-transformed values and recorded the F statistic for each gene. The permutation distribution was used to assess the significance of the F statistics calculated for each gene in the dataset. In this approach all 13,824, or (4!)^3, permutations were used. Patient subjects provided the greatest source of variation in the mixed model ANOVA, with replicates and processing method the least (Figure). The 90th percentile of the test statistic was 2.58; the corresponding p-values for the maximum, 95th percentile and 90th percentile statistics were 0.94, 0.55 and 0.51, respectively. The values of test statistics seen at the 95% level in a randomly permuted dataset were comparable to those observed. Smooth muscle cells are evenly intermingled throughout the myometrial compartment, lending this tissue to physical subdivision into equivalent aliquots. This would not be possible with more complex tissues in which differing cell types are distributed asymmetrically within the specimen. Despite the equivalency of subdivided fractions that underwent varying storage treatments, it must be noted that this is a hormonally responsive tissue whose expression patterns would be expected to differ between individual women as a function of monthly changes in circulating sex hormones. We did not control for hormonal factors or indication for hysterectomy (prolapse or fibroids), but selected patients randomly.
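The permutation test described above (recompute per-gene F statistics for each within-subject relabeling of the four storage conditions, then compare each observed F to the distribution of the genome-wide maximum F) can be sketched as follows. This is a minimal illustration on simulated data, not the study's pipeline; the simplified F ratio mirrors the mixed-model test (mean square for storage over the storage-by-subject interaction mean square), and only a random subset of the (4!)^3 = 13,824 permutations is drawn, for speed.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Toy data: genes x (3 subjects x 4 storage conditions x 2 replicates),
# standing in for the log-transformed expression values in the study.
n_genes, n_subj, n_cond, n_rep = 200, 3, 4, 2
data = rng.normal(size=(n_genes, n_subj, n_cond, n_rep))

def storage_f(d):
    """Simplified storage-effect F: MS(storage) / MS(storage x subject)."""
    cell = d.mean(axis=3)                        # subject x condition cell means
    subj = cell.mean(axis=2, keepdims=True)      # subject means
    cond = cell.mean(axis=1, keepdims=True)      # condition means
    grand = cell.mean(axis=(1, 2), keepdims=True)
    ms_storage = n_subj * n_rep * ((cond - grand) ** 2).sum(axis=(1, 2)) / (n_cond - 1)
    inter = cell - subj - cond + grand           # interaction residuals
    ms_inter = n_rep * (inter ** 2).sum(axis=(1, 2)) / ((n_cond - 1) * (n_subj - 1))
    return ms_storage / ms_inter

obs = storage_f(data)

# Permute the 4 condition labels independently within each subject.
perms = list(permutations(range(n_cond)))
max_f = []
for _ in range(500):
    shuffled = np.empty_like(data)
    for s in range(n_subj):
        order = list(perms[rng.integers(len(perms))])
        shuffled[:, s] = data[:, s, order]
    max_f.append(storage_f(shuffled).max())
max_f = np.asarray(max_f)

# Per-gene p-value: fraction of permutations whose genome-wide maximum F
# meets or exceeds this gene's observed F (controls family-wise error).
pvals = (max_f[None, :] >= obs[:, None]).mean(axis=1)
```

The same scheme with the 95th or 90th percentile of F in place of the maximum gives the less conservative variants mentioned in the text.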
It comes as no surprise that expression differences between women, irrespective of processing method, emerged as the dominant source of inter-sample variation. This was anticipated in constructing the model by assigning the subject source of specimens as a random variable that could be measured against the fixed processing effects. It is likely that if a larger number of women had been included in the study, the observed biologic variation attributable to subject would have been even greater. Since our goal was to compare the magnitude of variation contributed by subjects to that conferred by processing method, we achieved a balanced design by having comparable degrees of freedom for those two variables. There are several critical procedural elements that must be highlighted for successful preservation of solid tissues in aqueous sulfate salt solutions such as RNALater. These reagents enter the tissue through passive diffusion, a process which follows simple physical principles. The distance between the tissue surface, which is exposed to preservative, and the innermost regions of the fragment should be minimized. We did this by cutting the tissues into 2 mm thick slices, thereby reducing the diffusion distance to 1 mm or less. Clumping of multiple fragments into a mass that excludes preservative may obviate the benefits of fine division. This can be avoided either by gentle agitation or by placement in a container sufficiently large that individual pieces are likely to disperse. Results reported here are for tissues stored at room temperature (23\u201325\u00b0C). Storage under the cooler conditions (4\u00b0C) recommended by the manufacturer of RNALater was not directly evaluated in this experiment, because it was our intent to mimic the storage interval and conditions commonly encountered when sending a specimen by express courier to a centralized processing facility.
Storage at temperatures substantially higher than 25\u00b0C, especially before the preservative has had an opportunity to penetrate the tissue, should be avoided. Split samples of fresh human tissue yield quantitatively similar RNA expression profiles whether processed fresh, frozen, or following 24\u201372 hour storage in RNALater. Formal statistical analysis shows that patient source is the predominant source of variation between samples, with processing method contributing a random level of variation comparable to that seen in split duplicates (replicates). Subset analysis by functional gene category did not identify a specific class of genes that responded differently by processing method. Use of nontoxic, ambient-environment tissue preservatives makes it practical to engage practicing clinicians directly in decentralized sample collection for high throughput expression analysis in a central location. Tissue handling closely resembles that used by clinicians to prepare specimens for routine pathology analysis. Upon receipt in a centralized facility, the samples can either be immediately homogenized or archived at -60\u00b0C. Normal fresh uterine myometrial tissues were collected randomly from three women undergoing hysterectomy for benign uterine disease. For each hysterectomy, a single 4 to 8 gram tissue fragment was subdivided into eight aliquots composed of thin slices measuring no more than 2 mm in thickness. Replicate aliquots were immediately triaged into one of four storage conditions prior to homogenization: 1) immediate homogenization; 2) flash freezing in liquid nitrogen and storage for 48 hours at -80\u00b0C; 3) 24 hours of immersion in RNALater at room temperature with gentle agitation; or 4) 72 hours of immersion in RNALater at room temperature with gentle agitation. Tissue was solubilized in Trizol reagent, and RNA was isolated according to the manufacturer's instructions.
In brief, the aqueous phase was resolved by addition of chloroform, and RNA was precipitated from the aqueous phase by addition of isopropyl alcohol. Pelleted RNA was washed with 70% ethanol, dried, and resuspended in water. Quality of total RNA was assessed by running a non-denaturing 1% agarose gel in tris-acetate buffer, which confirmed the integrity of the 18S and 28S ribosomal bands for all 24 total RNA preparations. Labeling and hybridization followed the manufacturer's protocol (Affymetrix GeneChip\u00ae Expression Analysis Technical Manual), and all arrays were scanned under a low PMT (photomultiplier tube) setting at 570 nm. Global scaling to a target value of 75 was applied to normalize all the arrays so they were comparable. The Affymetrix average-difference expression data and the P/A calls were used in the analysis. Probe sets determined to have no detectable signal above background mismatch hybridization were assigned a nominal value of 1 to facilitate later log transformations. Probesets having at least one tissue with detectable expression and an average difference above either 1 or 100 were selected to define subsets of 14,639 permissively or 7,853 stringently expressed genes, respectively. Further analysis was performed using the natural log transformed data of these probe subsets. Data files for all specimens processed are deposited online at the Gene Expression Omnibus at the National Center for Biotechnology Information. Double-stranded cDNA was generated from 8 \u03bcg total RNA using the Superscript Choice System (Life Technologies) with a T7-(dT)24 oligomer. cDNA was purified by phenol/chloroform extraction and ethanol precipitation. Biotin-labeled cRNA was prepared using the Enzo BioArray HighYield RNA Transcript labeling kit (Affymetrix). Unincorporated NTPs were removed from the biotinylated cRNA using an RNeasy kit (Qiagen). 10 \u03bcg of fragmented cRNA was hybridized to each Affymetrix HG-U133A array, containing probe sets representing approximately 22,000 genes.
Array hybridization and washing were done according to the manufacturer's protocol. A mixed model ANOVA was used, regarding storage condition as a fixed factor with four levels and subject as a random factor with three levels. The analysis of variance calculations for sums of squares in the mixed model ANOVA are identical to those for the fixed ANOVA model. Similarly, the degrees of freedom and mean squares are exactly the same. The mixed ANOVA model departs from the fixed ANOVA model only in the expected mean squares and the consequent choice of the appropriate test statistic. The mixed model also included a random storage-by-subject interaction. Replicate samples enabled us to estimate the replication error in the model. To test for the presence of storage main effects for each gene, we divided the mean square for storage by the mean square for the interaction effect between storage and subject. Functional annotations of probesets on the U133A chip were downloaded from the Netaffx\u2122 download center. GM and JW conceived and designed the research plan and participated in all aspects of data collection and analysis. DF participated in data analysis and interpretation. DN and DZ performed the statistical analysis. CL and HB performed the RNA isolations, chip hybridizations, and data collation."} +{"text": "Over 450 transfer RNA (tRNA) genes have been annotated in the human genome. Reliable quantitation of tRNA levels in human samples using microarray methods presents a technical challenge. We have developed a microarray method to quantify tRNAs based on a fluorescent dye-labeling technique. The first-generation tRNA microarray consists of 42 probes for nuclear encoded tRNAs and 21 probes for mitochondrial encoded tRNAs. These probes cover tRNAs for all 20 amino acids and 11 isoacceptor families. Using this array, we report that the amounts of tRNA within the total cellular RNA vary widely among eight different human tissues.
The brain expresses higher overall levels of nuclear encoded tRNAs than every tissue examined but one, and higher levels of mitochondrial encoded tRNAs than every tissue examined. We found tissue-specific differences in the expression of individual tRNA species, and tRNAs decoding amino acids with similar chemical properties exhibited coordinated expression in distinct tissue types. Relative tRNA abundance exhibits a statistically significant correlation with the codon usage of a collection of highly expressed, tissue-specific genes in a subset of tissues or tRNA isoacceptors. Our findings demonstrate the existence of tissue-specific expression of tRNA species, strongly implicating a role for tRNA heterogeneity in regulating translation and possibly additional processes in vertebrate organisms. Transfer RNAs (tRNAs) translate the genetic code of genes into the amino acid sequence of proteins. Most amino acids have two or more codons. Every organism has multiple tRNA species reading the codons for the same amino acid (tRNA isoacceptors). In bacteria and yeast, differences in the relative abundance of tRNA isoacceptors have been found to affect the level of highly expressed proteins. This tRNA abundance\u2013codon distribution relationship can have predictive power for the expression of genes based on their codon usage. Approximately 450 tRNA genes consisting of 49 isoacceptors and 274 different sequences have been annotated in the human genome. This work describes the first comparative analysis of tRNA expression levels in eight human tissues using microarray methods. The authors find significant, tissue-specific differences in the expression of tRNA species and coordinated expression among tRNAs decoding amino acids with similar chemical properties in distinct tissue types. Correlation of relative tRNA abundance with the codon usage of highly expressed, tissue-specific genes can be found among a subset of tissues or tRNA isoacceptors.
Differential tRNA expression in human tissues suggests that tRNA may play a unique role in regulating translation and possibly other processes in humans. These tRNA genes are scattered throughout the genome and are present on all chromosomes but the Y chromosome. Twenty-two additional tRNA genes are present in human mitochondrial DNA. To our knowledge, no systematic studies of tRNA expression among human tissues have been published. The dearth of information on tRNA expression is the result of technical and intellectual obstacles. Accurate quantitation of individual tRNA species is challenging due to the extensive secondary and tertiary structure of tRNA and numerous post-transcriptional modifications, both of which can interfere with hybridization-based detection. Why should the human genome contain such a diverse array of tRNA sequences? A compelling explanation is that controlling expression of individual tRNA species enables another level of translational control for specific gene products. In bacteria and yeast, differences in the relative abundance of tRNA isoacceptors for a given amino acid clearly impact the synthesis of highly expressed proteins. Here we describe the comparative analysis of tRNA levels in eight human tissues and two human cell lines using a microarray method adapted from our previously developed arrays for bacterial tRNAs. tRNA-Arg isoacceptors read the six arginine codons. tRNA was quantified by taking advantage of its universally conserved 3\u2032CCA sequence to attach a fluorescently labeled probe to tRNA present in total RNA prepared from tissues or cell lines. tRNA labeled in this manner was hybridized to DNA probes arrayed on glass slides. The brain sample was included in all hybridizations to correct for the variations in fluorescence labeling and array manufacturing. We used probes that are 70 to 80 nucleotides long, covering the length of the entire tRNA minus the conserved 3\u2032CCA sequence.
Probes at these lengths significantly increase hybridization efficiency and eliminate sensitivity to potential variations in post-transcriptional modifications. Sequence differences among tRNA-Arg isoacceptors (more than ten among 70 to 75 residues) enabled the design of three isoacceptor probes that separately cover tRNA genes with the ACG (modified to ICG) anticodon and the other arginine anticodons. The relation between relative tRNA abundance and codon usage was explored in three ways. First, all data points from the same tissue were plotted; for liver and brain, a linear fit of this plot gives a significant relation, suggesting a correspondence between tRNA abundance and codon usage. Similar relations can be found with r-values from 0.90 to 0.94 and p-values from 0.016 to 0.039. For tRNA-Arg, correlations in two tissues (liver and thymus) can be found with r-values of 0.93 and 0.97 and p-values of 0.067 and 0.033, respectively. We have found that human tRNA expression varies by as much as tenfold among human tissues. Given the central role of tRNA in protein synthesis, this wide variation of tRNA abundance may reflect translational control via the availability of certain tRNAs. Since tRNA is the dominant ligand for the multitasking protein EF-1\u03b1, variations in tRNA levels may provide a mechanism to link translation with the dynamics of the cytoskeleton. Transcriptional control of tRNA genes may therefore play a role in the function of human tissues or possibly in cellular development and differentiation. tRNA abundance may also play a role in translational control of highly expressed, tissue-specific genes via their codon usage. Total RNA from eight human tissues was purchased from Stratagene (http://www.stratagene.com): brain, liver (No. 735017), vulva (No. 735067), testis (No. 735064), ovary (No. 735260), thymus (No. 540141), lymph node (No. 540021), and spleen (No. 540035).
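The r- and p-values quoted above come from linear fits of relative tRNA abundance against the codon usage of highly expressed, tissue-specific genes. A minimal sketch of such a fit follows; the numbers are invented for illustration and are not the paper's data.

```python
from scipy.stats import pearsonr

# Hypothetical relative abundances for five tRNA isoacceptors in one tissue,
# paired with the codon usage frequencies of a set of highly expressed,
# tissue-specific genes (illustrative values only).
tRNA_abundance = [1.00, 0.62, 0.48, 0.35, 0.21]
codon_usage = [0.38, 0.24, 0.18, 0.12, 0.08]

# Pearson correlation between abundance and codon usage, as in the
# tissue-by-tissue analysis described above.
r, p = pearsonr(tRNA_abundance, codon_usage)
print(f"r = {r:.2f}, p = {p:.4f}")
```

With real data, each tissue (or each isoacceptor family across tissues) yields its own r and p, which is how subsets such as liver and thymus can show strong correlations while others do not.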
After the microarray measurements were completed, it was found that the tissue RNA samples from Stratagene had undergone a LiCl precipitation step, which is known to result in the loss of small RNAs. The total RNA from the HeLa and HEK293 cell lines was obtained using RNAwiz (Ambion, http://www.ambion.com) according to the manufacturer's manual; this procedure does not include LiCl precipitation or other steps known to bias against tRNAs. The three tRNA standards, including E. coli tRNATyr (No. R0258) and yeast tRNAPhe (No. R4018), were purchased from Sigma-Aldrich and used without further purification. The microarray experiment consists of four steps starting from total RNA: (i) deacylation to remove remaining amino acids attached to the tRNA, (ii) selective Cy3/Cy5 labeling of tRNA, (iii) hybridization with prefabricated arrays, and (iv) data analysis. For deacylation, 0.25 \u03bcg/\u03bcl total RNA premixed with the three tRNA standards at 0.17 \u03bcM each was incubated in 100 mM Tris-HCl (pH 9.0) at 37 \u00b0C for 30 min. The solution was neutralized by the addition of an equal volume of 100 mM Na-acetate/acetic acid (pH 4.8) plus 100 mM NaCl, followed by ethanol precipitation. Deacylated total RNA was dissolved in water, and its integrity was examined using agarose gel electrophoresis. For Cy3/Cy5 labeling, tRNA in the total RNA mixture was selectively labeled with either the Cy3 or Cy5 fluorophore using an enzymatic ligation method described previously, with labeled oligonucleotides (Amersham, http://www.amersham.com) and T4 DNA ligase (USB, http://www.usbweb.com); the ligation of Oligo-2 required substantially less T4 DNA ligase than that of Oligo-1. Hybridization was performed at 60 \u00b0C overnight with 1 \u03bcg each of Cy3- or Cy5-labeled total RNA mixture using Oligo-1, and with 1 \u03bcg of Cy3-labeled total RNA and 2.5 \u03bcg of Cy5-labeled total RNA using Oligo-2 (because only 40% of the Oligo-2 used in this work contained the Cy5 fluorophore).
Multiple arrays were run using the brain reference sample labeled with either Cy3 or Cy5. The array also contained probes for Drosophila nuclear tRNA genes, 34 probes for C. elegans nuclear tRNA genes, three probes for the bacterial and yeast tRNA standards, and eight probes for human tRNA hybridization controls. Nonhuman probes were used as specificity controls for hybridization of human samples. Eighteen replicates of each probe were printed on each array. The descriptions and sequences of the DNA oligonucleotide probes used for human nuclear and mitochondrial tRNA genes are provided in the supporting tables. The microarray printing and hybridization conditions were the same as those in our previous bacterial tRNA studies. Arrays were scanned (Axon, http://www.axon.com) to obtain fluorescence intensities and the Cy5/Cy3 ratio per pixel at each probe spot. The averaged Cy5/Cy3 ratio per pixel at each probe spot was first normalized to the averaged value of the three tRNA standards prior to subsequent analysis. For all tissue samples, the brain total RNA was used as the reference sample at equal amounts of total RNA as determined by UV absorbance.
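The normalization step just described (dividing each spot's averaged Cy5/Cy3 ratio by the mean ratio of the three spiked-in tRNA standards) amounts to the following. The probe names and ratio values here are placeholders for illustration, not the study's data.

```python
import numpy as np

# Hypothetical per-spot mean Cy5/Cy3 ratios for a tissue sample (Cy5)
# hybridized against the brain reference (Cy3); names are placeholders.
ratios = {
    "Ala-AGC": 1.8,
    "Gly-GCC": 0.9,
    "mt-Leu": 2.4,
    "std_1": 1.5,       # the three spiked-in tRNA standards
    "std_2": 1.4,
    "std_3": 1.6,
}

standards = ["std_1", "std_2", "std_3"]
std_mean = np.mean([ratios[s] for s in standards])

# Normalize every non-standard probe to the averaged ratio of the three
# spiked-in standards, correcting for labeling and loading differences.
normalized = {probe: r / std_mean
              for probe, r in ratios.items() if probe not in standards}
print(normalized)
```

Because the standards are added at a fixed concentration to every sample before labeling, their mean ratio captures labeling- and array-level variation, and dividing by it leaves only the biological differences relative to the brain reference.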
tRNA constitutes up to 15% of total RNA. For data analysis, arrays were scanned using a GenePix 4000b scanner. Supporting information includes Figure S2 (36 KB PDF), Table S1 (10 KB PDF), Table S2 (17 KB PDF), Table S3 (8 KB PDF), Table S4 (12 KB PDF), Table S5 (429 KB XLS), and Table S6 (34 KB XLS)."} +{"text": "Scores aimed at contributing to the optimization of exercise testing (ET) have been developed, and the experience with their application in coronary artery disease (CAD) has proven favorable. Although there is debate on the use of scores in clinical practice, those who stand for it argue that they may decrease the rate of undiagnosed CAD, besides reducing the number of patients without disease who undergo highly expensive tests. Besides improving diagnostic and prognostic accuracy, scores remove interpretation biases and reduce variability in the decision-making process. Physicians frequently make clinical decisions based on their personal experience, instead of following a rational decision-making process, in addition to trusting more the results of more expensive tests, such as perfusion imaging or echocardiography. Using scores has shown great results, as good as or even better than the aforementioned tests. Although many scores adopt particular features of electrocardiographic responses in their composition, there are no references for a proposal to graduate myocardial ischemia documented in ET. Most cardiologists divide the wide spectrum of electrocardiographic alterations into just two categories (negative/positive), which still lack a clearer definition.
Such lack of a proper categorization of the ischemic response leads to inappropriate comparison of results from large studies, thus promoting interpretation biases. To this moment, no line of research has turned primarily toward CAD evaluation through ET with a view beyond the simple dichotomy (negative/positive test), to provide more objective data about the degree of myocardial ischemia documented during the test. In this article, we propose an electrocardiographic score for graduating myocardial ischemia during exercise testing. This is a conceptual proposition, and clinical studies must be designed to validate the new score in many populations and clinical conditions. Inter-observer variation studies and comparisons with other scoring systems and diagnostic methods will help the medical community evaluate the reliability and better understand the clinical relevance of the new exercise testing ischemic score. Diagnostic approach to CAD: The degree of ET positivity clearly influences the probability of coronary artery disease. The greater the magnitude of the ST segment depression, the greater the probability of CAD. Also, the worse the morphological type of ST segment depression, the more likely the presence of coronary insufficiency. The presence of ST segment elevation defines a high probability of severe CAD. Also, the earlier and the longer the ST segment depression, the higher the probability of severe CAD. Currently, patients who present ST segment depression of 1.0 mm with horizontal morphology, restricted to maximal exercise, are mistakenly categorized in different studies as being at the same probabilistic level of disease as individuals with ST segment elevation of great magnitude in the initial phase of exercise. Therapeutic management: The type, the timing, and the intensity of therapeutic management vary according to the severity of the alterations documented in ET.
Frequently, such classification of severity is made empirically by most cardiologists. An objective classification of myocardial ischemia severity is missing that would provide a better basis for clinical decisions. Evaluation of therapeutics: Usually, ETs are conducted to evaluate the result of different therapeutic approaches in individuals with CAD. When revascularization procedures are incomplete, graduating ischemia would be useful in monitoring clinical evolution. Risk stratification: To better establish the risk of future events, graduating the alterations in ET is essential, since there is a great range of presentations of myocardial ischemia. Certainly, an individual who presents a 1.0 mm ST segment depression with horizontal morphology restricted to maximal exercise should not be classified at the same risk level as an individual with ST segment elevation of great magnitude in the initial phase of exercise. Many studies with a prognostic approach that assessed ST segment depression as an independent risk factor presented an interpretation bias, because in clinical practice an individual who presents a positive ET invariably ends up being invasively investigated and, consequently, submitted to revascularization procedures. Thus, the presence of myocardial ischemia during a stress test determines a therapeutic approach that modifies the natural history of coronary artery disease. Serial and comparative analysis: A serial analysis of ETs performed over time is necessary for patients who have some contraindication to myocardial revascularization. Serial monitoring of myocardial ischemia could be the basis for adjustments of pharmacological therapy. A graphic demonstration of the degree of myocardial ischemia could be useful to express the patient's clinical evolution.
Besides this, as clinicians are used to comparing patients with their prior condition or even patients with each other, severity gradients may have a wide range of applicability. Graduating myocardial ischemia may also minimize differences between results of ET and perfusion imaging. Early electrocardiographic alterations of great magnitude may even be valued above perfusion images when there is homogeneous uptake of the radiotracer. Ischemic preconditioning (IPC): This is a physiological phenomenon of cardioprotection in which episodes of myocardial ischemia induce a greater myocardial tolerance to subsequent ischemic damage. This may be objectively shown by two sequential ETs, where an attenuation of ischemia signs is observed in the second test. The time to 1.0 mm of ST segment depression is considered a tolerance index. Ischemic threshold indexes correspond to the heart rate and the double product at the time at which 1.0 mm ST segment depression is reached. Scientific research: To prevent great biases in the interpretation of results, appropriate data homogenization is necessary. Graduating myocardial ischemia may reduce variability in the interpretation of results by improving data systematization and patient classification. Scales are more systematized forms of observation, and translate a biological phenomenon into more objective and quantified information. Countless situations in clinical practice, such as those enumerated above, require graduating the myocardial ischemia documented in ET. Scores to assess CAD may be classified didactically as pre-test, post-test, simplified, multivariate, diagnostic or prognostic. Diagnostic scores are structured essentially to estimate disease probability. When only clinical variables are considered, this is deemed a pre-test analysis; by including ET parameters, it becomes a post-test score. A diagnostic score may be prognostic when an estimation of severe CAD is made.
Prognostic scores are structured to assess risk, mainly of cardiovascular death. There are scores based on multivariate equations that involve complex and difficult calculations. Simplified scores derived from those equations, presented as a table, allow a quick estimation of coronary artery disease through the simple addition of points. Computerized systems perform the calculations of multivariate predictive equations, which, by data weighting and logistic regression, may make a diagnosis as accurate as more expensive and sophisticated tests. Although predictive equations may be intimidating for clinicians, there are programs that automate such calculations. Although there is a great abundance of alternatives and proposals for scores in ET, with many of them considering aspects of the electrocardiographic response in their composition, few are scores specifically aimed at electrocardiographic aspects; among those that consider exclusively aspects of the electrocardiographic response are those of Atenas and Koide. The proposed score is a system that depends on identifying electrocardiographic variables that may be clearly defined and graduated according to a scale of values, the points of which represent a graduation of the myocardial ischemia documented in ET. Within the great range of electrocardiographic alterations, this scale classifies the different patterns according to 3 fundamental aspects: magnitude, morphology and moment of ST segment deviations. Each aspect is hierarchically classified into 5 types and graduated from 0 to 4 points, the points being summed into a total score. Upsloping ST segment depression: The rapid and slow upsloping patterns are classified within this item. The point of reference for measuring ST segment depression of the upsloping type is the Y point, at 80 ms from the J point (J80). ST segment convex depression: The presence of ST segment convexity characterizes this morphological pattern.
It should be measured at the Y point, at 80 ms from the J point (J80). ST segment horizontal depression: it should be measured at the Y point, at 80 ms from the J point (J80). ST segment downsloping depression: it should be measured at the J point. ST segment elevation: it should be measured at the Y point, at 40 ms from the J point (J40). The electrocardiographic interpretation should be systematic and comprehensive, weighing the findings globally in order to establish a proper correlation with the morphological types defined by the scale of ischemia. There are 4 morphological patterns of depression and one morphological type of ST segment elevation.

To make up the score, the magnitude of deviations is classified into 5 categories; the measurement is conducted according to the morphological pattern of the ST segment deviations, and the lead with the greatest alteration is adopted for scoring. No ST segment deviations: the point of reference (J or Y) is on the PQ baseline or keeps the same magnitude as in the baseline situation. ST segment shift of small magnitude: ST segment depression or elevation inferior to 1.0 mm. ST segment shift from 1.0 to 1.5 mm: ST segment depression or elevation between 1.0 mm and 1.5 mm. ST segment shift from 1.6 to 2.0 mm: ST segment elevation or depression superior to 1.5 mm and inferior or equal to 2.0 mm. ST segment shift superior to 2.0 mm: ST segment depression or elevation of great magnitude.

To score the moment component, 5 patterns are considered. Transitory peak: the ST segment shift occurs exclusively during the exercise phase, appearing after 10 MET, which corresponds to a stress test altered after the third stage of Bruce's protocol; total resolution of the ST segment shift occurs before the first minute of recovery. Peak and/or recovery: ST segment deviations appear after 10 MET or after the third stage of Bruce's protocol, with total resolution of the ST segment shift after the first minute of recovery; ST segment deviations occurring exclusively in recovery are also considered in this pattern. Early with rapid reversion: the ST segment shift occurs between 5 and 10 MET, with total resolution before the third minute of recovery. Early with slow reversion: the ST segment shift occurs between 5 and 10 MET, with total resolution after the third minute of recovery. Very early: the ST segment shift occurs with up to 5 MET, corresponding to an alteration occurring in the first stage of Bruce's protocol.

To validate ST segment alterations that characterize myocardial ischemia, the following should be considered: morphological data, stability of the reference baseline, tracing quality, definition and number of leads involved, and the type and calibration of the equipment and recording system. ET are considered satisfactory when they present recordings in a representative amount, such that visualization enables a diagnostic conclusion, and unsatisfactory when their reading is hampered by the presence of interference, artifacts or wide variations of the baseline. To validate the interpretation, there should be at least three beats with a stable baseline. Electrocardiographic alterations determined by breathing, movements or baseline variations should all be excluded from the morphological analysis. Using only 3 simultaneous leads substantially hinders the sensitivity to detect myocardial ischemia, and using bipolar precordial leads increases sensitivity but decreases specificity due to greater amplification of the electrocardiographic signal. Inappropriate equipment may produce artifactual alterations, or magnify or mask alterations through problems of acquisition, processing or inappropriate graphic printing of the electrocardiograms. Leads involved in morphological alterations:
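The three-component grading described above (magnitude, morphology and moment, each graded from 0 to 4 points and summed into a single score) can be sketched as follows. The function names and the threshold logic for the magnitude grade are our own illustrative assumptions; only the category boundaries and the idea of summing three 0-4 components come from the text.

```python
# Illustrative sketch of the proposed ischemia score: three components
# (magnitude, morphology, moment), each graded 0-4, summed into one score.
# Function names are hypothetical; category boundaries follow the text.

def magnitude_grade(shift_mm: float) -> int:
    """Grade the largest ST segment deviation (in mm) into 0-4 points."""
    if shift_mm == 0.0:
        return 0          # no ST segment deviation
    if shift_mm < 1.0:
        return 1          # shift of small magnitude
    if shift_mm <= 1.5:   # 1.0-1.5 mm
        return 2
    if shift_mm <= 2.0:   # 1.6-2.0 mm
        return 3
    return 4              # superior to 2.0 mm

def ischemia_score(magnitude_pts: int, morphology_pts: int, moment_pts: int) -> int:
    """Sum of the three graded components; a score of 0 means no documented ischemia."""
    for pts in (magnitude_pts, morphology_pts, moment_pts):
        if not 0 <= pts <= 4:
            raise ValueError("each component must be graded 0-4")
    return magnitude_pts + morphology_pts + moment_pts

print(magnitude_grade(1.2))                            # 2
print(ischemia_score(magnitude_grade(1.2), 3, 1))      # 6
```

A zero total reproduces the property noted later in the text: when the score is zero, no myocardial ischemia was documented.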
Specific leads and the number of leads should be weighed in the global interpretation. When morphological alterations affect isolated leads without comprising a region, or affect a less relevant lead (e.g. aVR), such alterations should not be validated to define ischemia. Leads with simultaneous recording should be compared to assess the equivalence of the alterations. The measurement of any ST segment shift is made from the PQ baseline and not from the PR line. ST segment depressions with downsloping morphology are measured at the J point, and the horizontal, upsloping and convex patterns are measured at the J80 point. ST segment elevations should be measured at the J40 point. The slow upsloping pattern is considered abnormal; however, it was excluded as a positivity criterion, since it determines a high rate of false positives. The convex pattern is not widely acknowledged in the international literature, and it also determines a high rate of false positives; it normalizes rapidly in recovery, before the first minute. Classically, the horizontal and downsloping patterns are the ones that define the internationally accepted positivity criteria, and the downsloping pattern indicates more severe ischemia. It is worthwhile to highlight that the positivity criteria are the same for both genders, since the electrocardiographic manifestation of ischemia does not differ between men and women; what changes is the CAD probability. The probabilistic analysis of coronary disease involves not only gender, but also age, symptoms and numerous other risk factors. These numerous variables may be organized and classified through scores, indexes or diagnostic and/or prognostic scales. ECG morphological alterations at rest should be weighed when interpreting the electrocardiographic response. In the case of baseline ST depression, the ST depression line is adopted as the reference to measure any additional shift during exercise.
In cases where the baseline ECG presents major morphological alterations, with depressions greater than 1.0 mm, the morphological analysis to define myocardial ischemia is limited. Morphological interpretation of the ECG to define ischemia should also be considered limited in the following situations: use of digitalis, left ventricular enlargement, intraventricular conduction disorder, ventricular pre-excitation and long QT. The Q wave increases physiologically during exercise. When there is a doubtful ST segment depression, observing Q wave reduction should be used to validate the alteration. The presence of a pathological Q wave in the baseline ECG should be considered when interpreting a possible ST segment elevation in the stress test. Once ST segment elevation occurs, meeting the magnitude criterion, in the presence of an electrically inactive area, dyskinesia should be investigated. To make up the score for ischemia, ST segment elevations in areas with pathological Q waves should not be considered. In the case of an early repolarization pattern in the baseline ECG, the measurement of a possible ST segment depression is taken from the PQ baseline. When the baseline ECG presents ST segment elevation in an electrically inactive area, the reference line for measurement is considered to be the level of elevation, in order to interpret possible additional elevations. As a general rule, ST segment elevations are measured at 0.04 sec (40 ms) from the J point. In the presence of baseline RBBB, ST segment depressions are only validated in the V5, V6, D1, aVL, D2, D3 and aVF leads. LBBB hinders the morphological analysis to define myocardial ischemia.
Morphometric analysis of ST segment deviations (elevation or depression) does not allow characterizing myocardial ischemia in most of these cases. ST segment morphological alterations that occur after transitory episodes of branch blocks or tachyarrhythmias are considered a memory effect (electronic memory) and are not considered to define myocardial ischemia. Altered beats that follow early ventricular or supraventricular beats are not considered for morphological analysis. Alterations that may be correlated to myocardial ischemia in other contexts, e.g. T wave pseudo-normalization and inversion, are not considered in the composition of the ischemia score. Of note, late inversions of the T wave in the recovery phase are frequently mistaken for ST segment depressions with downsloping morphology. A QRS complex of great magnitude (very broad R wave) determines greater magnitudes of ST segment deviations: a small ST alteration may be magnified when the R wave is large. When there is low QRS voltage, with an R wave amplitude inferior to 11 mm, the rate of false negatives is greater. Hypertensive patients with baseline ECG alterations compatible with LVE have an impaired positive predictive value of the test. Even so, a normal ST segment in LVE carriers has a high negative predictive value for coronary artery disease. When LVE is present, ST segment alterations should not be considered for the ischemia score. Digitalis and other antiarrhythmic drugs substantially affect ventricular repolarization, limiting the morphological analysis to define myocardial ischemia and invalidating the use of the ischemia score. It is essential to weigh the result of the test when these specific medications are used, or even when there are still residues of their therapeutic influence. After episodes of tachyarrhythmias, whether supraventricular or ventricular, ventricular repolarization alterations may be observed that resemble myocardial ischemia.
Thus, morphological alterations that occur immediately after sustained tachyarrhythmia episodes are not considered for the ischemia score composition. Atrioventricular and sinoatrial blocks are not directly correlated to myocardial ischemia, even when transitory during exercise, and should not be considered for the ischemia score composition. The analysis of the electrocardiographic response should be made by the visual method instead of the computerized method, which is not considered valid for interpretation due to the high degree of contamination by artifacts. A prolonged cool down may obscure possible early ST segment alterations. Accelerated atrioventricular conduction favors Ta wave expression, which manifests as ST segment depression with a pattern similar to the slow upsloping one. Such depression is more pronounced in D2, D3 and aVF, and should be considered an electrocardiographic response deviating from normal. As the morphological interpretation of the electrocardiogram is subjective, and there is a large amount of data manipulated during the process of analysis, there has been an increasing need to systematize and organize the process of data interpretation. Over the last decades, cardiologists have focused on expensive and sophisticated diagnostic methods, in the belief that they can provide better diagnostic and/or prognostic accuracy. Reviewing the scientific data, it is still possible to verify that the conventional ET can, most of the time, be superior to more recent tests when scores are used. There are many bases or scores to classify CAD, based on the duration of signs or symptoms, on the anatomical location, or on the extension of the atherosclerotic plaque. A score, besides systematizing and organizing the interpretation of the data from the ET, may be a useful instrument for quantifying alterations.
The proposed scale for ischemia may be understood as an optimized score, with legitimized data entries classified by application in different specific populations. Scales are useful to characterize and quantify, i.e. to better translate the clinical phenomenon into objective and quantified information. The grading system for ischemia that we propose categorizes the response into patterns based on three predominant aspects that, when added, result in a scale or score. This system enables a description of the extension of myocardial ischemia and, even in a carrier of defined CAD, may serve a number of correlated objectives, namely: help the physician plan the treatment; provide a better prognostic definition; help assess the results of the treatment; facilitate the exchange of information between treatment centers; and contribute to the continuing research on myocardial ischemia. It is important to reach greater harmony in the recording of data resulting from ET, and to promote a more precise framing of the extension of ischemia, correlating it better with the most diverse patterns of coronary anatomy involvement and facilitating the exchange of information between different centers as well. Scientific research is the classical territory for the use of scales, guaranteeing that the information collected about specific alterations is organized into patterns and amenable to comparison in a reliable, consistent and reproducible fashion. The proposed grading system primarily works with the classification of myocardial ischemia documented by the electrocardiogram during ET. The score has three components: magnitude, morphology and moment of the alterations. All these aspects are easily evaluated, thus allowing a good integration of information between clinicians and specialists.
Although the different parameters of this scale are already evaluated routinely and were previously studied, this system will enable structuring more objective and optimized criteria, permitting a better discrimination and classification of the patterns of the different electrocardiographic expressions of myocardial ischemia, one of the most representative paradigms of this multifaceted disease. Myocardial ischemia is an expression of CAD and affects the electric activity of the heart, which represents the documentary proof of the ischemic disease. To document myocardial ischemia, a series of factors should be considered: presence of critical coronary obstruction, number of coronary arteries with critical lesions, presence of collateral circulation, location of the coronary obstruction, extension of ischemia, insufficient increase of the myocardial oxygen demand, anemia, concomitant valve diseases, superimposed coronary spasm, hydroelectrolytic disorders, number of electrocardiographic leads, position of the electrodes, limitations in the acquisition and processing of the cardiac electric signal, baseline electrocardiogram alterations, quality of the electrocardiographic recording, QRS complex width, therapeutic influence, etc. The result of the ET does not confirm the presence or absence of CAD, and it should be correlated with other pertinent data for a more coherent probabilistic analysis. Tests considered negative are not synonymous with normal tests, since they include true negatives and false negatives. Besides this, even the true negatives include situations of normal coronary arteries and of coronary arteries with non-critical lesions. Likewise, positive tests are not synonymous with abnormal tests, since they include true-positive and false-positive results.
And even the true positives include situations of critical coronary artery disease and of angiographically normal coronary arteries (microcirculatory disease). For a system to be widely accepted, it needs to be practical and easy to remember and interpret, both by the clinician and by the specialist. However, the pursuit of simplicity must not lead to analysis errors or to regarding alterations in absolute terms. The ischemia score we propose focuses on the essential aspects of the ET results, in a representative, consistent and easily understandable fashion. Besides this, even physiological manifestations are contemplated: when the score is zero, it means that there was no documentation of myocardial ischemia. The scale does not include angina pectoris manifesting during the test, because we consider that symptoms are described in a very subjective way for qualification and quantification, with wide inter-observer variations in interpretation. Other clinical factors were not considered, because clinical information is often unavailable, incomplete or uncertain. As highly severe hemodynamic alterations very frequently accompany the more severe ST segment deviations, indexes such as inotropic deficit and chronotropic incompetence were not considered for the score. We do not oppose a detailed analysis of clinical, hemodynamic and functional capacity variables, integrating the data as a whole.
We just think that the magnitude of electrocardiographically documented myocardial ischemia surmounts other clinical and hemodynamic parameters, besides displaying greater reproducibility as well. Critical aspects of the use of the score include: the scale cannot be applied in situations where there are limitations of the morphological analysis to define myocardial ischemia; it is necessary for the cardiologist to be trained initially for routine use of this ischemia score, taking into consideration the homogenization of possible conceptual differences; there may be small imprecisions in measuring the magnitude of ST segment deviations; and, as the morphological interpretation of the electrocardiogram is subjective, some ventricular repolarization patterns, even deviations from normal, may be confused in the scoring of morphological aspects. Less objective items tend to yield differences in scores, which may result in inter- and intra-observer variations. In fact, imperfections in the proposed system are to be expected, but this is a proposal that better clarifies and classifies the different aspects of myocardial ischemia. With routine use of the ischemia score, different observers may elicit the different aspects of the scale with a high degree of consistency, determining a low probability of inappropriate categorization and reducing inter-observer discrepancies. Training in the application of scales in clinical trials may favor a better systematization of the data. In practice, this ischemia score may prevent badly structured and disorganized clinical cases from resulting in ambiguities and misunderstandings, placing the patient in a better defined prognostic and/or diagnostic category."} +{"text": "Assessing RNA quality is essential for gene expression analysis, as the inclusion of degraded samples may influence the interpretation of expression levels in relation to biological and/or clinical parameters.
RNA quality can be analyzed by agarose gel electrophoresis, UV spectrophotometry, or microcapillary electrophoresis traces, and can furthermore be evaluated using different methods. No generally accepted recommendations exist as to which technique or evaluation method is the best choice. The aim of the present study was to use microcapillary electrophoresis traces from the Bioanalyzer to compare three methods for evaluating RNA quality in 24 fresh frozen invasive breast cancer tissues: 1) Manual method = subjective evaluation of the electropherogram, 2) Ratio method = the ratio between the 28S and 18S peaks, and 3) RNA integrity number (RIN) method = objective evaluation of the electropherogram. The results were also related to gene expression profiling analyses using 27K oligonucleotide microarrays, unsupervised hierarchical clustering analysis and ontological mapping. Comparing the methods pair-wise, Manual vs. Ratio showed concordance (good vs. degraded RNA) in 20/24, Manual vs. RIN in 23/24, and Ratio vs. RIN in 21/24 samples. All three methods were concordant in 20/24 samples. The comparison between RNA quality and gene expression analysis showed that pieces from the same tumor with good RNA quality clustered together in most cases, whereas those with poor quality often clustered apart. The number of samples clustering in an unexpected manner was lower for the Manual (n = 1) and RIN (n = 2) methods than for the Ratio method (n = 5).
Assigning the data into two groups, RIN \u2265 6 or RIN < 6, all but one of the top ten differentially expressed genes showed decreased expression in the latter group, i.e. when the RNA became degraded. Ontological mapping using GoMiner revealed deoxyribonuclease activity, collagen, regulation of cell adhesion, cytosolic ribosome, and NADH dehydrogenase activity to be the five categories most affected by RNA quality. The results indicate that the Manual and RIN methods are superior to the Ratio method for evaluating RNA quality in fresh frozen breast cancer tissues. The objective measurement obtained with the RIN method is an advantage. Furthermore, the inclusion of samples with degraded RNA may profoundly affect gene expression levels. In breast cancer, for example, microarrays have been suggested to be useful for predicting clinical outcome and for tailoring treatment strategies for individual patients. Microarrays were first described by Schena and co-workers in 1995. RNA quality can be assessed using, e.g., the Agilent 2100 Bioanalyzer. In the present study we have focused on 1) different ways of evaluating the quality of RNA, 2) how the quality of RNA influences microarray-based gene expression analyses, and 3) which types of gene categories are affected by decreased RNA quality. The results indicate that the Manual and RIN methods are superior to the Ratio method for evaluating RNA quality in fresh frozen breast cancer tissues. The objectively obtained measurement of the RIN method is, in addition, clearly an advantage. Furthermore, the inclusion of samples with degraded RNA can profoundly influence gene expression profiles, and hence the clustering of samples as well as the absolute expression levels of individual genes. We analyzed the RNA quality using three different methods: Manual, Ratio and RIN, respectively (see Methods).
Visual inspection of the Bioanalyzer electropherograms showed that, of the six samples included, the majority were degraded at room temperature, but after different lengths of time [see Additional file]. Similar results were obtained using the Ratio method; according to the Ratio method, however, Sample 3 was considered good at 10 minutes (see Fig.). The electropherograms from this sample (Sample 3) showed an unexpected appearance over time (see Fig.). In summary, pair-wise comparisons of the methods revealed that Manual vs. Ratio showed concordance in 20/24, Manual vs. RIN in 23/24, and Ratio vs. RIN in 21/24 samples. All three methods showed concordant results in 20 of the 24 samples. Our hypothesis was that if the RNA quality of the sample was good for all four time periods, the corresponding gene expression profiles should be similar and the samples should consequently cluster together. Conversely, upon RNA degradation, changes in gene expression profiles would cause the sample replicates to cluster apart. Using unsupervised hierarchical clustering to assess which samples clustered together, we noted that the samples clustered into two separate groups, one including most of the good samples (including those partly degraded) and one including most of the degraded samples, irrespective of the evaluation method (see Fig.). The number of samples clustering in an unexpected manner (i.e. samples considered to be of good RNA quality clustering with degraded samples or vice versa) was five with the Ratio method. Gene Ontology (GO) mapping was performed using GoMiner. In 20/24 (83%) samples, all three methods came to the same result (good or degraded RNA). The Manual and RIN methods were concordant in 23/24 (96%) samples, whereas the Ratio method showed discordant results with the other two methods in four and three samples, respectively. In some of the discordant samples, the discrepancy could be explained by values near the cut-off. The results indicate that the Manual and RIN methods are more similar to one another than the Ratio method is to either.
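The pair-wise concordance comparison described above can be reproduced with a small sketch. The per-sample labels below are invented toy data (chosen so that two methods agree in 20 of 24 samples, matching the Manual vs. Ratio figure); only the counting logic reflects the analysis in the text.

```python
# Toy sketch of the pair-wise concordance between RNA quality methods.
# "good"/"degraded" labels for 24 samples are invented; the real study's
# per-sample calls are not reproduced here.

def concordance(calls_a, calls_b):
    """Count the samples on which two evaluation methods agree."""
    return sum(a == b for a, b in zip(calls_a, calls_b))

manual = ["good"] * 20 + ["degraded"] * 4
ratio = list(manual)
for i in (3, 7, 11, 15):   # flip four calls to mimic a 20/24 concordance
    ratio[i] = "degraded"

print(concordance(manual, ratio), "of", len(manual))   # 20 of 24
print(concordance(manual, manual), "of", len(manual))  # 24 of 24
```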
This finding is in line with the evaluation principles: while both the Manual and RIN methods take the whole electropherogram into consideration, manually or objectively, the Ratio method relies only on the ratio between the 28S and 18S peaks. Furthermore, the ratio calculation is based on area measurements and is heavily dependent on the definitions of the start and end of the peaks. In addition, small peaks make this measurement even more uncertain, which is often the case with partly degraded samples. Therefore, the ribosomal ratio may not be sufficient to evaluate RNA degradation efficiently in all instances. Copois and co-workers, using colorectal cancer, liver metastases, and normal colon, compared the ratio method with the computer-based RIN and Degradometer methods, as well as with an in-house \"RNA Quality Scale\" method, and came to the conclusion that the 28S/18S ratio resulted in misleading categorization. Imbeaud and co-workers obtained similar results in their study including both cell lines and different normal tissues, demonstrating ambiguity with the Ratio method. With the Ratio method, five samples clustered in an unexpected way, i.e. samples considered to be of good RNA quality clustered with degraded samples or vice versa. These results indicate that the Manual and RIN methods are more concordant and superior to the Ratio method for evaluating RNA quality in fresh frozen breast cancer samples. An advantage of the RIN method in comparison with the Manual method is that it yields an objective measurement, whereas the subjective interpretation of the Manual method, especially for the partly degraded group, may show both intra- and inter-individual variation. In order to validate the cut-off of 6 for the RIN method, we also tested 5 or 7 as cut-offs. The number of samples clustering in an unexpected way was thereby increased to three and seven, respectively.
The use of 6 as a cut-off was also strengthened when the RIN values were compared to the Pearson correlation coefficients of the association between the gene expression of the samples at the different time points (2\u20133 minutes to 50 minutes) and the gene expression after 50 seconds. In concordance with the above-mentioned studies, the results of our investigation demonstrate that the gene expression profiles change considerably upon RNA degradation. We hypothesized that if the RNA quality in different samples from the same breast tumor was good, the corresponding gene expression profiles should be similar, and the samples should consequently cluster together. In contrast, when RNA is degraded, changes in gene expression profiles would cause the samples to cluster apart. Our findings indicate that the results of the RNA quality evaluation using the Manual and RIN methods were more concordant with the results of the clustering analyses than those obtained using the Ratio method: while only one (Manual) and two (RIN) sample replicates clustered apart, five samples clustered in an unexpected way when the Ratio method was used. One sample showed an unexpected appearance over time, as the RNA quality appeared superior after extended exposure to room temperature compared to shorter time periods, when it was deemed degraded (see Fig.). Of the top ten most differentially expressed genes, all but one showed decreased gene expression levels in the RIN < 6 group compared to the RIN \u2265 6 group (see Fig.), suggesting a general decrease in measured expression upon degradation. Differences between samples may be explained by, e.g., differences in tissue composition: some samples may be rich in fatty tissue, whereas others may be rich in epithelial cancer cells. Furthermore, the amount of connective tissue may also influence the amount and quality of the extracted RNA. Another explanation for the differences between samples may be that the time period from surgical excision until the sample is placed at -80\u00b0C varies, and that the samples are collected from several pathological departments with different routines.
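The cut-off validation idea, correlating each time point's expression profile with the 50-second baseline profile, can be sketched as below. The expression vectors are made-up toy values; in the study, the resulting Pearson coefficients were compared against the samples' RIN values.

```python
# Sketch of the validation step: Pearson correlation between a sample's
# expression profile at a later time point and its 50-second baseline.
# The four-gene expression vectors are invented toy data.
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

baseline = [1.0, 2.0, 3.0, 4.0]   # expression after 50 seconds (toy)
later = [1.1, 1.9, 3.2, 3.8]      # expression after 30 minutes (toy)

r = pearson(baseline, later)
print(round(r, 3))                # 0.991
```

A high coefficient paired with a RIN above the cut-off (or a low coefficient with a RIN below it) is the expected pattern; deviations from this pattern were the cases of interest.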
The tissue composition and suboptimal sample collection procedures may also explain the relatively low ratio values obtained in breast cancer in comparison with other tissue materials, as noted in a recent publication from our group. Our results demonstrate that RNA was degraded at room temperature, but the RNA in the six samples showed variable sensitivity; this variation may be explained by a different sensitivity to room temperature. From the electropherograms it was, furthermore, demonstrated that RNA degradation is a gradual process. Not all RNA follows the same pattern during degradation; however, the larger ribosome is typically degraded first, resulting in a decrease and broadening of this peak. Consequently, as degradation proceeds, there is a decrease in the 28S to 18S ribosomal ratio and an increase in the baseline signal between the two ribosomal peaks. The results indicate that the Manual and RIN methods are superior to the Ratio method for evaluating RNA quality in fresh frozen breast cancer tissues. The RIN method gives an objective measure of RNA quality, while the Manual method may be subject to inter-, as well as intra-observer variation. In addition, the inclusion of samples with degraded RNA may affect the outcome of the study, as the levels of gene expression are highly dependent upon RNA integrity. Based on our experience, we recommend RIN values \u2265 6 to be used for fresh frozen breast cancer tissue. Frozen samples from six patients were retrieved from the tissue bank (-80\u00b0C) owned by the South Swedish Breast Cancer Group. In order to obtain RNA of different quality, four equally sized pieces (by weight) from each invasive breast cancer sample were placed at room temperature for four different lengths of time: 50 seconds, 2\u20133 minutes, 10 minutes, and 30 minutes, after which the samples were placed in liquid nitrogen. The ethical committee at Lund University approved this project.
The samples were pulverized with a Micro-dismembrator II, and RNA was extracted using Trizol reagent and purified with Qiagen RNeasy Midi columns. The RNA concentration was determined using a Nanodrop Spectrophotometer. The RNA quality was assessed using an Agilent 2100 Bioanalyzer together with the reagents in the RNA 6000 Nano LabChip kit. All samples were within the kit capacity (5\u2013500 ng/\u03bcl). The Agilent 2100 Bioanalyzer generates an electropherogram and a gel-like image, and displays results such as the sample RNA concentration and the so-called ribosomal ratio, i.e. the ratio between the ribosomal subunits, 28S/18S. The electropherogram can be evaluated in three ways. With visual inspection (Manual method), the quality of the RNA is considered good if the electropherogram shows two distinct peaks, one for 28S and one for 18S, and a flat baseline; the sample is then approved, i.e. allowed to proceed to the hybridization step. However, methods that rely on visual inspection are subjective and have a tendency to vary over time. A more objective way to evaluate the quality of RNA may be to use a certain threshold for the 28S/18S ratio as a cut-off (Ratio method). From previous studies, we have established a threshold for the Bioanalyzer ratio at \u2265 0.65 (data not shown). A more recent approach is to use the RNA Integrity Number (RIN) method, which is a standardization of RNA quality control. For each new setting (e.g. type of organism, type of tissue, type of microarray platform, RNA extraction procedure, etc.) the validation procedure needs to be repeated. There are, thus, no established cut-off values, and each laboratory needs to establish its own. The median values for the partly degraded and degraded groups were 6 (range: 3\u20137) and 4 (range: 2\u20136), respectively. Based on these results, we considered values greater than or equal to 6 to represent good RNA.
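The two numeric decision rules described above (Bioanalyzer 28S/18S ratio at or above 0.65 for the Ratio method, RIN at or above 6 for the RIN method) can be written as a small sketch. The cut-off values come from the text; the function names are illustrative assumptions.

```python
# Sketch of the two numeric RNA quality rules: Ratio method (28S/18S >= 0.65)
# and RIN method (RIN >= 6). Cut-offs follow the text; names are hypothetical.

RATIO_CUTOFF = 0.65
RIN_CUTOFF = 6.0

def ratio_call(ratio_28s_18s: float) -> str:
    """Ratio method: classify a sample's RNA as good or degraded."""
    return "good" if ratio_28s_18s >= RATIO_CUTOFF else "degraded"

def rin_call(rin: float) -> str:
    """RIN method: classify a sample's RNA as good or degraded."""
    return "good" if rin >= RIN_CUTOFF else "degraded"

print(ratio_call(0.70), rin_call(7))   # good good
print(ratio_call(0.50), rin_call(4))   # degraded degraded
```

As the text notes, such cut-offs are laboratory- and material-specific and would need to be re-validated for each new setting.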
This cut-off was therefore also used in the present study. Five micrograms of tumor RNA was labeled with Cy3\u00ae dCTP, and 5 \u03bcg of reference RNA, consisting of a pool of ten different tumor cell lines, was labeled with Cy5\u00ae dCTP, according to the manufacturer's instructions, using the reagents in the ChipShot\u2122 labeling system kit. Arrays were produced by the Swegene DNA Microarray Resource Centre, Department of Oncology at Lund University, Sweden, using a set of 26,819 70 base-pair human oligonucleotide probes, which were obtained from Operon Biotechnologies, Inc. The probes represent 16,641 gene symbols. Prior to hybridization, slides were UV-cross linked at 800 mJ/cm2 and pre-treated using the Pronto!\u2122 Plus System 6, according to the manufacturer's instructions. Arrays were scanned at two wavelengths using an Agilent G2505A DNA microarray scanner, with 10 \u03bcm resolution. GenePix Pro 4.0 software was used for image analysis. Gene names were linked to the spots, and spots with poor quality were manually excluded. Raw data are available at Gene Expression Omnibus. Background correction of the Cy3 and Cy5 intensities was calculated using the median feature and the median local background intensities provided in the data matrix. Within arrays, intensity ratios for individual features were calculated as the background-corrected intensity of the tumor sample divided by the background-corrected intensity of the reference sample. The data matrix was uploaded to BASE. Spots with intensities lower than zero, and spots that were flagged bad or not found, were excluded. Reporters that were not present in 100% of the arrays were filtered out, and the data were normalized using Lowess. Ontological mapping was performed using the publicly available software GoMiner.
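The per-feature ratio calculation described above (background-corrected tumor intensity divided by background-corrected reference intensity, with non-positive spots excluded) might look like the following sketch. The intensity values are toy numbers; the function names are our own.

```python
# Sketch of the per-spot expression ratio: (median feature - median local
# background) for the tumor channel divided by the same quantity for the
# reference channel. Spots with non-positive corrected intensity are dropped,
# mirroring the exclusion of spots with intensities below zero.

def bg_corrected(median_feature: float, median_background: float) -> float:
    """Background-corrected intensity for one channel of one spot."""
    return median_feature - median_background

def expression_ratio(tumor_fg: float, tumor_bg: float,
                     ref_fg: float, ref_bg: float):
    """Tumor/reference intensity ratio, or None if the spot is excluded."""
    t = bg_corrected(tumor_fg, tumor_bg)
    r = bg_corrected(ref_fg, ref_bg)
    if t <= 0 or r <= 0:
        return None   # excluded spot
    return t / r

print(expression_ratio(900.0, 100.0, 500.0, 100.0))  # (800 / 400) = 2.0
print(expression_ratio(50.0, 100.0, 500.0, 100.0))   # None (excluded)
```

The subsequent Lowess normalization step is intensity-dependent smoothing of these ratios and is not reproduced here.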
In order to validate the RIN cut-off value, the RIN values were compared to the Pearson correlation coefficients (Fig. ). MF conceived of the study. CS contributed to the development of methodology and executed the experiments. CS and JE analyzed and interpreted the data. CS, MF and IH took an active part in writing the manuscript. All authors read and approved the final manuscript. Bioanalyzer electropherograms: Bioanalyzer electropherograms for the six samples at different time points. GO categories: gene ontology analysis of the 7,672 differentially expressed genes using GoMiner, with a p-value \u2264 0.05 and with \u2265 3 changed genes in each category."}
{"text": "Microarray gene expression (MAGE) signatures allow insights into the transcriptional processes of leukemias and may evolve into a molecular diagnostic test. Introduction of MAGE into the clinical practice of leukemia diagnosis will require comprehensive assessment of variation due to the methodologies. Here we systematically assessed the impact of three different total RNA isolation procedures on variation in expression data: method A, lysis of mononuclear cells, followed by lysate homogenization and RNA extraction; method B, organic solvent based RNA isolation; and method C, organic solvent based RNA isolation followed by purification. We analyzed 27 pediatric acute leukemias representing nine distinct subtypes and show that method A yields better RNA quality, was associated with more differentially expressed genes between leukemia subtypes, demonstrated the lowest degree of variation between experiments, was more reproducible, and was characterized by a higher precision in technical replicates. 
Unsupervised and supervised analyses grouped leukemias according to lineage and clinical features in all three methods, thus underlining the robustness of MAGE to identify leukemia specific signatures. The signatures in the different subtypes of leukemias, regardless of the extraction method used, account for the biggest source of variation in the data. Lysis of mononuclear cells, followed by lysate homogenization and RNA extraction, represents the optimum method for robust gene expression data and is thus recommended for obtaining robust classification results in microarray studies in acute leukemias. Microarrays have been demonstrated to be a powerful technology capable of successfully identifying novel taxonomies for various types of cancers. Here we present a comparative study of microarray data using three different RNA isolation and purification techniques. We have performed standardized experiments with total RNA extracted from pediatric acute leukemia patients to investigate whether different extraction protocols (see methods) result in comparable gene expression data from the same sample source (Figure ). Our inclusion criteria required that the signal ratios of the Bacillus subtilis control transcripts from the Poly-A control kit are greater than or equal to 1, and that the intensity ratio of the 3' probe set to the 5' probe set for the housekeeping gene GAPD is less than 3.0. Four samples showed a higher 3'/5' GAPD ratio but had otherwise acceptable quality parameters. In this study we first monitored data quality parameters. All gene expression profiles passed the quality filter and met our criteria for inclusion into further data analyses [see Additional File ]. Preparations of total RNA by TRIzol (method B) yield a slightly higher amount of cRNA and generate a lower image background as measured by the Q value, but have a higher 3'/5' GAPD ratio. 
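The inclusion criteria described above (Poly-A spike-in ratios of at least 1 and a 3'/5' GAPD ratio below 3.0) can be expressed as a small filter. This is a hedged sketch with invented names, not an Affymetrix API:

```python
def passes_quality_filter(polya_ratios, gapd_3p_5p, max_gapd_ratio=3.0):
    """Sketch of the inclusion criteria described in the text:
    every Bacillus subtilis Poly-A spike-in ratio must be >= 1 and
    the 3'/5' GAPD probe-set ratio must be below 3.0.
    Function and parameter names are illustrative."""
    spikes_ok = all(r >= 1.0 for r in polya_ratios)
    gapd_ok = gapd_3p_5p < max_gapd_ratio
    return spikes_ok and gapd_ok

print(passes_quality_filter([1.2, 1.5, 1.1], 1.8))  # True
print(passes_quality_filter([1.2, 0.8, 1.1], 1.8))  # False (low spike-in)
print(passes_quality_filter([1.2, 1.5, 1.1], 3.4))  # False (degraded 3'/5')
```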
When the total RNA was prepared by TRIzol followed by RNeasy purification (method C), the cRNA yield was high and the background low, with the 3'/5' GAPD ratio being slightly higher than for preparations of total RNA by QIAshredder homogenization followed by RNeasy purification. All three preparation methods generated an acceptable range of present calls on the whole genome microarray. Total RNA quality can also be indirectly assessed by a so-called RNA degradation plot analysis as implemented in the "Simpleaffy" Bioconductor analysis package, as illustrated in Figure . To assess the comparability of global gene expression data between samples isolated with different preparation methods, it is useful to examine the overall signal distribution of all probe sets as a density curve for each microarray experiment. Outlier experiments would be detected by the different behavior of their density curves (Figure ). We next investigated the consistency of gene expression measurements of leukemia samples when using different total RNA extraction methods by performing an unsupervised hierarchical clustering analysis. Expression data were normalized using the PQN algorithm, and 2821 genes were used in the clustering analysis. A supervised analysis was performed to assess the potential impact of the use of different total RNA extraction methods on a leukemia classification approach. An all-pairwise t-test analysis identified differentially expressed genes that distinguish between the 9 classes of pediatric leukemias represented in our dataset. A gene set of 1089 differentially expressed probe sets was then examined by three-dimensional PCA (Figure ). We next investigated the percentage of overlapping genes found to be differentially expressed between the three methods when analyzing the various leukemia subclasses in a supervised way. 
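The all-pairwise t-test analysis mentioned above can be illustrated with a minimal Welch t-statistic between two subclasses for a single gene. The study used a dedicated analysis pipeline; this sketch only shows the shape of the computation, with made-up expression values.

```python
from itertools import combinations
from statistics import mean, stdev
import math

def pairwise_t(groups):
    """Minimal all-pairwise Welch t-statistics for one gene across
    leukemia subclasses, sketching the supervised analysis in the
    text. `groups` maps class name -> list of expression values."""
    stats = {}
    for (na, a), (nb, b) in combinations(groups.items(), 2):
        va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
        stats[(na, nb)] = (mean(a) - mean(b)) / math.sqrt(va + vb)
    return stats

# Hypothetical log-scale expression values for one gene.
expr = {"T-ALL": [8.1, 8.3, 7.9], "AML": [5.0, 5.2, 4.9]}
t = pairwise_t(expr)
print(round(t[("T-ALL", "AML")], 1))  # 21.1 (clearly separates the classes)
```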
The percentage of overlapping genes is another suitable parameter to address the impact of the use of different total RNA extraction methods on a leukemia classification approach (Figure ). Additionally, to further illustrate the assay performance, a statistical power analysis for the RNA preparation methods A, B, and C was performed based on the Bioconductor package "ssize". The power analysis is used, for statistical comparison of identical leukemia samples, to assess the precision of technical replicates obtained from different RNA preparation methods. The data sets generated from preparations of total RNA following methods A and B have greater average statistical power than the microarray data set based on method C [see Additional File ]. In summary, these analyses indicate that preparation of total RNA by QIAshredder homogenization followed by RNeasy purification is a robust sample preparation method for microarray experiments that outperforms the other procedures for isolation of total RNA. As three patients had been analyzed with three technical replicates (Figure ), we assessed reproducibility using correlation coefficients (R2), box plots, scatter plots, and coefficient of variation (CV) assessments. These analyses included all 54675 probe sets represented on the HG-U133 Plus 2.0 microarray. Values of R2 range from 0.985 to 0.989 for preparations of total RNA by QIAshredder homogenization followed by RNeasy purification (method A), 0.976 to 0.987 for TRIzol isolation (method B), and 0.967 to 0.988 for TRIzol followed by RNeasy purification (method C). 
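One plausible way to compute the percentage of overlapping differentially expressed genes between two methods is relative to the smaller gene list; the exact formula used in the study is not stated, so this is an assumption, and the gene lists below are invented.

```python
def overlap_percentage(genes_a, genes_b):
    """Percentage of differentially expressed genes shared between
    two RNA preparation methods, relative to the smaller list.
    One plausible reading of the overlap parameter in the text;
    the study's exact formula is not given here."""
    a, b = set(genes_a), set(genes_b)
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / min(len(a), len(b))

method_a = ["TCF3", "MLL", "RUNX1", "ETV6"]
method_b = ["TCF3", "MLL", "RUNX1", "KMT2A", "BCR"]
print(overlap_percentage(method_a, method_b))  # 75.0
```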
Between the three different sample preparation methods, the mean R2 is 0.952 with a standard deviation of 0.005 for method A versus method B, 0.976 with a standard deviation of 0.005 for method A versus method C, and 0.965 with a standard deviation of 0.011 for method B versus method C (Figure ). Analysis of the coefficient of variation is a useful way to assess the reproducibility and precision of the gene expression profiles generated from three different total RNA sources. The box plots demonstrate the variability in gene expression measurements within the three technical replicates using different sample preparation methods [see Additional File ]. Recent investigations successfully applied gene expression microarrays to classify known tumor types and also various hematological malignancies. The isolation of total RNA using QIAshredder homogenization followed by RNeasy purification (method A) resulted in a better quality of starting material (P = 5.308e-12), as demonstrated by the A260/280 ratio of cRNA, by very reproducible low 3'/5' GAPD ratios, and by consistently lower scaling factors. This was then further examined by a so-called RNA degradation plot analysis as implemented in the Simpleaffy Bioconductor analysis package. After a first analysis of the quality of our microarray data, we could assert that, since in all cases the quality parameters met our criteria, each of the three preparation methods is able to generate acceptable gene expression profiles of pediatric leukemias. We found that samples representing different leukemia subclasses and extracted using different RNA preparation methods are characterized by a high comparability of gene expression data, thus demonstrating that sample preparation procedures do not impair the overall probe set signal intensity distribution. 
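The replicate R2 and coefficient-of-variation computations referred to above can be sketched directly; the replicate values below are made up for illustration.

```python
from statistics import mean

def r_squared(x, y):
    """Squared Pearson correlation between two technical replicate
    arrays, as used in the text to assess reproducibility."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def cv(values):
    """Coefficient of variation (sample stdev / mean) across replicates."""
    m = mean(values)
    var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
    return var ** 0.5 / m

rep1 = [5.1, 8.0, 6.2, 9.9]   # hypothetical probe-set signals, replicate 1
rep2 = [5.0, 8.1, 6.3, 9.8]   # hypothetical probe-set signals, replicate 2
print(round(r_squared(rep1, rep2), 3))  # 0.997
print(round(cv([5.1, 5.0, 5.2]), 3))    # 0.02
```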
Importantly, even though method A yields lower amounts of cRNA compared to the TRIzol (method B) and TRIzol followed by RNeasy (method C) protocols, when using the differentially expressed genes between the nine distinct leukemia categories studied here, all samples are clearly separated by leukemia lineage, without being influenced by the total RNA isolation method. Furthermore, AML with normal karyotype is separated from the two patient samples with AML with t(11q23)/MLL, demonstrating an intra-lineage distinction within the AML group. The same separation can be observed in the B lineage ALL group, where samples with the chromosomal aberrations t, t, t, or t are split into distinct groups. As such, this is also an independent confirmation of the clustering organizations presented in recent gene expression profiling studies of acute lymphoblastic leukemias. The first conclusion we draw from this study is that the underlying biological characteristics of the pediatric acute leukemia classes are quite significant and largely exceed the variation between different total RNA sample preparation protocols. Having shown that at a chosen false discovery rate of 0.01% method A produces a higher number of differentially expressed genes than method B and method C, we propose that lysis of the mononuclear cells, followed by lysate homogenization (QIAshredder) and total RNA purification (Qiagen), is the more robust total RNA isolation procedure for gene expression experiments using microarray technology. The importance of these new data is further strengthened by the analysis of the technical replicates. In fact, the gene expression data obtained with method A show the lowest degree of variation and are more reproducible compared to the alternative methods we tested for the isolation of total RNA. 
Finally, all this evidence, combined with the standardized microarray analysis protocol that we followed for this study, leads us to conclude that initial homogenization of the leukemia cell lysate followed by total RNA purification using spin columns is currently the optimal protocol available with respect to the robustness of gene expression data, and that this method is practical for routine laboratory use. Here we limited our microarray study to pediatric leukemia, but these statements could certainly also be applied to similar cohorts of adult leukemias. Between December 2005 and March 2006, samples from twenty-seven acute pediatric leukemia patients were analyzed at the time of diagnosis. All patients received a laboratory diagnosis based on white blood cell count, cytomorphology, cytochemistry, multiparameter immunophenotyping, cytogenetics, fluorescence in situ hybridization (FISH), and molecular genetics (PCR). Chromosome aberrations t(1;19)(E2A-PBX1), t(MLL-AF4), t(BCR-ABL), t(TEL-AML1), t(AML1-ETO), t(PML-RARA), inv(16)(CBFB-MYH11), and t were screened following the BIOMED-1 concerted action protocol. The cohort included AML patients with MLL rearrangement and AML patients with normal karyotype or other abnormalities (n = 2). The ALL group included Pro-B-ALL t (n = 1), Pro-B-ALL/c-ALL with t (n = 2), T-ALL (n = 5), c-ALL with t (n = 3), Pre-B-ALL with t (n = 1), B lineage ALL with hyperdiploid karyotype (n = 3), and B lineage ALL negative for the screened recurrent translocations and with a DNA index value equal to 1.0 (n = 8). The percentage of blast cells ranged between 70% and 98%. Subsequently, total RNA was extracted from aliquots of 5 \u00d7 106 cells and 10 \u00d7 106 cells following two distinct total RNA purification methods, method A and method B, respectively. Total RNA obtained from method B was either used for the subsequent microarray analysis without further purification (method B), or was additionally purified following method C. 
Microarray analysis was performed on each sample and each preparation method (Affymetrix HG-U133 Plus 2.0). Thus, for 24 patient samples a total of 72 microarrays were analyzed, resulting in an additional 27 gene expression profiles on Affymetrix HG-U133 Plus 2.0 microarrays. Mononuclear cells were processed immediately after or within 24 hours after the biopsy was obtained. Appearance and fluidity of the samples were monitored before starting with RNA isolation. Total RNA was isolated using three different methods. Method A: lysis of the mononuclear cells, followed by lysate homogenization using a biopolymer shredding system in a microcentrifuge spin-column format, followed by total RNA purification using selective binding columns. The cell lysate homogenization phase reduces viscosity caused by high-molecular-weight cellular components and cell debris. Method B: TRIzol RNA isolation. Method C: TRIzol RNA isolation (Invitrogen) followed by a purification step. The RNA purification step previously mentioned combines the selective binding properties of a silica-based membrane with the speed of microspin technology. This system allows only RNA longer than 200 bases to bind to the silica membrane, providing enrichment for mRNA, since molecules shorter than 200 nucleotides are selectively excluded. In all three methods we followed the protocols provided by the manufacturers. After extraction, total RNA was stored at -80\u00b0C until used for microarray analyses. RNA quality was assessed on the Agilent Bioanalyzer 2100 using the Agilent RNA 6000 Nano Assay kit. RNA concentration was determined using the NanoDrop ND-1000 spectrophotometer. The overall total RNA quality was assessed by the A260/A280 ratio and the electropherogram. cDNA was synthesized using an oligo(dT)24 \u2013 T7 primer and the Poly-A control transcripts, and the generated cDNA was purified using the GeneChip Sample Cleanup Module (Affymetrix). 
Then, labeled cRNA was generated using the Microarray RNA target synthesis kit (Roche Applied Science) and an in vitro transcription labeling nucleotide mixture (Affymetrix). The generated cRNA was purified using the GeneChip Sample Cleanup Module (Affymetrix) and quantified using the NanoDrop ND-1000 spectrophotometer. In each preparation, 11.0 \u03bcg of cRNA was fragmented with 5\u00d7 Fragmentation Buffer (Affymetrix) in a final reaction volume of 25 \u03bcl. The incubation steps during cDNA synthesis, in vitro transcription reaction, and target fragmentation were performed using the Hybex Microarray Incubation System and Eppendorf ThermoStat plus instruments. Hybridization, washing, staining and scanning protocols were performed on Affymetrix GeneChip instruments as recommended by the manufacturer. From each RNA preparation, 2.0 \u03bcg of total RNA were converted into double-stranded cDNA by reverse transcription using a cDNA Synthesis System kit. Quality measures in addition to the A260/A280 ratio included: (i) background noise, (ii) percentage of present called probe sets, (iii) scaling factor, (iv) information about exogenous Bacillus subtilis control transcripts from the Affymetrix Poly-A control kit, and (v) the ratio of intensities of 3' probes to 5' probes for a housekeeping gene (GAPD). Microarray image files (.cel data) were generated using default Affymetrix microarray analysis parameters (GCOS 1.2 software). Subsequently, intensity signals were calculated based on the non-central trimmed mean of Perfect Match intensities with Quantile Normalization. The data pre-processing included the summarization to generate probe set level signals for each microarray experiment and was performed using the PS or PQN algorithms as described elsewhere. 
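The quantile normalization step named above can be sketched for a small genes x arrays matrix. This is a generic quantile normalization (ties ignored), not the study's PS/PQN implementation.

```python
import numpy as np

def quantile_normalize(matrix):
    """Quantile normalization of a genes x arrays intensity matrix:
    each array's sorted values are replaced by the row-wise mean of
    the sorted columns, so all arrays share one intensity
    distribution. Generic sketch, not the study's PQN code."""
    m = np.asarray(matrix, dtype=float)
    order = np.argsort(m, axis=0)                 # per-array ranks
    mean_sorted = np.sort(m, axis=0).mean(axis=1) # reference distribution
    out = np.empty_like(m)
    for j in range(m.shape[1]):
        out[order[:, j], j] = mean_sorted
    return out

arrays = [[2.0, 4.0], [5.0, 14.0], [4.0, 8.0], [3.0, 6.0]]
print(quantile_normalize(arrays))
```

After normalization both arrays have the identical value set {3.0, 4.5, 6.0, 9.5}, assigned by rank.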
This study is part of the MILE Study (Microarray Innovations In LEukemia) program, an ongoing collaborative effort headed by the European Leukemia Network (ELN) and sponsored by Roche Molecular Systems, Inc., addressing gene expression signatures in acute and chronic leukemias. This study further supports the AmpliChip Leukemia Test program, a gene expression microarray test for the subclassification of leukemia. Roche Molecular Systems, Inc. has business relationships with Qiagen and is currently validating Qiagen products for the AmpliChip Leukemia Test. MCDO performed the microarray experiments and wrote the paper, LT contributed to performing the experiments, AZ, RL, and WML analyzed the microarray data, GB recorded clinical data, GK supervised the study and the writing of the manuscript, and AK provided the original concept of the study and contributed to writing the paper. Supplementary Data: supplementary figures with additional comments explaining details of analysis, results, and interpretation. An Excel file contains further details about each total RNA isolation method, including cRNA quality and quantity values as well as microarray quality and quantity values for each experiment. A second Excel file contains details about each total RNA isolation method and leukemia classification details for each CEL file. All microarray raw data (*.cel files) are available online through the Gene Expression Omnibus database with the series accession number GSE7757."}
{"text": "Gene expression microarray experiments are expensive to conduct and guidelines for acceptable quality control at intermediate steps before and after the samples are hybridised to chips are vague. 
We conducted an experiment hybridising RNA from human brain to 117 U133A Affymetrix GeneChips and used these data to explore the relationship between 4 pre-chip variables and 22 post-chip outcomes and quality control measures. We found that the pre-chip variables were significantly correlated with each other but that this correlation was strongest between measures of RNA quality and cRNA yield. Post-mortem interval was negatively correlated with these variables. Four principal components, reflecting array outliers, array adjustment, hybridisation noise and RNA integrity, explain about 75% of the total post-chip measure variability. Two significant canonical correlations existed between the pre-chip and post-chip variables, derived from MAS 5.0, dChip and the Bioconductor packages affy and affyPLM. The strongest of these correlated RNA integrity and yield with post-chip quality control (QC) measures indexing 3'/5' RNA ratios, bias or scaling of the chip, and scaling of the variability of the signal across the chip. Post-mortem interval was relatively unimportant. We also found that the RNA integrity number (RIN) could be moderately well predicted by the post-chip measures B_ACTIN35, GAPDH35 and SF. We have found that the post-chip variables having the strongest association with quantities measurable before hybridisation are those reflecting RNA integrity. Other aspects of quality, such as noise measures (reflecting the execution of the assay) or measures reflecting data quality (outlier status and array adjustment variables), are not well predicted by the variables we were able to determine ahead of time. There could be other variables measurable pre-hybridisation which may be better associated with expression data quality measures. Uncovering such connections could create savings on costly microarray experiments by eliminating poor samples before hybridisation. Conducting microarray experiments using Affymetrix arrays is expensive. 
The quality of the starting material, for instance human post-mortem tissues, is often predetermined and samples may be scarce, leading to variable quality of the extracted RNA. We set out to explore the relationship between quality control (QC) variables used to assess samples prior to hybridisation (pre-chip) and those used to assess the quality of the hybridisation and resulting microarray data (post-chip). We sought to better define which variables were important in determining the quality of the final data and to see in turn whether any post-chip measures could predict pre-chip variables. Examination of quality in GeneChip experiments has been hampered by the relatively new technology, rapidly changing platforms (chip types) and the inability of most centres, because of expense, to run large series of samples to examine the characteristics and limitations of the technology. In addition, the output of the QC measures reflects both technical variation in the performance of the experiment and the biological variation of the samples available. Affymetrix give a series of guidelines about threshold values for quality control measures produced in the RPT file by their algorithm (GCOS or MAS 5.0). Dumur et al. examined related quality control questions. We examined gene expression in a large series of RNA samples extracted from post-mortem human brain. We have now used these data to examine the relationship of pre-chip variables to post-chip quality control measures, although we believe our original subjective decision to exclude samples at each step in the process from sample collection to expression analysis was justified. The samples we used were derived from a series of HD and control brains from the New Zealand Neurological Foundation Human Brain Bank. The full consent of all families was obtained at the time of autopsy and the University of Auckland Human Subjects Ethics Committee approved the protocols used in these studies. 
For most brains RNA was isolated from three regions: caudate nucleus (CN), cerebellum (CB) and motor cortex (MC). Because there was a significant effect of brain region on some of the pre-chip (RIN and cRNA yield) and post-chip variables (SF and RawQ) (see methods for a detailed explanation of the pre- and post-chip variables), we adjusted all variables for brain region (see methods). All further analyses were carried out with the adjusted variables. The pre-chip variables included two assessments of RNA quality: a four-category subjective visual assessment of Bioanalyzer traces (SUBQUAL) carried out by two of us on all samples, and the Agilent-derived RIN, which only became available after the GeneChips had been hybridised and thus was generated retrospectively (Figure ). As expected from post-mortem brain tissue, where there is little control over the events leading up to availability and preservation of tissue, PMI and RNA integrity as measured by subjective and objective assessment were variable (Table ). PMI correlated moderately negatively with our subjective assessment of RNA quality, with the Agilent RIN and with cRNA yield (Table ). Subjective quality and RIN also correlated significantly with yield (Figure ). We computed quality measures using three different software packages: MAS 5.0, dChip and the Bioconductor packages affy and affyPLM. The different algorithms generating the post-chip variables measure aspects of the same underlying hybridization process. To explore the relationships between the variables generated by the different algorithms, we carried out a principal component analysis using data from U133A chips hybridised in the experiment for which measures of all variables were available (N = 112). Since many of the pre- and post-chip variables were not normally distributed, we based the principal component analysis on Spearman (non-parametric) correlations. 
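A principal component analysis based on Spearman correlations, as described above, can be sketched by ranking each variable and taking the eigendecomposition of the rank correlation matrix. The data below are simulated; only the procedure mirrors the text.

```python
import numpy as np

def pca_on_spearman(data):
    """Principal components derived from the Spearman (rank)
    correlation matrix of the QC variables, as the text does for
    non-normally distributed variables. Returns the fraction of
    variance explained by each component. Illustrative sketch;
    assumes no tied values."""
    x = np.asarray(data, dtype=float)              # samples x variables
    ranks = x.argsort(axis=0).argsort(axis=0).astype(float)
    corr = np.corrcoef(ranks, rowvar=False)        # Spearman correlation
    eigvals = np.linalg.eigvalsh(corr)[::-1]       # descending order
    return eigvals / eigvals.sum()

# Simulated set of 4 strongly correlated QC measures on 112 chips.
rng = np.random.default_rng(0)
base = rng.normal(size=(112, 1))
data = np.hstack([base + 0.1 * rng.normal(size=(112, 1)) for _ in range(4)])
explained = pca_on_spearman(data)
print(round(float(explained[0]), 2))  # close to 1 for strongly correlated variables
```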
The first 4 components explained 75% of the variation. The first component (PC1) is highly correlated with variables which include the IQR of relative log expression. This quantity measures the variability in the degree to which a given chip differs from a virtual median chip \u2013 that is, a chip which would have median expression for each probe set. Consistent with the property of principal components of capturing the variability in the data, those chips with the highest IQR of relative log expression tend to be those which are largest on PC1 (Figure ). Canonical correlation analysis was then performed in order to detect relationships between the two sets of variables: the set of 4 pre-chip and that of 22 post-chip variables. Four canonical correlations were calculated (Table ). The first canonical correlation indexes nearly all of the relevant relationships between the pre- and post-chip variables: it is highly significant (Table ). Although estimation of RIN based on post-hybridization measures is generally not of interest for a lab analyzing its own data, it is potentially useful in the context of secondary analyses done by other labs. An increasingly common example is analysis by different groups of publicly available data, for example data deposited in Gene Expression Omnibus (GEO). In our data, excluding the gross outlier HC79CB, the single variables most highly correlated with RIN are B_ACTIN35, GAPDH35 and SF. B_ACTIN35 and GAPDH35 are very highly correlated with each other (\u03c1 = 0.99), with somewhat lower correlation with SF (\u03c1 = .55 \u2013 .65). Robust regression of RIN on B_ACTIN35 yields estimated RIN = 9.8 - 1.1 * B_ACTIN35, with a residual standard error of about 0.9. Ordinary regression yields nearly identical coefficients, both of which are highly significant, with an adjusted R2 of 0.42. Using GAPDH35 as the predictor yields a similar model with an adjusted R2 of 0.36. Including SF as an additional term does not greatly improve the fit of the model. 
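The fitted relationship for B_ACTIN35 can be applied directly; the coefficients are taken from the regression reported above, while the function name is ours.

```python
def predict_rin_from_actin(b_actin35):
    """RIN estimated from the 3'/5' beta-actin ratio using the
    robust-regression fit reported in the text:
    estimated RIN = 9.8 - 1.1 * B_ACTIN35 (residual SE ~0.9).
    Only the coefficients come from the study; the name is ours."""
    return 9.8 - 1.1 * b_actin35

# Higher 3'/5' ratios (more degraded RNA) predict lower RIN.
print(round(predict_rin_from_actin(1.0), 2))  # 8.7
print(round(predict_rin_from_actin(3.0), 2))  # 6.5
```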
Using SF alone, we obtain a model with residual standard error of about 1.1. Again, coefficients estimated by ordinary regression are very close, with highly significant coefficients and adjusted R2 = 0.27. SF may be easier to obtain than B_ACTIN35 or GAPDH35 for users without MAS/GCOS. Because samples that did not generate sufficient cRNA were not hybridised, we cannot judge their impact on our post-chip measures. There are no reports to date assessing the performance of the Agilent-generated RIN on GeneChip data quality. Using the post-chip quality control measures in our experiment, we found that samples with a RIN > 5.5 produced expression data of sufficient quality to be included in analyses. We found that longer post-mortem intervals were associated with poorer quality RNA (lower RIN or SUBQUAL), as might be expected, although the level of correlation is low, around -0.2. The post-chip measures can be generated from data available in the public databases, but the pre-chip quality control measures are usually not provided. Therefore it might prove useful to predict the RIN, as an objective measure of RNA integrity, retrospectively. It is not clear how generalisable the models generated from our data will be, although they indicate that there may be relationships that can provide an estimate of this information. Only examination of a large number of varied data sets will give a true indication of their general validity. However, one of these predictors might be used to obtain a 'quick and dirty' estimate of the quality of RNA from which the data is derived. 
More importantly, it highlights the most appropriate post-chip measures for predicting RNA quality (B_ACTIN35 and GAPDH35, and to a lesser extent SF) and gives an estimate of their relationship, which can be used to select chips for exclusion from analysis based on their outlier status in these variables; this would improve data quality, particularly in small datasets or datasets combined from several experiments. Yield of cRNA is significantly correlated with measures of RNA integrity. It is thus difficult to know whether yield per se is related to PMI or whether this is simply a result of its relationship with RNA integrity. Although yield clearly reflects RNA integrity, it also indexes the quality of the reactions from total RNA to cRNA applied to the chips. However, it is clear that yield and RNA integrity have different relationships with the post-chip QC factors. In our experiment, where all reactions were carried out by the same person in large batches, it is unlikely that there were large variations in yield due to technical factors. Our study is limited by having no systematic technical replicates, for as in most such studies this would have been too expensive. The only RNA sample that was re-hybridised to an A chip, having been re-generated from RNA, was rated as poor and failed on both occasions. It is useful to distinguish between the various facets of the catch-all term 'quality'. In chronological order: first there is the condition of the starting RNA; next is the calibre of the experimental process and resulting hybridisation; finally comes the acceptability of the resulting expression measures, including identification of outliers. 
The first and second components together give insight into the outlier status of a chip when it is considered as part of a set of chips. The component explaining most variability contains variables providing numerical assessments for outlier identification. A related but somewhat distinct aspect is given by an assessment of how far off the chip is from the others, or how the signal would need to be adjusted to make it more like the rest of the chips in the set; this is provided by variables strongly represented in the second component. The third and fourth components, respectively, reflect directly the second (hybridisation) and first (RNA integrity) areas of quality. We can think of the first four principal components as providing a grouping of post-chip measures, with each component representing a different aspect of quality. Other aspects of quality, such as noise measures (reflecting the execution of the assay) or measures reflecting data quality (outlier status and array adjustment variables), are not well predicted by the variables we were able to determine ahead of time. To the extent that random variation affects chip hybridisation, this finding is not very surprising. However, we do not rule out the possibility that there are other variables measurable pre-hybridisation which may be better associated with expression data quality measures. Uncovering such connections could create savings on costly microarray experiments by eliminating poor samples before hybridisation. We therefore encourage investigators to keep careful track of potentially relevant variables so that further studies may continue to shed light on features predictive of array quality. Samples were processed according to the GeneChip\u00ae Expression Analysis Protocol, Rev. 2, March 2003 (Affymetrix). Briefly, total RNA was extracted using TRIzol (Invitrogen) followed by RNeasy column cleanup (Qiagen) using the manufacturers' protocols. 10 \u03bcg of total RNA from each sample was used to prepare biotinylated fragmented cRNA, with products from Affymetrix. 
Arrays were hybridized for 16 h in a 45\u00b0C incubator with constant rotation at 60 rpm. Chips were washed and stained on the Fluidics Station 400, and scanned using the GeneArray\u00ae 2500, according to the GeneChip\u00ae Expression Analysis Protocol, Rev. 2, March 2003 (Affymetrix). All RNA extractions and reactions were prepared using master mixes and batches of 8 and 24 respectively. Arrays were processed in batches of 16. All reactions and array hybridisations were carried out by the same person. The U133A GeneChips came from two manufacturing batches. A large set of post mortem brain samples (N = 134) that had previously been included in an analysis of gene expression using Human Genome U133A arrays (Affymetrix) were included in the current study (Table ). Gene expression was quantified by robust multi-array analysis (RMA). A number of pre- and post-array variables were selected for analysis on the basis of their predicted contribution to quality control assessment at various points in the procedure, from sample processing to expression data (Table ). (PMI) Post mortem interval was the time from death to tissue preservation, in hours. (SUBQUAL) 300 ng of total RNA was run on a 2100 Bioanalyzer (Agilent Technologies). A pre-defined four-category subjective rating of RNA quality for all samples was made independently by an experienced biologist (AKH) and checked by another (RLC); in the few cases of disagreement the sample trace was re-evaluated and a consensus was reached. In performing this assessment, attention was paid to the following features of the total RNA trace: ribosomal peak definition, baseline flatness, and whether there were increased low molecular weight species. Traces were used to classify RNA quality as excellent, good, fair or poor. For subsequent statistical analysis a value of 4 was assigned to samples considered excellent, 3 to good, 2 to fair and 1 to poor total RNA integrity. 
This rating was used to select samples for further processing. Since a limited number of chips were available for the experiment, a minority of samples rated as 1 were not processed further (Table ). (RIN) Subsequent to the completion of the experiment, 2100 Bioanalyzer Expert software became available that enabled automatic assessment of RNA quality for mammalian eukaryotic total RNA (Agilent Technologies). The RNA integrity number (RIN) assesses RNA integrity on a scale from 0 (low integrity RNA) to 10 (high integrity RNA). The algorithm for generating a RIN for a given RNA sample is based on the entire electrophoretic trace of the sample. It uses an artificial neural network based on the most informative features that can be extracted from the traces, selected out of 100 features identified through signal analysis. The selected features, which collectively capture the most information about the integrity levels, include the total RNA ratio, the height of the 18S peak, the fast area ratio and the height of the lower marker. RINs were determined for all samples. (YIELD) A standard amount of total RNA (10 \u03bcg) was used to carry out cDNA and subsequent cRNA reactions for each sample. The yield of adjusted cRNA was used as a pre-chip parameter and reflects both RNA quality and the technical variation of the sample preparation. (BG) Background is a measure of the fluorescent signal on the array due to non-specific binding and autofluorescence from the array surface and the scanning wavelength (570 nm). A high background may indicate the presence of impurities such as cell debris or salts. This non-specific binding causes a low signal to noise ratio, resulting in reduced sensitivity for the detection of low expressing mRNA. (RAWQ) Raw Q assesses the pixel to pixel variation within each probe cell. Electrical noise from the scanner and sample quality can both contribute to Raw Q. (NOISE) Noise is calculated by dividing the array into 16 zones. 
The standard deviation of the lowest 2% of signal is calculated for each zone and then the average value for all zones is determined. Noise is used to calculate background-adjusted signal values by taking a weighted average of the zone-specific noise levels. (PC_PRESENT) The number of probesets called \"present\" relative to the total number of probesets on the array, as a percentage. A probeset is determined to be present, marginal or absent by a statistical algorithm within the MAS 5.0 software. (SF) When global scaling is performed, the overall intensity for each array is determined and compared to a Target Intensity value in order to calculate the appropriate scaling factor. The Scaling Factor should be comparable between arrays. (3'/5' RATIOS) Expression values for probesets specific to the 5', middle, or 3' portion of the ACTB and GAPDH transcripts are calculated from the chip. The 3' and 5' probesets' expression values are divided to give a ratio of 3'/5' mRNA representation. (DCHIP_AR_OUTLIER) The algorithm identifies outliers as a result of image contamination or saturated PM or MM signals. It cross-references one array with all the other arrays in an experiment using modelling of both perfect match (PM) and mismatch (MM) probe information at the probeset level. It flags an array as an outlier if > 5% of the probesets on that array are outliers relative to all other arrays in the experiment and recommends discarding arrays with > 15% outlier probesets from analyses. (DCHIP_SING_OUTLIER) The algorithm also determines single outliers: individual outlier probes within a probeset. These are most likely due to cross-hybridization to non-target or alternatively spliced genes. (DCHIP_PCCALL) Probesets that the dChip algorithm considers to be greater than zero, i.e. 
expressed, are determined as a percentage of the total number of probesets, similar to the Affymetrix %P. (MEDINT) The median intensity is determined across the array. From affyPLM: (MED_NUSE) Standard error (SE) estimates for each probeset on the array are taken and adjusted so that the median standard error across all arrays is equal to 1. Arrays with elevated SE relative to other arrays are typically of lower quality. Boxplots of these values are used to compare arrays. (RLE) Values are computed for each probeset by comparing the expression value on each array against the median expression value for that probeset across all arrays. The assumption is that most genes have constant expression across arrays, and thus have RLE values close to zero. Deviations from this are assessed using boxplots. A number of statistics within this analysis can be assessed as variables, including the inter-quartile range of log ratios (IQR_LR), the median (or bias) log ratio (B_LR), the interquartile range plus absolute median of log ratios (IQRplusAbsB_LR) and the coefficient of variation of log ratios (CV_LR), which summarise distributions of log ratios, or relative log expression, at the chip level. The relative log expression for each probe set is obtained by subtracting a baseline log expression from the log expression of each probe set. Due to computing limitations, we separated our experiment into two sets of chips. We used one series to fit the RMA model, and then applied the fitted model to all chips. Log ratios are computed using two different sets of chips for baseline. In LR1 the baseline is the probe set median log expression for the fitting set of chips. In LR2, the baseline is the median log expression from the group under study itself. The p-value from the linear regression fit for the RNA degradation plot is also assessed as a variable (PVAL_SLOPE). (RNADEG_SLOPE) Within each probeset, probes are numbered directionally from the 5' end to the 3' end. 
Probe intensities are averaged by probe number, across all probesets. Outlying arrays are identified as those with a different gradient to the majority of plots within the experiment. All analyses were performed on the combined dataset for all three brain regions from a subset of samples (N = 112) for which data was available for all variables (U133A arrays). Each of the pre- and post-chip variables was adjusted by \"brain region\" in order to remove the effect of the latter, using the univariate general linear modelling technique where the \"brain region\" variable was fitted as a fixed effect. The residuals were considered as continuous variables and used for further analysis. The adjusted variables were assessed for normality using skewness and kurtosis measures in SPSS, and distributions were considered non-normal if these measures did not fall between 0 and 1. Factor analysis was performed on 22 adjusted post-chip variables (see above) using SAS PROC FACTOR. The analysis is based on the Spearman correlation matrix generated using SAS PROC CORR. Initial factors were extracted using the principal component method and rotations were then performed by the VARIMAX method. To assess relationships between the \"pre-chip\" and \"post-chip\" multidimensional variables (4 and 22 dimensions respectively), canonical correlation analysis was performed using SAS PROC CANCORR. As above, the analysis was based on the Spearman correlation matrix (SAS PROC CORR). For predicting RIN from post-chip variables, we assume a linear model. High multicollinearity between variables precludes formulation of a robust, interpretable model to predict RIN based on post-chip variables. We thus considered simple models containing one or two predictors, fitted by robust linear regression. Laboratory work was carried out by AKH, and judgment of RNA quality by AKH and RL-C. AKH and GH collated the data. 
VM and DRG did the main statistical analysis and FC, AA, CK and SBD all contributed to the statistical and bioinformatic analysis of data. Study design was by AKH, JMO, CK, SJA, RLMF, RL-C and LJ. RLMF supplied the tissue and the clinical information about the samples. LJ, DRG and AKH interpreted the data and took the primary role in writing the manuscript. All authors read and commented upon the manuscript. Boxplots illustrating the summary statistics for the pre- and post-chip variables from Table . Scatterplots of the pre- and post-chip variables for all chips studied, showing the behaviour of the identified outlying chips in Table ."} +{"text": "Archives of annotated formalin-fixed paraffin-embedded tissues (FFPET) are available as a potential source for retrospective studies. Methods are needed to profile these FFPET samples that are linked to clinical outcomes to generate hypotheses that could lead to classifiers for clinical applications. We developed a two-color microarray-based profiling platform by optimizing target amplification, experimental design, quality control, and microarray content and applied it to the profiling of FFPET samples. We profiled a set of 50 fresh frozen (FF) breast cancer samples and assigned class labels according to the signature and method of van 't Veer et al. When applied to the matched FF samples, the FFPET-derived classifier was able to assign FF samples to the correct class labels with 96% accuracy. 
The single misclassification was attributed to poor sample quality, as measured by qPCR on total RNA, which emphasizes the need for sample quality control before profiling. When a classifier developed with matched FF samples was applied to FFPET data to assign samples to either \"good\" or \"poor\" outcome class labels, the classifier was able to assign the FFPET samples to the correct class label with error rates of 12% and 16%, respectively, and Odds Ratios of 36.4 and 60.4, respectively. A classifier derived from FFPET data was able to predict the class label in FFPET samples with an error rate of ~14% (p-value = 3.7 \u00d7 10-7). We have optimized a platform for expression analyses and have shown that our profiling platform is able to accurately sort FFPET samples into class labels derived from FF classifiers. Furthermore, using this platform, a classifier derived from FFPET samples can reliably provide the same sorting power as a classifier derived from matched FF samples. We anticipate that these techniques could be used to generate hypotheses from archives of FFPET samples, and thus may lead to prognostic and predictive classifiers that could be used, for example, to segregate patients for clinical trial enrollment or to guide patient treatment. Standard extraction and amplification microarray protocols require large amounts of intact total RNA, so few discovery platforms have reliably been applied to formalin-fixed paraffin-embedded tissue (FFPET) samples. Some approaches that assay fewer transcripts are promising, but do not allow for unbiased discovery of diagnostic signatures, which requires a genome-wide profiling method. Several amplification systems and array platforms have been applied to FFPET samples with varying degrees of success. The importance of expression-based classification of human tumors to predict treatment response or disease outcome is highlighted by recent publications. 
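The error-rate and odds-ratio figures reported above summarise 2x2 classification tables. A generic sketch of that arithmetic (the counts below are invented for illustration and are not the study's data):

```python
# Sketch: error rate and odds ratio from a 2x2 classification table.
# tp/tn: correct calls in the two classes; fn/fp: misclassifications.
# The counts used below are invented for illustration.

def classifier_stats(tp, fn, fp, tn):
    """Return (error rate, odds ratio) for a 2x2 confusion table."""
    error_rate = (fn + fp) / (tp + fn + fp + tn)
    # Cross-product ratio: odds of a correct call in one class
    # divided by the odds of an incorrect call in the other.
    odds_ratio = (tp * tn) / (fn * fp)
    return error_rate, odds_ratio

err, odds = classifier_stats(tp=22, fn=3, fp=3, tn=22)
print(f"error rate = {err:.0%}, odds ratio = {odds:.1f}")
```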
In the clinical diagnosis of patients with cancer, it is routine to obtain a FFPET sample, but generally rare to obtain a FF sample. Consequently, the requirement of FF samples has limited expression profiling to patients treated at specialized research centers. The ability to use FFPET samples would make this technology available for virtually all cancer patients, both in the context of retrospective analyses of banked samples and in clinical trials seeking to identify molecular tumor characteristics associated with patient outcomes to treatment. Being able to do such analyses from FFPET samples would simplify sample biopsy collection requirements and enable retrospective studies to develop and test hypotheses for prognostic classifiers for other cancers. As additional tests are developed and as molecular profiling methods mature, health-care providers will come to rely more on such classifiers in the risk-management of disease. In this report, our primary focus was to develop sample processing and classification methods with archived FFPET samples for hypothesis generation. To this end, we optimized a microarray platform and applied it to the profiling of FFPET samples. We demonstrated that FFPET samples can be accurately assigned to class labels using a classifier developed from fresh frozen samples, and we show that a classifier derived from FFPET samples that performs well in classifying FF samples can be developed. Matched pairs of fresh frozen and formalin-fixed paraffin-embedded breast cancer samples were obtained from Genomics Collaborative and Cytomyx. Colon carcinoma tissues were also from Cytomyx. RNA extraction reagents for FFPET samples were obtained from Epicentre Biotechnologies. Jurkat total RNA and amplification reagents used in this study were from Ambion. Matched FF and FFPET liver and muscle RNAs were obtained from MPI Research. The Universal Human Reference (UHR) total RNA was obtained from Stratagene. 
Cy-dye reagents were from GE Healthcare. Quantitative PCR reagents were purchased from Applied Biosystems. Microarrays were designed at Rosetta and manufactured by Agilent Technologies. Total RNA from FF samples was extracted by the vendor immediately prior to shipment. For FFPET samples, the extraction protocol was adapted from the MasterPure RNA Purification kit. Briefly, three 10 \u03bcm sections were subjected to paraffin solubilization with xylene. Tissue was pelleted from solution by centrifugation and residual xylene was removed by two ethanol rinses. The tissue pellets were then air dried and digested overnight in a lysis buffer with Proteinase K. Digested protein and other cellular components were removed by ammonium acetate precipitation and centrifugation, followed by a DNase I treatment of the resulting supernatant. A second ammonium acetate precipitation to remove any residual protein was then performed prior to ethanol precipitation and nuclease-free water rehydration of the purified total RNA. For measuring the relative quality of total RNA from FFPET samples, primer pairs for two house-keeping genes, GAPDH and ribosomal protein L13a, were used to estimate the integrity of the total RNA. The original protocol employs oligo-dT priming for first strand cDNA synthesis, in which an oligo-dT primer incorporates a T7 RNA polymerase promoter, necessary for subsequent in vitro transcription of RNA. Amplified cRNA from each sample was labelled and co-purified with the same mass of cRNA from the UHR pool labelled with Cy5. Hybridizations were done in fluor-reversed pairs as described. 
Since we are mostly interested in ratio profiles between FFPET samples, we developed methods to recover such information by re-ratioing the ratio profiles derived from each array experiment between the FFPET samples and the corresponding hybridizations with UHR as the reference. Re-ratioing effectively cancels out the UHR profile while accounting for dye labelling biases, leaving only the FFPET profiles of interest. In other words, given a typical configuration in which the common reference (C) exists in two array-based ratio profiles: if hybridization 1 consists of the ratio C versus A (A/C), and hybridization 2 consists of the ratio C versus B (B/C), re-ratioing of A/C and B/C creates a new ratio experiment B versus A (A/B). The microarray probes were initially designed to evaluate several oligonucleotide selection criteria rule sets. The cross-hybridization score reflects the smallest difference between the dG value of self binding and the largest dG value of the probe binding to any other molecular species. However, despite making the cross-hybridization filters more lenient, we noticed that cross-hybridization can only be minimized, since certain probes still contain relatively higher GC content compared to the majority of probes on the array in order to meet the 3' distance requirement. Selection of the genes for the HumFFPET 44 k v2.0 array was kept as close as possible to those on the current default Human 44 k v1.1 array. The Human 44 k v1.1 and HumFFPET v2.0 arrays share 20,327 probes in common, while the HumFFPET v2.0 array has 19,231 unique probes that are specifically designed for FFPET samples. Documentation for the HumFFPET 44 k v2.0 array will be available in the Gene Expression Omnibus (GEO) website in support of this publication, and the HumFFPET 44 k v2.0 array pattern will be publicly available through the Agilent eArray ordering system. Formalin-preserved samples present multiple challenges to whole-genome RNA profiling methodologies. 
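In log space, the re-ratioing described above reduces to a per-gene subtraction: the common-reference term cancels. A minimal sketch (the log-ratio values are invented for illustration):

```python
import numpy as np

# Re-ratioing two common-reference hybridizations: each array yields a
# per-gene log ratio of a sample versus the shared UHR reference (C).
# Subtracting the two profiles cancels the reference term, leaving the
# sample-vs-sample profile. Values below are invented for illustration.

log_A_vs_C = np.array([0.30, -0.10, 0.00])  # hybridization 1: A/C per gene
log_B_vs_C = np.array([0.10,  0.20, 0.00])  # hybridization 2: B/C per gene

# log(A/B) = log(A/C) - log(B/C): the C term cancels gene by gene.
log_A_vs_B = log_A_vs_C - log_B_vs_C
print(log_A_vs_B)
```

The same cancellation removes any profile component contributed by the reference pool itself, which is why the choice of reference mainly affects precision rather than the recovered sample-vs-sample ratios.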
To overcome the extensive degradation, contaminants from the formalin treatment and limited RNA mass availability, we sought to improve 1) robustness of the amplification protocol, 2) quality control assessment of FFPET samples, 3) microarray performance through probe selection and 4) experimental design, by validating the use of UHR as the reference channel in two-color hybridization experiments. We started with a commercially available two-round RT-IVT protocol for the target cRNA preparation for non-FFPET samples. We used a mass imbalance of intact Jurkat RNA to assess the potential impact of degradation and chemical modifications that would vary across FFPET samples. While evaluating imbalanced FFPET samples directly was more appealing, we reasoned that the primary impact from the degradation and chemical modifications of the fixation and embedding procedures would be loss of amplifiable mRNA, which the Jurkat RNA experiment adequately approximates. To quantify the degree to which the amplification steps are susceptible to mass imbalance, we performed titrations of input mass for the first round amplification with 25 to 200 ng of Jurkat total RNA sample to model the impact of mass imbalance on microarray data quality. Following the first round amplification using different mass inputs, a fixed amount (500 ng) of cRNA derived from each input mass titration in the first round amplification was used for the second round amplification. The resulting cRNA from the second round amplification was labelled and hybridized in fluor-reversed pairs formed to reflect the initial first round mass imbalance between the reference input of 100 ng and the other mass inputs which were originally titrated in the first round amplification. 
The same experiments were done to titrate input mass for the second round amplification by holding the mass of the first round constant at 100 ng and varying the second round from 250 to 2,000 ng, with the input of 500 ng used as the baseline to form the reference for the different fluor-reversed pairs. In these mass imbalance experiments, hybridizations of fluor-reversed pairs that are formed between different mass inputs are still defined as 'same-vs-same' hybridizations, since exactly the same mRNA-containing total RNA is used in the amplification whether the input varied in the first or the second round. If there is no amplification bias resulting from the initial mass inputs, either for the first round or the second round, then these same-vs-same hybridizations should have shown no signatures of differential expression beyond background level. Same-vs-same hybridization data are presented as a heat map in Figure . The extent and nature of RNA degradation in FFPET blocks depends on FFPET preparation method, length of storage and storage conditions. First, we noted that the mean log ratio of some samples displayed a dependence on the distance of the probe to the 3' end of the message. We reasoned that a non-zero slope of this plot indicates a bias in the data quality, and that the 3' slope of mean log ratio could be used as a key quality metric for microarray hybridization. In fact, the 3' slope metric is analogous to the RNA degradation metric on Affymetrix arrays. Then, we developed a qPCR-based assay to measure the relative abundance of transcripts in the total RNA and the amplified cRNA to quantify the relative quality of FFPET-derived total RNA, as a way to relate the RNA quality to the subsequent microarray hybridization quality. Then, as we had done with FF samples, we started by selecting a set of FFPET samples that had been previously profiled and that cover a range of microarray data quality, and then assayed several housekeeping transcripts by qPCR. 
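The 3' slope metric described above is just the slope of a least-squares fit of mean log ratio against probe distance to the 3' end. A sketch with invented values (the probe distances and log ratios are illustrative, not the study's data):

```python
import numpy as np

# Sketch of the 3' slope quality metric: regress each probe's mean
# log ratio on its distance to the transcript 3' end. A slope near
# zero indicates no 3' bias; an increasingly negative slope suggests
# degradation-driven signal loss. Values are invented for illustration.

dist_to_3prime = np.array([50, 150, 250, 350, 450])        # nucleotides
mean_log_ratio = np.array([0.02, 0.01, -0.01, -0.02, -0.05])

# np.polyfit(x, y, 1) returns [slope, intercept] of the least-squares fit.
slope, intercept = np.polyfit(dist_to_3prime, mean_log_ratio, 1)
print(f"3' slope = {slope:.2e} log-ratio units per nucleotide")
```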
In summary, the hybridization data of a sample of ideal quality should show no correlation between the Ct count of the total RNA or the amplified cRNA and the detected expression pattern, either positive or negative. Since the measured Ct counts from the total RNA and amplified cRNA correlate with the quality of microarray hybridization (Figure ), the qPCR assay can be used to screen sample quality before hybridization. Standard microarrays, with probes approximately 500 nucleotides or greater from the 3' end, have been found to be ill-suited for FFPET profiling. Therefore, we designed an array with probe content more suited to the nature of FFPET samples and less susceptible to cross-hybridization by optimizing probe distance to the 3' end of the transcript, the probe base composition and the OSD cross-hybridization filter. While we had preliminary success at obtaining gene expression data from FFPET samples on a 44 K array (Human 44 K) designed for FF samples, the raw intensity of signals measured on the array drops off more precipitously for FFPET samples than FF samples as the probes' distance from the 3' end is increased. This is due to the extensive degradation of total RNA and the use of a 3' poly-A priming amplification approach. One way to overcome the reduced intensity of hybridizations with cRNA from the 3' poly-A priming method is to optimize the probe content and composition on the array to reflect the proximity of amplified cRNA sequence of each transcript to its 3' end from degraded FFPET samples. To this end, we designed an array specifically suited to profiling FFPET samples, with probe sequences within the first 400 nucleotides of the 3' end of each transcript. To evaluate this design, a set of matched FF and FFPET liver and muscle samples was used to measure the sensitivity and specificity of the HumFFPET 44 k 2.0 array in comparison with our standard Human 44 k array (see methods for details). 
To compare the overall performance of the FFPET and standard array designs, we used a ROC curve analysis. Using the HumFFPET 44 k 2.0 array with a set of FF and FFPET samples, we found that the measured expression ratio correlation between FF and FFPET samples is also improved on the HumFFPET 44 k 2.0 array (r = 0.80) over the Human 44 k array. We also evaluated the use of UHR as the reference channel versus a FFPET-based self-reference pool. UHR RNA is composed of total RNA from 10 human cell lines and has been used widely as a reference in gene-profiling experiments, microarray platform evaluations, and cross-platform comparisons. Before applying the two-color microarray platform for classification, we validated the optimized platform by measuring the correlation of gene expression profiles from the matched fresh frozen and FFPET tumor samples, as shown in Figure . A number of papers have recently been published utilizing microarray technology to identify prognostic and diagnostic biomarkers as a tool to predict treatment response or disease outcome. To answer the first question, we chose a well established biomarker, the breast cancer prognostic signature identified by van 't Veer et al. Using the prognostic scores from 50 FF samples, we assigned 25 \"good prognosis\" patients and 25 \"poor prognosis\" patients according to their signature patterns. Errors came mostly from poor quality samples flagged by qPCR. If poor quality samples as indicated by qPCR are excluded, the error rate falls to 10% or less, mostly caused by samples originally very close to the threshold. More importantly, the classifier derived from the FFPET samples is confirmed by the FF sample data. We then repeated the process to select \"prognostic\" genes based only on FFPET profiling data. The FFPET-derived classifier prediction accuracy was then evaluated against the FF samples. 
As shown in Figure , the FFPET-derived classifier performed comparably on the FF samples. Enabling personalized medicine in the near future will rely to a large extent on extracting data from well-annotated archived samples for which the three-to-five year outcome of the subject is known. We consider this approach as a bridging strategy for patient stratification and enrollment, during which time the hypotheses are tested and confirmed. Part of this strategy includes integrating FDA guidelines relating to retrospective sample and data mining; key factors in this regard are (1) storage conditions; (2) samples are representative of intended use; (3) samples meet inclusion/exclusion criteria; and (4) performance is comparable to that expected from a prospective study. We demonstrated the ability to derive gene expression-based classifiers from FFPET data that sort patient samples into class labels in a way that recapitulates the sorting of FF samples. The method involved the development of an optimized microarray platform with two-round RT-IVT amplification using 100 ng of total RNA input for the target preparation, a quantitative PCR method for assessing the relative quality of FFPET samples, and a custom microarray with content and probe features specifically optimized for FFPET profiling. We found this microarray platform reliably and reproducibly measured differential gene expression of FFPET samples with a good correlation to corresponding FF samples. FFPET samples were correctly assigned to class labels developed from FF-derived classifiers. Although we cannot directly attribute the success of the classifiers to the optimizations performed on the profiling platform, further study with FFPET samples also showed our platform is of sufficient quality to enable hypothesis generation that could be validated with FF or FFPET samples in controlled, well-designed clinical trials. The authors declare that they have no competing interests. SD, RS, JG carried out RNA extraction optimization, amplification, and hybridization. 
SD, SL, HD, TF, GT, BF and MM contributed to the design of the studies. MZ and MM drafted the manuscript. AK and JC designed the FFPET array. HD and YW developed the classifier and led the interpretation of the data. All authors read and approved the final manuscript. Two-round amplification workflow: diagram of the two-round amplification workflow used for amplification of total RNA. FF to FFPET correlation: the correlation between the FF and FFPET samples is increased significantly on the HumFFPET 44 k array 2.0 when compared to the Human 44 k v1.1 array."} +{"text": "The quantitative polymerase chain reaction (qPCR) is a widely utilized method for gene-expression analysis. However, insufficient material often compromises large-scale gene-expression studies. The aim of this study is to evaluate an RNA pre-amplification method to produce micrograms of cDNA as input for qPCR. The linear isothermal Ribo-SPIA pre-amplification method was first evaluated by measuring the expression of 20 genes in RNA samples from six neuroblastoma cell lines and of 194 genes in two commercially available reference RNA samples before and after pre-amplification, and subsequently applied on a large panel of 738 RNA samples extracted from neuroblastoma tumours. All RNA samples were evaluated for RNA integrity and purity. Starting from 5 to 50 nanograms of total RNA, the sample pre-amplification method was applied, generating approximately 5 micrograms of cDNA, sufficient to measure more than 1000 target genes. 
The results obtained from this study show a constant yield of pre-amplified cDNA independent of the amount of input RNA; preservation of differential gene-expression after pre-amplification without introduction of substantial bias; no co-amplification of contaminating genomic DNA; no necessity to purify the pre-amplified material; and finally the importance of good RNA quality to enable pre-amplification. Application of this unbiased and easy to use sample pre-amplification technology offers great advantage to generate sufficient material for diagnostic and prognostic work-up and enables large-scale qPCR gene-expression studies using limited amounts of sample material. Amongst the various methods available to measure gene-expression, the reverse transcription quantitative polymerase chain reaction (RT-qPCR) is the most rapid, sensitive, and reproducible method. However, the amount of sample material available is often limiting. Therefore, it seems that a method capable of pre-amplifying nanogram quantities of RNA is essential, to ensure that sufficient material is available for high-throughput gene-expression profiling. Various pre-amplification methods have been proposed, including both PCR-based and linear isothermal approaches. This paper extensively evaluates the linear isothermal Ribo-SPIA pre-amplification method for qPCR. Total RNA was extracted from 6 neuroblastoma cell lines and 738 fresh frozen neuroblastoma tumour biopsies according to three methods in collaborating laboratories. 
Two commercial RNA samples (Universal Human Reference RNA (UHRR) from Stratagene and Human Brain Reference RNA (HBRR) from Ambion) were mixed to generate the four MAQC reference samples. In order to assess the RNA purity and integrity, we performed a SPUD assay for the detection of enzymatic inhibitors and a capillary gel electrophoresis analysis. Starting from 5, 15, or 50 ng of total RNA, the WT-Ovation RNA Pre-amplification method (NuGEN) was used according to the manufacturer's instructions, generating approximately 5 \u03bcg of cDNA. In parallel, the same RNA extracted from the neuroblastoma cell lines and the MAQC samples was used for conventional cDNA synthesis using the iScript cDNA Synthesis Kit according to the manufacturer's instructions (Bio-Rad). A qPCR assay was designed for each gene [Additional files ]. In order to assess the influence of the amount of input RNA on the yield of pre-amplified cDNA, we measured the expression of ten reference genes after pre-amplification starting from 5, 15 or 50 ng of input RNA from three cultured neuroblastoma cell lines and UHRR. Using three MYCN single copy (MNS) and three MYCN amplified (MNA) neuroblastoma cell lines, we first measured the expression of 10 known differentially expressed genes before and after pre-amplification. No amplification of NEUROD1 (RTPrimerDB ID 8113) could be observed in the pre-amplified cell lines spiked with DNA, as the resulting DNA concentration after a 200\u00d7 dilution of the pre-amplified product is lower than 0.5 pg/\u03bcl, which is below the detection level for qPCR. Moreover, the Cq-value of NEUROD1 was equal in the HGDNA that had undergone the above described pre-amplification procedure and in the HGDNA used as positive control. 
These results indicate that DNA is not co-amplified (data not shown). In order to determine if residual DNA in the RNA extract is co-amplified and consequently might confound the results, we pre-amplified pure human genomic DNA (HGDNA) and two RNA samples from neuroblastoma cell lines verified for absence of DNA and subsequently spiked with 1% and 10% HGDNA (2 ng DNA per 20 ng RNA input for pre-amplification) (Roche). We next performed qPCR with a DNA-specific primer pair (NEUROD1; RTPrimerDB ID 8113). To determine if purification of the pre-amplification product is required, we performed a SPUD assay as described in [Additional file ]. In a last step of the evaluation of the necessity of pre-amplification clean-up, we measured the expression of ten reference genes in ten samples before and after pre-amplification. Comparison of the cumulative distribution plots of the ddCq-values obtained on purified and non-purified pre-amplified product showed that the plots almost completely overlap, providing further evidence that purification is not required [Additional file ]. In order to assess the RNA quality of the 738 neuroblastoma tumour samples, we performed a capillary gel electrophoresis analysis to establish an RQI. All samples were pre-amplified and qPCR was performed to measure the expression of two low abundant universally expressed reference genes (SDHA and HPRT1) [Additional file ]. An important limitation of gene-expression analysis in the current diagnostic workflow is the fact that often minimal amounts of biomaterial are procured. As such, in many cases only a few nanograms of total RNA are available. In order to measure a large number of genes on this limited material and to maximize the number of samples through collaborative studies, a robust sample pre-amplification method is required. 
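The ddCq-values compared above follow the standard delta-Cq arithmetic: each gene's Cq is normalised to a reference gene within a condition, and the two conditions are then differenced. A minimal sketch (the Cq values are invented for illustration):

```python
# Sketch of the ddCq (delta-delta-Cq) comparison used to check whether
# purification of the pre-amplified product changes measured expression.
# Cq values below are invented for illustration.

def dcq(cq_gene, cq_reference):
    """delta-Cq: gene Cq normalised to a reference gene's Cq."""
    return cq_gene - cq_reference

def ddcq(cq_gene_a, cq_ref_a, cq_gene_b, cq_ref_b):
    """delta-delta-Cq between two conditions (e.g. purified vs not)."""
    return dcq(cq_gene_a, cq_ref_a) - dcq(cq_gene_b, cq_ref_b)

# One gene measured in purified (a) and non-purified (b) product:
d = ddcq(cq_gene_a=24.0, cq_ref_a=20.0, cq_gene_b=25.0, cq_ref_b=20.5)
fold_change = 2 ** -d   # ddCq of 0 corresponds to a fold change of 1
print(f"ddCq = {d:.2f}, fold change = {fold_change:.2f}")
```

Overlapping cumulative distributions of such ddCq-values across many genes indicate that the compared treatments are equivalent.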
In this study we evaluated the linear isothermal Ribo-SPIA pre-amplification method for qPCR-based gene-expression analysis in cancer cell lines and commercially available reference samples, optimized the pre-amplification workflow, and used the method in a large clinical sample set. First, we could clearly demonstrate that differential expression is preserved after pre-amplification and that no substantial bias is introduced. The fold-changes between pre-amplified samples were compared to those observed between non-amplified samples in the largest set to date, revealing an accurate preservation of relative transcriptome composition despite the pre-amplification process. This is in accordance with previously reported findings on smaller datasets using qPCR [26]. We also assessed the need for DNase treatment before, and for purification after, pre-amplification. The results obtained show that neither of these procedures is required. This is an important finding, especially in large-scale gene-expression studies, as both techniques are time-consuming and add a substantial cost to the experiments. Furthermore, DNase treatment may lead to a loss of material and of mRNA integrity due to the exposure of the RNA samples to a high temperature during heat inactivation required for many commercial DNases. We also related pre-amplification performance to RNA quality by measuring two reference genes (SDHA and HPRT1). As expected, pre-amplification of highly degraded samples turned out to be unsuccessful. In addition, there was a negative correlation between the Cq-values of the reference genes and the RQI. A possible explanation for the imperfect negative correlation is the use of random primers in the RNA pre-amplification process, resulting in successful pre-amplification of partially compromised RNA samples. Monitoring RNA quality and using intact RNA is of critical importance to obtain reliable gene-expression data and to ensure reproducibility of the results [28].
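The fold-change preservation check can be made concrete by converting Cq spacings to fold changes. A hedged sketch, assuming ~100% PCR efficiency (fold change = 2^dCq); the Cq values and the MNA/MNS grouping below are illustrative, not the paper's data.

```python
# Sketch: an MNA-vs-MNS fold change estimated before and after
# pre-amplification, assuming ~100% PCR efficiency. Cq values are
# hypothetical; lower Cq means more transcript.

def fold_change(cq_reference_group: float, cq_test_group: float) -> float:
    """Fold change of the test group relative to the reference group."""
    return 2.0 ** (cq_reference_group - cq_test_group)

# A gene detected 3 cycles earlier in the test group corresponds to an
# ~8-fold change; if pre-amplification shifts both groups equally, the
# estimate is preserved.
fc_before = fold_change(cq_reference_group=28.0, cq_test_group=25.0)
fc_after = fold_change(cq_reference_group=21.0, cq_test_group=18.0)
print(fc_before, fc_after)  # 8.0 8.0
```

Comparing many such before/after fold-change pairs (e.g. by correlation) is one way to quantify whether pre-amplification introduces bias.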
As the tumour sample size is often very limited, the applied RNA pre-amplification procedure offers the possibility to perform large multicenter studies. This enabled us to establish and validate a robust prognostic multigene-expression signature in the largest neuroblastoma study cohort to date. An additional advantage of the evaluated pre-amplification method is its potential usefulness to generate a sufficient nucleic acid concentration for use in ultra high-throughput qPCR systems. These systems operate with very low volumes and have the potential disadvantage of compromised detection sensitivity, as only limited volumes of nucleic acids can be added. As the concentration of the pre-amplified material is very high, this technique may offer a solution and should be evaluated in future studies. In conclusion, the results obtained from this study indicate that differential gene-expression is preserved after sample pre-amplification using the linear isothermal Ribo-SPIA pre-amplification method, that DNA is not co-amplified, that a pre-amplification clean-up step is not required, and that the pre-amplification product is free of enzymatic inhibitors. Application of this unbiased and straightforward pre-amplification technology offers a great advantage in terms of accessibility of material for diagnostic and prognostic work-up and enables large-scale qPCR gene-expression studies. Cq: quantification cycle; dCq: difference in quantification cycle or delta-Cq; ddCq: difference in dCq or delta-delta-Cq; MNA: MYCN amplified; MNS: MYCN single copy; RNA: ribonucleic acid; RQI: RNA quality index (determined by microfluidic capillary electrophoresis as a measure for RNA integrity); RT-qPCR: reverse transcription quantitative polymerase chain reaction; UHRR: Universal Human Reference RNA. KDP participated in the data analysis. FPT and SLF participated in the design and validation of the primers. ADP, FSP and JVS coordinated the study.
All authors contributed to the revision of the manuscript for important intellectual content. All authors read and approved the final version of the manuscript. RDML file 1: Primer sequences of the MYCN and MYCN regulated genes and raw data from the expression analyses on the 6 neuroblastoma cell lines. RDML file 2: Primer sequences of the MAQC target genes and raw data from the expression analyses on the 4 MAQC reference samples. Supplemental Material and Methods: Details on sample preparation, gene-expression analysis, formulas and raw data availability. Supplemental Figure S1: Frequency distribution (left axis) and cumulative frequency (right axis) of the difference in quantification cycle value (dCq) induced by sample pre-amplification (x-axis) for 194 genes measured in the 4 MAQC reference samples. There is a clear sequence-specific pre-amplification bias, meaning that some sequences or parts of transcripts pre-amplify better than others. Supplemental Figure S2: SPUD assay for the detection of enzymatic inhibitors in purified pre-amplified samples (P) and in non-purified pre-amplified samples (NP), with negative control (NC) and positive controls with known inhibitor (PC). A difference in Cq or delta-Cq (dCq) (NP or P vs. NC) < 1 indicates absence of enzymatic inhibitors. Supplemental Figure S3: PCR efficiencies estimated with two different single-curve efficiency algorithms, PCR Miner (red) and LinReg (blue). Efficiencies of purified and non-purified pre-amplified samples for each gene are comparable, indicating that non-purified pre-amplified samples do not contain inhibitors and amplify with the same PCR efficiency. Supplemental Figure S4:
Cumulative distribution plot of the delta-delta Cq (ddCq) before and after pre-amplification for 10 reference genes and 10 samples, without purification of the pre-amplified product (black) and with purification of the pre-amplified product (grey). Each dot represents a ddCq-value between 2 samples before and after pre-amplification. Purification is not required for the preservation of differential expression. RDML file 3: Primer sequences of HPRT1 and SDHA and raw data from the expression analyses on 738 neuroblastoma tumour samples."} +{"text": "Gene expression profiling of small numbers of cells requires high-fidelity amplification of sub-nanogram amounts of RNA. Several methods for RNA amplification are available; however, there has been little consideration of the accuracy of these methods when working with very low-input quantities of RNA, as is often required with rare clinical samples. Starting with 250 picograms to 3.3 nanograms of total RNA, we compared two linear amplification methods, (1) modified T7 and (2) Arcturus RiboAmp HS, and a logarithmic amplification, (3) Balanced PCR. Microarray data from each amplification method were validated against quantitative real-time PCR (QPCR) for 37 genes. For high-intensity spots, mean Pearson correlations were quite acceptable for both total RNA and low-input quantities amplified with each of the 3 methods. Microarray filtering and data processing have an important effect on the correlation coefficient results generated by each method. Arrays derived from total RNA had higher Pearson's correlations than did arrays derived from amplified RNA when considering the entire unprocessed dataset; however, when considering a gene set of high signal intensity, the amplified arrays had higher correlation coefficients than did the total RNA arrays. Gene expression arrays can be obtained with sub-nanogram input of total RNA.
High-intensity spots showed better correlation on array-array analysis than did unfiltered data; however, QPCR validated the accuracy of gene expression array profiling from low-input quantities of RNA with all 3 amplification techniques. RNA amplification and expression analysis at the sub-nanogram input level is both feasible and accurate if data processing is used to focus attention on high-intensity genes for microarrays or if QPCR is used as a gold standard for validation. Expression array analysis has provided valuable new insights into the biology and pathophysiology of many cancers [4]. Feasibility and reproducibility have been established for linear amplification from nanogram to low-microgram input quantities of total RNA to yield high-fidelity gene amplification products suitable for gene expression microarray analysis [16]. To test our hypothesis, serial dilutions of stock RNA were employed for a comparison of 3 amplification techniques, with assessment by technical replicates of microarrays and with subsequent validation by QPCR of both total and amplified RNA. Total RNA was prepared from the BT474 cell line and Stratagene Universal Human Reference RNA (StratRef) using the Arcturus PicoPure RNA isolation kit as per the manufacturer's instructions. StratRef is composed of total RNA from 10 human cell lines and is designed to be used as a reference for microarray gene-profiling experiments. Serial dilutions of StratRef and BT474 RNA served as the substrate for all amplification reactions to minimize sources of variability. Total RNA from the BT474 breast cancer cell line and StratRef was reverse transcribed in the presence of 10 μg of random hexamer (Amersham Pharmacia) and oligo d(T), then linearly amplified through two rounds of in vitro transcription (IVT) based on a modified T7 amplification protocol.
For the Arcturus RiboAmp HS method, samples underwent in vitro transcription according to the manufacturer's published instructions. Their protocol specified that a minimum input of 100-500 picograms of total RNA is required for successful amplification with this kit designed specifically for low-input total RNA samples, which is equivalent to 10-50 cells. Per the manufacturer's specifications, 200 ng of Poly dIdC nucleic acid carrier was added to each reaction. IVT reaction time was 6.5 hours. For Balanced PCR, total RNA from the BT474 breast cancer cell line and StratRef was reverse transcribed using separate oligo dT-T7 primers, pooled, and exponentially amplified in the same PCR tube. Briefly, to 20 μl of purified ligated DNA, we applied the Advantage 2 PCR Polymerase system as per the manufacturer's instructions (with dNTP mix, 10 mM each, and 1 μl of 10 μM common primer P1). For the Balanced PCR reactions, an aliquot of RNA from the same tube of StratRef RNA and an aliquot of the same preparation of BT474 RNA were used to minimize input variability. Reverse transcription of this RNA was performed before shipping the cDNA on dry ice for the subsequent Balanced PCR reactions. Coupling, array hybridization and analysis were all performed in the Haqq lab, as were all other techniques described. The molecular weight profile and integrity of each amplified RNA/DNA species was evaluated using the Agilent Bioanalyzer 2100 with an RNA 6000 Pico LabChip. 1.25 μg of amplified RNAs (aRNAs) produced with modified T7 and Arcturus RiboAmp HS was converted to amino-allyl modified cDNA and coupled to N-hydroxysuccinimidyl esters of Cy3 or Cy5. The 20,862 cDNAs used in these studies were from Research Genetics, now Invitrogen. On the basis of UniGene build 166, these clones represent 19,740 independent loci. Hybridization, washing, scanning and primary data analysis were performed as previously described [24].
Gene expression was analyzed with Cluster. For QPCR, cDNA was made from total RNA for both BT474 and StratRef in 100-μL reactions using M-MLV reverse transcriptase and random hexamers, incubated at 25°C for 10 min then 48°C for 30 min. Expression of each gene was analyzed using the 5' nuclease assay. The mean expression ratios for each of the 3 amplification techniques and for the total RNA method were calculated. Analysis A is defined as the 37 QPCR genes. Analysis B, the high-intensity dataset, is defined as all genes that had a 635 nm median intensity or a 532 nm median intensity > 1500. Analysis C, the unfiltered dataset, is defined as all of the genes on the microarray that had data present for 90% of the arrays. Sensitivity, specificity and percentage correct were calculated for each method in Analysis A using QPCR results as the gold standard. All pairs of correlation coefficients for Analyses B and C were used to perform intramethod and intermethod comparisons of mean correlation coefficients for all methods, using expression results from microarrays of unamplified RNA as the gold standard. The lower limits of total RNA required for each method were defined as the lowest RNA input amount where amplification reactions consistently yielded sufficient product to permit analysis on cDNA microarrays and confirmatory functional assays. These were 500 pg for modified T7, 250 pg for Arcturus RiboAmp HS, and 500 pg for balanced PCR (Table ). The size of the amplified products ranged from 100-4400 bases for all attempted amplifications. For the modified T7 method, after two rounds of amplification the product measured a mean of 3400 bases on the Agilent Bioanalyzer PicoChip. For Arcturus RiboAmp HS, after two rounds of linear amplification the product measured a mean of 3372 bases on the Agilent Bioanalyzer PicoChip.
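The filtering and correlation steps behind Analyses B and C can be sketched as follows. This is an illustration with hypothetical spot records, not the study's actual data structures; field names (`median_635`, `median_532`, `ratios`) are placeholders.

```python
# Sketch of Analysis B/C-style filtering, followed by a Pearson
# correlation between two replicate arrays restricted to high-intensity
# genes. All data are hypothetical.
from math import sqrt

def high_intensity(spot: dict, threshold: float = 1500.0) -> bool:
    """Analysis B-style filter: keep a spot if either channel's median
    intensity (Cy5/635 nm or Cy3/532 nm) exceeds the threshold."""
    return spot["median_635"] > threshold or spot["median_532"] > threshold

def present_in_most_arrays(values: list, min_fraction: float = 0.9) -> bool:
    """Analysis C-style filter: keep a gene only if data are present
    (not None) on at least min_fraction of the arrays."""
    present = sum(v is not None for v in values)
    return present >= min_fraction * len(values)

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

spots = [
    {"gene": "g1", "median_635": 2100.0, "median_532": 900.0, "ratios": (1.2, 1.1)},
    {"gene": "g2", "median_635": 400.0, "median_532": 300.0, "ratios": (0.4, 1.9)},
    {"gene": "g3", "median_635": 1800.0, "median_532": 2500.0, "ratios": (-0.8, -0.9)},
    {"gene": "g4", "median_635": 90.0, "median_532": 1700.0, "ratios": (2.0, 2.1)},
]
kept = [s for s in spots if high_intensity(s)]          # g1, g3, g4
array_a = [s["ratios"][0] for s in kept]
array_b = [s["ratios"][1] for s in kept]
print(round(pearson(array_a, array_b), 3))  # 0.999
```

Note how the noisy low-intensity spot (g2) is the one that would have dragged the correlation down, which mirrors the paper's observation that filtered, high-intensity gene sets correlate better between arrays.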
The Balanced PCR products were evaluated on a 1% agarose gel and measured over 3500 bases (data not shown). Amplification reactions that did not yield the requisite 1.25 μg of RNA were considered unsuccessful; all successful reactions were validated with the Agilent Bioanalyzer and none required study exclusion on that basis. The Pearson's correlations and standard deviations for intra-method and inter-method comparisons for the high-intensity genes are shown in Table . QPCR of amplified RNA was correlated with the QPCR of total RNA for all methods, with R² of 0.87 for modified T7, R² of 0.86 for Arcturus RiboAmp HS and R² of 0.75 for balanced PCR. When dealing with clinical samples, microarray results are often validated with QPCR; therefore techniques that demonstrate a low false expression result (FER) by this type of analysis are very desirable. We defined the FER as the number of discordant values divided by the number of genes analyzed for expression ratio by microarray compared to QPCR using unamplified total RNA. Both array and QPCR measurements were normalized to levels of β-glucuronidase. As shown in Table , when we compared QPCR of amplified to total RNA for each method, the FER was 0% (0/21 genes) for Arcturus, 4.5% (1/22 genes) for modified T7 and 15% (3/19 genes) for Balanced PCR.
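The FER definition above reduces to a simple discordance count. A minimal sketch, assuming both platforms report log-ratios normalized to the same reference (e.g. β-glucuronidase) so that a sign disagreement marks a discordant gene; the values below are hypothetical.

```python
# Sketch of the false expression result (FER) metric: the fraction of
# genes whose direction of change on the microarray disagrees with QPCR
# of unamplified total RNA. Log-ratios are hypothetical.

def fer(array_log_ratios: list, qpcr_log_ratios: list) -> float:
    """Fraction of genes with discordant sign between the two platforms."""
    assert len(array_log_ratios) == len(qpcr_log_ratios)
    discordant = sum(
        1 for a, q in zip(array_log_ratios, qpcr_log_ratios) if a * q < 0
    )
    return discordant / len(array_log_ratios)

# One of five genes flips direction between platforms -> FER of 20%.
print(fer([1.2, -0.5, 0.8, 2.0, -1.1], [0.9, 0.4, 1.1, 1.5, -0.7]))  # 0.2
```

With this convention, the reported 0/21, 1/22 and 3/19 discordant genes correspond directly to FERs of 0%, 4.5% and 15%.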
The filtration process did not change the individual expression ratios but rather focused the analysis on genes in which the hybridization gave a particularly strong signal, a strategy that seems logical when attempting to work with very rare clinical specimens to ensure a lower false call rate. Other groups have reported analysis of total RNA arrays with input as high as 20-40 μg per microarray, which may have improved the results for our total RNA arrays. RNA amplification technologies serve translational clinical research well. Linear amplification has already enabled examination of gene expression in clinical core needle biopsies and surgical specimens. While each method was able to provide data in the sub-nanogram range, certain methods are advantageous over others in terms of the lower limit of RNA that can reliably be amplified, cost per reaction and the number of days required for processing of samples. The Arcturus RiboAmp HS method was more reliable at generating expression arrays at the threshold of below a nanogram of total RNA. For modified T7 and balanced PCR, a nanogram of total RNA will ensure reliability. The Arcturus amplification procedure was able to produce very substantial amounts of nucleic acids from just 250 pg of total RNA and therefore should be considered more reliable than the other two methods at lower input thresholds. Caretti et al. performed a comparison of amplifications of a colon biopsy subjected to laser capture microdissection with purification of an estimated 1 nanogram of RNA per specimen; they compared the two-cycle Arcturus OA and the one-cycle NuGEN amplification and found that the Arcturus method showed the lowest variance and highest correlation. Shearstone et al. performed a comparison of their laboratory's novel IVT amplification, termed BIIB, to Arcturus RiboAmp HS, NuGEN Ovation, Affymetrix One-Cycle and Affymetrix Two-Cycle.
Our paper describes a platform-independent measure, the FER, useful in comparing amplification methods based on QPCR versus microarray. FER was calculated based on QPCR of total RNA rather than amplified RNA, since there was not sufficient amplified product available for performing QPCR on all 37 primer-probe pairs. In addition, QPCR of amplified RNA is biased towards recapitulating the results of the microarray experiment due to truncation of the RNA products. Each of the three amplification techniques yielded fairly consistent expression results within the constraints of each technique's input threshold of total RNA, both on microarray analysis and when compared to QPCR. Based on these excellent correlations, it is feasible to reproducibly perform high-fidelity amplifications by a variety of techniques when starting with sub-nanogram input quantities of total RNA. However, when attempting expression array analysis from less than 500 pg input, linear amplification with Arcturus RiboAmp HS was more successful than the other methods studied. Below 1 ng, the modified T7 method could not reproducibly amplify, such that insufficient RNA was typically generated for even a single microarray hybridization. Only by performing multiple attempts at amplification were we able to achieve successful amplified product with this technique. While we were successful in hybridizing 3 arrays with this method at 500 pg, we do not recommend this method below 1 ng of input total RNA, as several operators quite experienced with this method could not repeat these results. One drawback of this technique is the greater length of time involved (3 days) compared to other amplification reactions (2 days) and the relative complexity of the protocol. Arcturus RiboAmp HS was able to provide expression array data at a lower input concentration than any of the other tested methods, and we were able to use smaller amounts than the manufacturer's recommended minimum sample input of 500 pg total RNA.
Below 250 pg, this method typically failed to amplify in our hands. This likely represents a theoretical limit of the total RNA content of 25 cells, unless specialized tissues that bear more RNA, such as oocytes, are examined. Balanced PCR is a promising technique for the amplification of low-input quantities of RNA. It maintains a high degree of accuracy with an input as low as 667 pg of RNA (FER 10.8-13.5%). While theoretical concern exists regarding the accuracy of logarithmic amplification methods, this method overcomes the potential problem by stopping the PCR reaction before the logarithmic phase of the PCR curve. This method had the lowest cost per reaction and also required the least amount of technician time compared to the other methods. In addition, it has recently been demonstrated that the same balanced-PCR protocol used for cDNA amplification may also be used for the unbiased amplification of whole genomic DNA followed by array-CGH analysis. These results have certain important limitations to consider. First, hybridizations were carried out sequentially over a period of several months rather than all at the same time. It is recognized that arrays that are hybridized together under identical conditions are more similar to each other than arrays hybridized on separate occasions. Additionally, no dye-swap experiments were performed. The rationale for this is that, since a standardized universal reference (StratRef) was employed, it has been demonstrated that this mitigates the effect of potential experimental bias introduced by separate hybridization reactions and even permits the comparison of array data between members of the research community. Each laboratory will have to weigh its decision on which amplification technique is most suitable based on factors including amount of starting input total RNA, cost per reaction, technician time, and experience/comfort level with the techniques.
Laboratories that routinely work with samples in excess of 1 ng starting material should focus on cost savings, as each of the methods tested proved to be reliable above this threshold. Balanced PCR could be further optimized to include amino-allyl-dUTP incorporation in the PCR reaction. This would facilitate indirect Cy dye labeling, which would reduce the labeling cost for this method. It is important to ascertain the linearity of a chosen method at the low input range before going on to work with precious clinical specimens. Each of the 3 tested methods performed surprisingly accurately when amplifying from low inputs of total RNA based on microarray analysis validated with QPCR of 37 genes. We have demonstrated that it is feasible to reliably and accurately perform expression profiling from sub-nanogram quantities of total RNA. These methods will likely enable exciting new directions for molecular analysis of samples previously considered to be of insufficient quantity of total RNA for expression profiling. The data processing and filtration of microarray results is of fundamental importance when attempting analysis of amplification reactions from sub-nanogram input amounts of total RNA. (QPCR): quantitative real-time PCR; (SMART PCR): switching mechanism at the 5' end of RNA template PCR; (StratRef): Stratagene Universal Human Reference RNA. JEL developed the study design, carried out the preparation of RNA, the total RNA array hybridizations, the Arcturus and modified T7 RNA amplifications and hybridizations, quantification and assessment of transcript integrity, coordinated efforts with the laboratory of GMM, participated in data analysis, drafted the manuscript and performed critical revisions. MJM participated in data analysis and critical revision of the manuscript. JS performed the QPCR and provided the BT474 cell line. GMM and GW performed the balanced PCR reactions. LJE participated in study design and conception. JWP participated in study design and conception.
CMH and his laboratory provided the modified T7 protocol, conducted gene expression array quality control and assisted with data analysis. CMH participated in study design. Table S2: QPCR genes. Figure S1: Dynamic range of QPCR probes for BT474 versus StratRef. The delta-delta CT of our QPCR probes covered a dynamic range of negative 21 to positive 9, and the probes were selected without bias towards any of the amplification techniques. Table S1-8: SAM analysis."} +{"text": "Microarray technology provides a powerful tool for defining gene expression profiles of airway epithelium that lend insight into the pathogenesis of human airway disorders. The focus of this study was to establish rigorous quality control parameters to ensure that microarray assessment of the airway epithelium is not confounded by experimental artifact. Samples of trachea, large and small airway epithelium were collected by fiberoptic bronchoscopy of 144 individuals and hybridized to Affymetrix microarrays. The pre- and post-chip quality control (QC) criteria established included: (1) RNA quality, assessed by RNA Integrity Number (RIN) ≥ 7.0; (2) cRNA transcript integrity, assessed by signal intensity ratio of GAPDH 3' to 5' probe sets ≤ 3.0; and (3) the multi-chip normalization scaling factor ≤ 10.0. Of the 223 samples, all three criteria were assessed in 191; of these, 184 (96.3%) passed all three criteria. For the remaining 32 samples, the RIN was not available, and only the other two criteria were used; of these, 29 (90.6%) passed these two criteria. Correlation coefficients for pairwise comparisons of expression levels for 100 maintenance genes in which at least one array failed the QC criteria (average Pearson r = 0.90 ± 0.04) were significantly lower (p < 0.0001) than correlation coefficients for pairwise comparisons between arrays that passed the QC criteria (average Pearson r = 0.97 ± 0.01).
Inter-array variability was significantly decreased (p < 0.0001) among samples passing the QC criteria compared with samples failing the QC criteria. Based on the aberrant maintenance gene data generated from samples failing the established QC criteria, we propose that the QC criteria outlined in this study can accurately distinguish high-quality from low-quality data, and can be used to delete poor-quality microarray samples before proceeding to higher-order biological analyses and interpretation. The assessment of gene expression of the human transcriptome using microarray technology is a powerful tool for identifying genes and gene expression patterns involved in mechanisms of normal organ function and the pathogenesis of disease [3]. While it is easy to obtain the cells, the output from microarray data critically depends on the quality of the RNA and the cRNA derivatives hybridized to the microarray [27]. Some of the results of these studies have been previously reported in the form of an abstract. Cells were recovered from trachea, large airway and small airway in all five pulmonary phenotypic groups, and cell counts were not dependent upon phenotype of the subject or site of bronchial brushing (p > 0.05 by ANOVA). From all locations, an average of 99 to 100% of all cells recovered were epithelial, with less than 1% contamination by non-epithelial cells. The cell differentials varied depending on location, as previously described. A total of 223 samples of airway epithelium were obtained by bronchial brushing from three different locations from 144 subjects with 5 different pulmonary phenotypes. For the GAPDH 3'/5' signal intensity ratio and scaling factor criteria, all 223 samples were included. Of the 223 samples, all three criteria were assessed in 191; of these, 184 (96.3%) passed all three criteria. For the remaining 32 samples, the RIN was not available, and only the other two criteria were used; of these, 29 (90.6%) passed these two criteria.
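The tripartite cutoff, including the fallback to two criteria when no RIN is available, can be expressed as a small filter. This is a hedged sketch of the decision logic as described, not the authors' code; the dictionary field names are placeholders.

```python
# Sketch of the tripartite QC cutoff: RIN >= 7.0 (when available),
# GAPDH 3'/5' signal intensity ratio <= 3.0, and multi-chip
# normalization scaling factor <= 10.0. Sample values are hypothetical.

def passes_qc(sample: dict) -> bool:
    """True if a sample meets every assessable criterion; the RIN
    criterion is skipped when no RIN was recorded, mirroring the
    handling of the 32 samples without a RIN."""
    if sample.get("rin") is not None and sample["rin"] < 7.0:
        return False
    return (sample["gapdh_3p_5p_ratio"] <= 3.0
            and sample["scaling_factor"] <= 10.0)

samples = [
    {"id": "A", "rin": 8.2, "gapdh_3p_5p_ratio": 1.4, "scaling_factor": 4.0},
    {"id": "B", "rin": None, "gapdh_3p_5p_ratio": 2.9, "scaling_factor": 9.5},
    {"id": "C", "rin": 6.1, "gapdh_3p_5p_ratio": 1.2, "scaling_factor": 3.0},
    {"id": "D", "rin": 8.0, "gapdh_3p_5p_ratio": 1.5, "scaling_factor": 12.0},
]
print([s["id"] for s in samples if passes_qc(s)])  # ['A', 'B']
```

Sample C fails on RIN alone and sample D on scaling factor alone, which matches the paper's observation that RIN and scaling factor were the criteria most often responsible for failures.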
Only 10 (4.5%) failed at least one QC criterion and were therefore considered to have failed QC. The overall breakdown of samples failing QC was: 2 large airway samples and 8 small airway samples. The greatest source of failure was the scaling factor criterion, which contributed to 70% of the overall failures. All of the 10 samples failing the QC criteria failed the RIN and/or scaling factor criterion, indicating that these metrics may be the most sensitive to technical variance, and therefore are central to assessing overall array quality. While 7 samples failed by one criterion each, 1 sample failed by both the RIN and GAPDH 3'/5' ratio criteria, and 2 samples failed by both the RIN and scaling factor criteria, suggesting that the quality control parameters exert correlated effects on array performance. The RNA quality was examined by the Bioanalyzer-generated RIN score in the 191 samples for which data were available (see above). Based on published data [31-33], a RIN cutoff of ≥ 7.0 was selected; given the tight correlation among the candidate transcript-integrity metrics (r² = 0.92; p < 0.0001), application of additional cutoff criteria beyond GAPDH was considered redundant. In the context of airway epithelium, although GAPDH is not an ideal "housekeeping" gene as its expression may vary under different conditions, this does not interfere with its use in assessing cRNA quality. As a metric for the efficiency of transcription and amplification of antisense cRNA from the cDNA derivative of the starting RNA material, the signal intensities for the probe sets for GAPDH residing at the 5' end and within the 600 nucleotides most proximal to the priming site at the 3' end of the transcript were compared. For all samples hybridized to microarrays, 3' to 5' probe set intensities for the GAPDH gene were extracted to compute the 3'/5' signal intensity ratio. Based on published data [23,34-36], a GAPDH 3'/5' ratio ≤ 3.0 was used as the cutoff for cRNA quality. The scaling factor was used as an overall index of the microarray hybridization, washing, and scanning process.
Scaling factor values for all 223 samples, computed at a target intensity value of 500, were examined, and the criterion of scaling factor values ≤ 10.0 was established. Correlation coefficient values indicated that samples passing QC criteria were highly correlated with other samples passing QC criteria (average Pearson r = 0.97), while samples failing QC criteria showed lower correlations with all other samples. Of the 24 samples passing QC criteria that were used for the correlation matrix analysis, 10 samples matched in airway location to the 10 samples failing QC criteria were selected to assess the coefficient of variation of each of the 100 maintenance genes. Expression levels for the 100 maintenance genes showed significantly greater variability among the 10 samples failing QC criteria (the "fail" data set) than among the 10 samples passing QC criteria; across the "fail" data set, the median coefficient of variation for the 100 genes was 35.7%. To examine potential causes of the variation in maintenance gene expression levels unrelated to the QC criteria, differences among the subjects were assessed. The 223 airway epithelial samples acquired for this study were derived from 144 individuals, as it was possible for a single individual to undergo bronchial brushing at one or more of the three target sites: trachea, large airway, and small airway. By independent linear regression, there was no correlation of gene expression level for the 100 maintenance genes (r² < 0.05 for all genes) with age (average 45 ± 8.8) across the 144 individuals from whom airway epithelium was derived. None of the genes showed strong correlation (r² < 0.15) with smoking history (average pack-yr 30 ± 18). Correlation analysis of expression levels with pulmonary function parameters showed no relationship. Expression of the 100 maintenance genes was also used to compare samples that passed QC to those that failed.
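The variability comparison rests on the coefficient of variation of each maintenance gene across arrays. A minimal sketch using Python's standard library; the expression values below are invented to mimic a tight "pass" group and a scattered "fail" group.

```python
# Sketch: coefficient of variation (stdev / mean, as a percentage) of a
# gene's expression level across a group of arrays. Values are
# illustrative, not the study's data.
from statistics import mean, stdev

def coefficient_of_variation(values: list) -> float:
    """Sample CV expressed as a percentage of the mean."""
    return 100.0 * stdev(values) / mean(values)

passing = [100, 104, 98, 102, 96, 101, 99, 103, 97, 100]   # tight spread
failing = [100, 160, 55, 130, 70, 145, 60, 150, 80, 50]    # wide spread
print(round(coefficient_of_variation(passing), 1))
print(round(coefficient_of_variation(failing), 1))
```

Computing this per gene and taking the median across the 100 maintenance genes yields the kind of summary the paper reports (a median CV of 35.7% in the "fail" set).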
For this analysis, an independent set of microarray data that failed QC was available from a technician training program in the Weill Cornell Medical College Department of Genetic Medicine. From this training program, 11 microarrays that failed QC were available from small airway epithelium samples collected from individuals with COPD. The data from these 11 samples were compared to microarray data from n = 11 samples from the small airway epithelium of individuals with COPD that passed all QC criteria. Samples of trachea, large airway and small airway were obtained from healthy subjects and from subjects with lung disease, including smokers and non-smokers, to assess quality control criteria for microarray analysis. Using Affymetrix Human Genome U133 Plus 2.0 arrays, a tripartite QC cutoff was established consisting of: (1) RNA quality, assessed by RNA Integrity Number (RIN) ≥ 7.0 using Agilent 2100 Bioanalyzer software; (2) cRNA transcript integrity, assessed by signal intensity ratio ≤ 3.0 of GAPDH 3' to 5' probe sets; and (3) the multi-chip normalization scaling factor ≤ 10.0. Of the 223 samples, 10 failed one or more of the QC criteria in a way that did not depend on phenotype of the subject or location of sampling. By using the QC cutoff criteria, the inter-array variability, as assessed by the coefficient of variation in the expression levels for 100 maintenance genes, decreased significantly.
These QC criteria should be applicable to minimize experimental variation in gene expression microarray experiments. In a comparison of snap frozen vs formalin-fixed paraffin-embedded (FFPE) pelleted human bone marrow stromal cells, despite all RNA samples having equivalent and comparable 28s/18s ratios as visualized by computerized gel electrophoresis, more than twice as many genes were identified as expressed in snap frozen cells than in formalin-fixed paraffin-embedded cells, reflecting possible RNA quality effects in play that were not captured by quantitative assessment of the rRNA subunit peak heights. We have previously utilized the 28s/18s rRNA peak ratio, as calculated by electropherogram, to verify the quality of RNA samples prior to microarray hybridization. Since the implementation of the Agilent Bioanalyzer RIN software, we have relied on the RIN as the primary indicator of RNA integrity, based on published data showing that the RIN accounts for numerous properties of the RNA degradation process to provide an unambiguous and comprehensive index of the overall quality of the starting material. Identifying poor-quality RNA before committing samples to in vitro transcription reactions and hybridization has the potential to save substantial costs in wasted reagents and technical time. Illustrating the predictive power of the RIN as a pre-chip criterion, ordinary least squares linear regression has shown that the scaling factor and the GAPDH 3'/5' signal intensity ratio are negatively correlated with the RIN value. Published recommendations for an acceptable range of scaling factors computed at the same target intensity value vary in numerical fold cutoffs, or alternately suggest accepting all values within 2 standard deviations from the mean in either direction. For in vivo tissue from which sample RNA is limiting and alternate technical procedures are utilized, the scaling factor metric can be useful to assess the impact of technical artifact and the quality of the expression data.
In gene expression profiling studies of samples obtained from biopsies, cell sorting, or laser capture microdissection, yields of cellular RNA are often small and require specialized amplification methods to generate sufficient biotinylated cRNA for array hybridization. For example, in an analysis of small sample RNAs from rat liver, significantly increased scaling factor values indicated that the amplification technique used contributed to technical variability in the form of a substantial decrease in the percent of transcripts detected on the array. Despite large amounts of published lung gene expression data, there is often little attention focused on microarray quality control, with the consequent risk of skewing the data by including poor quality arrays in the analysis. Technical factors such as the in vitro transcription and amplification reagents used and the date of array hybridization may contribute to batch effects, where the overall intensity of a batch of microarrays more closely resembles that batch than the rest of the group of arrays. One methodology for testing data integrity is unsupervised hierarchical sample clustering based on a Spearman correlation-based distance metric. Another strategy often used for differentiating high quality from low quality microarray data is based on the outlier status of any given sample in an experiment. Software packages such as dChip and Probe Profiler can identify intensity outliers of a sample in a group of microarrays, taking into account features of the array hybridization such as brightness, saturation, dynamic range, and background. The current study provides an efficient and simple approach for quality assessment of gene expression microarray data. It emphasizes good experimental execution and discarding unsatisfactory microarrays rather than salvaging data through complex statistical analyses of array data of variable quality.
We provide standardized tripartite criteria specifically addressing starting RNA quality, integrity of the cRNA transcript, and hybridization efficiency. Each parameter has been assigned a threshold value, outside of which samples are readily identifiable as low quality and can be eliminated or re-hybridized before proceeding to analysis. All measures are available through Agilent Bioanalyzer software and the Affymetrix GCOS report automatically generated after array washing and scanning. Although the Agilent Bioanalyzer and Affymetrix platforms are widely used, analogous criteria may be applied for alternate methodologies. For example, assessment of the relative signal for probes representing the 3' and 5' ends of any mRNA could be included as QC for any microarray platform. In the context of data sharing via public repositories, the criteria presented in this study have the benefit of including two parameters that are guaranteed to be available for any Affymetrix data deposited in GEO. The initial processing by GCOS of CEL files produces a Quality Report containing the 3'/5' GAPDH signal intensity ratio and a multi-chip normalization scaling factor for the array. The GCOS software is available for free download from Affymetrix and can be applied by all investigators. In this way, two of the three QC criteria discussed in this paper provide a consistent quality control approach not only to current data, but also to previously published, archived data.
Even though the RIN criterion as applied here requires specialized equipment and software, the RIN can be indirectly predicted from the 3'/5' ratio, which is extracted from the CEL files deposited in GEO. In the context that minimizing undesirable technical variation allows for more accurate analysis of gene expression and increased power for significance testing, we propose that the simple method described here, consisting of a universally available set of three criteria, can ensure that microarray data reflect biological differences as opposed to experimental variability. Fiberoptic bronchoscopy was performed to obtain pure populations of epithelium from the trachea, large airway (2nd-3rd order bronchi) and small airway (10th-12th order bronchi) in five phenotypic groups, using methods previously described. Cells were detached by flicking the brush five to ten times. An aliquot of 0.5 ml was used for differential cell count and the remainder (4.5 ml) was centrifuged at 6,000 rpm for 10 minutes, within 60 minutes of the bronchial brushing. Pelleted airway epithelial cells were lysed with the TRIzol reagent, and after chloroform extraction the RNA was purified directly from the aqueous phase using the RNeasy MinElute RNA isolation kit. For each sample, 1 \u03bcl of RNA was used for quantification of yield by NanoDrop ND-1000 spectrophotometer and quality assessment by Agilent 2100 Bioanalyzer software. The samples were stored in RNA Secure at -80\u00b0C until the time of biotin-labeled cRNA preparation. The GeneChip Sample Cleanup Module was used for cleanup of the biotin-labeled cRNA after the in vitro transcription reaction, and the final yield of biotin-labeled cRNA was confirmed by NanoDrop spectrophotometric analysis.
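The reported negative association between the 3'/5' ratio and the RIN is an ordinary least squares fit; a minimal sketch, with invented data points rather than the study's measurements:

```python
def ols_fit(x, y):
    # Ordinary least squares for y = a + b*x. Used here only to illustrate
    # the reported negative slope between the 3'/5' ratio and the RIN.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical (3'/5' ratio, RIN) pairs: higher ratios, lower RINs.
intercept, slope = ols_fit([1.0, 2.0, 3.0], [9.5, 8.0, 6.5])
# slope < 0
```

With such a fit in hand, a RIN estimate for archived GEO data reduces to evaluating `intercept + slope * ratio` on the 3'/5' ratio taken from the Quality Report.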
For each sample, 10 \u03bcg of biotin-labeled cRNA was fragmented and hybridized to the Human Genome U133 Plus 2.0 array according to Affymetrix protocols, processed by the Affymetrix GeneChip Fluidics Station 450 and scanned with an Affymetrix GeneChip Scanner 3000 7G, as previously described. Data were normalized to a fixed percentile of all measurements on that array. All microarray data have been deposited at the Gene Expression Omnibus (GEO) site. Double stranded cDNA was synthesized from 1.0 to 2.0 \u03bcg of total RNA using the GeneChip One-Cycle cDNA Synthesis Kit, followed by cleanup of the double stranded product with the GeneChip Sample Cleanup Module. The GeneChip IVT Labeling Kit was used for the 16 hr in vitro transcription reaction. The selection of the three QC criteria was targeted towards addressing quality control in the three integral stages of the microarray process: (1) extraction of the starting RNA material; (2) synthesis of cDNA and antisense biotin-labeled cRNA target; and (3) the array hybridization efficiency. An RNA Integrity Number (RIN) for each RNA sample in this study was generated by an Agilent Bioanalyzer algorithm that uses a Bayesian approach to train and select a prediction model incorporating features extracted from an electropherogram, including pre-region, 5S-region, fast-region, 18S-fragment, inter-region, 28S-fraction, precursor-region, and post-region. Per Affymetrix guidelines, the ratio of the 3' to 5' signal intensity values can be used as a method of quality control for the array data, because degradation of the starting RNA prior to in vitro transcription of cRNA can result in under-representation of the 5' moiety of the transcript. According to Affymetrix microarray guidelines, comparable scaling factors between arrays in a given experiment are critical to minimizing differences in overall signal intensities, thereby allowing for more reliable detection of biologically relevant changes.
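A sketch of how a multi-chip scaling factor of this kind is derived: the array's signals are scaled so that a trimmed mean reaches the target intensity (500 in this study). The 2% trim fraction below is an assumption for illustration; the exact trimming used by the Affymetrix software may differ.

```python
def scaling_factor(intensities, target=500.0, trim=0.02):
    # Global-scaling sketch: the factor that brings a trimmed mean of the
    # array's signal intensities to the target value. The trim fraction is
    # an assumed parameter, not taken from the study.
    vals = sorted(intensities)
    k = int(len(vals) * trim)
    trimmed = vals[k:len(vals) - k] if k else vals
    return target / (sum(trimmed) / len(trimmed))
```

A bright array averaging 250 units scales by 2.0, while a dim array averaging 40 units would need a factor of 12.5 and would fail the \u2264 10.0 cutoff, which is the behavior the criterion screens against.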
Samples that failed any one of the three criteria described were considered to have failed the QC criteria, while samples that passed all three criteria were considered to have passed the QC criteria. To confirm the validity of this quality assessment strategy, expression levels were determined for a set of 100 constitutively expressed maintenance genes, and differences in gene expression profile for these genes were compared between the samples failing and the samples passing the QC criteria. The set of control genes was selected by Affymetrix based on a priori knowledge that they exhibit relatively low signal variation over different sample types and are consistently called Present in a large number of different tissues and cell lines. In the present study, for notational convenience, we use the term \"gene\" in place of \"probe set\", as each one of the 100 probe sets represents a different gene. To examine potential causes for variation in QC criterion values between samples, the effects of differences in phenotype or biologic origin of the sample were assessed by ANOVA and the Mann-Whitney U test. In regard to the maintenance genes, we used Pearson's correlation to assess correlation in expression levels for the 100 maintenance genes among the 10 samples that failed the QC criteria and 24 randomly selected samples that passed QC criteria. Principal components analysis was performed with Partek\u00ae Genomics Suite software (version 6.8, Copyright\u00a9 2008) for 11 COPD subjects who failed chip quality control and 11 COPD subjects who passed. Affymetrix HG-U133 Plus 2.0 CEL files were imported into Partek using the Robust Multi-chip Average (RMA) method. All 54,675 log2-transformed small airway gene expression values were mapped to principal components to preserve the variation of the data, projected in 3 dimensions, and plotted.
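The principal components projection used to visualize pass vs. fail samples can be sketched in pure Python via power iteration; Partek performs a full PCA and plots three components, and the data below are invented:

```python
def first_principal_component(rows, iters=200):
    # rows: samples x genes (lists of log2 expression values). Returns each
    # sample's score on the first principal component, found by power
    # iteration on the centered data. A sketch of the projection step only.
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    x = [[r[j] - means[j] for j in range(p)] for r in rows]
    v = [1.0] * p
    for _ in range(iters):
        s = [sum(x[i][j] * v[j] for j in range(p)) for i in range(n)]  # X v
        w = [sum(x[i][j] * s[i] for i in range(n)) for j in range(p)]  # X^T X v
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return [sum(x[i][j] * v[j] for j in range(p)) for i in range(n)]

# Two invented 'groups' of samples that separate cleanly on the first component.
rows = [[1.0, 1.0, 1.0], [1.1, 0.9, 1.0], [5.0, 5.0, 5.0], [5.1, 4.9, 5.0]]
scores = first_principal_component(rows)
```

Samples from the two groups land on opposite sides of zero on the first component, which is the kind of separation between pass-QC and fail-QC arrays the plotted projection is meant to reveal.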
To compare gene expression profiles in samples that passed QC to those that failed QC, principal components analysis was carried out using Partek. In order to identify the specific probe sets that were differentially expressed between the two groups, microarray data were processed using the MAS5 algorithm (Affymetrix Microarray Suite Version 5 software), which takes into account the perfect match and mismatch probes. MAS5-processed data were normalized using GeneSpring by setting measurements <0.01 to 0.01 and by normalizing per chip to the median expression value on that array and per gene to the median expression value for each gene across all arrays. Genes that were significantly differentially expressed between the two groups were selected according to the following criteria: (1) a detection call of \"Present\" in 20% of samples; (2) magnitude of fold change in average expression value for pass QC vs fail QC of >1.5; and (3) p < 0.01 using a t test with a Benjamini-Hochberg correction to limit the false positive rate. TR conducted the microarray experiments, performed the analysis and helped draft the manuscript. TO and NH participated in study design and statistical analysis, supervised the data collection and helped draft the manuscript. WW assisted with data analysis and study design. BGH coordinated the collection of all of the clinical samples used in the microarray analysis. MA, DD, and MT processed samples and conducted microarray experiments. RGC conceived of the study, assisted in study design and helped to draft the manuscript. All authors have read and approved the final version of the manuscript. Demographics of the Study Population and Biologic Samples for the Comparison of Fail QC or Pass QC Data. All subjects from both groups were smokers with COPD. The \"passed\" and \"failed\" groups are comprised of samples that passed or failed, respectively, the QC criteria.
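The three selection criteria for differential expression can be combined into a simple filter; the field names and probe values below are invented for illustration, and the p-values are assumed to be already Benjamini-Hochberg adjusted:

```python
def fold_magnitude(fc):
    # Express down-regulation on the same scale as up-regulation (0.4 -> 2.5).
    return fc if fc >= 1.0 else 1.0 / fc

def select_differential(probes, min_present_frac=0.2, min_fold=1.5, alpha=0.01):
    # probes: dicts with 'id', 'present_frac' (fraction of samples called
    # Present), 'fold_change' (pass-QC vs fail-QC average), and 'adj_p'
    # (Benjamini-Hochberg adjusted p). Field names are invented.
    return [p['id'] for p in probes
            if p['present_frac'] >= min_present_frac
            and fold_magnitude(p['fold_change']) > min_fold
            and p['adj_p'] < alpha]

probes = [
    {'id': 'a', 'present_frac': 0.5, 'fold_change': 2.0, 'adj_p': 0.001},
    {'id': 'b', 'present_frac': 0.1, 'fold_change': 3.0, 'adj_p': 0.001},  # too few Present calls
    {'id': 'c', 'present_frac': 0.9, 'fold_change': 0.4, 'adj_p': 0.005},  # 2.5-fold down
    {'id': 'd', 'present_frac': 0.9, 'fold_change': 1.2, 'adj_p': 0.001},  # fold change too small
]
selected = select_differential(probes)  # ['a', 'c']
```

A probe set must clear all three thresholds at once, so failing any single criterion excludes it regardless of how strongly it meets the others.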
Data are presented as mean \u00b1 standard deviation. Significant Genes in the Small Airways Epithelium of Smokers with COPD Between Chips that Failed QC and Chips that Passed QC. Shown are the 888 probe sets that are differentially expressed (using criteria of a fold change greater than 1.5 and a p value, with Benjamini-Hochberg correction, less than 0.01) in n = 11 pass QC samples and n = 11 fail QC samples, all from the small airway epithelium of individuals with COPD."}
+{"text": "RNA isolation and purification steps greatly influence the results of gene expression profiling. There are two commercially available products for whole blood RNA collection, PAXgene\u2122 and Tempus\u2122 blood collection tubes, and each comes with its own RNA purification method. In both systems the blood is immediately lysed when collected into the tube and the RNA stabilized using proprietary reagents. Both systems enable minimal blood handling procedures, thus minimizing the risk of inducing changes in gene expression through blood handling or processing. Because the RNA purification steps could influence the total RNA pool, we examined the impact of RNA isolation, using the PAXgene\u2122 or Tempus\u2122 method, on gene expression profiles. Using microarrays as a readout of RNA from stimulated whole blood, we found a common set of expressed transcripts in RNA samples from either PAXgene\u2122 or Tempus\u2122. However, we also found several to be uniquely expressed depending on the type of collection tube, suggesting that RNA purification methods impact results of differential gene expression profiling. Specifically, transcripts for several known PHA-inducible genes, including IFN\u03b3, IL13, IL2, IL3, and IL4, were found to be upregulated in stimulated vs. control samples when RNA was isolated using the ABI Tempus\u2122 method, but not using the PAXgene\u2122 method.
Sequenom Quantitative Gene Expression (QGE) measures confirmed IL2, IL4 and IFN\u03b3 up-regulation in Tempus\u2122-purified RNA from PHA-stimulated cells, while only IL2 was up-regulated using PAXgene\u2122-purified RNA (p < 0.05). Here, we demonstrate that peripheral blood RNA isolation methods can critically impact differential expression results, particularly in the clinical setting where fold-change differences are typically small and there is inherent variability within biological cohorts. A modified method based upon the Tempus\u2122 system was found to provide high yield, good post-hybridization array quality, and low variability in expression measures, and was shown to produce differential expression results consistent with the predicted immunologic effects of PHA stimulation. Microarrays have rapidly become the assay of choice for clinical investigators wanting to measure gene expression, owing to their high throughput and relative ease of use. As with any assay, it is critical that experimental variance is minimized in order to permit measurement of true biological variance. In clinical microarray studies, the sources of experimental variance can be considerable. While a range of corrections exists to detect and correct for variability introduced during hybridization and due to chip quality, little attention has been paid to the impact of specimen collection, handling and processing on the resulting gene expression measures. For immunological studies, peripheral blood is commonly used in microarray experiments, as it is the most easily obtained source of lymphocytes, granulocytes, and other cells that may provide insight into immune function. Historically, gradient density-based methods have been used to purify white blood cells from peripheral blood.
However, it is known that, within minutes of collection, peripheral blood gene expression profiles change significantly due to transcript induction and transcript degradation. To address these concerns, RNA whole blood collection tubes have been developed that have the considerable advantage of lysing whole blood at the time of collection, while simultaneously stabilizing RNA for later purification. The PAXgene\u2122 Whole Blood RNA isolation system contains a proprietary solution that reduces RNA degradation and transcript induction upon peripheral blood collection. In this report, we compare these two whole blood RNA purification methods for use in microarray experiments to measure immune response gene expression, and show that the choice of RNA purification method can have significant implications. Using phytohemagglutinin (PHA) stimulated whole blood as a test case, we found that RNA yield and hybridization quality indicators were better for RNA isolated using the Tempus\u2122 RNA purification method. While PHA induced a set of transcript expression changes detected in both Tempus\u2122 and PAXgene\u2122 samples, use of the Tempus\u2122 system resulted in the identification of a greater number of gene expression changes that would be expected to result from PHA stimulation. PHA is a mitogen known to induce expression of immune activation transcripts including those for IL-2, IL-4, and IFN\u03b3. Only two transcripts, for the B-cell translocation transcript BTG-1, were identified as down-regulated in samples collected with Li Heparin tubes, compared to those collected in Tempus\u2122 tubes. Interestingly, just one transcript was up-regulated using PAXgene\u2122 tubes, corresponding to one of the transcripts up-regulated in the Tempus\u2122 system, BTG1.
These results suggest that collection of samples into Li Heparin tubes prior to PHA stimulation had a minimal effect on differential expression and would have a negligible effect on the interpretation of differences between the two tube types. To control for the effect of Li Heparin tube collection, we performed an initial comparison of 5 healthy control samples drawn directly in PAXgene\u2122 or Tempus\u2122 tubes vs. the same 5 healthy control samples drawn in Li Heparin tubes with no PHA stimulation. RNAs were hybridized to the HG-U133 2.0 Plus Affymetrix GeneChip\u00ae microarray. To compare PAXgene\u2122 and Tempus\u2122 tubes, whole blood samples collected in Li Heparin tubes from 7 healthy controls were split into PHA-stimulated or unstimulated aliquots. After 3 hrs, samples were transferred to each respective RNA isolation system and hybridized to the HG-U133 2.0 Plus Affymetrix GeneChip\u00ae microarray. Hierarchical clustering of absolute expression levels, irrespective of gene function, showed that the primary separation occurred between stimulated vs. unstimulated samples (as opposed to different sample collection systems). The clustering was generated on the 28 individual samples using the 1,266 differentially expressed transcripts that were either up- or down-regulated by PHA stimulation. PHA stimulation resulted in 538 transcripts being detected as upregulated and 392 downregulated when the Tempus\u2122 system was used. For the PAXgene\u2122 system, 400 were found to be upregulated and 539 downregulated (p < 0.01, FDR) by PHA stimulation. To validate differential expression measured by microarray, real time PCR was performed on four of the seven samples that had sufficient RNA remaining. Using the Sequenom Quantitative Gene Expression (QGE) platform, eighteen immune function transcripts were assessed, including IL-2, IL-4, IFN\u03b3, controls and others not known to be regulated by PHA.
Other transcripts assessed included: TGF\u03b2, P19, Perforin, MIG, IP10, IL10, GB, FOXP3, CXCR3, CTLA4, CTGF, CD3, CD25, CD20, and CD103. Transcripts for IFN\u03b3, IL-2, and IL-4 showed statistically significant differential expression between unstimulated and stimulated samples using the Tempus\u2122 system (p < 0.05). Samples prepared using the PAXgene\u2122 system exhibited greater variability in transcript levels in both unstimulated and stimulated conditions, thus trending towards upregulation for IL-2, IL-4, and IFN\u03b3, but not reaching statistical significance. Our objective was to assess the effect of two available blood collection systems with regard to differential gene expression in clinical samples. The primary comparison was of unstimulated peripheral blood samples with those stimulated with PHA. Both PAXgene\u2122 and Tempus\u2122 whole blood collection tubes were used in this step. Since cells are immediately lysed upon collection with either PAXgene\u2122 or Tempus\u2122, blood had to be initially drawn in Li Heparin tubes prior to transfer into either system. A secondary comparison was made between samples drawn directly into both collection systems versus samples drawn in Li Heparin and then immediately transferred into their respective tube types, to determine if Li Heparin adversely affected gene expression on its own. For these comparisons, whole blood was collected from seven healthy individuals who provided informed consent, under the approval of the Institutional Review Board of Brigham and Women's Hospital. A total of 110 mL of peripheral blood was collected from each participant using Li Heparin tubes. Whole blood from individual participants was pooled into a 200 mL plastic container and a set of seven aliquots from the pool was incubated for 3 hrs at room temperature with no stimulant added; a second set of seven aliquots was stimulated with 25 \u03bcg/mL of PHA and incubated for 3 hrs.
After incubation, samples were subsequently transferred to either PAXgene\u2122 or Tempus\u2122 tubes. To assess whether stimulation by PHA was successful, FACS detection of IFN\u03b3 was performed. Samples from five of the seven subjects were drawn directly into PAXgene\u2122 and Tempus\u2122 tubes for assessing the effect of Li Heparin alone, as described above. In total, 48 samples were collected and hybridized as part of the analysis. RNA was extracted at the ITN Central Nucleic Acid Isolation Core Facility in Pittsburgh, PA according to the ITN-modified method for the Tempus\u2122 system. The PAXgene\u2122 Whole Blood RNA samples were processed using the PAXgene\u2122 Blood RNA Kit, based on the Qiagen method for column purification of nucleic acids. Whole blood samples collected into Tempus\u2122 vacuettes were extracted using the ABI Prism\u2122 6100 Nucleic Acid PrepStation\u2122 and Tempus\u2122 extraction reagents. Samples were frozen immediately at -70\u00b0C upon collection. Extraction steps included addition of PBS buffer to compensate for short blood sample draws, a wash step with Purification Wash Solution 1 two times at 80% vacuum for 500 seconds, followed by a single washing step using Purification Wash Solution 2 at 80% vacuum for 120 seconds. An extra vacuum step was performed to eliminate Purification Wash Solution 2 from the filter. Upon changing the reservoirs on the ABI 6100 PrepStation\u2122, three more washing steps were performed using Purification Wash Solution 2 at 80% vacuum for 120 seconds. At the elution step, adapter plates were changed one extra time to ensure the purity of the eluted RNA. Eluted RNA was then concentrated using the Microcon YM-100. RNA purity and yield were assessed prior to hybridization using the Agilent 2100 Bioanalyzer. The ITN Central Microarray Facility performed all hybridizations with the Affymetrix HG-U133 2.0 Plus microarray. All processing was done according to the manufacturer's instructions.
Globin reduction was performed, followed by cRNA target amplification using the Affymetrix In-vitro Transcription (IVT) Kit. Standard pre- and post-hybridization quality control metrics were used to assess sample processing and hybridization quality. Microarrays have been deposited within GEO (Accession number GSE12711). Multiplexed primer and competitive template designs were created using the MassARRAY QGE Assay Design software v1.0 for random hexamer priming, such that at least one PCR primer spanned an exonic boundary per each transcript assayed. The 20 gene panel assayed for this study was designed as a single 20-plex reaction. Copy number determination for each transcript was conducted using real-time competitive PCR coupled with product resolution via Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry, as previously described. Normalization of copy numbers between samples for the different assays was conducted using a multiplexed set of ten well-characterized human housekeeping transcripts plus an 18s RNA assay and geNorm software. Normalization factors per sample were calculated using the geometric mean of the most stable combination of these normalization assays, determined by the measure of their pairwise variation as calculated by geNorm. Microarray normalization and preprocessing was performed as follows: log-transformed Affymetrix microarray intensity values were processed using the 'threestep' function in the R/Bioconductor affyPLM package, which performs background correction, normalization, and probe set summarization. The normalized data from the experiments were fit to linear models using the Bioconductor package limma. Pair-wise comparisons of interest were performed using moderated t-statistics to test for significant differential expression.
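The geNorm-style normalization factor described above is the geometric mean of the stable reference assays; a minimal sketch, omitting the reference-ranking step (pairwise variation) that geNorm performs first, with invented copy numbers:

```python
from math import prod

def normalization_factor(reference_copies):
    # geNorm-style per-sample factor: the geometric mean of the copy
    # numbers of the chosen stable reference assays.
    return prod(reference_copies) ** (1.0 / len(reference_copies))

def normalize(copy_number, reference_copies):
    # Divide a transcript's copy number by the sample's normalization factor.
    return copy_number / normalization_factor(reference_copies)
```

The geometric mean damps the influence of any single outlying reference assay relative to an arithmetic mean, which is why geNorm uses it for multi-reference normalization.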
The Benjamini-Hochberg multiple comparison adjustment was used to control the false discovery rate. Sequenom QGE analysis was performed using the Mann-Whitney test to identify differentially expressed transcripts between samples with and without PHA stimulation using the Tempus\u2122 and PAXgene\u2122 platforms respectively. ALA contributed to the study design, data analysis and drafting of the manuscript. SAK contributed to the study design, management of samples, and drafting the manuscript. ZG and RW performed the gene expression analysis. KR processed the samples for PHA stimulation and performed flow cytometry analysis. KB and VS contributed to the study design, interpreting the results in the context of immune profiling, and drafted and reviewed the manuscript. Transcripts up-regulated by PHA using the Tempus ABI and PAXgene\u2122 collection systems. Column headings are: Probe, Affymetrix ID; ABI, 0 = failed statistical criteria, 1 = passed statistical criteria; PAX, 0 = failed statistical criteria, 1 = passed statistical criteria; ABI-log2ratio, log 2 ratios for Tempus ABI tube; ABI-p value, p-value for Tempus ABI tube, FDR corrected; PAX-log2ratio, log 2 ratios for PAXgene tube; PAX-p value, p-value for PAXgene tube, FDR corrected; Symbol, gene symbol; Description, gene description; UniGene, UniGene ID; OMIM, OMIM ID; Pathway, associated pathways. Transcripts down-regulated by PHA using the Tempus ABI and PAXgene collection systems. Column headings are the same as above."}
+{"text": "Whole genome gene expression profiling has revolutionized research in the past decade especially with the
advent of microarrays. Recently, there have been significant improvements in whole blood RNA isolation techniques which, through stabilization of RNA at the time of sample collection, avoid bias and artifacts introduced during sample handling. Despite these improvements, current human whole blood RNA stabilization/isolation kits are limited by the requirement of a venous blood sample of at least 2.5 mL. While fingerstick blood collection has been used for many different assays, there has yet to be a kit developed to isolate high quality RNA for use in gene expression studies from such small human samples. The clinical and field testing advantages of obtaining reliable and reproducible gene expression data from a fingerstick are many; it is less invasive, time saving, more mobile, and eliminates the need for a trained phlebotomist. Furthermore, this method could also be employed in small animal studies, e.g. mice, where larger sample collections often require sacrificing the animal. In this study, we offer a rapid and simple method to extract sufficient amounts of high quality total RNA from approximately 70 \u03bcl of whole blood collected via a fingerstick using a modified protocol of the commercially available Qiagen PAXgene RNA Blood Kit. From two sets of fingerstick collections, each of about 70 \u03bcl whole blood collected via finger lancet and capillary tube, we recovered an average of 252.6 ng total RNA with an average RIN of 9.3. The post-amplification yields for 50 ng of total RNA averaged 7.0 \u03bcg cDNA. The cDNA hybridized to Affymetrix HG-U133 Plus 2.0 GeneChips had an average Present call of 52.5%. Both fingerstick collections were highly correlated, with r2 values ranging from 0.94 to 0.97. Similarly, both fingerstick collections were highly correlated to the venous collection, with r2 values ranging from 0.88 to 0.96 for fingerstick collection 1 and 0.94 to 0.96 for fingerstick collection 2. Our comparisons of RNA quality and gene expression data of the fingerstick method with traditionally processed sample workflows demonstrate excellent RNA quality from the capillary collection as well as very high correlations of gene expression data. Whole genome gene expression profiling has revolutionized research in the past decade, especially with the advent of DNA microarrays. However, one limitation of such RNA stabilization systems in humans is the requirement for the collection of a venous blood sample, which involves a venipuncture. Fingerstick capillary blood collection has been widely used and has been shown to be very reliable. A classic example of this methodology is blood glucose monitoring. Fingerstick blood has also been used to test for Helicobacter pylori infection, cholesterol, glycosylated hemoglobin, and syphilis. Obtaining reliable and reproducible gene expression data from a fingerstick has obvious advantages in clinical as well as field testing applications. A fingerstick is arguably a less invasive, less time consuming and a more mobile method of blood sample collection, eliminating the need for a trained phlebotomist. The utility of such a method is obvious in studies designed to collect blood samples from physically active subjects (e.g. soldiers and athletes) or for field studies in remote and under-developed areas. Fingerstick blood collection would also be of immense value in several types of subjects where it is commonly difficult to collect venous blood via venipuncture: infants and young children, intravenous drug addicts, and very obese individuals. The value of fingerstick capillary collection as opposed to venipuncture can also be appreciated in study designs where there is a need for serial sample collections or for pharmacokinetic studies that involve gene expression assays.
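The reported r2 values come from per-donor Pearson correlations of expression measures; a minimal sketch of the underlying calculation (the short vectors below are invented stand-ins for array intensities):

```python
def pearson_r(x, y):
    # Plain Pearson correlation; the r2 values reported in the text are the
    # square of this quantity, computed per donor across probe intensities.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5
```

Squaring the result gives the proportion of variance in one collection's measures explained by the other, which is how agreement between fingerstick and venous collections is summarized.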
This method could also be employed in small animal studies or nonhuman primates. Therefore, a fingerstick method of blood collection will broaden the current range of genomic profiling possible. We also believe that it will be an integral part of diagnosis and serial monitoring of disease states in the future, where one can envision a hand-held device that could monitor expression levels of validated panels of genomic biomarkers, much like the glucose monitoring systems of today. In the present study, we offer a rapid and simple method to extract sufficient amounts of high quality total RNA from approximately 70 \u03bcl of whole blood collected via fingerstick using a modified protocol of the commercially available Qiagen PAXgene RNA Blood Kit. RNA amplification, labeling, and fragmentation were performed using the Nugen Ovation kits. This approach hybridizes biotinylated cDNA onto the microarray and has been shown to perform with superior sensitivity, especially with smaller amounts of input RNA. In order to test the success and reproducibility of the fingerstick method of RNA isolation, we took two fingerstick capillary samples from 5 donors on two separate days. From each collection time point we chose one purified total RNA sample from each donor and went forward with the RNA amplification, fragmentation and hybridization protocols. A venous sample was also collected in parallel from each of the donors by a trained phlebotomist and processed according to standard protocols to represent the established method (http://www.scripps.edu/researchservices/dna_array; http://www.nugeninc.com/tasks/sites/nugen/assets/File/user_guides/userguide_encore_biotin.pdf).
Approximately 70 \u03bcL of blood collected via fingerstick was combined with 200 \u03bcL PAXgene RNA stabilizing reagent (a PAXgene reagent:blood ratio of 2.86) and left to incubate at room temperature for at least two hours, as per the manufacturer's suggestions. We then followed the PAXgene Blood RNA kit (product# 762164) protocol for RNA isolation and purification, with the exception of one modification: after the first spin, we washed the pellet with 1 mL RNase free water instead of 4 mL due to its small volume. We initially tested a \"scaled down\" version of the entire PAXgene protocol, but, through further testing, we found that using the standard volumes of buffers and washes had no effect on the yields and was easier to employ (data not shown). Furthermore, we also found the DNase step in the protocol was crucial for the yield and fidelity of the amplified cDNA. Without the DNase step, contaminating DNA was subsequently amplified, causing the GAPDH and Actin ratios used as quality control metrics on the GeneChips to be abnormally high (data not shown). From 20 samples of 70 \u03bcL fingerstick blood, the total RNA yields ranged from 138 to 430 ng (average of 255.7 \u00b1 72.6 ng), which was well above the maximum of 50 ng required for the Nugen Ovation RNA Amplification System v2. While we did experience higher than normal OD260/280 ratios, an average of 3.9 \u00b1 1.35, and lower OD260/230 ratios, an average of 0.09 \u00b1 0.07, we found that this did not affect downstream applications and was probably caused by the high concentration of salts in the elution buffer relative to the low concentration of RNA. To assess the quality and integrity of the RNA, the samples were then run on the Agilent 2100 BioAnalyzer using an RNA NanoChip. The total RNA from the 5 normal venipuncture PAXgene blood collection tubes was extracted and purified according to the PAXgene Blood RNA kit (product# 762164) protocol. From 2.5 mL of blood, the yield ranged from 4.1 to 7.9 \u03bcg of total RNA (average 5.96 \u00b1 1.5 \u03bcg) with an average OD260/280 ratio of 2.0 \u00b1 0.03 and an average OD260/230 ratio of 1.0 \u00b1 0.45. The average RIN of the 5 total RNA samples obtained by venipuncture was 9.2 \u00b1 0.09. 50 ng total RNA from each donor was taken from all three sample sets and subsequently amplified using the Nugen Ovation\u00ae RNA Amplification System V2 (Cat.# 3100) and Ovation\u00ae WB Reagent (Cat.# 1300). The total RNA yields and Agilent traces for these 15 samples are shown in Figure. The amplified cDNA was purified using the Qiagen QIAquick\u00ae PCR Purification Kit. As shown in Figure, the cDNA yields ranged from 8.2 to 9.3 \u03bcg (average of 8.6 \u00b1 0.43 \u03bcg), and the average OD260/280 ratio of the 10 samples was 1.9 \u00b1 0.01. Before fragmenting and labeling the cDNA with the Nugen FL-Ovation\u2122 cDNA Biotin Module V2 (Cat.# 4200), the BioAnalyzer was used again to determine the quality of the amplified whole cDNA. 4.4 \u03bcg of the labeled cDNA was then hybridized to the Affymetrix GeneChip. Following hybridization, the chip was washed, stained and scanned according to the standard Affymetrix protocol. The Affymetrix quality metrics for the fingerstick samples are given in Table. Both fingerstick collections were highly correlated, with r2 values ranging from 0.94 to 0.97; similarly, both fingerstick collections were highly correlated to the venous collection, with r2 values ranging from 0.88 to 0.96 for fingerstick collection 1 and 0.93 to 0.96 for fingerstick collection 2. These correlation coefficients were calculated for each donor, comparing fingerstick collection 1 vs. fingerstick collection 2 and also each fingerstick collection vs. 
the venipuncture collection (Table). We analyzed the degree of disagreement in Affymetrix present/absent calls between the fingerstick collections and also between the fingerstick and venous collections for each donor. Disagreement is defined as a change in the present, marginal or absent call between any two comparisons. Within each comparison, we binned the average signal intensities of each probe set in the ranges 0-100, 101-250, 251-500, 501-1000 and >1001. We obtained similar results for all three comparisons that we performed: there was an inverse correlation between the signal intensities and the disagreement calls for the probe sets in each bin (Figure). We further analyzed the disagreement as the number of calls that changed from present to absent and vice versa, to test the hypothesis that a change from absent to present would indicate higher sensitivity for the method used. On average, there was a higher number of disagreement calls in which a probe set changed from present in the venipuncture method to absent in the fingerstick method. However, there were also a number of calls that changed from present in the fingerstick collection to absent in the venipuncture collection (Table). Despite the fact that technologies for RNA isolation have shown tremendous improvements over the past decade, with RNA isolation kits for tissue, cells as well as whole blood, there is still no commercially available methodology for the isolation of good quality RNA from microliter volumes of human whole blood suitable for gene expression profiling on DNA microarrays. 
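The two comparisons described above, per-donor squared correlation of expression profiles and binned present/marginal/absent (P/M/A) call disagreement, can be sketched in code. This is an illustrative reconstruction, not the authors' analysis script: the function names, input format and toy bin handling are assumptions; only the bin edges come from the text.

```python
# Hypothetical sketch of the per-donor r^2 comparison and the binned
# P/M/A disagreement analysis described above. Names and inputs are
# illustrative assumptions.

BINS = [(0, 100), (101, 250), (251, 500), (501, 1000), (1001, float("inf"))]

def r_squared(x, y):
    """Squared Pearson correlation between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return (cov * cov) / (var_x * var_y)

def binned_disagreement(mean_signal, calls_a, calls_b):
    """Fraction of probe sets whose P/M/A call differs, per mean-signal bin."""
    rates = {}
    for lo, hi in BINS:
        idx = [i for i, s in enumerate(mean_signal) if lo <= s <= hi]
        if not idx:
            rates[(lo, hi)] = None  # no probe sets fell in this bin
        else:
            changed = sum(1 for i in idx if calls_a[i] != calls_b[i])
            rates[(lo, hi)] = changed / len(idx)
    return rates
```

With real data, one would call `r_squared` on the signal values of two collections from the same donor and `binned_disagreement` on the corresponding detection calls; an inverse relationship between bin intensity and disagreement rate is the pattern the text reports.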
To test the hypothesis that we can isolate excellent quality and sufficient quantities of RNA from small volumes of whole blood (<100 \u03bcl), we investigated a modified protocol of whole blood RNA isolation in the present study. Our results show that we were able to successfully isolate RNA that gave results comparable to the standard venous blood method when assayed on an Affymetrix GeneChip. The advantages of the fingerstick RNA isolation method were the ease of small volume blood collection, minimal modification of the standard PAXgene protocol, no requirement for a trained phlebotomist, and amenability to off-site studies as well as feasibility in studies involving serial blood collections on a large number of subjects. The fingerstick RNA was of very high quality, comparable to the venous blood RNA when assayed by the Agilent BioAnalyzer, with no signs of degradation despite the technical differences between a venipuncture and a fingerstick collection. It is important to note that even though the volume of blood used in the fingerstick collection was small, the DNase treatment of the RNA was found to be crucial. This was especially necessary for the accurate quantitation of the RNA, since the Nugen protocol calls for nanogram quantities of RNA as the starting material and the quantitation of RNA could be biased by the presence of contaminating DNA. The yields of amplified cDNA were only slightly higher in the venipuncture method compared to both fingerstick collections. However, the Agilent BioAnalyzer traces for the amplified cDNA from the Nugen protocol showed no differences between the two collection methods. The quality control metrics for the GeneChip data showed that the venous collections had slightly higher average percent present calls (3.5% more), but this could reflect the inherent biological differences between venous and capillary blood. 
In a recent publication, Schalk et al. show that there are significantly higher WBC and RBC counts in capillary blood but lower numbers of platelets. Such differences may account for this observation. On further analysis, we showed that the differences are mainly due to the flux between calls at the lower signal intensities (<100), which accounts for almost all the variation between the methodologies. On average, only about 12% of the total probe sets on the GeneChip disagreed between any two comparisons that we made, which is well within the range of expected variation between samples in any DNA microarray study (data not shown). A study which looked at the individuality and variation in gene expression patterns in human blood found several significant variations in gene expression among 77 samples of peripheral blood collected from normal healthy volunteers. In the present study we demonstrate a simple, modified RNA isolation protocol from small volumes of whole blood (70 \u03bcL by fingerstick) that is highly comparable to the standard method of RNA isolation (2.5 mL by venipuncture). The RNA from the fingersticks was further assayed on Affymetrix GeneChips and the results were very similar to the venipuncture collections. We effectively show that this fingerstick RNA isolation methodology can be used and should open up a broad range of applications for whole blood DNA microarray analysis, ranging from pediatric studies to animal studies where there is access to only smaller volumes of blood. This methodology is especially suitable for serial monitoring, field testing, pharmacokinetic assays and rapid diagnosis of disease states. All samples were drawn from normal, healthy volunteers as part of The Scripps Research Institute's Normal Blood Drawing Service approved by the Office of Research Subjects Protection of the Scripps Health Human Subjects Committee. 
Both fingerstick and venous blood samples were collected by a trained phlebotomist. Sterile, DNase and RNase free, 1.5 mL Eppendorf tubes pre-filled with 200 \u03bcL PAXgene RNA stabilizing reagent were prepared and set aside. The donor finger was cleaned with an alcohol wipe and stuck with a single-use, spring loaded, retracting needle lancet (Unistick2 Super Lancet) according to the manufacturer's directions. Approximately 70 \u03bcl of blood was immediately collected into a capillary tube (Fisherbrand Heparinized Micro-Hematocrit). The sample was then transferred into the Eppendorf tube containing the PAXgene solution, and the capillary tube was aspirated with a pipette to increase sample recovery. The sample was then mixed well with a pipette, given a quick spin, and left to incubate at room temperature for 15 minutes, allowing complete lysis of the blood cells in the sample. After carefully aspirating and discarding the supernatant, the pellet was washed with 1 mL RNase free water. From this point forward, we followed the processing guidelines given in the PAXgene Blood RNA kit (product# 762164) protocol. Note: as the pellet was often in the form of a streak on the back of the tube, we used the pipette tip to scrape it from the tube and fully re-suspend the pellet. Due to the small amount of total RNA in the eluate, the samples were dried in a SpeedVac to about 20 \u03bcL (from the standard 80 \u03bcL elution) before any quality assessment was done. The samples were then measured using the NanoDrop 1000 to determine the concentration and purity. The Agilent 2100 BioAnalyzer and accompanying software were used to determine the integrity and quality of the total RNA. 50 ng aliquots were amplified using the Nugen Ovation\u00ae RNA Amplification System V2 (Cat.# 3100) in conjunction with the Ovation\u00ae WB Reagent (Cat.# 1300). 
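The volumes and yields quoted in this protocol can be cross-checked with simple arithmetic. The figures below are the ones stated in the text; the script itself is only an illustrative sanity check, not part of the published workflow.

```python
# Illustrative arithmetic for the volumes and yields quoted above.
blood_ul = 70.0      # fingerstick blood volume (uL)
reagent_ul = 200.0   # PAXgene RNA stabilizing reagent (uL)

# Reagent:blood ratio, stated in the text as 2.86
ratio = reagent_ul / blood_ul

# Concentrating the eluate from ~80 uL to ~20 uL in the SpeedVac raises the
# RNA concentration 4x; with the average total yield of 255.7 ng this moves
# the sample from ~3.2 ng/uL to ~12.8 ng/uL before NanoDrop measurement.
avg_yield_ng = 255.7
conc_before_ng_per_ul = avg_yield_ng / 80.0
conc_after_ng_per_ul = avg_yield_ng / 20.0

# Even the lowest observed yield (138 ng) exceeds the 50 ng used as input
# for the downstream amplification step.
min_yield_ng = 138.0
sufficient = min_yield_ng >= 50.0
```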
The amplified cDNA was subsequently purified using the Qiagen QIAquick\u00ae PCR Purification Kit. Other purification methods can be used, according to the Nugen user guide, but we only tested the kit listed above. Again, the NanoDrop 1000 and Agilent 2100 BioAnalyzer were used to assess sample quantity and quality. The samples were then labeled and fragmented using the Nugen FL-Ovation\u2122 cDNA Biotin Module V2 (Cat.# 4200). All sample hybridization and GeneChip processing were done according to the Nugen standard array protocol for cDNA hybridization (Nugen user guide) and the standard Affymetrix protocols. Conceived and designed: DS, SK, AW, TM, ER, and SH. Performed experiments: SK, AW, ER, and TM. Analyzed data: SK, ER, TM, and AW. Wrote paper: SK, TM, and ER. Read and approved paper: DS, SK, AW, TM, ER, and SH."}
+{"text": "Group A Streptococcus (GAS) is a Gram-positive human pathogen that is capable of causing a wide spectrum of human disease. Thus, the organism has evolved to colonize a number of physiologically distinct host sites. One such mechanism to aid colonization is the formation of a biofilm. We have recently shown that inactivation of the streptococcal regulator of virulence (Srv) results in a mutant strain exhibiting a significant reduction in biofilm formation. Unlike the parental strain (MGAS5005), the streptococcal cysteine protease (SpeB) is constitutively produced by the srv mutant (MGAS5005\u0394srv), suggesting Srv contributes to the control of SpeB production. Given that SpeB is a potent protease, we hypothesized that the biofilm deficient phenotype of the srv mutant was due to the constitutive production of SpeB. 
In support of this hypothesis, we have previously demonstrated that treating cultures with E64, a commercially available chemical inhibitor of cysteine proteases, restored the ability of MGAS5005\u0394srv to form biofilms. Still, it was unclear if the loss of biofilm formation by MGAS5005\u0394srv was due only to the constitutive production of SpeB or to other changes inherent in the srv mutant strain. To address this question, we constructed a \u0394srv\u0394speB double mutant through allelic replacement (MGAS5005\u0394srv\u0394speB) and tested its ability to form biofilms in vitro. Allelic replacement of speB in the srv mutant background restored the ability of this strain to form biofilms under static and continuous flow conditions. Furthermore, addition of purified SpeB to actively growing wild-type cultures significantly inhibited biofilm formation. The constitutive production of SpeB by the srv mutant strain is therefore responsible for the significant reduction of biofilm formation previously observed. The double mutant supports a model by which Srv contributes to biofilm formation and/or dispersal through regulation of speB/SpeB. Group A Streptococcus (GAS) is a Gram-positive human pathogen that is capable of causing a wide spectrum of human disease. MGAS5005 carries a mutation in covS, part of the two component regulatory system CovRS (CsrRS), which influences the expression of speB. We have recently shown that inactivation of the streptococcal regulator of virulence (Srv), a homologue of the Listeria monocytogenes regulator PrfA, results in a mutant strain exhibiting a significant reduction in biofilm formation. SpeB is constitutively produced by the srv mutant, suggesting Srv contributes to the control of SpeB production. Given that SpeB is a potent protease, we hypothesized that the biofilm deficient phenotype of the srv mutant was due to the constitutive production of SpeB. 
To address this question, we constructed a \u0394srv\u0394speB double mutant through allelic replacement. Sequence located upstream of the speB ORF was amplified from MGAS5005 genomic DNA using primers speBsrv UP FWD and speBsrv UP REV (Table) and cloned into the BsrGI-XhoI site of pFW14, creating pFW14\u0394speB-UP. Sequence located downstream of the speB ORF was amplified from MGAS5005 genomic DNA using primers speBsrv DOWN FWD and speBsrv DOWN REV (Table) and cloned into the XmaI-AgeI site of pFW14\u0394speB-UP. The resulting plasmid (pFW14\u0394speB) was transformed into NovaBlue competent cells (Novagen). Electrocompetent MGAS5005\u0394srv cells (200 \u03bcL) were incubated with pFW14\u0394speB for 10 minutes on ice. The competent cells and DNA were placed in a pre-chilled 0.2 cm cuvette and electroporated. Electroporated cells were incubated for 10 minutes on ice. Cells were allowed to outgrow at 37\u00b0C with 5% CO2 for 3.5 h in Todd Hewitt broth supplemented with 2% Yeast extract (THY). Selection for MGAS5005\u0394srv\u0394speB occurred on THY agar supplemented with chloramphenicol (5 \u03bcg/mL) (Sigma) and incubated at 37\u00b0C with 5% CO2 for 48 hours. The speB deletion was verified in chloramphenicol resistant transformants using PCR and restriction digestion. A PCR utilizing internal srv and internal speB primers (Table) was performed on MGAS5005 (I), MGAS5005\u0394srv (II), MGAS5005\u0394speB (III) and MGAS5005\u0394srv\u0394speB (IV) (Figure) to validate each strain. To verify that speB mRNA was not produced by MGAS5005\u0394srv\u0394speB, total RNA was isolated from MGAS5005 (control) and MGAS5005\u0394srv\u0394speB and subjected to TaqMan real-time reverse transcriptase PCR (RT-PCR) analysis. Results demonstrated no detectable transcript of srv or speB (data not shown) in the MGAS5005\u0394srv\u0394speB strain. Transcript of prsA, a gene located immediately downstream of speB, was ~ 3 fold higher in MGAS5005\u0394srv\u0394speB than MGAS5005, indicating that transcription of downstream genes was not disrupted. 
It should be noted that MGAS5005\u0394srv and MGAS5005\u0394speB have previously been shown to be free of detectable polar effects. To examine biofilm formation, MGAS5005, MGAS5005\u0394srv, MGAS5005\u0394speB and MGAS5005\u0394srv\u0394speB cultures were grown under static conditions (0.5 h - 48 h); biofilm production was measured through crystal violet (CV) staining as previously described. Allelic replacement of speB in the srv mutant background restored biofilm formation to near wild-type levels after 24 h. Over time, biofilm formation of MGAS5005\u0394srv\u0394speB closely resembled what we have previously reported for MGAS5005, with maximal formation occurring between 24 h and 30 h and a subtle decline in CV staining thereafter. Purified SpeB was added 3 times over the course of static biofilm development. CV staining was performed on treated and untreated samples at 18 h post-seeding (Figure). SpeB addition significantly reduced biofilm formation (Figure). Taken together, the data indicate that the biofilm deficient phenotype of MGAS5005\u0394srv is due to the constitutive production of mature SpeB. Inactivation of speB in the MGAS5005\u0394srv background restored biofilm formation to wild-type levels. Complementation of MGAS5005\u0394srv\u0394speB through the addition of exogenous SpeB significantly reduced biofilm formation to MGAS5005\u0394srv levels. These results support a model in which the Srv mediated control of SpeB production regulates GAS biofilm formation. Others have shown that mutation of covS resulted in a strain with decreased biofilm formation due to increased capsule production. Thus, ongoing work will examine inactivation of srv in a covS+ M1T1 background (as well as in other serotypes) to understand the role of Srv in biofilm formation and GAS disease. The authors declare that they have no competing interests. ALR participated in the design of the study, conducted the in vitro experiments, and drafted the manuscript. RCH designed and developed the MGAS5005\u0394srv\u0394speB mutant and critically analyzed the manuscript. 
SDR participated in the design of the study and helped to draft the manuscript. All authors read and approved the final manuscript."}
+{"text": "Orthodontic management of anterior open bites is a demanding task for orthodontists. Molar intrusion as a primary means of open bite correction entails the need for appropriate anchorage. Orthodontic mini implants can provide the required mechanical support. The suggested procedure aims to reduce the risk of complications such as root damage or soft tissue irritations while minimizing overall complexity. Three female patients aged 14, 18 and 19 years who decided against a surgical correction were treated with a device consisting of mini implants in the palatal slope, a palatal bar and intrusion cantilevers. In all three patients, an open bite reduction of more than a millimeter occurred within four months. An anterior overbite of 2 mm or more could be established within 6 to 9 months. The method presented in this article enables the practitioner to use mini implants in an easily accessible insertion site. A lab-side procedure is optional but not required. The management of anterior open bites is considered one of the most difficult tasks in orthodontics. In growing patients, functional or orthopedic approaches can be applicable. Orthodontic solutions may involve an unfavorable extrusion of the incisors and may result in an increased display of gingiva. Various mini implant insertion sites are available for the purpose of molar intrusion. Interradicular insertion bears the risk of root damage. Even after correct implant insertion, root contact may still occur later due to the progress of the intrusion, which may impair implant stability. Insertion beyond the attached gingiva may also lead to an increased failure rate. 
The palatal slope is not subject to these limitations. The aim of this work is to present an approach that does not require incision, interradicular implant placement or placement in the movable alveolar mucosa while keeping the complexity of the orthodontic mechanics at a minimum. Instead of a gingival collar, the Jet Screw (JS) type mini implants used in this work have a long neck which widens towards the implant head. This makes them applicable in areas covered by thick soft tissue. They are advertised for use with the TopJet Distalizer. The insertion site recommended by the manufacturer is located at half of the distance of the perpendicular line segment from the raphe to the palatal cusp tip of the first bicuspid (Figure). The mini implants were inserted according to the following protocol: \u2022 informed consent regarding potential risks, complications and behavior via standardized documents \u2022 surface anesthesia with 1% lidocaine spray applied using a cotton ball \u2022 infiltration using 4% articaine solution \u2022 mouth rinse with 0.2% chlorhexidine digluconate \u2022 assessment of gingival thickness using a probe \u2022 choice of JS neck length: 3 mm neck for gingival thicknesses not exceeding 3.5 mm, otherwise 5 mm \u2022 insertion using a surgical handpiece or a hand screw driver. Posterior intrusion was achieved through distally extended cantilevers fabricated out of 16\u00d722 stainless steel wire. The connection between implant neck and wire was established by bending the anterior end of the wire into the shape of a clip (Figure). A palatal bar served to avoid palatal tipping of the molars during intrusion. It also helped to prevent the cantilevers from interfering with the gingiva. Notably, the palatal bar was adjusted to leave at least 3 mm of space to the roof of the palate in order to obviate excessive soft tissue contact as the intrusion progresses. Once attached to the mini implant, the cantilevers rest on the palatal bar. 
To establish the desired intrusive force level, they were then tied to the molar bands with steel ligatures. An initial force level of 60 cN per side was chosen for the intrusion of the first molars and verified using a spring balance. A force of 200 cN per side was applied when second bicuspids, first molars and second molars were being intruded simultaneously. During subsequent appointments the ligatures were gradually tightened to maintain the intrusive force. An 18 year old woman with an anterior open bite was treated according to the above method (Figure). Initially, only the upper first molars were intruded (Figure). After removal of the intrusion cantilevers three months later, treatment ceased for two months in order to estimate the amount of relapse. Panoramic x-ray images served to assess the apical situation and to identify possible root resorptions (Figure). The amount of dentoalveolar change was assessed clinically as well as by means of digitized plaster casts. An optical 3D-scanner and specialized software served to obtain the corresponding data and to overlay initial and post-intrusion maxillary casts using the best fit method (Figure). In a 19 year old woman, the intrusion procedure was commenced when a fixed appliance was already in situ and the leveling and aligning stages of treatment were almost complete (Figure). A 14 year old girl received a similar treatment (Figure). At the time of the most extensive over-correction, the anterior open bite was remediated and an iatrogenic lateral open bite of 3.5 mm on the right side and 4 mm on the left had occurred. In the following four months, the lateral open bite closed incompletely. A lateral open bite of 1.5 mm on each side remained. The anterior overbite persisted. It amounted to 1 mm at the right lateral incisors and to 4 mm in the left central incisor area. The x-ray images (Figure) showed no signs of root resorption. In addition to the vertical correction, an improvement of the molar relationship was observed. 
On the right side, a class I molar relationship was established. On the left side, a quarter of a unit class II remained. The transversal distance between the upper first molars increased by 2.5 mm, which may be attributed to a slight over-activation of the palatal bar. Within five months, the anterior open bite was closed and a lateral open bite of 1 mm was established. Occlusal contacts remained between the second molars and first bicuspids. On both sides, a super class I occlusion resulted. The cephalometric tracings are shown in Figure. Five months of intrusion resulted in an anterior overbite and a vertical distance of 2 mm between upper and lower first molars. The desired treatment objective was reached in all cases. The clinically visible open bite reduction can be attributed to the intrusion mechanics since no intermaxillary elastics were used. Still, an incisor extrusion was observed in case 2. It may result from the fact that the intrusive force was applied distal to the center of resistance of the maxillary dentition, entailing a clockwise rotation of the entire dental arch. In terms of implant placement, the suggested procedure benefits from an insertion site which is accessible with a surgical hand piece as well as with a straight hand screw driver, unless pathological limitations of mouth opening are present. Slight deviations regarding the implants\u2019 position as well as their angulation can be compensated for by adapting the cantilevers accordingly. Maxillary molar intrusion may also be conducted by using miniplates instead of mini implants. However, miniplate placement and removal require more invasive surgical procedures. Mini implant insertion in the buccal alveolus may bear the risk of root damage. Additionally, in order to achieve a favorable force vector, a high insertion location in the area of the movable mucosa would be necessary, which is suboptimal in terms of implant survival. The palatal slope avoids these drawbacks. The suggested procedure can be performed chair side. The clip connectors can be pre-fabricated. 
A lab-side fabrication of the palatal bar is optional.Although the reliability of the presented method requires further investigation, the results appear propitious. The first case is especially insightful since the observed effect can be fully attributed to the intrusion mechanics. No brackets, elastics or other appliances were being used. It can, however, be argued that treatment might have been more efficient if molar intrusion had been performed simultaneously with leveling and aligning.The intrusion cantilevers were fabricated out of .016\u201d\u00d7.022\u201d stainless steel wire. The decision for this material was made because TMA wires are more prone to breakage whereas NiTi wires cannot as easily be bent. A hybrid construction with a steel clip on one end and a NiTi lever is conceivable but was not deemed necessary. The sheer length of the levers provided for a sufficient level of elasticity. Finally, with regard to the possibility of root resorption, dissipating forces may even be favorable.While additional research is required, present results indicate that the proposed method is suitable for treatment of anterior open bites. It is advantageous in terms of surgical difficulty and mechanical complexity.Written informed consent was obtained from all patients for publication of this report and the accompanying images. In all cases, a medical indication for the respective treatment was present. The surgical procedure constitutes a routine treatment. The authors declare that no ethical approval was necessary.The authors declare that they have no competing interests.TZ suggested the original idea for the paper. TZ, SF and JK wrote the main part of the manuscript. JK and DW reviewed the paper for content, and reviewed and contributed to the writing of all iterations of the paper, including the final version of the manuscript. 
All authors read and approved the final manuscript."}
+{"text": "In the crystal, molecules are linked by N\u2014H\u22efO and C\u2014H\u22ef\u03c0 interactions. For related N-acetamide derivatives, see: Barluenga et al. Crystal data: b = 17.668 (8) \u00c5, c = 10.103 (5) \u00c5, \u03b2 = 107.914 (7)\u00b0, V = 1555.0 (13) \u00c53, Z = 4; Mo K\u03b1 radiation, \u03bc = 0.08 mm\u22121, T = 153 K; crystal size 0.61 \u00d7 0.07 \u00d7 0.02 mm. Data collection: Rigaku AFC10/Saturn724+ diffractometer; absorption correction: multi-scan (CrystalClear; Rigaku, 2008), Tmin = 0.954, Tmax = 0.998; 12600 measured reflections, 3028 independent reflections, 2386 reflections with I > 2\u03c3(I), Rint = 0.050. Refinement: R[F2 > 2\u03c3(F2)] = 0.070, wR(F2) = 0.156, S = 1.00, 3028 reflections, 197 parameters; H atoms treated by a mixture of independent and constrained refinement; \u0394\u03c1max = 0.22 e \u00c5\u22123, \u0394\u03c1min = \u22120.24 e \u00c5\u22123. Program used to solve structure: SHELXS97. Supplementary material: structure factors (datablock I), crystallographic information, 3D view and checkCIF report (DOI: 10.1107/S1600536813007320)."}
+{"text": "Rituximab has emerged as an alternative in the induction of remission in ANCA-associated vasculitis. Recent studies revealed a non-inferiority of a single dose of rituximab in addition to a six month course of steroids compared to standard therapy after a follow-up period of 18 months. The results of the RAVE-ITN trial are encouraging, and a special cohort of patients might benefit from this protocol. Randomized controlled trials have revolutionised the treatment options in small vessel vasculitis. From large genome wide association studies we have learned that granulomatosis with polyangiitis (Wegener\u2019s) and microscopic polyangiitis are distinct diseases. 
Rituximab (RTX), a monoclonal chimeric antibody targeting CD20 bearing B-cells, has been approved by the FDA and the EMEA for induction of remission in ANCA-associated vasculitis. The trials leading to the approval have been published with the aim to show non-inferiority of RTX when compared to cyclophosphamide (CYC). Both studies, among them the RAVE-ITN trial, demonstrated this non-inferiority. The RAVE-ITN study population was further followed over a period of 18 months. The results revealed that a single course of RTX along with a 6 month regimen of steroids is as effective as CYC followed by azathioprine for the induction and maintenance of remission in ANCA-associated vasculitis. Remission could be maintained in 39% of the patients receiving RTX and in 33% of patients with CYC followed by AZA. Reduction of relapses is one of the main outcomes relevant in daily clinical practice. Strategies with the aim to reduce relapses have been designed, but to date the rate of prolonged remission in ANCA-associated vasculitis is still unsatisfactory. More recently, a single center study revealed an efficacy of repeated RTX applications to maintain remission in ANCA-associated vasculitis. Interestingly, 73% of patients who received a single course of RTX to induce remission had a relapse over a period of 2 years, whereas 88% of the patients with RTX infusion on a regular basis had sustained remission. Unfortunately, patients receiving RTX maintenance in this study showed a trend to relapse after cessation. Taken together, the RAVE-ITN trial was important to approve RTX as a treatment strategy to achieve remission in ANCA-associated vasculitis. The single dose strategy together with no maintaining immunosuppression may be an option for a small collective, i.e. elderly patients with co-morbidities in whom prolonged immunosuppression may increase the overall risk for mortality. RTX is increasingly used in the therapy of autoimmune disorders. AK is the single author of this manuscript. Dr. 
Kronbichler received unrestricted research grants from GlaxoSmithKline and Roche/Genentech. Ethical issues have been completely observed by the authors. None declared."} +{"text": "Genetic testing is increasingly used for clinical diagnosis, although variant interpretation presents a major challenge because of high background rates of rare coding-region variation, which may contribute to inaccurate estimates of variant pathogenicity and disease penetrance. To use the Exome Aggregation Consortium (ExAC) data set to determine the background population frequencies of rare germline coding-region variants in genes associated with hereditary endocrine disease and to evaluate the clinical utility of these data. Cumulative frequencies of rare nonsynonymous single-nucleotide variants were established for 38 endocrine disease genes in 60,706 unrelated control individuals. The utility of gene-level and variant-level metrics of tolerability was assessed, and the pathogenicity and penetrance of germline variants previously associated with endocrine disease were evaluated. In silico variant prediction tools demonstrated low clinical specificity. The frequency of several endocrine disease\u2012associated variants in the ExAC cohort far exceeded estimates of disease prevalence, indicating either misclassification or overestimation of disease penetrance. Finally, we illustrate how rare variant frequencies may be used to anticipate expected rates of background rare variation when performing disease-targeted genetic testing. The frequency of rare coding-region variants differed markedly between genes and was correlated with the degree of evolutionary conservation. Genes associated with dominant monogenic endocrine disorders typically harbored fewer rare missense and/or loss-of-function variants than expected.
Quantifying the frequency and spectrum of rare variation using population-level sequence data facilitates improved estimates of variant pathogenicity and penetrance and should be incorporated into the clinical decision-making algorithm when undertaking genetic testing. The frequency of germline coding-region variation was evaluated in 38 endocrine disease genes in 60,706 individuals. The clinical utility of these data in improving genetic diagnosis is demonstrated. Therefore, to address these challenges, we used the ExAC cohort to quantify the spectrum and frequency of rare germline missense and loss-of-function (LOF) SNVs in 38 genes associated with hereditary endocrine disease and explored the utility of these data when applied to several clinical settings. Our results demonstrate the value of large control cohorts such as ExAC and illustrate how estimates of cumulative rare variant frequencies, together with additional gene- and variant-level factors, may be incorporated into the workflow for clinical genetic testing. Variant data were obtained from the ExAC browser (http://exac.broadinstitute.org; accessed March 2016 to October 2017). Details of the contributing populations, sequencing methods, and variant filtering and calibration methods have been reported previously. Constraint metrics were obtained directly from the ExAC browser. Detailed descriptions of these metrics are reported elsewhere, and a brief overview is provided in the Supplemental Appendix. Disease-associated variants were evaluated in the ExAC cohort; these mutations were identified from publicly available sources. Frequencies of AIP SNVs with an AF <0.5% were established for the ExAC cohort and compared with those observed in 1866 individuals with apparently sporadic pituitary tumors reported in nine earlier studies.
Odds ratios with 95% confidence intervals were established for individual and cumulative frequencies of all missense variants. Four disease-targeted gene panels were formulated to model estimated background frequencies of rare missense and LOF SNVs when undertaking multiple-gene sequencing. These represented PPGL, calcium-/parathyroid-related disorders, pituitary tumor disorders, and MEN syndromes. Cumulative rare variant frequencies were used to establish the likelihood that a given individual would harbor a rare variant in at least one of the panel genes. Analysis of nonsynonymous SNVs (i.e., single-nucleotide substitutions resulting in missense or nonsense amino acid changes or directly affecting donor or acceptor splice sites) in each of the 38 genes revealed that the overwhelming majority were rare, with ~60% occurring as singletons, whereas ~92% of gene-specific SNVs had an AF <0.05%. Chromodomain helicase DNA-binding protein 7 (CHD7) and neurofibromin (NF1) demonstrated the highest frequencies of singleton variants, whereas the lowest frequency was observed for adaptor-related protein complex 2 sigma 1 subunit (AP2S1). Of note, removal of the TCGA subgroup (n = 7601) from the analysis had a minimal impact on cumulative rare SNV frequencies of the genes evaluated, with the exception of von Hippel-Lindau (VHL), for which reduced sequence coverage of part of the gene reduced the reliability of the estimates. Thirty-eight genes were selected for study, representing a range of endocrine disorders reported to be associated with heterozygous germline missense and/or LOF SNVs. Although rare SNV frequency correlated strongly with gene size (e.g., for singleton SNVs, r2 = 0.84; P < 0.0001), marked differences persisted after correcting for gene size.
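The "at least one rare variant in the panel" likelihood described above reduces to a complement-of-products calculation. A minimal sketch, assuming independence between genes; the gene names and per-gene carrier frequencies below are illustrative placeholders, not values from the study:

```python
# Hypothetical sketch: probability of carrying >= 1 rare variant in a panel,
# given per-gene cumulative carrier frequencies (illustrative numbers only).

def panel_background_rate(carrier_freqs):
    """P(at least one rare variant), assuming independence across genes."""
    p_none = 1.0
    for freq in carrier_freqs.values():
        p_none *= 1.0 - freq  # probability of no rare variant in this gene
    return 1.0 - p_none

# Made-up carrier frequencies for a PPGL-style panel (NOT from the paper)
ppgl_panel = {"RET": 0.010, "VHL": 0.005, "SDHB": 0.008, "MAX": 0.004}
background = panel_background_rate(ppgl_panel)
```

With small per-gene frequencies the panel-wide rate is close to the simple sum of the inputs, which is why adding genes to a panel steadily inflates the expected background.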
Furthermore, a significant correlation was observed between rare nonsynonymous SNV frequency and the degree of amino acid sequence identity between orthologs of the encoded protein, with the most highly conserved genes demonstrating the lowest rates of rare variation. A positive correlation was also observed between rare nonsynonymous SNV frequency and coding-region nucleotide length. Of the 38 genes, 20 (53%) were categorized as missense and/or LOF intolerant, consistent with their association with dominant or X-linked\u2012dominant disorders, thereby indicating appropriate constraint against nonsynonymous heterozygous variation. In contrast, many disease-associated genes were categorized as missense and/or LOF tolerant, including several in which the role of heterozygous variation in endocrine disease is less established or was previously associated with reduced disease penetrance. For other genes in the tolerant groups, their small size reduced the reliability and utility of the respective constraint metrics. Cumulative estimates of LOF SNV allele frequencies were established for each gene. Although many genes displayed an absence or very low number of LOF SNV alleles, consistent with their known haploinsufficiency function, some genes harbored cumulative LOF SNV frequencies considerably higher than the prevalence of the associated disease phenotype, indicating a reduced penetrance of such variants. For other genes, the apparent high LOF SNV frequency observed was consistent with the known disease prevalence. Similarly, small indels resulting in an LOF were absent or very rare in the majority of genes, although higher frequencies were observed in several genes in which LOF indels would be anticipated to be disease causing, including NF1, CASR, SDHB, and CDC73.
However, it is important to note that the reliability of indel variant calls is most likely reduced compared with SNVs, whereas larger indels will not be identified by capture-based sequencing methods. To assess the potential utility of in silico variant prediction tools, we analyzed SIFT, Polyphen2, and CADD scores of all rare missense SNVs in a subset of 12 genes. In total, an average of 21% of all rare SNVs in each gene were categorized as deleterious using criteria encompassing all three tools, whereas 53% were described as possibly deleterious. This analysis revealed that for several genes, the frequency of alleles reported as pathogenic far exceeded the reported prevalence of the associated disease, giving rise to four possible explanations: the prevalence of disease is higher than reported; prior reports of SNV pathogenicity are incorrect; the gene- or allele-specific disease penetrance is lower than reported; or the cohort is unknowingly enriched for hereditary endocrine disorders. For example, approximately one in 1750 individuals was observed to harbor a pathogenic ret proto-oncogene (RET) allele. An apparently similar overrepresentation of disease alleles was observed for the calcium-sensing receptor (CASR), in which approximately one in 3500 individuals harbored a variant previously associated with FHH type 1, although in this setting it is plausible that the condition is more prevalent than currently recognized because of its typically asymptomatic phenotype. Assessment of MEN1 and VHL variants in the ExAC cohort indicates that several of these variants were likely misclassified. However, it is important to note that in the case of VHL, four of the six likely pathogenic variants occurred in individuals from the TCGA cohort, which included 344 patients with a history of sporadic renal clear cell carcinoma who may be at an increased risk of harboring such germline variants.
Of note, this collection of disease-associated SNVs did not demonstrate a significant excess of alleles originating from the TCGA cohort (P > 0.1). Next, the ExAC cohort was evaluated for individuals harboring clinically-actionable variants in one of seven hereditary endocrine tumor predisposition genes currently included in the ACMG guidelines. Germline variants of AIP are reported in familial isolated pituitary adenoma kindreds but are associated with reduced penetrance, making it difficult to differentiate between hereditary and sporadic forms. AIP variants have also been reported in individuals with apparent sporadic pituitary tumors, although ascribing pathogenicity may be challenging because several AIP variants are observed in both disease and control populations. To evaluate the role of AIP in this setting, we compared the frequencies of rare germline missense AIP variants in 1866 individuals with sporadic pituitary adenomas with those observed in the ExAC cohort. Prior analysis demonstrated the predicted frequency of likely cases in the ExAC control population to be negligible. Of note, only a small excess of rare missense AIP variants was observed in the tumor cohort compared with the ExAC cohort, whereas no overall excess was identified when compared with the European ExAC subpopulation, selected to represent the most relevant cohort for comparison; several of the variants observed in the tumor group were also present in controls, indicating that such variants are most likely benign or associated with very low penetrance. A small but notable excess of novel singleton SNVs was observed in the tumor group, predominantly in patients with acromegaly, suggesting genuine enrichment of pathogenic variants in this subgroup. In addition, modeling of four disease-targeted gene panels revealed high cumulative estimates of identifying rare variants for each panel, with ~3% of control individuals harboring a novel singleton variant. Such estimates were then modified to allow direct comparison with literature reports.
Finally, the gene-level estimates of rare SNV frequency were used to model expected rates of background variation likely to be observed when performing simultaneous sequence analysis of multiple genes. For example, a recent study evaluating 14 genes in patients with sporadic PPGL reported a rare germline mutation frequency of 7%, which did not exceed the background frequency predicted from our analysis of the ExAC cohort. In this study, we used the ExAC cohort to quantify the spectrum and frequency of rare nonsynonymous germline variants occurring in a broad range of genes associated with hereditary endocrine diseases and illustrate the potential utility of this information for improved variant interpretation, both in ascribing potential pathogenicity and in reevaluating estimates of disease penetrance. Of note, we observed marked differences in the frequency of rare nonsynonymous SNVs between genes after correcting for coding-region length, and this was correlated with the degree of evolutionary conservation. The observation that the lowest rare SNV frequencies were observed in genes with the highest degrees of evolutionary conservation is not unexpected, as several of the most highly conserved genes regulate essential cellular functions. Therefore, these genes are likely to be under strong evolutionary selection pressure to conserve key cellular processes, thereby resulting in a relative intolerance to variation. Current analyses often use a combination of variant-level features, such as the absence of a variant in a control population, as well as predictions from computational tools. Although these features alone are insufficient to establish pathogenicity (i.e., enabling categorization only as a variant of uncertain significance), they are frequently cited as supporting evidence of pathogenicity and may ultimately result in a patient being managed as if the variant is disease causing.
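The comparison above, an observed mutation carrier rate judged against a predicted background rate, can be framed as an exact binomial test. A hedged sketch using only the standard library; the cohort size and rates are illustrative assumptions, not figures from the cited study:

```python
# Illustrative sketch: is an observed carrier count in a patient cohort
# compatible with the expected background rare-variant rate?
from math import comb

def binom_sf(k, n, p):
    """Upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 7 carriers among 100 patients vs. a 7% background rate;
# a large p-value means the observation does not exceed background.
p_value = binom_sf(7, 100, 0.07)
```

A non-significant result in this direction is exactly the situation described in the text: the "mutation" yield of a gene panel may be fully explained by background rare variation.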
Our analyses indicate that variant rarity together with computational prediction tools frequently has low specificity for ascribing clinically relevant effects and that relying on such features likely overestimates pathogenicity. For example, we observed that across the 38 genes, the majority of individual variants were observed only once in the ExAC cohort and that a similar majority of rare SNVs were classified as potentially deleterious by at least one of the computational tools evaluated. However, our studies also demonstrated how the use of additional gene-specific factors, including cumulative rare variant frequencies together with metrics of missense and LOF constraint, may provide important additional context when incorporated into the variant interpretation workflow. Indeed, it is plausible that in the future such gene-specific constraint metrics and/or estimates of rare variant burden may be incorporated into bioinformatic or clinical computational algorithms to improve estimates of variant pathogenicity and/or disease penetrance. Although consensus guidelines have been established by the ACMG for the clinical interpretation of germline sequence variants, in many instances an unambiguous assignment of pathogenicity is not possible. Furthermore, we demonstrated how quantifying the burden of rare variation in a given set of genes may be used to derive an expected frequency of background rare variants that should be anticipated when using disease-targeted gene panels. Of note, these studies illustrate how a failure to consider the high frequency of background rare variation arising from the use of such gene panels is likely to contribute to diagnostic uncertainty and may have confounded earlier genetic studies. In addition, several variants previously reported as disease causing were observed at unexpectedly high frequencies (e.g., in RET, VHL, and MEN1), indicating their likely prior misclassification and/or overestimates of disease penetrance.
Indeed, the need to define allele-specific estimates of disease penetrance is an important concept, as recently illustrated for prion disease, in which the disease penetrance of individual PRNP variants ranged from <0.1% to ~100%. Allele-specific differences in penetrance are also recognized for hereditary endocrine genes (e.g., RET), and quantifying these is essential to enable appropriate patient care. The observation of only a small excess of rare missense variants in a large pituitary tumor cohort compared with the ExAC population cannot exclude an etiological role in disease, although it suggests that any disease relationship is associated with extremely low penetrance and that the overwhelming majority of such variant carriers will not manifest clinical features. Indeed, establishing accurate estimates of disease penetrance should be a priority, and it is evident that for several hereditary endocrine genes, penetrance may have been overstated because of unintentional ascertainment bias and/or the inclusion of index cases in such estimates. However, differentiating benign from low-penetrance alleles remains challenging, even with large disease and control cohorts. For accurate estimates of penetrance to be established, it is essential that the control population used be closely matched to the study population, thereby avoiding confounding from population-specific differences in variant frequencies (e.g., the presence of founder mutations in local populations). For example, although ExAC offers large numbers of individuals from specific ethnic groups, other populations are underrepresented, and for these groups, it may not provide a suitable comparator group.
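Allele-specific penetrance of the kind estimated for PRNP can be approximated with Bayes' rule, penetrance = P(allele | disease) x P(disease) / P(allele in population); this mirrors the general approach taken in the cited prion study, but the sketch and all numbers below are illustrative assumptions, not data from any cohort:

```python
# Hedged sketch of a Bayesian lifetime-risk (penetrance) estimate:
# P(disease | allele) = case allele freq * disease prevalence / population allele freq

def penetrance(af_cases, af_population, prevalence):
    """Estimated lifetime risk for allele carriers, capped at 1.0."""
    return min(1.0, af_cases * prevalence / af_population)

# Hypothetical numbers: allele in 1% of cases, 0.005% of controls,
# disease lifetime prevalence 1 in 10,000 -> estimated penetrance 2%
risk = penetrance(0.01, 0.00005, 1e-4)
```

The formula makes the text's point concrete: when a control cohort such as ExAC shows the allele to be far more common than the disease, the estimated penetrance collapses, regardless of how the variant was originally classified.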
Periodic clinical, biochemical, and/or radiological screening is generally recommended for carriers of pathogenic alleles associated with hereditary endocrine tumor syndromes (e.g., in SDHB and SDHD). This not only highlights the potential clinical burden that increased genetic testing may bring but also emphasizes the need for accurate estimates of variant pathogenicity and penetrance, because the potential for patient harm arising through tumor surveillance programs is not insignificant. Our analysis has several potential limitations. First, although not knowingly enriched for hereditary endocrine disorders, the ExAC cohort will contain individuals with these diseases as well as controls with polygenic disorders with accompanying disease-associated alleles. However, the inclusion of a broad range of genes and associated conditions reduced the likelihood of widespread enrichment for relevant disease phenotypes. For example, one potential concern was that the inclusion of germline samples from the TCGA cohort might result in an overrepresentation of alleles in genes associated with hereditary endocrine tumors; although this cohort does not include endocrine tumors relevant to the genes under study (e.g., PPGL, medullary thyroid cancer), it was reassuring that the TCGA samples did not contribute an excess of rare SNVs or likely pathogenic alleles in the genes investigated, with the possible exception of VHL, in which the inclusion of renal carcinoma cases may have introduced a risk of bias. In addition, high-quality SNV calls were assumed to be accurate, and although the performance of the ExAC variant calling pipelines has been extensively validated, confirmatory sequencing of individual variants was not undertaken. Furthermore, caution is required in interpreting SNVs in regions with reduced sequence coverage or those presenting difficulties in the sequencing pipelines.
For example, the reliability of sequence data for genes with multiple pseudogenes may be reduced, although in these instances, visual inspection of individual sequence reads covering regions adjacent to the SNVs enabled increased confidence in the variant call. A further limitation of the current study is that individual rare variants were considered in isolation in our analysis, and potential interactions between variants or other modifying influences could not be evaluated. Another limitation of our study is that we excluded indels from the main analysis because of their reduced reliability of detection. In summary, these studies demonstrate how quantifying rare germline variation in a large control cohort such as ExAC may be exploited to improve variant interpretation and clinical decision-making. Furthermore, this information may be incorporated into different stages of the clinical genetic testing workflow."} +{"text": "This case report exhibits a patient with generalized aggressive periodontitis who has been under maintenance for the past 12 years after being surgically treated in a single sitting and restored with dental implants. A 41-year-old systemically healthy male patient presented complaining of lower anterior teeth mobility and pain in the upper right quadrant. After clinical and radiographic examination, the upper right molars and lower anterior incisors were deemed unrestorable. Under doxycycline coverage, the patient received nonsurgical periodontal treatment. Three weeks later, teeth extraction, immediate implant placement, immediate nonloading provisional prosthesis, and guided tissue regeneration were performed at the indicated areas in a single sitting. The clinical decisions were based on patient compliance, the status of the existing periodontal tissues, and the prognosis of the remaining teeth. During the 12-year follow-up period, no residual pockets were observed and there was no exacerbation of the inflammatory condition.
Marginal bone stability is present on all implants. For aggressive periodontal disease, a high risk of relapse as well as limited success and survival of dental implants should be considered. This case shows proper containment of the disease based on appropriate treatment planning and a strict maintenance program. In 1999, the term \u201caggressive periodontitis\u201d (AgP) was introduced by the American Academy of Periodontology (AAP) to define a group of destructive periodontal diseases with a rapid progression. This definition was used to include previous terminologies of early-onset periodontitis, juvenile periodontitis, and rapidly progressive periodontitis, using \u201caggressive\u201d nomenclature. Microbiological criteria were not mentioned as primary features separating chronic from aggressive periodontitis. However, for AgP, the secondary features that are generally, but not universally, present included elevated proportions of Aggregatibacter actinomycetemcomitans (AA) and, in some populations, Porphyromonas gingivalis (Pg), phagocyte abnormalities, and a hyper-responsive macrophage phenotype, including elevated levels of prostaglandin E2 and interleukin-1beta. The following additional specific features were proposed for defining the localized and generalized forms. Localized aggressive periodontitis usually has a circumpubertal onset, localized to a first molar and incisor presentation, with interproximal attachment loss on at least two permanent teeth (one of which is a first molar) and involving no more than two teeth other than first molars and incisors. Generalized aggressive periodontitis usually affects adult persons between 20 and 30 years of age, but patients may be older. There is generalized interproximal attachment loss affecting at least three permanent teeth other than first molars and incisors.
The disease has a pronounced episodic nature of destruction of attachment and alveolar bone, with a history of relapse. Although success in implant dentistry depends on marginal bone stability and health, patient systemic factors and susceptibility to periodontal diseases play a role in achieving long-term stability. Many studies have shown the negative effect of previously treated and untreated periodontal disease on marginal bone stability around implants, including a higher frequency of mucositis and peri-implantitis and lower success and survival rates of implants placed in patients with a history of chronic or aggressive periodontitis. This case report exhibits a 41-year-old systemically healthy male patient with GAgP who has been maintained for the past 12 years after being treated periodontally in a single sitting and restored with dental implants. He has been compliant for 10 years with supportive and maintenance therapy since the conclusion of his treatment in 2007. An engineer living abroad sought a second opinion at our office in 2005. The patient's oral surgeon had previously recommended extraction and immediate implantation of all compromised teeth, including any tooth with a vertical defect. He sought an alternative treatment option after being shocked that, at only 41 years of age, the only treatment offered would mean losing 16 of his teeth.
The patient's chief complaint was that he was not able to bite on his front teeth and that his esthetics were compromised due to increased crown length. Diagnostics included radiographic examination (Figures and 2). A conventional supragingival scaling and subgingival root planing was performed under the coverage of antibiotics with a significant action against AA (doxycycline (Doxylag)\u00ae 100\u2009mg, 2 tabs the first day and 1 tab daily for 21 days). Upon the patient's return, teeth 18, 17, and 16 all had class III furcation involvement with grade 3 mobility; they were extracted under a full thickness flap allowing visibility of a 3-wall defect of 10\u2009mm at the mesial of tooth number 13, which was then treated with a guided tissue regeneration technique using the bovine xenograft bone substitute Bioss\u00ae and the resorbable collagen membrane Resolute\u00ae. At the same visit, extraction of tooth number 26 was performed, as it presented with 9 and 10\u2009mm bone loss at the mesial and distal sites, respectively, with a class III furcation involvement. At the lower right quadrant, tooth number 48 was extracted, and scaling and root planing was performed at the distal of tooth number 47, which presented with a wide shallow bony defect. At the lower left quadrant, extraction of tooth number 38 was done, and the full thickness flap showed a narrow and deep bone defect to the apex of tooth number 46 distally, maintaining the mesial bone peak at tooth number 47, which presented as well with a wide moderate bone defect distally. Scaling and root planing was done; due to the contained geometry of the defect on the 46, the bone substitute Bioss was used as a filling material, without the need for a membrane. Teeth 32, 31, 42, and 41 were extracted and immediately replaced with 3 narrow neck SLA Straumann dental implants (3.3\u2009\u00d7\u200912\u2009mm at sites 41 and 31 and 3.3\u2009\u00d7\u200910\u2009mm at site 32).
The choice of cantilevering was based on the presence of a wide intrabony defect surrounding tooth number 42; immediate nonloading temporization was provided on implants number 41 and 32 to maintain the esthetic appearance. No provisional prosthesis was delivered for the molar areas, and an association of amoxicillin 500\u2009mg and metronidazole 250\u2009mg was prescribed 3 times a day for 8 days. Almost 5 months later, the clinical exam showed perfect soft tissue integrity, and the radiographic evaluation revealed total bone formation at the sites of the extracted molars and bone stability around the Straumann mandibular implants. Four implants were used in the maxilla replacing the first and second molars: implants were inserted on the right side, and implants on the left side (Figures). The patient could not return until 16 months after the implant surgery; for the realization of the final fixed prostheses, all prostheses were delivered within 10 days, the case was documented radiographically and clinically, and a follow-up maintenance program was scheduled after this visit. The patient was placed on a strict maintenance schedule; prophylaxis is performed every 3 to 6 months and periapical follow-up radiographs are taken every year to follow up on the surgical sites. During the 12-year follow-up period, no residual pockets were observed, and there was no exacerbation of the inflammatory condition. Marginal bone stability is present on all implants.
Since the case was completed, there has been no need for the adjunct use of antibacterial mouth rinses or systemic antibiotics (Figures and 8). The high risk of relapse as well as the limited success and survival rate of dental implants are considered severe complications related to aggressive periodontal disease. This case showed perfect bone stability, after guided tissue regeneration and around implants, over 10 years after treatment, without any sign of inflammation. The stability of such results may be related to the strict supportive therapy program or to the choice of performing full-mouth surgery in one day, which may assure the complete eradication of bacteria and prevent the contamination of treated areas, as can occur when surgeries are done at variable intervals. To the best of our knowledge, this is the first case report of surgically treating generalized aggressive periodontitis with GTR and immediate implantation in one single day. The successful outcome may be related to the choice of treatment; however, additional clinical studies with more patients are necessary in order to support this choice."} +{"text": "To validate these findings in a distant vertebrate species, we used single-cell RNA sequencing of lck:GFP cells in zebrafish and obtained the first transcriptome of specific immune cell types in a nonmammalian species. Unsupervised clustering and single-cell TCR locus reconstruction identified three cell populations, T cells, a novel type of NK-like cells, and a smaller population of myeloid-like cells. Differential expression analysis uncovered new immune-cell\u2013specific genes, including novel immunoglobulin-like receptors, and neofunctionalization of recently duplicated paralogs.
Evolutionary analyses confirmed the higher gene turnover of trans-membrane proteins in NK cells compared with T cells in fish species, suggesting that this is a general property of immune cell types across all vertebrates. The immune system of vertebrate species consists of many different cell types that have distinct functional roles and are subject to different evolutionary pressures. Here, we first analyzed conservation of genes specific for all major immune cell types in human and mouse. Our results revealed higher gene turnover and faster evolution of NK-cell\u2013specific genes. The immune system of vertebrate species has evolved into a highly complex structure, comprising many different types of both innate and adaptive immune cells. Adaptive immune cells are broadly classified into B and T lymphocytes that can directly recognize antigens with great specificity. Innate immune cells include a variety of myeloid cells such as monocytes, neutrophils, basophils, eosinophils, and mast cells. A third major type of lymphocytes, the Natural Killer (NK) cells, has also been historically classified among innate immune cells. B and T lymphocytes recognize antigens through receptors generated by Rag-mediated somatic V(D)J (variable diversity joining) rearrangement, and this process is conserved across all jawed vertebrates. Single-cell RNA sequencing has emerged as a promising technology to unravel the landscape of cell types in heterogeneous cell populations without relying on specific antibodies. Next, we analyzed immune cells in zebrafish, a powerful model in biomedical research. Immune-related genes tend to evolve more rapidly than other genes, and between functionally distinct immune cells, the selective pressures might vary significantly.
Here we performed a conservation analysis of the most differentially expressed genes in resting T, B, NK, and myeloid cells in the mouse and human at the genome-wide level (see Methods). Our analysis revealed that among trans-membrane (TM) or secreted protein-coding genes, those specifically expressed in NK cells have proportionally fewer orthologs across all vertebrates compared with other immune cells. The difference is most evident between NK and T cells, although these are closer from a functional and ontogenical point of view. Examples of other mouse or human NK TM genes poorly conserved across vertebrates include Fc receptors, granulysin (GNLY), CD160, CD244, and IFITM3. In addition, among conserved protein-coding genes, NK-cell\u2013specific genes consistently had lower sequence identity across all vertebrates for TM genes but not for cytoplasmic ones. The ratio between nonsynonymous and synonymous substitutions (dN/dS ratio) of one-to-one orthologs between human and mouse can provide a good estimation of the evolutionary pressure acting on a gene. Our results indicate that NKs\u2019 TM genes evolve faster compared with T cells\u2019 TM genes. We next turned to zebrafish lck:GFP cells. This transgenic line expresses GFP under the control of the lymphocyte-specific protein tyrosine kinase (lck) promoter, and it was proposed to be mainly restricted to zebrafish T cells. Zebrafish may therefore provide an ideal model to investigate the large difference in conservation between T- and NK-cell\u2013specific genes observed in mammalian species.
To simultaneously obtain information about cell morphology and high-quality gene expression profiles, we used high-throughput single-cell RNA sequencing combined with FACS (fluorescence-activated cell sorting) index sorting analysis of two adult zebrafish (3- and 10-mo-old) spleen-derived lck:GFP cells. As reliable antibodies to isolate pure immune cell populations in fish species are not available, we used single-cell transcriptome analysis of zebrafish lck:GFP cells. We first generated and sequenced libraries from 278 single GFP+ cells isolated from the spleen of two different fish from a different clutch and different age (see Methods). Following quality controls (see Methods), 15 cells were removed, and gene expression profiles for the remaining 263 cells were generated. Average single-cell profiles showed good correlation with independent bulk samples (Pearson's correlation coefficient [PCC] = 0.82). Correlations between single-cell gene expression profiles were used to calculate cell-to-cell dissimilarities (see Methods), and these were represented in low-dimensional space using classical multidimensional scaling. Interestingly, a clear cell subpopulation structure emerged. We next sought to determine whether the absence of expression of specific markers in many cells was mainly due to technical limitations of single-cell RNA-seq technology. Indeed, the vast majority of the cells displayed clear expression of one of the gene signatures. We named the obtained clusters Cluster 1 (T cells), Cluster 2 (NK-like cells), and Cluster 3 (myeloid-like cells). Next, we examined the TCR locus (see Methods). Occurrence of V(D)J recombination was associated with Cluster 1, which provides additional genomic evidence of the T-cell identity.
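The embedding step described above, correlation-based dissimilarities projected with classical multidimensional scaling, can be sketched as follows; this is an assumed reimplementation on random toy data, not the authors' pipeline:

```python
# Sketch: classical MDS (principal coordinates) on 1 - Pearson correlation
# dissimilarities between single-cell expression profiles. Illustrative only.
import numpy as np

def classical_mds(expr, n_dims=2):
    """expr: cells x genes matrix; returns cells x n_dims coordinates."""
    d = 1.0 - np.corrcoef(expr)            # cell-to-cell dissimilarity
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n    # double-centering matrix
    b = -0.5 * j @ (d ** 2) @ j            # inner-product (Gram) matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:n_dims]  # largest eigenvalues first
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Toy expression matrix standing in for the QC-passed single cells
counts = np.random.default_rng(0).poisson(2.0, size=(60, 200)).astype(float)
coords = classical_mds(counts)
```

Plotting the first two coordinates of such an embedding is what makes subpopulation structure (here, the three clusters) visually apparent.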
As expected, V(D)J recombined segments were also strongly associated with expression of the T cell receptor beta constant 1. To further support the hypothesis that most cells in Cluster 1 are bona fide T cells, we adapted a recent method for the detection of V(D)J recombination events (see Methods); with this approach, occurrence of V(D)J recombination was significantly associated with Cluster 1 (P < 10\u22125). In addition to nitr and dicp genes, the second cluster expressed NK lysins with high specificity (in particular nk-lysin tandem duplicates 2 and 4), which had previously been found expressed in recombination activating gene 1\u2013deficient (rag1\u2212/\u2212) zebrafish. The clustering structure of our fish immune cells was further validated in a set of more than 300 single cells from a third fish and additional cells from the first fish, where, despite much lower coverage due to external RNA contamination of the samples, the separation between cells expressing the different markers (cd4, cd8, nitr, dicp, and spi1b) is clearly visible. Index sorting data showed that cells in Cluster 2 have higher granularity than T cells (P = 1.4 \u00d7 10\u22125), which is consistent with NK-like cells possessing dense cytoplasmic granules. Cells in Cluster 3 also showed high granularity (P = 1.6 \u00d7 10\u22125), further supporting the hypothesis that these cells originate from a subpopulation of lck+ myeloid cells, such as granulocytes, since lck expression in myeloid cells has been reported in transgenic mice and humans (Supplemental Table S2). To identify additional genes specific for each cell population, we performed differential expression analysis of each cluster versus the other two (see Methods). Within Cluster 1, these included genes whose expression is highly correlated with the alpha chain cd8a, such as cd28 (ENSDARG00000069978) and an uncharacterized Ig-like protein (ENSDARG00000098787) related to CD7 antigen. In Cluster 2, expression of death-receptor-ligand pathway genes indicates activation of this pathway. Expression of these genes further shows that zebrafish presumably resting NK-like cells transcriptionally resemble effector CD8 T cells, as observed in mammals; these cells also expressed the interleukin 2 receptor beta, tnfsf14, and chemokines of the families ccr38 and ccr34. 
In addition, we detected differential expression of putative activating NK receptors\u2019 adaptors (ITAMs), Fc receptor gamma subunit (fcer1g), hematopoietic cell signal transducer (hcst), CD247 antigen like (cd247l), and multiple putative transcription factors. Finally, within the top differentially expressed genes of these NK-like cells, we found putative homologs of mammalian granzyme B, a family that is expanded in ray-finned fish genomes, and many uncharacterized putative immunoglobulin-like receptors and cytokines, such as immunoglobulin V-set domain-containing proteins or interleukin-8-like domain-containing chemokines. Other differentially expressed genes included Fc receptor gamma subunit\u2013like (fcer1gl); hck, a member of the Src family of tyrosine kinases mostly expressed by phagocytes in mammals and potentially implicated in signal transduction of Fc receptors and degranulation; an uncharacterized gene (zgc:158446); and id2 (a transcription factor interacting with spi1b). We next compared differentially expressed genes in each cell population to human transcriptomic data of homogeneous FACS-sorted immune cells. Differentially expressed genes in Cluster 1 were significantly enriched in human T-cell\u2013specific genes (P = 0.008; see Methods). Similarly, the comparison of differentially expressed genes in Cluster 2 with human gene expression data confirmed a significant enrichment in NK-specific genes (P = 0.009), thus supporting the conservation of a core transcriptional program between mammalian and zebrafish NK-like cells (see Methods). Finally, differentially expressed genes in Cluster 3 were weakly enriched in human myeloid-specific genes. We then examined duplicated genes: for example, 53% of prespeciation duplicated genes showed expression in lck+ cells, compared with 41% of post-speciation duplicated paralogs. As expected, prespeciation duplicated immune genes were more likely (94%) to functionally diverge compared with the more recent post-speciation paralogs (62%). Ray-finned fish\u2013specific duplicated genes with conserved expression patterns included, for instance, the NK receptors nitr, which, although expanded in zebrafish, have kept their cell-type specificity. 
In contrast, other fish-specific paralogs show distinct expression patterns, suggesting possible neofunctionalization events. The NK lysins provide an interesting example of such recent functional divergence. In our data, nkl.4 was expressed in both myeloid- and NK-like cells; however, nkl.3 was only expressed in myeloid-like cells, while nkl.2 expression was restricted to NK-like cells. Similarly, Fc receptor gamma subunit (fcer1g), which in mouse (Fcer1g) and human (FCER1G) is highly expressed in myeloid and NK cells, was expressed in myeloid- and NK-like cells, while the expression of its paralog Fc receptor gamma subunit\u2013like (fcer1gl) was restricted to the myeloid-like cells. Interestingly, genes more recently duplicated (ray-finned fish specific) show lower expression in our data set. Notably, differentially expressed genes were enriched in duplicated genes (P < 10\u22124), as observed for nitr and dicp among NK-like\u2013specific genes, Ig-like molecules, NK lysins, and the NK receptors, as is also the case in mammals. Moreover, as in human and mouse, zebrafish TM genes conserved in vertebrates (for which orthologs were detected) were more divergent in NK/NK-like cells than in T cells (P = 0.03). In contrast, cytoplasmic and nuclear T-cell\u2013specific genes displayed similar sequence identity compared with other genes. When compared at the sequence identity level, the conserved TM genes specifically expressed in either T or NK/NK-like cells from human, mouse, or zebrafish had lower sequence identity than other genes across vertebrates (Supplemental Fig. S2). We detected V(D)J recombination in 22 cells. Interestingly, a single TCR recombinant was found in each cell, which is consistent with allelic exclusion. Although V(D)J recombination was clearly correlated with T-cell identity, five cells with evidence of V(D)J recombination fall in Cluster 2, and three of them show clear expression of NK genes. 
It is tempting to speculate that these cells could be NKT cells. However, in mammals, the process of TCR rearrangement first initiates in uncommitted hematopoietic progenitors before NK/DC/B/T divergence. Therefore, incomplete rearrangements are also observed in subpopulations of non-T cells, such as NKs. Our analysis identified NK-specific transcription factors and multiple novel putative NK-specific receptors and chemokines. This suggests that rapid evolution of NK TM genes is key for their function in all vertebrates. As NK genes cannot undergo somatic rearrangement, we propose that this fast evolution reflects, at least partly, a need for NK cells to possess a diverse repertoire of species-specific germline-encoded receptors and associated proteins to perform their functions. In particular, both T and NK cells recognize the fast-evolving and highly polymorphic MHC molecules. While T cells do so by rearranging their TCR sequence, NK cells possess an expanded family of receptors. The fast evolution of these receptors may be the result of a need to adapt to MHC rapid evolutionary changes. Our observations also suggest a model of high gene turnover and faster evolution of immune TM/secreted genes but, at the same time, conservation of core cytoplasmic immune genes from zebrafish to mammalian species. Overall, our work expands the analysis of immune cell types and their evolution to lower vertebrates. To our knowledge, this is the first study to characterize T and NK cells at the whole-transcriptome level in a nonmammalian species and one of the first studies to analyze NK cells\u2019 gene expression at the single-cell level. For the definition of cell-type signatures, genes were ordered based on expression fold-change, and the top 100 genes unique for each cell type were selected as \u201csignature genes\u201d for downstream analysis. Results were robust to different cut-offs for the top N differentially expressed genes. 
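The signature-gene selection described above (rank genes per cell type by fold-change, keep the top N, and retain only those unique to a single cell type) might be sketched as follows; the function name and the toy fold-change table are illustrative, not the study's data.

```python
from collections import Counter

def signature_genes(fold_changes, top_n=100):
    """fold_changes: {cell_type: {gene: fold_change_vs_other_types}}.
    Returns {cell_type: genes from its top_n list unique to that type}."""
    # Rank each cell type's genes by decreasing fold-change and truncate.
    ranked = {ct: sorted(fc, key=fc.get, reverse=True)[:top_n]
              for ct, fc in fold_changes.items()}
    # A gene qualifies as a signature gene only if it appears in exactly
    # one cell type's top list.
    counts = Counter(g for genes in ranked.values() for g in genes)
    return {ct: [g for g in genes if counts[g] == 1]
            for ct, genes in ranked.items()}
```

Genes that rank highly in more than one cell type are discarded, so each signature is specific to a single population.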
Human and mouse dN/dS ratios of one-to-one orthologs between these two species were obtained from Ensembl version 82. The two protein groups, enriched in (1) TM and secreted proteins and (2) cytoplasmic and nuclear proteins, were defined based on the presence of predicted TM domains and/or a signal peptide. Statistical significances of differences in sequence identity and dN/dS were assessed using Wilcoxon rank-sum tests. Statistical significances of differences in the proportion of orthologs were assessed as follows: (1) in a specific species, by comparison against a null-model distribution generated from 10,000 random permutations of gene/cell-type specificity class pairs; and (2) globally across all species, using a paired Wilcoxon rank-sum test. Orthologs of mouse and human protein-coding genes and their sequence identities, as well as TM domains and signal-peptide predictions, were downloaded from BioMart/Ensembl Genes 82. Wild-type (Tubingen long fin) and transgenic Tg(lck:EGFP) zebrafish lines were maintained as previously described, in accordance with institutional guidelines. The spleens from two heterozygote Tg(lck:EGFP) adult fish from a different clutch and at different ages (3 and 10 mo) and one adult wild-type fish were dissected and carefully passed through a 40-\u03bcm cell strainer using the plunger of a 1-mL syringe, and cells were collected in cold 1\u00d7 PBS/5% FBS. The nontransgenic line was used to set up the gating and exclude autofluorescent cells. Propidium iodide (PI) staining was used to exclude dead cells. Individual cells were sorted using a Becton Dickinson Influx sorter with 488- and 561-nm lasers. 
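The permutation null model described in this paragraph (shuffling gene/cell-type specificity class pairs to test whether one class has an unusually low proportion of orthologs) can be sketched as below; all names are illustrative and the permutation count is reduced for speed.

```python
import random

def permutation_p_value(specificity, has_ortholog, target_class,
                        n_perm=10000, seed=0):
    """Empirical P-value that the ortholog fraction of `target_class`
    is as low as observed, under random gene/class pairings.

    specificity: cell-type specificity label per gene.
    has_ortholog: parallel list of booleans (ortholog detected or not).
    """
    rng = random.Random(seed)
    idx = [i for i, c in enumerate(specificity) if c == target_class]
    observed = sum(has_ortholog[i] for i in idx) / len(idx)
    labels = list(specificity)
    as_low = 0
    for _ in range(n_perm):
        rng.shuffle(labels)                      # break gene/class pairing
        jdx = [i for i, c in enumerate(labels) if c == target_class]
        frac = sum(has_ortholog[i] for i in jdx) / len(jdx)
        if frac <= observed:
            as_low += 1
    return (as_low + 1) / (n_perm + 1)           # add-one avoids P = 0
```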
The Smart-seq2 protocol was used for single-cell library preparation. Following Illumina HiSeq2000 sequencing (125-bp paired-end reads), single-cell RNA-seq reads were quality trimmed and cleaned from Nextera adaptor contaminant sequences using BBDuk (http://sourceforge.net/projects/bbmap) with parameters minlen=25 qtrim=rl trimq=10 ktrim=r k=25 mink=11 hdist=1 tbo. Next, gene expression levels were quantified as Ei,j = log2(TPMi,j + 1), where TPMi,j refers to transcripts per million for gene i in sample j, as calculated by RSEM 1.2.19. An average of 2.1 million paired-end reads was obtained per single cell, and on average, 1240 expressed genes per cell were detected. Cells having fewer than 500 detected genes or fewer than 10,000 reads mapped to transcripts were excluded from further analyses. In order to visualize cell heterogeneity at the transcriptomic level, we used classical MDS for dimensionality reduction. MDS attempts to preserve distances between points generated from any dissimilarity measure. PCCs between full transcriptional profiles were used to define cell-to-cell similarities, and 1 \u2212 PCC was then used as MDS's input dissimilarity measure. To correct for batch effects and remove unwanted variation between the first and second fish, we used the ComBat function from R Bioconductor's sva package. A similar low-dimensionality projection was obtained using zero-inflated factor analysis (ZIFA) (https://github.com/epierson9/ZIFA, accessed July 2016), which explicitly models gene drop-out events. As input to ZIFA, we used ComBat-adjusted log2 (TPMs + 1) values having a minimum variance across cells of one (6350 genes passed this filter). As the batch effect adjustment can produce negative expression values and ZIFA requires all values to be positive, we set all negative values to zero. ZIFA was run in the fast \u201cblocks\u201d mode with k = 5. To identify different cell populations, we first defined, based on the literature, minimal sets of marker genes for candidate T cells, NK-like cells, and myeloid cells (spi1b/pu.1). 
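The quantification and cell-filtering rules stated in this paragraph (expression as log2(TPM + 1); exclusion of cells with fewer than 500 detected genes or fewer than 10,000 mapped reads) can be sketched as a small helper; the matrix shapes are illustrative.

```python
import numpy as np

def filter_and_log(tpm, mapped_reads, min_genes=500, min_reads=10000):
    """tpm: (n_genes, n_cells) TPM matrix; mapped_reads: per-cell counts
    of reads mapped to transcripts.
    Returns (log-expression matrix of the kept cells, boolean keep mask)."""
    detected = (tpm > 0).sum(axis=0)            # genes detected per cell
    keep = (detected >= min_genes) & (np.asarray(mapped_reads) >= min_reads)
    expr = np.log2(tpm[:, keep] + 1.0)          # E[i, j] = log2(TPM + 1)
    return expr, keep
```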
We calculated a score for each set of marker genes [mean log2 (TPMs + 1)] and assigned identities to cells having a score higher than one in only one of the three sets, yielding 91 T cells, 44 NK-like cells, and five myeloid cells. These extended signatures were used to color all cells. Clusters were obtained by hierarchical clustering (hclust using the Ward.D2 method) applied on the first four principal coordinates generated by the MDS. The choice of the components was based on the eigenvalue decomposition of the MDS: eigenvalues decrease very smoothly after the fourth component, i.e., later components contribute less significantly to the overall variability. The number of clusters was determined by maximizing the mean silhouette coefficient. All four TCR loci and Rag-dependent V(D)J recombination are found in zebrafish; however, the loci are not fully annotated. To detect V(D)J recombination events, reads were therefore mapped to the TCR locus with a local aligner. Estimated gene counts obtained from RSEM were used as input for differential expression analysis with the SCDE R package v1.99, which explicitly models drop-out events. Heatmaps were generated with pheatmap (https://CRAN.R-project.org/package=pheatmap), using \u201ccorrelation\u201d distance with the \u201cWard.D2\u201d criterion to cluster rows. A list of paralogs in zebrafish was obtained from Ensembl Compara GeneTrees (version 82). We distinguished (1) genes that underwent \u201crecent\u201d duplication, with their most recent common ancestor mapped to the ray-finned fish lineage; and (2) 19,499 genes that underwent \u201cearly\u201d duplication, where their most recent common ancestor was mapped to bony vertebrates (Euteleostomi) or any of its parent taxa. Many of these genes suffered multiple duplication events both before and after the fish common ancestor. Therefore, to compare differences in expression between these two groups, we did not include the set of overlapping genes and obtained 3235 unique recently duplicated genes and 8609 unique early duplicated genes. From these, 1315 (41%) and 4569 (53%) were detected in our data (genes with >0 TPM in at least 1% of the cells). 
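The marker-score rule described at the start of this section (assign an identity when the mean log2(TPM + 1) over one signature set exceeds 1 in exactly one set) can be sketched as follows; the marker lists in the example are illustrative.

```python
import numpy as np

def assign_identities(log_expr, gene_index, signatures, threshold=1.0):
    """log_expr: (n_genes, n_cells) log2(TPM + 1); gene_index: gene -> row.
    signatures: dict identity -> list of marker genes.
    Returns one identity (or None, if zero or several hits) per cell."""
    scores = {ident: log_expr[[gene_index[g] for g in genes], :].mean(axis=0)
              for ident, genes in signatures.items()}
    assignments = []
    for j in range(log_expr.shape[1]):
        hits = [ident for ident, s in scores.items() if s[j] > threshold]
        assignments.append(hits[0] if len(hits) == 1 else None)  # unique hit
    return assignments
```

Cells scoring above threshold in several signature sets stay unassigned, which keeps the labeled populations clean.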
For the analysis of expression pattern divergence, we searched for pairs of paralogs where both genes show some specific expression pattern according to one of the following criteria: (1) within the top 100 differentially expressed genes in Cluster 1, Cluster 2, or Cluster 3; (2) within the top 100 differentially expressed genes in Cluster 2 and Cluster 3 versus Cluster 1, Cluster 1 and Cluster 3 versus Cluster 2, or Cluster 1 and Cluster 2 versus Cluster 3; or (3) expressed in all three clusters (in at least 10% of the cells of each cluster). In the latter case, we only considered pairs of paralogs where only one gene is expressed in the three clusters, and the second is either specifically expressed in or depleted from the major Clusters 1 or 2. Pairs of paralogs where both genes are expressed in all three clusters were not considered, since most of them are not immune-related genes, while Cluster 3 is too small to accurately assess enrichment/depletion. Next, we identified cases where both paralogs belong to the same expression pattern group (duplicate genes with conserved expression pattern) and cases where they differ. For recently duplicated genes, we found 23 pairs with distinct expression patterns and 14 pairs with the same expression patterns, while for early duplicated genes, we found 121 pairs with distinct patterns and eight pairs with the same expression patterns, as shown in Supplemental Table S3. For comparisons of differentially expressed genes, we chose the top 100 genes after filtering by Z-score >1 and sorting by fold-change, although results were robust to different cut-offs. To assess orthologs\u2019 conservation among nondifferentially expressed genes, we first excluded lowly expressed genes from the analysis. The reason for this is that we observed a bias of higher gene conservation among highly expressed genes compared with lowly expressed genes. 
After this filter, the conservation of differentially expressed genes could be compared with that of nondifferentially expressed genes. Orthologous genes of zebrafish in vertebrate species and their sequence identities were downloaded from BioMart/Ensembl Genes 82. For comparisons between differentially expressed genes in Cluster 1 (T cells), Cluster 2 (NK-like cells), and Cluster 3 (myeloid-like cells), we chose the top 100 differentially expressed genes after filtering. Supplemental Table S4 shows, for all analyzed genes, the gene sequence identity shared with orthologs in the vertebrate species analyzed. Raw and processed sequence data sets from this study have been submitted to ArrayExpress (https://www.ebi.ac.uk/arrayexpress/) under accession number E-MTAB-4617."} +{"text": "In the last two decades, numerous powered ankle-foot orthoses have been developed. Despite similar designs and control strategies being shared by some of these devices, their performance in terms of achieving a comparable goal varies. It has been shown that the effect of powered ankle-foot orthoses on healthy users is altered by some factors of the testing protocol. This paper provides an overview of the effect of powered walking on healthy and weakened users. It identifies a set of key factors influencing the performance of powered ankle-foot orthoses, and it presents the effects of these factors on healthy subjects, highlighting the similarities and differences of the results obtained in different works. Furthermore, the outcomes of studies performed on elderly and impaired subjects walking with powered ankle-foot orthoses are compared, to outline the effects of powered walking on these users. This article shows that several factors mutually influence the performance of powered ankle-foot orthoses on their users and, for this reason, the determination of their effects on the user is not straightforward. One of the key factors is the adaptation of users to the provided assistance. 
This factor is very important for the assessment of the effects of powered ankle-foot orthoses on users; however, it is not always reported by studies. Moreover, future works should report, together with the results, the list of influencing factors used in the protocol, to facilitate the comparison of the obtained results. This article also underlines the need for a standardized method to benchmark the actuators of powered ankle-foot orthoses, which would ease the comparison of results between the performed studies. In this paper, the lack of studies on elderly and impaired subjects is highlighted. The insufficiency of these studies makes it difficult to assess the effects of powered ankle-foot orthoses on these users. To summarize, this article provides a detailed overview of the work performed on powered ankle-foot orthoses, presenting and analyzing the results obtained, but also emphasizing topics on which more research is still required. Walking is the most common form of locomotion to move from one place to another. Despite its apparent simplicity, it is a complex movement that requires a precise coordination of multiple body segments and muscles. Although the human gait pattern appears to be energetically optimized, walking relies heavily on the functionalities delivered by the ankle joint. The capability of the ankle joint to deliver these functionalities can be reduced as a consequence of aging, pathologies, and injuries. It has been shown that the elderly walk slower, take shorter steps, and exhibit a smaller range of motion in the joints of the lower limbs. Furthermore, subjects with weakened capabilities of the ankle joint due to injuries or diseases, such as strokes, hemiplegia, and incomplete spinal cord injuries, show an altered gait pattern when compared to that of healthy subjects. The crucial role of the ankle joint in human walking has, in the last two decades, led to the development of numerous powered ankle-foot orthoses (PAFOs). 
Their aim is to improve the gait pattern of impaired users or decrease the biological effort of healthy subjects during walking. Some of the developed PAFOs share the same combination of type of actuator and type of controller, but their effects on users differ. The aim of this paper is to collect the results of studies assessing the assistance provided by PAFOs to healthy and impaired users while walking and to compare their outcomes, to give an overview of the effects of walking with a PAFO in both groups of subjects. For this purpose, only articles which analyzed the effects of PAFOs on users during walking experiments were included in this paper. On the other hand, articles that reported exclusively on the design of PAFOs, on the results of characterization tests which did not involve users, or in which the protocol of the experiments did not involve walking trials, were excluded. Furthermore, articles involving walking experiments in which the discussed results were only about the performance of the actuator of the PAFO and not its effect on the user were also omitted. Some studies that were performed with a soft exosuit providing ankle assistance to healthy and impaired users were included, due to the relevance of their findings with respect to the aspects discussed by this article. However, it is important to highlight that these devices also provide passive hip assistance along with active ankle assistance. Basic science PAFOs are PAFOs that have been developed to study human physiology and biomechanics by analyzing the user\u2019s response to external ankle actuation. A widely used control strategy for these devices is the proportional myoelectric controller, in which the action of the PAFO is proportional to the activity of a predefined muscle of the user. Galle et al. also assessed the effect of the actuation onset timing on users. Few studies have assessed the effects of the different onset timings on users walking uphill (Table\u00a0). Another parameter that alters the effect of the PAFO on users is the assistance magnitude. 
The influence of this parameter has been analyzed in a few studies in unloaded and loaded walking. Galle et al. assessed the effect of different assistance magnitudes on users. The results obtained in these studies show that the metabolic cost of walking depends on the delivered assistance magnitude. Providing a positive assistance magnitude to the ankle joint of a user can also help in reducing the metabolic cost of loaded walking, as shown in Table\u00a0. As previously introduced, the proportional myoelectric controller (PMc) and the phase-based controller (P-Bc) are the main types of controllers used in PAFOs. The benefits and drawbacks of the two controllers have already been discussed in several studies. The PMc has the advantage of being better synchronized with the user, resulting in a more physiological controller, because the user has direct control over the timing and amplitude of the actuation; the P-Bc has lower complexity and does not need sensors on the user\u2019s limbs, since they can, in general, all be placed on the device. The determination of the specific effects of the two controllers on the user is a very interesting point, since it could help define whether one of the two controllers is more suitable for a specific goal. A discussion about this topic is provided in the sequel. A highly debated point is the ease of adaptation of the subjects to the two controllers. On one hand, the PMc is considered to be more natural for subjects, making it easier for them to learn how to walk with the device. On the other hand, Cain et al. found no clear difference in the adaptation to the two controllers. A more systematic comparison of an adaptive-gain PMc (Ag-PMc) and a P-Bc was performed by Koller et al., who assessed the differences in the effects of the two controllers on users. The bigger reduction of the user\u2019s muscle activation found by Koller et al. suggests that the type of controller can influence the magnitude of the user\u2019s response to the assistance. In the studies performed by Mooney et al., a formula was proposed to estimate the metabolic advantage of the device, where p+ and pdis are the average positive and negative mechanical power provided by the exoskeleton, per leg in a stride, i.e. 
the positive and negative assistance magnitude; \u03b7 is the human muscle-tendon efficiency; and mi and \u03b2i are the added mass and the location factor related to the ith human segment, respectively. The muscle-tendon efficiency \u03b7 is found to be equal to 0.41; \u03b2i is equal to 14.8 W/kg, 5.6 W/kg, 5.6 W/kg, and 3.3 W/kg for the foot, shank, thigh, and waist, respectively. The normalized metabolic advantage MetAdvnorm of powered walking was determined from these quantities, in which positive values of MetAdvnorm represent a decrease in the metabolic cost of powered walking as compared to normal walking. In contrast to the formulation by Mooney et al. (Eqs. 1 and 2), the formulation by Galle et al. determines the metabolic advantage with respect to unpowered walking (MetAdvunpow) from the onset timing (Ton) and the positive assistance magnitude summed for the two legs (P+). Furthermore, the weight of the device is not considered, since it affects both the unpowered and powered conditions. The possible advantages of using a PAFO as a tool to assist or rehabilitate subjects with ankle deficiencies are well illustrated in the literature. Contrary to augmentation PAFOs, the main goal of assistive and rehabilitation PAFOs is to improve the altered gait pattern of users with weakened ankle capabilities. Although there is no agreement on what the metrics are for assessing the improvement, Ward et al. designated a set of parameters for this purpose. The main goal of assistive and rehabilitation PAFOs is to improve the altered gait pattern of their users, by correcting the ankle RoM and preventing the occurrence of drop foot in subjects with weakened ankle dorsiflexors. Another goal of these PAFOs is to improve the subjects\u2019 walking speed, which is usually reduced by the impairment. In the following section, the effects of assistive and rehabilitation PAFOs on these parameters are discussed. The results reported in Table\u00a0 show that different effects of the PAFO on the gait symmetry of the user have been found by different studies. 
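The symbol definitions given for the Mooney et al. formulation can be turned into a small calculation. The exact published equations are not reproduced in this text, so the form below is only an assumed sketch of the credit-minus-penalty structure: positive exoskeleton power credited at the muscle-tendon efficiency, minus the added-mass penalty (sum of mi times the corresponding location factor). The numbers are illustrative, not results from the cited studies.

```python
def metabolic_advantage(p_plus, eta=0.41, added_masses=()):
    """Assumed-form sketch of a per-leg metabolic advantage estimate (W).

    p_plus: average positive mechanical power delivered per leg (W).
    eta: muscle-tendon efficiency (0.41 in the text).
    added_masses: (mass_kg, beta_W_per_kg) pairs for each device segment,
    e.g. a hypothetical 0.3-kg foot-worn unit with beta = 14.8 W/kg.
    NOTE: the published formulas may weight these terms differently; this
    only illustrates the structure described in the text.
    """
    credit = p_plus / eta                                # replaced muscle work
    penalty = sum(m * beta for m, beta in added_masses)  # cost of added mass
    return credit - penalty
```

A positive result corresponds to a net decrease in the metabolic cost of powered walking; mass added at the foot is penalized most heavily, consistent with the location factors listed above.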
Another interesting result was reported by Awad et al. in their study of impaired users. Table\u00a0 also reports interesting results by Kim et al., who assessed the effects of the assistance on impaired subjects. Only a few studies performed with assistive PAFOs have assessed the effect of a powered push-off during level walking on the muscle effort and the metabolic cost of walking of elderly and impaired users, among them Sawicki et al. and Takahashi et al. The effect on the metabolic cost of a powered push-off in assistive PAFOs was assessed by Galle et al. and Norris et al. Takahashi et al. found no such reduction of the metabolic cost. In opposition to this, Awad et al. and Bae et al. reported a reduction of the metabolic cost with the provided assistance. Some common trends of the effects of PAFOs on healthy and weakened users have been identified in the previous sections. In addition to them, some divergences can be noticed in the results presented by different studies. A discussion about these findings is addressed in this section. As previously introduced, the assessment of the adaptation of the user to powered assistance is important to compare the effects of the PAFO in different studies. This is due to the changes in the kinematics, muscle activation, and metabolic cost of walking between the adaptation and the steady-state periods. Some differences have been noted in the time to achieve a steady state in works with similar protocols (Table\u00a0). This suggests that the adaptation time is influenced by the specific assistance parameters. The time needed to achieve a steady state during powered walking has not been assessed in elderly and impaired users. Assessing whether the subjects achieved a steady state is important in assistive and rehabilitation PAFOs to evaluate their effects. 
For example, assessing the achievement of a steady state at the end of each training session with rehabilitation PAFOs could help distinguish whether the changes in the gait pattern of the subject between sessions are an effect of the robotic training or of the adaptation of the user to the PAFO. As already discussed in the previous sections, it is not easy to define a trend for the influence of the onset timing and the assistance magnitude on the reduction of the soleus activation and the metabolic cost of walking. One of the reasons for this is the fact that the comparison between studies performed on different actuation setups is not always straightforward. As presented above, two formulae that relate the metabolic advantage of the PAFO to the assistance parameters have been proposed by Mooney et al. (Eqs. 1 and 2) and Galle et al. The results reported in the previous section highlight the advantages of using a PAFO to improve the ankle kinematics and walking speed of its users and to rehabilitate impaired subjects. However, while numerous studies have analyzed the effects of the assistance parameters on the biological effort of healthy young users, few have done so for elderly and impaired subjects. Some of the presented studies on PAFOs assess the activity of lower limb muscles that are not directly related to ankle movements, such as the vastus lateralis, vastus medialis, biceps femoris, rectus femoris, medial hamstring, lateral hamstring, and gluteus maximus. Galle et al. did not find a reduced activity of these more proximal muscles. Some of the works that reported a reduced activity of more proximal muscles also measured the total ankle work during powered and unpowered walking. A common finding of these works is that the subjects\u2019 total ankle work during powered walking was higher than it was during unpowered walking. This effect would suggest that in these studies, the enhancement of human capabilities was achieved by the PAFO augmenting the action of the human ankle, instead of replacing part of it. 
An increase in the total ankle work during powered walking with a PAFO was also found in other works. From these results, it seems that there are two possible effects of augmentation PAFOs providing powered push-off to the user: the PAFO could replace part of the biological ankle work, or it could augment it and assist not only the ankle joint, but also the hip joint. However, due to the small sample of works that analyzed the effects of the PAFOs on more proximal joints, it is difficult to explain which are the elements that make the PAFO augment rather than replace the biological ankle work. A very important aspect that should be studied in PAFOs is the interaction of the device with the user\u2019s body. This interaction can be divided into two levels: a physical and a neural one. The physical interaction plays a very important role in accomplishing the successful transmission of torques and forces from the device to the user. An example of the importance of assessing the physical human\u2013robot interaction when analyzing the effects of the PAFO on the user is given by Van Dijk et al. Another important aspect that should be considered in the development of PAFOs is how the nervous system responds to the provided assistance, i.e., the neural interaction. Kao et al. showed that the nervous system adapts the motor commands in response to the assistance provided by the device. A similar consideration was made by Kinnaird et al. PAFOs have been shown to have great potential to enhance the capabilities of healthy users and assist or rehabilitate the ankle joints of weakened ones. However, more research is necessary to improve the understanding of the impact of these devices on the user. From the results presented in this paper, it seems that the adaptation time of healthy users is influenced by the assistance parameters. The determination of a relationship between these variables is complicated by the lack of information regarding the onset timing and assistance magnitude in most of the studies. 
More research should be performed to determine the influence of the assistance parameters on the adaptation time, both on healthy and weakened users.Future studies should be conducted with more combinations of onset timing and assistance magnitude to assess their interplay in the determination of the metabolic advantage of the PAFO.More studies are needed on elderly and impaired subjects. The small number of studies performed on these subjects makes it difficult to accurately compare the results obtained by different studies. These studies should focus on the influence of the assistance parameters on the effect of the PAFO on the user. Furthermore, the time needed by these subjects to reach a steady state during powered walking should be assessed.An interesting topic to be investigated is also the influence of the type of controller on the response of weakened subjects to the assistance provided by the PAFO. The determination of distinctive effects of the different controllers would define whether a specific controller is more suitable for a certain group of subjects or for a precise objective.Another open question to be addressed is how long the subjects can retain the steady-state walking pattern that has been learned during powered assistance. This would be particularly interesting for rehabilitation PAFOs since it could determine the frequency of robotic rehabilitation sessions.Additionally, the effect of powered walking on more proximal joints should be studied to explain the parameters determining whether a PAFO will augment or replace the biological ankle work.Furthermore, another aspect that should be better studied is the physical interaction of the device with the user, which is of great importance in the understanding of the effect of a PAFO on the user.The performance of PAFOs in terms of healthy and weakened users varies between studies with similar protocols and goals. 
The effect of powered walking on users is influenced by a set of key factors, which have been identified in this article. It has been shown that these factors mutually impact the performance of PAFOs on users; thus, the influence of each one of them cannot be considered independently from the others. In this paper, it has been highlighted that the comparison of the results of different studies is not always straightforward. This is because the behavior of a PAFO is greatly influenced by the dynamics of its specific actuation setup, and comparing results obtained from different actuation setups is difficult. This comparison would be facilitated by the development of a standard methodology to benchmark actuators, which is, however, still an open issue [77]. The results presented in this paper lead to the conclusion that more experiments need to be performed on elderly and impaired subjects. In the future, studies should specify the parameters used in the protocol and report, together with the results, whether the subjects had reached a steady state in the experiment. This is particularly relevant for studies performed on elderly and impaired users. Assessing the influence of these parameters on these users would simplify the analysis of the effects of powered walking."} +{"text": "The M/T method and T method are adopted to work out the rotational speed. Experiments were conducted on a laboratory-scale test rig to compare the proposed method with the auto-correlation method. The largest relative errors of the auto-correlation method with sampling rates of 2 ksps and 5 ksps are 3.2% and 1.3%, respectively. The relative errors using the digital approaches are both within \u00b14\u2030. The linearity of the digital approach combined with the M/T method or T method is also superior to that of the auto-correlation method. 
The performance of the standard deviations and response speed was also compared and analyzed to show the advantages of the digital approach. In industrial production processes, rotational speed is a key parameter for equipment condition monitoring and fault diagnosis. To achieve rotational speed measurement of rotational equipment under conditions of high temperature and heavy dust, this article proposes a digital approach using an electrostatic sensor. The proposed method utilizes a strip of a predetermined material stuck on the rotational shaft, which will accumulate a charge because of its relative motion with the air. Then an electrostatic sensor mounted near the strip is employed to obtain the fluctuating signal related to the rotation of the charged strip. Via a signal conversion circuit, a square wave, the frequency of which equals that of the rotating shaft, can be obtained. Having the square wave, the M/T method and T method are adopted to work out the rotational speed. In industrial applications, rotational speed measurement is a crucial part of condition monitoring, speed control, and protective supervision of rotation equipment, such as generators, steam turbines, and gas turbines. Various kinds of tachometers based on different mechanisms, such as optical, electrical, and magnetic induction, have been developed and widely used to measure the rotational speed of target objects. W.H. Yeh presented a high-resolution optical shaft encoder to monitor the rotation behavior of a motor. In order to overcome harsh conditions, such as high temperature and heavy dust environments, the electrostatic method has been used to realize rotational speed measurement. The electrostatic sensor is adaptable for speed measurement in various industrial conditions for the advantages of contactless measurement, low cost, simple structure, and easy installation and maintenance. Recently, Y. Yan and L.J. 
Wang utilized electrostatic sensors and a correlation algorithm to calculate the period or elapsed time and successfully obtained the rotational speed of a rotational shaft [8,9]. The principle of rotational speed measurement using an electrostatic sensor is shown in the figure. By now, electrostatic sensors in conjunction with correlation methods, including the cross-correlation method using dual electrostatic sensors and the auto-correlation method using a single electrostatic sensor, have been used to determine rotational speed [8,9,13]. In the time domain, the cross-correlation function between real power signals x(t) and y(t) is defined as Rxy(\u03c4) = lim(T\u2192\u221e) (1/T) \u222b x(t) y(t + \u03c4) dt. The rotational speed vc (rpm) can be calculated from the sensor angle spacing and the time delay \u03c4 at which Rxy(\u03c4) peaks. If x(t) and y(t) are the same signal obtained by one electrode, Equation (1) turns out to be the auto-correlation function. With respect to the auto-correlation method, only one channel of the electrostatic signal is needed. The time delay \u03c4 between signal x(t) and signal y(t) is then the rotational period T (s), and the rotational speed va (rpm) can be obtained as va = 60/T. Obviously, the correlation method needs to locate the coordinate of the first dominant peak in the waveform of the correlation function, which is influenced by the sampling rate to a great extent. At the same time, the waveforms collected by inducing the signal from a cylindrical dielectric sleeve contain complex information and a faint sign of the periodical component. 
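The peak-locating step of the auto-correlation method described above can be sketched in a few lines of numpy; this is a minimal illustration, not the authors' implementation, and note how the recovered period is quantized to multiples of the sampling interval, which is exactly the discretization error discussed below.

```python
import numpy as np

def autocorr_period(x, fs):
    """Estimate the rotational period (s) from the first dominant peak
    of the auto-correlation function after lag 0."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags
    # Step past the lobe around lag 0: find the first positive-to-negative
    # sign change, then take the largest peak after it.
    crossings = np.where(np.diff(np.sign(acf)) < 0)[0]
    start = crossings[0] + 1 if crossings.size else 1
    peak_lag = start + int(np.argmax(acf[start:]))
    return peak_lag / fs  # resolution limited to multiples of 1/fs

# 25 Hz rotation (1500 rpm) sampled at 2 ksps
fs = 2000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
period = autocorr_period(np.sin(2.0 * np.pi * 25.0 * t), fs)
rpm = 60.0 / period  # close to 1500, up to the sampling quantization
```

Because the peak lag is an integer number of samples, a higher sampling rate directly improves the attainable resolution, which matches the paper's observation that the correlation method's accuracy depends on the sampling rate.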
Although the correlation calculation of the waveform has good performance and successfully extracts the elapsed time, the computational accuracy of the correlation method is obviously affected by the sampling rate and signal noise. For the sake of improving the performance of rotational speed measurement via the electrostatic method, this paper proposes an approach to generate a square wave from an electrostatic sensor in order to obtain the rotational speed via digital methods, thus eliminating the influence of the sampling rate and signal noise, and also simplifying the system complexity. In the following article, \u201csquare wave\u201d refers to the output waveform from the comparison circuit, which generates a pulse every rotational period. Implementation of a rotational speed measurement system based on this method is presented. Compared to the rotational speed measurement method using an electrostatic sensor in conjunction with correlation, this design leaves out the A/D converter and simplifies the computation code, which is more adaptable for implementation in a microprocessor system. Inspired by the photoelectric method, which fixes a strip of a reflective element, this experiment uses a strip of polytetrafluoroethylene (PTFE) stuck to the rotational shaft. The measurement principle is shown in the figure. Three digital methods are available for converting the pulse train into a speed: (1) elapsed-time measurement, commonly termed the T method, which calculates the reciprocal of the duration between consecutive pulses to obtain the frequency; (2) pulse counting, commonly termed the M method, which counts the number of pulses generated within a prescribed period of time; and (3) constant elapsed time, commonly termed the M/T method, which is a combination of pulse counting and measuring elapsed time. In the simulation, the induced waveform can be regarded as the circular convolution of the charge distribution q(\u03b8) and the function rf(\u03b8); thus rf(\u03b8) can be regarded as a filter function. The low-pass filter property of rf(\u03b8) is influenced by the rotational speed (rf(\u03b80 + wt)). 
The cutoff frequency of rf(\u03b8) increases with the speed. Through this analysis, the electrostatic electrode in this application can be regarded as a low-pass filter which adaptively adjusts its cutoff frequency. Thus, the waveform of the signal mainly contains a low-frequency component if the electromagnetic interference is well shielded, which helps to explain the signal obtained in the experimental part. The sensor board is shown in the figure. The feedback of the amplifier consists of 10^8 \u03a9 resistors connected in series, which determines the transimpedance gain. In actual application, a feedback capacitor is needed to guarantee the stability of the circuit by inhibiting high-frequency noise. The relationship between the output voltage and the input current from the electrode can be calculated according to Equation (17); thus, the sensitivity of the circuit is 0.2 V/nA (i.e., a transimpedance of 2 \u00d7 10^8 V/A). The purpose of the balance resistor and capacitor is to make the impedance of the two inputs equal, so that the bias current of the amplifier generates no additional offset voltage at the output. When the electric field near the electrode varies with the rotation of the charged strip, a small current signal is generated and transformed into a voltage signal via the feedback resistance of the amplifier. The voltage output of the sensor board is collected by the conditioning unit via a shielded cable to avoid electromagnetic interference. The conditioning unit used in our experiment is shown in the figure. The amplifying circuit uses the same amplifier chip as the sensor board to meet the performance requirements. The voltage gain of the amplifier can be adjusted by a slide rheostat. Then a third-order Butterworth low-pass filter with Sallen-Key topology is used for filtering and inhibiting noise. The passband frequency is 400 Hz and the stopband frequency is 2.4 kHz. A smooth waveform improves the stability of the square wave. 
Finally, a hysteresis comparator is utilized to transform the waveform into a square wave, which is connected to the DSP board for speed calculation. A laboratory-scale test rig is designed and built for rotational speed measurement. A photoelectric reflection digital tachometer with an accuracy of \u00b10.05% of the reading plus 1 rpm was used to provide a reference speed in our experiment. The ambient temperature was controlled between 20 \u00b0C and 24 \u00b0C and the relative humidity was kept between 55% and 65%. The square wave was connected to the external interrupt pin so that the DSP could respond to it immediately. The code realizing the T method and the M/T method was programmed and written into the DSP board separately to test the measurement performance. The DSP transmitted the measurement results of the T method and the M/T method to a computer via RS232 serial communication. In the experiment, the prescribed time of the M/T method was set to 1 s. Experiments were conducted on the rig using the same dimension parameters as the simulation. The rotational speed of the shaft was adjusted from 300 rpm to 3200 rpm with an increment of 100 rpm via the VFD. To make a comparison between the digital approach and the correlation method, each point was measured five times. Meanwhile, five values of the T method and M/T method transmitted from the DSP for each point were saved for analysis. Seen from the principle of the correlation method, it can be found that the auto-correlation method can be regarded as a particular case of the cross-correlation method, which leaves out the influence of the installation angle error, the distance differences of the two electrodes to the shaft, and the differences between the two channels\u2019 circuits. These factors make the accuracy of the cross-correlation method not as good as that of the auto-correlation method. Meanwhile, the cross-correlation method needs two channels of circuits, which is not consistent with the setting in this experiment. 
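For reference, the T and M/T calculations performed on the DSP can be sketched in software; this is a minimal illustration with hypothetical pulse timestamps (one pulse per revolution), not the DSP firmware itself.

```python
import bisect

def t_method(pulse_times):
    """T method: reciprocal of the duration between the last two pulses
    (one pulse per revolution), returned in rpm."""
    return 60.0 / (pulse_times[-1] - pulse_times[-2])

def mt_method(pulse_times, prescribed_time):
    """M/T method: count whole pulse intervals from the first pulse, then
    divide by the exactly measured elapsed time (>= prescribed_time)."""
    start = pulse_times[0]
    idx = bisect.bisect_left(pulse_times, start + prescribed_time)
    idx = min(idx, len(pulse_times) - 1)  # clamp to the available pulses
    return 60.0 * idx / (pulse_times[idx] - start)

# Steady 1500 rpm rotation produces a 25 Hz pulse train
pulses = [i / 25.0 for i in range(30)]
speed_t = t_method(pulses)          # one result per revolution
speed_mt = mt_method(pulses, 1.0)   # averaged over a 1 s window
```

The sketch mirrors the trade-off discussed in the paper: the T method updates every revolution, while the M/T method extends the prescribed window to the next pulse edge so the elapsed time spans a whole number of revolutions.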
Thus, the experiment only makes a comparison between the digital approaches and the auto-correlation method. The proposed approach utilizes the electrostatic sensor to induce the charge on the strip of PTFE, which yields a strong periodic signal. The mean values of the measurement results for the T method and the M/T method are plotted in the figure; the linearity errors of the M/T method and the T method are about 0.81\u2030 and 1.31\u2030, respectively. The measurement results are highly consistent with those of the photoelectric tachometer. Meanwhile, the differences between the T method and the M/T method are hardly discernible by eye. As seen from the principle, the proposed digital method needs no sampling via an analog-digital converter, while the sampling rate is an important factor that determines the accuracy of the method based on the correlation algorithm. In order to make a comparison between these two methods, the analog signals are also collected at different sampling rates. The auto-correlation functions are calculated using the filtered analog signal, from which the period T of the rotation can be obtained. It can be observed that the waveform near the first peak after 0 s is very smooth, which helps in confirming an accurate and stable value of the period. However, due to the discretization of the data series, the obtained period T will, with significant probability, differ from the actual rotation period by up to one sampling interval, even though the auto-correlation method selects the sampled time point nearest to the ideal one. Moreover, when the signal contains an obvious level of noise or a weak periodicity, the waveform near the peak of the auto-correlation function will fluctuate, which impairs the result\u2019s accuracy. In order to show the accuracy of this digital method, signals are collected at sampling rates of 2 ksps and 5 ksps and analyzed using the auto-correlation method, for comparison with the M/T method and the T method. 
Relative errors of the two methods are plotted and compared in the figure. The M/T and T methods both have significantly small standard deviations. In order to research the robustness of the proposed method, the standard deviations of the M/T and T methods and of the auto-correlation method at each measurement point are listed. The standard deviations of the M/T method are much smaller than those of the T method. As seen from the principle, the M/T method can be regarded as a mean value of several consecutive T-method measurements. Due to the high response speed of the T method, it is more sensitive to variations of the rotation, which makes its standard deviations greater than those of the M/T method. The minor standard deviations of the M/T and T methods mainly arise from slight fluctuations of the actual rotation state, which are probably related to the unsteady output rotational speed of the motor and the slippage of the belt on the sheave. With respect to the digital approaches, both the M/T method and the T method show very little spread in the measured speed. The response time of each approach can be determined from its principle and data processing procedure. Regarding the M/T method, the response time is decided by adding extra time to the prescribed time. Regarding the analog method, usually the sampling length should be predetermined; thus, the time needed to acquire one measurement result is nearly fixed. Even if the auto-correlation method self-adaptively adjusts the sampling length according to the most recently obtained rotational speed, its response speed is still not as fast as that of the T method, for the reason that the auto-correlation method needs at least two periods of rotation to achieve the correlation calculation. Moreover, data collection and processing also consume a certain amount of time. 
By contrast, the T method can output a result every rotational period, for the reason that the counter in the DSP can work independently from the code and the DSP only needs to perform a simple computation of the counter number and serial communication. Experiments were conducted to test the capability of the T method to measure variable speed. The motor was adjusted by the VFD output frequency to work in three stages: acceleration by increasing the frequency from 0 Hz to 20 Hz over 5 s, 4 s of constant speed, and deceleration by decreasing the frequency from 20 Hz to 0 Hz over 3 s. The results of the T method successfully monitor the acceleration and deceleration processes. The rising and declining curves are not perfectly straight lines because the acceleration of the shaft is not absolutely constant. It can be observed that at a constant frequency, the measurement results are of good stability. The work in this paper is dedicated to finding a more effective approach to cooperate with electrostatic sensors to improve the performance of rotational speed measurement. The proposed approach utilized the electrostatic sensor to induce a charge on a strip of PTFE, which obtained a strong periodic signal. Simulation results also described the expected waveform when a strip of charges rotates near an electrode. By adopting a suitable signal conditioning unit, a square wave, the frequency of which was equal to that of the rotation, has been obtained. The M/T method and T method were then adopted to calculate the speed in a DSP system, and experiments were conducted to compare the digital approaches with the auto-correlation method. Through experimental analysis, several conclusions can be summarized as follows. Having the square wave proportional to the rotational speed, the M/T method and T method both have obviously higher accuracy. 
1. Accuracy: The linearity errors of the M/T method and T method are about 0.81\u2030 and 1.31\u2030, respectively, which are much better than those of the auto-correlation method sampled at 2 ksps (3.17%) or 5 ksps (1.33%). Due to the signal discretization, the auto-correlation method can only obtain discrete period values; improving the sampling rate increases the calculation quantity and storage space, and the hardware cost also rises correspondingly. 2. Robustness: Compared with the auto-correlation method, the M/T method and T method obtained particularly small standard deviations among all the measurement points, both within 1 rpm. The M/T method acquired more stable results than the T method due to the differences of their respective principles. The auto-correlation method has a stable performance at some measurement points but also shows some obvious standard deviations, which result from the signal discretization. 3. Response speed: The T method has the fastest response speed, while the correlation method and M/T method have relatively slower response speeds. Experiments also show that the T method is capable of detecting variable speed. The proposed approach combined with the M/T method can be adopted for constant speed measurement or a mean value of rotational speed during a certain time, and the T method can be employed for dynamic measurement of variable rotational speed. In actual programming, the M/T method and T method can be written into one piece of DSP or FPGA code simultaneously. An FPGA is more recommended to deal with the square wave for its parallel processing and high code execution efficiency. There are several factors limiting the application of this method at low speed: the amount of charge on the strip is unstable and the response time is poor at low speed. 
Further studies can be conducted to deal with these issues by adopting an electret material, adding adaptive numbers of strips and electrodes, and improving circuit properties."} +{"text": "Medication errors are commonly reported in the pediatric population. While evidence supports the use of e-prescribing to prevent certain errors, prescribing with an electronic health record (EHR) system is not devoid of errors. Furthermore, the majority of EHRs are not equipped with functionalities addressing pediatric needs. This study analyzes three unique EHRs in three pediatric clinics. It describes the functionality of each system and identifies errors found in e-prescribed prescriptions. Finally, the study estimates the proportion of e-prescribing errors that could have been avoided if those EHRs had met requirements set by the American Academy of Pediatrics (AAP). The numbers of prescriptions reviewed for Clinics 1, 2, and 3, respectively, were 477, 408, and 633, with total error rates of 13.2%, 8.8%, and 6.6%. The clinic EHRs included 21%, 26%, and 47% of the AAP pediatric requirements for safe and effective e-prescribing for children. If all AAP elements had been included in the EHRs, over 83% of errors in the examined e-prescriptions could have been prevented. This study demonstrates that EHR systems used by many pediatric clinic practices do not meet the standard set forth by the AAP. To ensure our most vulnerable population is better protected, it is imperative that medical technology tools adequately consider pediatric needs during development and that this is reflected in selected EHR systems. The Joint Commission reports that errors linked to medications are believed to be the most common medical-related errors. Since 2008, there has been significant growth in the adoption of EHRs and e-prescribing as tools to increase the quality and safety of health care. 
While e-prescribing may reduce pediatric medication errors, pediatric practices have historically been slow adopters of e-prescribing, with few of those adopters actually using a pediatric-supportive system. In support of the American Academy of Pediatrics (AAP) recommendation that providers working with children adopt e-prescribing systems with pediatric functionality, this study was undertaken. Prescription data were compiled in a 2016 database for analysis. All results were de-identified with no link to protected health information (PHI) and the study was conducted in accordance with the Declaration of Helsinki. The protocol was approved by the University of Oklahoma Health Sciences Center Institutional Review Board (#3463). A cross-sectional, descriptive study design was carried out with three clinics representing three unique EHR systems and practice settings. Including three unique systems provides both depth and breadth as we define the systems\u2019 characteristics and capabilities. To be included, clinics had to provide care for children, utilize an EHR, and transmit electronic prescriptions to community pharmacies. The sample size was chosen to estimate a 10% error rate (as observed in prior research) with 5% significance. Three clinics within the same community were included in the study. Each clinic represented an office within a larger system with a unique EHR. Clinic 1 is a pediatric clinic within a Federally Qualified Health Center (FQHC), the second largest FQHC in the state. Clinic 1 is not a teaching facility and was staffed by three pediatricians. All prescriptions written over a 2-month period were included in the review. Clinics 2 and 3 are both part of university-based health systems representing different state universities, but similar in that they are teaching facilities staffed by trainees/medical residents, nurse practitioners or physician assistants, and faculty physicians. 
Clinic 2 is a family medicine clinic (only prescriptions for pediatric patients were included in the study) and Clinic 3 represents a pediatric clinic. To obtain the needed pediatric prescriptions to review for Clinic 2, two months of review were required, whereas Clinic 3 required the review of only one month of prescription data. The EHRs analyzed include Success EHS (Clinic 1), Centricity\u2122 EMR (Clinic 2), and Epic (Clinic 3). To accomplish study aim 1, each of the three clinic EHR systems was assessed for its compliance with the AAP pediatric requirements for safe and effective e-prescribing in the categories of patient information, medication information, cognitive support, pharmacy information, and data transmission. For study aim 2, over 400 new prescriptions from each practice setting were used to identify and classify e-prescribing errors by reviewing the name of the medication, patient age, patient weight, medication directions/dose, quantity, and indication. The method for prescription review has been previously described. Finally, the resulting medication errors were reviewed to determine whether they could have been prevented if the EHR system had met AAP recommendations. As with study aim 2, two expert panel members independently completed the review, with a third adjudicating any differences. All three clinics provided a list of de-identified, retrospective prescriptions that represented either one or two months of prescription data to meet or exceed the study-determined 400 prescriptions. The numbers of prescriptions reviewed for Clinics 1, 2, and 3, respectively, were 477, 408, and 633, with total error rates of 13.2%, 8.8%, and 6.6%. Errors were tabulated by type. All three EHR systems were assessed for compliance with the AAP requirements for safe and effective e-prescribing (Table 1). Compliance with the AAP EHR recommendations could have prevented over 83% (range: 83.3\u201388.9%) of the medication errors at the clinics. 
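The study-determined threshold of over 400 prescriptions per clinic is consistent with the standard sample-size formula for estimating a proportion. A minimal sketch follows; the 10% expected error rate is from the source, while the 3% margin of error is our assumption (the source truncates that detail):

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Minimum n to estimate a proportion p within +/- margin
    at roughly 95% confidence: n = z^2 * p * (1 - p) / margin^2."""
    return math.ceil(z * z * p * (1.0 - p) / (margin * margin))

# 10% expected error rate, assumed 3% margin of error
n = sample_size_for_proportion(0.10, 0.03)  # 385, consistent with reviewing 400+
```

Rounding the 385 up to a round figure of 400 prescriptions per clinic gives a small buffer against exclusions.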
Children have unique needs compared to their adult counterparts when it comes to the delivery of safe and effective medical care. Their unique needs must be reflected in the EHR systems used to create and transmit e-prescriptions. This study highlights the problem of pediatric prescribing errors in the outpatient setting found when using an EHR and identifies errors that could have been avoided if an EHR had appropriate pediatric-focused functionality. In 2013, the AAP published their policy titled \u201cElectronic prescribing in pediatrics: toward safer and more effective medication management\u201d. To further assist in evaluating an EHR, the AAP has created an EHR shopper\u2019s guide for clinicians caring for children. Even an AAP-compliant EHR would have failed to avoid up to 17% of the errors identified in this study. A solution to some of these problems could involve EHR customization. The use of indication-specific dosing and the inclusion of weight-based dosing recommendations and calculations would resolve the majority of dosing errors. The use of a customized, pediatric-clinic-specific medication list within the EHR could additionally minimize errors with dosage form selection or dosage-form-specific dosing recommendations. For example, the albuterol dosage form error identified earlier would have been avoided if the prescriber had had a customized pediatric list that included the commercially available albuterol 0.083% (2.5 mg/3 mL) vials. A custom medication list would also further promote proper medication selection. On the other hand, some errors would not have been avoided even with full AAP compliance and custom lists. One error, for example, revealed an issue with the dosing calculator for certain ferrous sulfate products. In a 220 mg/5 mL oral solution formulation of ferrous sulfate, 44 mg/5 mL is elemental iron. 
Dosing should be based on this elemental (44 mg/5 mL) component; however, the dose was calculated using 220 mg/5 mL, leading to an 80% underdose. It is clear that EHR systems are further along in meeting AAP requirements related to patient information and medication information, while less equipped to meet cognitive support, pharmacy information, or data transmission needs. Limitations exist with this novel review of EHR shortcomings related to pediatric use. This study represents a retrospective, descriptive study that identifies and demonstrates a need for EHR systems to address pediatric needs. However, a larger, more rigorous assessment would be beneficial to help develop a more robust, statistically driven plan. In addition, each of the three clinics reviewed utilized a different EHR system, so comparing error rates and types across different user groups of the same system may be enlightening and could reveal other issues, including education differences with respect to system use. Finally, seasonal variation is not accounted for with the snapshot of prescriptions reviewed at the clinics. With only a small percentage of AAP criteria being met by each EHR used in the clinic setting today, there is much work to do in providing technology tools designed with the pediatric patient in mind. EHR systems should meet a minimum standard for pediatric support without requiring institution-specific customization in order to provide appropriate safety checks and balances. Transmitting the patient\u2019s weight to the pharmacy to allow dose checking, and providing mathematical conversion of fluid doses from mg/kg to mg and then to milliliters per dose, should be standard for all systems. Lastly, more research is needed in the area of pediatric medication safety related to technology in the outpatient environment. A growing volume of data exists looking into EHR safety related to children in the hospital setting, but little data is available examining EHRs in the outpatient pediatric setting. 
Some error types are consistent between the inpatient and outpatient settings, such as improper dose. However, unique errors in the hospital setting, such as wrong rate or monitoring errors, are not applicable to our outpatient-focused study. This study demonstrates that EHR systems used by many pediatric clinic practices do not meet the standards set forth by the AAP. To ensure our most vulnerable population is better protected, it is imperative that medical technology tools adequately consider pediatric needs during development and that this be reflected in selected EHR systems."} +{"text": "Administering anti-vascular endothelial growth factor (anti-VEGF) by intraocular injection has been shown to have a safe systemic profile. Nevertheless, incidents of acute kidney injury following anti-VEGF injection have been reported. We assessed the long-term effect of multiple intravitreal anti-VEGF injections on measures of renal function in patients with diabetes, including the rate of change of estimated glomerular filtration rate (eGFR) and urine albumin-to-creatinine ratio (ACR). A retrospective review of patients receiving diabetic macular oedema (DMO) treatment was undertaken. Serum creatinine, ACR, number of intravitreal anti-VEGF injections and clinical characteristics were collected from electronic healthcare records (EHR). A coefficient of eGFR and ACR change with time was calculated over a mean duration of 2.6\u2009years. Regression modelling was used to assess variation in the number of anti-VEGF injections and change in eGFR and ACR. No association with the rate of eGFR decline (p\u2009=\u20090.22) or ACR change over time was detected, following adjustment for hypertension, cerebrovascular disease, T2DM, and medications taken. The EHR of 85 patients with DMO were reviewed. On average, 26.8 intravitreal anti-VEGF injections were given per patient over a mean duration of 31\u2009months. 
No association between an increasing number of anti-VEGF injections and the rate of eGFR decline was detected (beta\u2009=\u20090.04, 95% confidence intervals [CI]: \u2212\u20090.02, 0.09). Our data suggests that regular long-term intravitreal VEGF inhibition does not significantly alter the rate of change in eGFR and/or ACR with an increasing number of treatment injections. Vascular endothelial growth factor (VEGF) inhibitors have transformed the therapeutic management of several retinal ophthalmic conditions. By improving visual acuity, they have surpassed the ability of laser photocoagulation to limit visual deterioration. In addition to its role in the eye, VEGF plays a crucial part in maintaining normal renal function. VEGF released from podocytes interacts with VEGF receptor 2 on glomerular capillaries and promotes the integrity of endothelial fenestrations and the resultant glomerular barrier function. Loss of VEGF signalling can therefore compromise the glomerular filtration barrier. Sustained hyperglycaemia secondary to diabetes mellitus (DM) has been shown to activate abnormal metabolic pathways that trigger a complex cascade of inflammatory and vasogenic responses in the eye, in which VEGF is a key mediator. Use of VEGF inhibition therapy as an ophthalmic therapeutic involves local administration into the vitreous humour by intra-ocular injection, with the dosage used approximately 400 times lower than that used in oncology [11]. Intravitreal ranibizumab (IVR) exists as a human monoclonal Fab antibody fragment with a molecular weight of 48\u2009kDa and binds to all isoforms of VEGF-A. A pooled analysis of 751 population-based studies reported a global increase in the number of adults with DM from 108 million in 1980 to 422 million in 2014. This was a retrospective, cohort, observational study using electronic healthcare records to access information on patients with DMO receiving intravitreal anti-VEGF treatment in the Belfast Health and Social Care Trust. 
This study received approval from the Office for Research Ethics Committee Northern Ireland (MREC Reference: 14/NI/1132). Serum creatinine measurements (\u03bcmol/L) and ACR (mg/mmol) were collected from the Northern Ireland Electronic Care Record (ECR) system. Each eGFR measurement was calculated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation. In this study, participants received aflibercept, ranibizumab, or both throughout the course of their treatment. The number of aflibercept, ranibizumab, and total intravitreal anti-VEGF injections was recorded for each participant. Changes in eGFR and ACR over time were calculated using multiple eGFR and ACR measurements. These included a measure of renal function before the start of anti-VEGF therapy and after the defined injection period. Data were collected on demographic factors, glycaemic parameters, and clinical variables including co-morbidities and medications. This study included patients who were administered their first to last recorded anti-VEGF injections between 25th April 2012 and 22nd January 2018. For inclusion, each patient was required to have renal function measurements prior to their first anti-VEGF injection and after their last injection was administered. Patients were excluded on the basis of an insufficient number of renal function measurements or if they experienced an acute decline in eGFR or a rapid increase in ACR. Patients with diabetic kidney disease (DKD) can be classified depending on their level of kidney function (eGFR) and the amount of protein present in the urine (ACR). Study participants with an ACR\u00a0>\u00a03\u2009mg/mmol or an eGFR <\u200960\u2009mL/min/1.73\u2009m2 were classified as DKD. This information forms the basis of DKD staging, which is useful for planning follow-up and management. 
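The CKD-EPI calculation mentioned above can be sketched as follows; this is a minimal illustration of the 2009 CKD-EPI creatinine equation (race coefficient omitted here), with serum creatinine converted from the \u03bcmol/L units used in this study to the mg/dL the equation expects.

```python
def ckd_epi_egfr(scr_umol_l, age_years, female):
    """2009 CKD-EPI creatinine equation, eGFR in mL/min/1.73 m^2.
    Race coefficient omitted in this sketch."""
    scr = scr_umol_l / 88.4  # convert umol/L to mg/dL
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr / kappa, 1.0) ** alpha
            * max(scr / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * (1.018 if female else 1.0)

# e.g. a 60-year-old man with serum creatinine 80 umol/L
egfr = ckd_epi_egfr(80.0, 60, female=False)  # roughly 90 mL/min/1.73 m^2
```

With the study thresholds, an eGFR below 60 mL/min/1.73 m^2 from this calculation would place a participant in the DKD group.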
Individuals were classified as \u2018No DKD\u2019 if they had an ACR\u2009<\u20093\u2009mg/mmol and an eGFR >\u200960\u2009mL/min/1.73\u2009m2. P\u2009<\u20090.05 was considered statistically significant. Independent samples t-tests, chi-squared or Fisher\u2019s exact tests were used to compare the distribution of demographic factors, glycaemic parameters and clinical variables between patients with DKD and those without DKD. Covariates significantly associated with a diagnosis of DKD were adjusted for in subsequent linear regression modelling. Simple and multiple linear regression models were used to generate beta estimates (\u03b2) and 95% confidence intervals (CI) for the total number of intravitreal anti-VEGF injections against the change in eGFR and ACR over time. In cases where ACRs were not present as absolute values (e.g. <\u20093\u2009mg/mmol), arbitrary values were used to facilitate slope calculation. A previous study demonstrated that the median ACR value for patients with an ACR\u2009<\u20093\u2009mg/mmol was 1.06\u2009mg/mmol, and we used this as an arbitrary value for ACR values categorised as <\u20093\u2009mg/mmol on the ECR. Data were collected on 90 patients undergoing regular intravitreal anti-VEGF treatment for DMO in the Belfast Health and Social Care Trust. Although episodes of acute kidney injury following anti-VEGF injection have been reported previously, in order to evaluate the long-term effect of intravitreal anti-VEGF treatment on renal function and limit potential confounding from co-morbidities, five patients were excluded because an obvious reported co-morbidity led directly to an acute decline in renal function. A total of 42 participants were classified as \u2018No DKD\u2019 controls and 43 individuals were classified as \u2018DKD\u2019 cases. Study cohort characteristics, co-morbidities and glycaemic parameters are summarised in the accompanying table. 
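The beta estimates and 95% CIs described above follow standard simple linear regression. A minimal sketch, with an assumed function name; for simplicity the normal quantile 1.96 replaces the exact t quantile, a close approximation at cohort sizes like this study's:

```python
import math

def slope_with_ci(x, y, z=1.96):
    """Least-squares slope (beta) of y on x with an approximate 95% CI.
    x might be injection counts, y the change in eGFR or ACR over time."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    beta = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - beta * xbar
    # residual variance on n-2 degrees of freedom
    sse = sum((yi - (intercept + beta * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2) / sxx)    # standard error of the slope
    return beta, (beta - z * se, beta + z * se)
```

A CI that straddles zero, as with the reported \u03b2 = 0.04 (\u22120.02, 0.09), corresponds to a non-significant association at the 5% level.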
The eGFR data met the assumptions of linear regression, including normal distribution, homoscedasticity and absence of multicollinearity. However, the ACR data were skewed, lacking normal distribution and homoscedasticity, although absence of multicollinearity remained. Log transformation of the ACR data did not improve the distribution curve and, as a result, no log transformation was performed. Participants demonstrated a decline in eGFR from a mean baseline of 75\u2009mL/min/1.73\u2009m2 to a mean follow-up eGFR of 65.9\u2009mL/min/1.73\u2009m2, with a mean rate of decline of 2.6\u2009mL/min/1.73\u2009m2 /year (Table\u00a0). The association between the number of injections and the rate of eGFR decline was not significant (p\u2009=\u20090.21) and remained non-significant following adjustment for T2DM, cerebrovascular disease (CVD), hypertension and treatment with proton pump inhibitors. As expected, participants with DKD had a significantly lower mean baseline eGFR of 66.5\u2009\u00b1\u200924.4\u2009mL/min/1.73\u2009m2 compared to 83.8\u2009\u00b1\u200913.3\u2009mL/min/1.73\u2009m2 in patients without DKD (p\u2009<\u20090.01). Additionally, patients with DKD also had a significantly lower follow-up eGFR at 57.1\u2009\u00b1\u200924.6\u2009mL/min/1.73\u2009m2 compared to 75.7\u2009\u00b1\u200915.9\u2009mL/min/1.73\u2009m2 (p\u2009<\u20090.01). However, patients with DKD did not have a greater rate of eGFR decline (\u2212\u20092.5\u2009\u00b1\u20093.6\u2009mL/min/1.73\u2009m2 /year) compared to individuals without DKD (\u2212\u20092.7\u2009\u00b1\u20093.4\u2009mL/min/1.73\u2009m2 /year). Study participants had an increase in ACR from a mean baseline value of 17.9\u2009\u00b1\u200962.1\u2009mg/mmol to a mean follow-up ACR of 18.8\u2009\u00b1\u200948.5\u2009mg/mmol, with a rate of increase of 0.7\u2009\u00b1\u200912.3\u2009mg/mmol/year; the association between number of injections and rate of ACR change was not significant (p\u2009=\u20090.91) and remained non-significant following adjustment for T2DM, CVD, and treatment with beta blockers and proton pump inhibitors. 
In an unadjusted analysis, the rate of change of ACR over time was not significantly associated with the number of intravitreal anti-VEGF injections. Participants with DKD had a higher mean baseline ACR of 34.4\u2009\u00b1\u200984.6\u2009mg/mmol compared to 1.0\u2009\u00b1\u20090.67\u2009mg/mmol in patients without DKD, and significantly higher ACR at follow-up (35.4\u2009\u00b1\u200964.3\u2009mg/mmol compared to 1.8\u2009\u00b1\u20092.6\u2009mg/mmol; p\u2009<\u20090.01). The mean number of aflibercept injections received by participants with DKD was 9.0\u2009\u00b1\u20097.0 compared to 11.2\u2009\u00b1\u20094.5 in those without DKD (p\u2009=\u20090.09), and the mean number of ranibizumab injections was 16.0\u2009\u00b1\u20099.7 compared to 17.2\u2009\u00b1\u200910.5 (p\u2009=\u20090.59). A meta-analysis investigated the systemic safety profile of IVA and IVR, respectively, in DMO, neovascular age-related macular degeneration and retinal vein occlusion by pooling data from existing randomised controlled trials, but found no difference in the incidence of adverse systemic events between either intravitreal anti-VEGF treatment and placebo. It is important to highlight that the clinical trials investigating IVR and IVA in DMO were not designed or powered to evaluate differences in low-frequency systemic events, mainly as a consequence of their small sample sizes. Therefore, a firm conclusion on the systemic safety profile of intravitreal anti-VEGF is limited. Larger prospective studies with a longer follow-up period and sufficient power to assess low-frequency systemic adverse effects are required. A greater focus on the systemic safety of intravitreal anti-VEGF in high-risk groups is also needed. 
A population-based, nested case-control study including 91,000 participants assessed post-marketing data on intravitreal anti-VEGF injections and found no significantly increased risk of stroke, myocardial infarction, venous thromboembolism or congestive heart failure. The Diabetic Retinopathy Clinical Research Network measured baseline and 52-week follow-up urinary ACR in 654 patients receiving ranibizumab, aflibercept or bevacizumab. On average, each patient had 9\u201310 injections during the treatment period. Across all three treatment groups, over 77% of patients maintained their baseline urinary ACR, while 10\u201316% of patients experienced a worsening of ACR by the 52-week follow-up and more than 7% experienced an improvement in ACR. In the absence of a control group, no definitive assessment could be made on the influence of anti-VEGF treatment. However, intravitreal anti-VEGF treatment did not appear to increase the risk of developing or worsening proteinuria. In our study, 54%, 34% and 12% of patients had a baseline ACR\u2009<\u20093, 3\u201330 and\u2009>\u200930\u2009mg/mmol, respectively, with no significant change detected over the 2.6-year treatment period. In comparison, the percentage of participants with an eGFR <\u200960\u2009mL/min/1.73\u2009m2 increased from 26% at baseline to 39% at follow-up, following an average duration of 2.6\u2009years of anti-VEGF treatment. The difference observed for both renal markers highlights the variation in the sensitivity of their measurement outcomes and the importance of monitoring both in diabetic populations. The mean baseline eGFR was lower for patients with T2DM compared to an eGFR of 78.1\u2009mL/min/1.73\u2009m2 for patients with T1DM. The mean follow-up eGFR was also lower for T2DM patients, with an eGFR of 64.9\u2009mL/min/1.73\u2009m2 compared to T1DM with an eGFR of 71.3\u2009mL/min/1.73\u2009m2. 
The mean rate of eGFR decline was 2.9\u2009mL/min/1.73\u2009m2 /year for T2DM compared to 1.6\u2009mL/min/1.73\u2009m2 /year for T1DM. Eighty-four per cent of patients with T2DM had a diagnosis of CKD compared to 16% of patients with T1DM. In this study, 66 patients with DMO had T2DM and 19 patients had type 1 DM (T1DM), and the mean baseline eGFR for patients with T2DM was lower, at 74.1\u2009mL/min/1.73\u2009m2. Our findings reflect those from a large US study, which showed a significantly higher prevalence of CKD in T2DM compared to T1DM patients (p\u2009<\u20090.001). There are a number of limitations to our study, including the inability to perform a sensitivity analysis to assess the relative contributions of IVR and IVA on change in renal function over time. However, a secondary analysis of a randomised comparative effectiveness trial, known as Protocol T, carried out by the Diabetic Retinopathy Clinical Research Network, showed no significant difference in renal function, as assessed by urinary ACR over a 52-week follow-up period, between patients who received intravitreal ranibizumab, aflibercept or bevacizumab for the treatment of DMO. Despite these limitations, our study had several strengths. In collecting prospective eGFR/ACR data, we were able to assess long-term changes in renal function that would not have been reported as an adverse event. Additionally, we collected data on a wide range of co-morbidities and glycaemic parameters, which allowed for appropriate adjustment of potential confounding factors. We used the CKD-EPI equation rather than the Modification of Diet in Renal Disease equation to calculate estimated glomerular filtration rates; the CKD-EPI equation is generally considered a better predictor of renal function, particularly at higher eGFR values. This study supports the previously demonstrated renal safety profile of intravitreal anti-VEGF in patients with DMO. 
Regular long-term intravitreal VEGF inhibition does not significantly alter the rate of change in eGFR and/or ACR with increasing number of treatment injections. The long-term assessment of renal function provides additional evaluation and detection of subtle changes in eGFR and ACR that may not present clinically as adverse events. Larger prospective and post-marketing trials, using renal markers including eGFR, ACR and Cystatin C, as well as assessing incidence of AKI and CKD, are required to strengthen the renal safety of intravitreal anti-VEGF treatment modalities. A greater focus on at-risk groups such as those with CKD is required."} +{"text": "Soil and the human gut contain approximately the same number of active microorganisms, while human gut microbiome diversity is only 10% that of soil biodiversity and has decreased dramatically with the modern lifestyle. We tracked relationships between the soil microbiome and the human intestinal microbiome. We propose a novel environmental microbiome hypothesis, which implies that a close linkage between the soil microbiome and the human intestinal microbiome has evolved during evolution and is still developing. From hunter-gatherers to an urbanized society, the human gut has lost alpha diversity. Interestingly, beta diversity has increased, meaning that people in urban areas have more differentiated individual microbiomes. On top of little contact with soil and feces, hygienic measures, antibiotics and a low fiber diet of processed food have led to a loss of beneficial microbes. At the same time, loss of soil biodiversity is observed in many rural areas. The increasing use of agrochemicals, low plant biodiversity and rigorous soil management practices have a negative effect on the biodiversity of crop epiphytes and endophytes. These developments concur with an increase in lifestyle diseases related to the human intestinal microbiome. 
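The alpha- and beta-diversity contrast drawn in the abstract above (urban guts losing within-sample richness while diverging more between individuals) is conventionally quantified with indices such as Shannon diversity within a sample and Bray-Curtis dissimilarity between samples. A minimal sketch under those standard definitions; the function names and toy counts are assumptions for illustration:

```python
import math

def shannon_alpha(counts):
    """Shannon diversity H = -sum(p * ln p) over taxon counts in one sample.
    Higher H means a richer, more even community."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def bray_curtis_beta(a, b):
    """Bray-Curtis dissimilarity between two samples with aligned taxa:
    0 = identical composition, 1 = no shared taxa."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 1.0 - 2.0 * shared / (sum(a) + sum(b))
```

Under this framing, the reported shift from hunter-gatherers to urban societies corresponds to lower average `shannon_alpha` per individual together with higher pairwise `bray_curtis_beta` across individuals.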
We point out the interference with the microbial cycle in urban human environments versus pre-industrial rural environments. In order to correct these interferences, it may be useful to adopt a different perspective and to consider the human intestinal microbiome as well as the soil/root microbiome as \u2018superorganisms\u2019 which, by close contact, replenish each other with inoculants, genes and growth-sustaining molecules. The large diversity of microbiota in soil affects its microbial ecology, including its primary productivity and nutrient cycling. In addition, soil is part of the habitat of humans, providing space for living, recreation and food production. Meanwhile, the human microbiome has become a major field of biomedical research, especially the intestinal microbial community, which plays a major role in human health and disease. In view of the functional similarities between the intestinal microbial community and the soil microbial ecosystem, a relationship between both appears possible. Looking at the entire ecological system, the human body and its microbes can be regarded as an extended genome. Therefore, the question arises: to what extent does a relationship between both systems exist, for example through human exposure to different soil microbiological environments? Human activities are changing the distribution and abundance of soil microorganisms, e.g., through agricultural land use. In this context, we discuss the soil microbiome and its potential link to the (human) intestinal microbiome and assess the possible interrelation of the human intestinal microbiome and the soil microbiome. Since its start in 2007, aiming at sequencing all microbes inhabiting human body sites, the Human Microbiome Project has developed into a major field of biomedical research focussing mainly on the intestinal microbial community, which plays a major role in human health and disease. 
In view of the fact that, phylogenetically, humans developed in close contact with soil as their physical basis for living, providing shelter and water as well as food for daily life, the question arises as to whether the soil microbiome, as an exogenous parameter, affects the development of the human intestinal microbiome. Soils existed globally long before mammals and hominids came into existence and are by far the most extensive natural microbial gene reservoir on earth. Since 2010, the Earth Microbiome Project has focused on this gene reservoir. It is a major collaborative effort to characterize microbial life on this planet by using DNA sequencing and mass spectrometry of crowd-sourced samples to understand patterns in microbial ecology across the microbiomes and habitats of the earth. On a per gram basis, the intestine, specifically the colon, has the highest concentration of cells of all biomes listed. The main factors that presumably determine the human intestinal microbiome are (i) host genetics and metabolism (heritage), (ii) lifestyle (environment) in particular, and (iii) diet and nutritional habits. The microbial diversity in the human gut reflects a coevolution between microbial communities and their hosts. To identify the evolutionary history of the biosphere, it is crucial to explore the microbiome of different hosts and habitats. Human-associated intestinal microbial communities are more similar to one another than to those of other mammalian species. Interestingly, the gut microbiomes of several mammalian lineages have diverged at roughly the same rate over the past 75 million years. The microbial population of the human gut is derived from the ancestors, individually from the mother through vertical transmission during gestation, during birth, and after birth through contact with maternal body sites, with the greatest contribution from the maternal gut. 
Alterations in the foraging behavior and diet of early Homo species also included interaction between family members, but in a different way, as captured by the \u2018grandmother hypothesis\u2019. It is suggested that dietary intake has a stronger influence on gut microbial composition than host genetics. The importance of the environment is also shown by the fact that, with increasing age, the variation between individuals decreased. The living environment of urban dwellers shows a lower natural biodiversity and exposure to environmental microbes. There is further evidence that soil biodiversity is interrelated with the gut microbiome. In particular, the gut microbial diversity in mice was increased by exposure to soil microbes. The health of mammals is largely affected by domestication, which can be related to the less diverse gut microbiome observed in domesticated horse species compared to undomesticated species. A recent study of the gut microbiome of terrestrially living baboons showed that soil is the most dominant predictor for shaping the gut microbiota, with a 15 times stronger effect than host genetics. The close link between reduced soil biodiversity and gut microbial richness in baboons is an aspect that deserves particularly critical scrutiny in view of the global megatrend of biodiversity loss, especially for sustaining human health. Rural environments that are rich in microbiota, such as traditional farms, have been shown to have health benefits in humans. Even the recycling of human feces in the form of \u2018night soil\u2019 that re-entered the agricultural sites depicts the closed cycling of resources in those times. Increasing global population and the need for housing and food have intensified agricultural practices and urbanization. Growing industrialization of agriculture results in reduced soil biodiversity. 
Of particular importance in this context is the fact that the richness of gut bacterial species in adults is higher in rural societies as compared to urban communities. Mart\u00ednez and colleagues proposed that hygienic measures reduce the risk of transmission not only of pathogens but also of gut symbionts. Fecal contamination of water resources poses a significant risk for human health by spreading infectious diseases from fecal pathogens. Besides its many ecological functions, such as the production of biomass and support of biodiversity, soil has a unique function in providing clean drinking water. Urban citizens lose contact not only with soil but also with feces. With protection from any form of pathogens, artificial environments such as cities can be seen as habitats that eventually share a lower number of beneficial microbes and may concentrate pathogens; in most cases, exposure to feces has a great potential to perturb an otherwise healthy gut microbiome. Access to more biodiverse areas in urban environments, such as green spaces and parks, is related to health benefits regardless of socioeconomic status. Therefore, we assume that the modern human lifestyle and the loss of direct contact with soil cause interruptions in the microbiological cycle in urban environments, in contrast to pre-industrial rural environments. Soil is therefore a key primary source of a healthy intestinal microbiome of humans. However, exactly how soil and the environment shape the human gut microbiome, and how lifestyle changes affect the gut microbiome, needs to be further elucidated. It is a dynamic research topic with relevance for preventive medicine. Besides the urban lifestyle and loss of contact with nature, our diet has also changed within the last decennia. In order to preserve food for long transport, storage and distribution, it is often sterilized. 
In addition to more processed nutrition, the intake of more energy-rich food, abundant in sugars and fat, decreases the biodiversity of the intestine. Medication, in particular the intake of specific drugs, strongly shapes the gut microbiome of Western populations and explains the greatest total variance of the fecal microbiome, as shown by a large Danish study. A large-scale study of fecal microbiomes including clinical and questionnaire-based covariates reported that stool consistency had the greatest effect size on the fecal microbiome, while oral medication explained the greatest total variance. That diet is relevant for shaping the human gut microbiome is further supported by the study of Martinez et al. The gatherer\u2013hunter community in Tanzania showed a low content or total absence of Bifidobacteria in the gut microbiome. The microbiomes of non-Westernized populations resemble those of vegetarians and vegans. In this context, it is interesting that, in a Danish study, the dietary habits considered relevant for shaping the gut microbiome identified a set of carbohydrates as relevant, such as fruit, bread and alcohol consumption. Specific types of food result in predictable shifts in intestinal microbiota, and hence the human intestinal microbiome can be directly affected by the diet. The change in human lifestyle also includes several post-harvesting operations before consumption. As with bitter plant species, the consumption of fresh fruit that is almost unprocessed is beneficial for human health, specifically as soil biodiversity stimulates secondary metabolite production. In contrast to traditional smallholder farming, large-scale farms, rather common in many industrialized countries, perform intensive farming practices, such as monoculture cropping of few plant species to optimize yields. 
This has reduced the variety of food for humans and additionally increased potential threats through contaminants due to the use of agrochemicals. Organically grown vegetables, by contrast, show a higher biodiversity of microbial endophytes and epiphytes than those conventionally grown. We conclude that, on top of antibiotic medication, the elimination of microbes from food via processing has direct impacts on the human gut microbiome. In any case, the intake of diverse food rich in fibers and secondary plant metabolites, with living microbiota, from a diverse soil environment may positively influence the gut. Globally, soils are highly diverse, as are their microbes. There are only a few species that can be found in all soils, while there are numerous rare species that only occur in particular soils or geographical areas. This information is being compiled in a global soil biodiversity atlas (https://www.globalsoilbiodiversity.org/atlas-introduction/). Thus, it has become possible to compare the geographical data of soil microbiomes with human gut microbiomes. Beside phylogenetic similarities between the plant rhizosphere and the human gut microbiome, there are many functional similarities. In order to sustain rich rhizosphere biodiversity, we have to understand the major drivers of this functional ecosystem: the rhizosphere microbiome is related to soil type, moisture, age, plant genotype and root lysates and exudates. 
Biofilms can be found both in the human GI tract/gut and in the rhizosphere. The gut as well as the rhizosphere microbiome can be considered as \u201csuperorganisms\u201d contained in/around the host, which are of paramount importance for the health and performance of the host, because (i) the gut microbiota are important for producing essential amino acids and vitamins such as B12 and K, and (ii) the root microbiota for producing hormones that promote plant health by improved nutrient acquisition, resistance to abiotic and biotic stress, and by sustaining growth. The deficiency of some micronutrients in humans, derived from nutrient-depleted soils, can have substantial effects, as micronutrients act as co-factors in metabolism, modulating enzyme activities, or functioning as coenzymes. Worldwide urbanization as well as the mechanization of agriculture have dramatically increased during the last century. In combination with the use of agrochemicals such as mineral fertilizers and pesticides, soil biodiversity is reduced. In human medication too, a strong transformation occurred during this time through the use of antibiotics and hormones. The substantial effect of medication on the gut microbiome of the Western population was recently proven. From the above-mentioned structural and functional similarities between the soil rhizosphere and the human gut microbiome, we conclude that both can be considered as functional ecosystems which interact with each other. This interaction has been decreasing in recent times, potentially reinforcing losses of biodiversity, which have occurred in both systems. Recent research data indicate that the modern lifestyle/environment is the most active driver in shaping the human intestinal microbiome, despite the confounding influence of dietary habits, culture, and host genetics. The soil (rhizosphere) microbiota clearly influence the quality and storage of our food, apart from the impact of post-harvest processing. 
In this context, more research is necessary to demonstrate how the biodiversity of beneficial microbes in our food can be preserved. Furthermore, specific agricultural practices, especially soil management, may improve current food quality. Recent findings suggest that investigating the soil and root microbiota in more detail may identify effects on human health, possibly, among other routes, by adopting a lifestyle of former generations. Such a lifestyle, including the reduced consumption of livestock and dairy products and the intake of a higher diversity of nutritional fibers and bitter substances, may have beneficial effects on our health. The intake of mostly unprocessed, organically grown regional products is one way towards this goal. Further, wild relatives of the currently used high-yielding crop varieties could increase aboveground and belowground biodiversity, and hence provide benefits to soil and human health, e.g., through reintroducing lost beneficial microbes. A rich soil microbiome would also have several advantages for the terrestrial ecosystem through increased nutrient use efficiency and uptake, which may improve plant yields as well as plant resistance and resilience against global climatic change and biotic stressors. The fact that the gut microbiome of hunter-gatherers has a higher species richness than that of humans nourished by Westernized food argues for agricultural practices that promote sustainable soil use and human health. Regarding food security under the aspect of predicted changes in human demographics and environmental change, it is of paramount importance to ensure the biologically sustainable use of land and soil. The soil contributes to the human gut microbiome\u2014it was essential in the evolution of the human gut microbiome and it is a major inoculant and provider of beneficial gut microorganisms. In particular, there are functional similarities between the soil rhizosphere and the human intestine. 
In recent decades, however, contact with soil has largely been reduced, which, together with a modern lifestyle and nutrition, has led to the depletion of the gut microbiome with adverse effects on human health. Therefore, we suggest increasing research on the geographical and functional relationships to identify the causes and effects between soils and gut microbiota in order to benefit human health and the environment."} +{"text": "Adults with chronic kidney disease (CKD) are hospitalized more frequently than those without CKD, but the magnitude of this excess morbidity and the factors associated with hospitalizations are not well known. CRIC participants had an unadjusted overall hospitalization rate of 35.0 per 100 person-years (PY) [95% CI: 34.3 to 35.6] and 11.1 per 100 PY [95% CI: 10.8 to 11.5] for cardiovascular-related causes. All-cause, non-cardiovascular, and cardiovascular hospitalizations were associated with older age (\u226565 versus 45 to 64 years), more proteinuria (\u2265150 to <500 versus <150 mg/g), higher systolic blood pressure (\u2265140 versus 120 to <130 mmHg), diabetes (versus no diabetes), and lower eGFR (<60 versus \u226560 ml/min/1.73m2). Non-Hispanic black (versus non-Hispanic white) race/ethnicity was associated with higher risk for cardiovascular hospitalization, while risk among females was lower. Rates of cardiovascular hospitalizations were higher among those with \u2265500 mg/g of proteinuria irrespective of eGFR. The most common causes of hospitalization were related to cardiovascular (31.8%), genitourinary (8.7%), digestive (8.3%), endocrine, nutritional or metabolic (8.3%), and respiratory (6.7%) causes. Hospitalization rates were higher in CRIC than in the NIS, except for non-cardiovascular hospitalizations among individuals aged >65 years. 
Limitations of the study include possible misclassification by diagnostic codes, residual confounding, and potential bias from a healthy volunteer effect due to its observational nature. Data from 3,939 participants enrolled in the Chronic Renal Insufficiency Cohort (CRIC) Study between 2003 and 2008 at 7 clinical centers in the United States were used to estimate primary causes of hospitalizations, hospitalization rates, and baseline participant factors associated with all-cause, cardiovascular, and non-cardiovascular hospitalizations during a median follow-up of 9.6 years. Multivariable-adjusted Poisson regression was used to identify factors associated with hospitalization rates, including demographics, blood pressure, estimated glomerular filtration rate (eGFR), and proteinuria. Hospitalization rates in CRIC were compared with rates in the Nationwide Inpatient Sample (NIS) from 2012. Of the 3,939 CRIC participants, 45.1% were female, 41.9% identified as non-Hispanic black, the mean age was 57.7 years, and the mean eGFR was 44.9 ml/min/1.73m2. In this study, we observed that adults with CKD had a higher hospitalization rate than the general hospitalized population, and even moderate reductions in kidney function were associated with elevated rates of hospitalization. Causes of hospitalization were predominantly related to cardiovascular disease, but other causes contributed, particularly genitourinary, digestive, and endocrine, nutritional, and metabolic illnesses. High levels of proteinuria were observed to have the largest association with hospitalizations across a wide range of kidney function levels. Hsiang-Yu Chen and colleagues report the factors associated with hospitalization in patients with chronic kidney disease. 
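The unadjusted rates per 100 person-years quoted above are crude Poisson rates. A minimal sketch of how such a rate and an approximate 95% CI can be computed; the function name is assumed, and the normal approximation to the Poisson count is reasonable given event counts as large as CRIC's:

```python
import math

def rate_per_100py(events, person_years, z=1.96):
    """Crude event rate per 100 person-years with a normal-approximation
    95% CI for the Poisson event count."""
    rate = 100.0 * events / person_years
    half_width = z * 100.0 * math.sqrt(events) / person_years
    return rate, (rate - half_width, rate + half_width)
```

With many thousands of hospitalizations, the interval becomes narrow, which is consistent with the tight CIs reported (e.g. 35.0 per 100 PY, 95% CI 34.3 to 35.6).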
Chronic kidney disease (CKD) is increasingly common globally, and individuals with CKD have a high risk of health complications, including hospitalizations. Many of the hospitalizations experienced by those with CKD are thought to be due to cardiovascular disease, but little else is known about the other causes of hospitalization or why people with kidney disease are at higher risk of hospitalizations. Learning more about causes of and risk factors for hospitalizations can guide outpatient management. Research to date on hospitalizations in kidney disease has mainly focused on those with dialysis-dependent kidney disease. We looked at hospitalization data from adults with CKD who were followed for nearly 10 years. We classified hospitalizations by the primary discharge code and found that non-cardiovascular causes, such as genitourinary-, digestive-, and endocrine-related causes, comprised the majority of hospitalizations, while the largest single contributor to hospitalizations was cardiovascular disease. We modeled the risk of hospitalizations with patient characteristics, such as age and sex, and clinical factors, such as level of kidney function, blood pressure, and proteinuria, and found that high levels of proteinuria carried a high risk of hospitalization regardless of kidney function level, and that the elevated risk of hospitalization was present even at moderate levels of kidney function. We also compared the hospitalization data from individuals with known CKD to a sample of the general hospitalized population in the United States and found that adults with CKD have higher hospitalization rates than this general sample. These findings highlight the need for developing better approaches to identifying patients at risk for severe complications of CKD and to guiding outpatient management strategies to improve outcomes in CKD. The findings may be particularly relevant to health care providers in general medicine since the increased risk of 
hospitalization occurred with even moderate reductions in kidney function, which do not typically correspond to being under the care of a kidney disease specialist. Our study's findings might not be applicable to other CKD populations since the study enrolled volunteers, who might be healthier than other populations. The prevalence of chronic kidney disease (CKD) is high, affecting up to 15% to 20% of the adult population in the United States (US). Cardiovascular disease has been identified as a major cause of hospitalization among those with advanced kidney disease, but little is known about the other causes contributing to the high burden of hospitalization among those with earlier stages of CKD [6,19,20]. In this manuscript, we characterize the burden of hospitalizations within a diverse study population with mild-to-moderate CKD enrolled in the multicenter observational Chronic Renal Insufficiency Cohort (CRIC) Study. We also explore associations of demographic and kidney-specific factors with rates of hospitalization, characterize the leading causes of hospitalization, and compare the rates of hospitalization in this population with rates in a sample representative of nearly all hospitalizations in the US general population. This study is reported as per the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline. Eligibility included age-based eGFR criteria: an age-specific range (in mL/min/1.73m2) for individuals aged 21 to 44 years, 20 to 60 mL/min/1.73m2 for individuals aged 45 to 64 years, and 20 to 50 mL/min/1.73m2 for individuals aged 65 to 74 years. Major exclusion criteria included prior dialysis lasting more than 1 month, NYHA Class III/IV heart failure, polycystic kidney disease, or other primary renal diseases requiring active immunosuppression. Participants completed annual clinic visits at which data were obtained, and blood and urine specimens were collected. The CRIC Study enrolled a total of 3,939 men and women between 2003 and 2008 at 7 clinical centers across the US.
Eligibility criteria have been previously described [26]. The National Inpatient Sample (NIS) was utilized as a representation of US nationwide hospitalizations from the year 2012. The NIS is the largest available all-payer inpatient database in the public domain that includes hospital discharge data, reflecting approximately 95% of all hospital discharges within the US. Hospitalizations between 2003 and 2018 among CRIC participants were ascertained through self-report and hospital queries and confirmed after review of medical records by study personnel. Any hospitalization that occurred during the follow-up period was counted. The unit of observation in the NIS was an inpatient stay record, and after applying discharge weights, the number of discharges in the US was estimated. The length of hospitalization was calculated as the date of discharge minus the date of admission, plus one. Hospitalizations longer than 1 calendar day are the primary focus of this paper; hospital stays with an admission and discharge on the same calendar day are classified as \u22641-day hospitalizations. To characterize the specific cause of each hospitalization, the primary ICD-9 or ICD-10 admission code was extracted from the hospitalization discharge record and classified into 1 of 18 categories using the Clinical Classifications Software (CCS) developed by the AHRQ. In light of these reviews, I am afraid that we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to consider a revised version that addresses the reviewers' and editors' comments. Obviously we cannot make any decision about publication until we have seen the revised manuscript and your response, and we plan to seek re-review by one or more of the reviewers. Please see http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any guidelines that apply to your paper.
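The length-of-stay rule described in the Methods above (discharge date minus admission date, plus one, with same-day stays classed as ≤1-day hospitalizations) can be sketched as follows; the dates are hypothetical.

```python
# Sketch of the length-of-stay calculation described above:
# discharge date minus admission date, plus one, with same-day
# stays classified as "<=1-day" hospitalizations. Dates are hypothetical.
from datetime import date

def length_of_stay(admit: date, discharge: date) -> int:
    """Length of hospitalization in calendar days (inclusive)."""
    return (discharge - admit).days + 1

def is_short_stay(admit: date, discharge: date) -> bool:
    """True for stays admitted and discharged on the same calendar day."""
    return admit == discharge

print(length_of_stay(date(2012, 3, 1), date(2012, 3, 4)))  # 4
print(is_short_stay(date(2012, 3, 1), date(2012, 3, 1)))   # True
```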
In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (e.g., page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript. In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org. In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements: http://journals.plos.org/plosmedicine/s/figures. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We expect to receive your revised manuscript by Jul 09 2020 11:59PM. If eligible, we will contact you to opt in or out. *** Our competing interests policy is at http://journals.plos.org/plosmedicine/s/competing-interests. We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process.
If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/. Your article can be found in the \"Submissions Needing Revision\" folder. To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/plosmedicine/s/submission-guidelines#loc-methods. Please also note the PLOS Data Availability Policy (http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it.
Please ensure that the paper adheres to the PLOS Data Availability Policy. Please place the study design in the subtitle. * Please structure your abstract using the PLOS Medicine headings. * Please combine the Methods and Findings sections into one section, \u201cMethods and findings\u201d. Abstract Methods and Findings: * Please ensure that all numbers presented in the abstract are present and identical to numbers presented in the main manuscript text. * Please include the study design, population and setting, number of participants, years during which the study took place, length of follow up, and main outcome measures. * Please quantify the main results. * Please include the important dependent variables that are adjusted for in the analyses. Abstract Conclusions: * Please address the study implications without overreaching what can be concluded from the data; the phrase \"In this study, we observed ...\" may be useful. * Please interpret the study based on the results presented in the abstract, emphasizing what is new without overstating your conclusions. * Please avoid vague statements such as \"these results have major implications for policy/clinical care\"; mention only specific implications substantiated by the results. * Please avoid assertions of primacy (\"We report for the first time....\"). At this stage, we ask that you include a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists (https://journals.plos.org/plosmedicine/s/revising-your-manuscript#loc-author-summary). The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract.
Please see our author guidelines for more information. References: please use square brackets throughout and use Vancouver style for the bibliography. Methods: please briefly state where the participants were recruited in the US. On page 8, please remove the information regarding the funding source as this is pulled from the article meta-data only. Please provide 95% confidence intervals and p values throughout as needed. Please note we need exact p values unless p<0.001. Please present and organize the Discussion as follows: a short, clear summary of the article's findings; what the study adds to existing research and where and why the results may differ from previous research; strengths and limitations of the study; implications and next steps for research, clinical practice, and/or public policy; one-paragraph conclusion. Did your study have a prospective protocol or analysis plan? Please state this (either way) early in the Methods section. a) If a prospective analysis plan was used in designing the study, please include the relevant prospectively written document with your revised manuscript as a Supporting Information file to be published alongside your study, and cite it in the Methods section. A legend for this file should be included at the end of your manuscript. b) If no such document exists, please make sure that the Methods section transparently describes when analyses were planned, and when/why any data-driven changes to analyses took place. c) In either case, changes in the analysis, including those made in response to peer review comments, should be identified as such in the Methods section of the paper, with rationale. STROBE: please specify where individual items on the checklist can be found using paragraphs and sections. The Data Availability Statement (DAS) requires revision.
For each data source used in your study: a) If the data are freely or publicly available, note this and state the location of the data: within the paper, in Supporting Information files, or in a public repository (include the DOI or accession number). b) If the data are owned by a third party but freely available upon request, please note this and state the owner of the data set and contact information for data requests (web or email address). Note that a study author cannot be the contact person for the data. c) If the data are not freely available, please describe briefly the ethical, legal, or contractual restriction that prevents you from sharing it. Please also include an appropriate contact (web or email address) for inquiries. Comments from the reviewers: Reviewer #1: I confine my remarks to statistical aspects of this paper. The general approach is fine but I have a couple of issues to resolve before I can recommend publication. The biggest one is that continuous independent variables should not be categorized. In Regression Modeling Strategies, Frank Harrell lists 11 problems that this causes and sums up \"nothing could be more disastrous\". I wrote a blog post taking a graphical look at what can happen: https://medium.com/@peterflom/what-happens-when-we-categorize-an-independent-variable-in-regression-77d4c5862b6c?source=friends_link&sk=1428cd15968e218268121dc507ce8025. Other issues: p. 7: Please operationalize all variables. p. 9: Give the SD for age. Fig 1: Use a line plot for age and do not use \"dynamite plots\" (http://biostat.mc.vanderbilt.edu/wiki/Main/DynamitePlots). They found a higher hospitalization rate in CRIC than in the NIS for 2012, predominantly related to diverse illnesses other than cardiovascular disease. I offer the following suggestions for consideration: 1. The concluding statement in the abstract \"Adults with CKD appear to have a substantially higher hospitalization rate than those without CKD\" is not supported by the results. 2.
On page 8, under statistical analysis, please specify the start of follow-up in the CRIC cohort and the median follow-up time. Also, describe how the hospitalization rates were calculated for the NIS. 3. Please indicate the reasoning for selecting the year 2012 for NIS data. 4. NIS data represent the number of discharge diagnoses per hospital stay, not per patient, so the incidence of any hospitalization is overestimated, an aspect that should be discussed as a limitation. Along this line, it is not clear if the numerator for the CRIC data included the first (incident) hospitalization during the follow-up period, or any hospitalization. If the former is true, the authors may consider discussing how this aspect affects the comparison with NIS rates. The authors ought to specify how the numerator was counted in each of the two data sources. Reviewer #3: This is an interesting paper and highlights a major issue for patients with even fairly mild chronic kidney disease. The data presented genuinely contribute to novel understanding of this issue and in my opinion will be useful for research, healthcare policy and resource allocation. The CRIC cohort is well known, but this looks like a completely new and more detailed analysis compared to any of their other work. To my reading, this is well presented, the analyses look sound and the tables and figures are informative. I have little to suggest and I think this could be a very useful paper for others working in this field. My main comments are around the grammar, or messaging, in the paper. Abstract: 1. I presume they mean 500 mg/g of proteinuria? There is no explanation as to what this measurement refers to. Also, why present this threshold in the abstract when most of the thresholds for uPCR in the paper are 150 mg/g? Is this correct? I presume this is because 500 mg/g is the upper category; it could be clarified why this value appears where it does. 2.
Although I accept that the majority of illnesses are not cardiovascular, the single biggest disease by far affecting these patients is cardiovascular disease, as their analysis shows (31.8% compared to 8.7% for the next condition down). I think the message should be slightly rephrased to make sure that the message about CVD is not diminished too much. Main text: They should go through this carefully for units etc. I found a few instances where the units had dropped off inappropriately, e.g., for GFR on p. 10. There are probably others. I don't view it as the role of the reviewer to be a proofreader! Any attachments provided with reviews can be seen via the following link: [LINK] 30 Sep 2020. Attachment: PLOS_Medicine_Response_to_Reviewers.docx. Submitted filename: Click here for additional data file. 26 Oct 2020. Dear Dr. Schrauben, Thank you very much for re-submitting your manuscript \"Description of Hospitalizations Among Adults with Chronic Kidney Disease: An Observational Longitudinal Cohort Study\" (PMEDICINE-D-20-01995R2) for review by PLOS Medicine. I have discussed the paper with my colleagues and the academic editor and it was also seen again by xxx reviewers. I am pleased to say that provided the remaining editorial and production issues are dealt with we are planning to accept the paper for publication in the journal. The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript: [LINK]. Our publications team (plosmedicine@plos.org) will be in touch shortly about the production requirements for your paper, and the link and deadline for resubmission. DO NOT RESUBMIT BEFORE YOU'VE RECEIVED THE PRODUCTION REQUIREMENTS.
If eligible, we will contact you to opt in or out. *** In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. A version with changes marked must also be uploaded as a marked up manuscript file. Please check http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any guidelines that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We expect to receive your revised manuscript within 1 week. Please ensure that the paper adheres to the PLOS Data Availability Policy, which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it. At the end of the manuscript, the acknowledgements are mainly funding, I think, which is in the submission form and can be removed. Please ensure all p-values provided are up to three decimal places.
All iterations of p<0.0001 should be changed to p<0.001. Comments from reviewers: Reviewer #1: The authors have addressed my concerns and I now recommend publication. Peter Flom. Reviewer #2: The authors' responses are satisfactory and I have no further comments. Thank you. Reviewer #3: I have no further comments. Any attachments provided with reviews can be seen via the following link: [LINK] 1 Nov 2020. Attachment: Response to Reviewers.docx. Submitted filename: Click here for additional data file. 5 Nov 2020. Dear Dr. Schrauben, On behalf of my colleagues and the academic editor, Dr. Meda E Pavkov, I am delighted to inform you that your manuscript entitled \"Hospitalizations Among Adults with Chronic Kidney Disease in the United States: A Cohort Study\" (PMEDICINE-D-20-01995R3) has been accepted for publication in PLOS Medicine. PRODUCTION PROCESS: Before publication you will see the copyedited Word document (within 5 business days) and a PDF proof shortly after that. The copyeditor will be in touch shortly before sending you the copyedited Word document. We will make some revisions at the copyediting stage to conform to our general style, and for clarification. When you receive this version you should check and revise it very carefully, including figures, tables, references, and supporting information, because corrections at the next stage (proofs) will be strictly limited to (1) errors in author names or affiliations, (2) errors of scientific fact that would cause misunderstandings to readers, and (3) printer's (introduced) errors. Please return the copyedited file within 2 business days in order to ensure timely delivery of the PDF proof. If you are likely to be away when either this document or the proof is sent, please ensure we have contact information of a second person, as we will need you to respond quickly at each point. Given the disruptions resulting from the ongoing COVID-19 pandemic, there may be delays in the production process.
We apologise in advance for any inconvenience caused and will do our best to minimize impact as far as possible. PRESS: A selection of our articles each week is press released by the journal. You will be contacted nearer the time if we are press releasing your article in order to approve the content and check that the contact information for journalists is correct. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. PROFILE INFORMATION: Now that your manuscript has been accepted, please log into EM and update your profile. Go to https://www.editorialmanager.com/pmedicine, log in, and click on the \"Update My Information\" link at the top of the page. Please update your user information to ensure an efficient production and billing process. Thank you again for submitting the manuscript to PLOS Medicine. We look forward to publishing it. Best wishes, Adya Misra, PhD, Senior Editor, PLOS Medicine, plosmedicine.org"} +{"text": "Exposure to environmental chemicals may be a modifiable risk factor for progression of chronic kidney disease (CKD). The purpose of this study was to examine the impact of serially assessed exposure to bisphenol A (BPA) and phthalates on measures of kidney function, tubular injury, and oxidative stress over time in a cohort of children with CKD. Clinical renal function measures included estimated glomerular filtration rate (eGFR), proteinuria, and blood pressure. Linear mixed models were fit to estimate the associations between urinary concentrations of 6 chemical exposure measures and clinical renal outcomes and urinary concentrations of KIM-1, NGAL, 8-OHdG, and F2-isoprostane, controlling for sex, age, race/ethnicity, glomerular status, birth weight, premature birth, angiotensin-converting enzyme inhibitor use, angiotensin receptor blocker use, BMI z-score for age and sex, and urinary creatinine.
Urinary concentrations of BPA, PA, and phthalate metabolites were positively associated with urinary KIM-1, NGAL, 8-OHdG, and F2-isoprostane levels over time. For example, a 1-SD increase in \u2211di-n-octyl phthalate metabolites was associated with increases in NGAL, KIM-1, 8-OHdG, and F2-isoprostane over time. BPA and phthalate metabolites were not associated with eGFR, proteinuria, or blood pressure, but PA was associated with lower eGFR over time. For a 1-SD increase in ln-transformed PA, there was an average decrease in eGFR of 0.38 ml/min/1.73 m2. Limitations of this study included utilization of spot urine samples for exposure assessment of non-persistent compounds and lack of specific information on potential sources of exposure. Samples were collected between 2005 and 2015 from 618 children and adolescents enrolled in the Chronic Kidney Disease in Children study, an observational cohort study of pediatric CKD patients from the US and Canada. Most study participants were male (63.8%) and white (58.3%), and participants had a median age of 11.0 years (interquartile range 7.6 to 14.6) at the baseline visit. In urine samples collected serially over an average of 3.0 years (standard deviation [SD] 1.6), concentrations of BPA, phthalic acid (PA), and phthalate metabolites were measured, as well as biomarkers of tubular injury and oxidative stress (8-hydroxy-2\u2032-deoxyguanosine [8-OHdG] and F2-isoprostane). Although BPA and phthalate metabolites were not associated with clinical renal endpoints such as eGFR or proteinuria, there was a consistent pattern of increased tubular injury and oxidative stress over time, which have been shown to affect renal function in the long term. This raises concerns about the potential for clinically significant changes in renal function in relation to exposure to common environmental toxicants at current levels. Melanie H. Jacobson and colleagues investigate correlations between exposures to chemicals and kidney function in children.
The prevalence of chronic kidney disease has been steadily increasing over the last 40 years. While there are several known risk factors, such as diabetes and hypertension, few are potentially modifiable. Recent work has suggested kidney function may be affected by exposure to environmental chemicals such as bisphenols and phthalates, which are synthetic compounds used in the manufacturing of plastics and other consumer products. However, these chemicals are non-persistent, and no longitudinal studies to our knowledge have been conducted to investigate their potential impact on kidney function over time. In urine samples collected annually over 5 years from 618 children and adolescents with chronic kidney disease, we measured bisphenol A, phthalates, and biomarkers of tubular injury and oxidative stress. Clinical renal function outcomes were also monitored and collected over 5 years. We found that bisphenol A and phthalates were associated with increased tubular injury and oxidative stress biomarkers. Although phthalic acid was associated with lower estimated glomerular filtration rate over time, neither bisphenol A nor other phthalates were associated with clinical renal outcomes. Bisphenol A and phthalates were not associated with clinical renal function outcomes but showed consistent positive associations with tubular injury and oxidative stress, which may signal the potential for clinical events to manifest with prolonged follow-up. These findings raise concern about the potential for clinically relevant changes to renal function to develop over time in relation to environmental exposures at current levels. This study suggests that exposure to environmental chemicals may be a potentially modifiable risk factor for the progression of chronic kidney disease. Chronic kidney disease (CKD) is a growing health problem in children and adults.
There are a number of traditional risk factors associated with CKD progression to ESKD in children, including hypertension, obesity, diabetes, and altered divalent mineral metabolism [10\u201312]. Surveys of healthy children and adults in the US indicate that exposure to BPA and phthalates is ubiquitous. We previously reported findings from a cross-sectional analysis of urinary BPA and phthalate levels in relation to kidney function in children and adolescents with CKD enrolled in the Chronic Kidney Disease in Children (CKiD) observational study [42]. In this study, we evaluated the associations between longitudinally measured urinary BPA and phthalates and the trajectory of renal function over a 5-year observation period using a variety of measures over time: eGFR, proteinuria, systolic and diastolic blood pressure, and urinary biomarkers of tubular injury: kidney injury molecule-1 (KIM-1) and neutrophil gelatinase-associated lipocalin (NGAL). In addition, in order to assess the plausibility of a potential mechanism of action, we examined the associations between longitudinally measured urinary BPA and phthalates and serially assessed urinary oxidative stress biomarkers: 8-hydroxy-2\u2032-deoxyguanosine (8-OHdG) and F2-isoprostane. A prospective analysis plan was followed and is provided as supporting information. The CKiD study is a multi-center prospective cohort study of children aged 6 months to 16 years with mild-to-moderate CKD, with the overall goal of identifying predictors and sequelae of CKD progression. The CKiD study procedures and protocol have been previously described [49]. The institutional review board at each CKiD study site approved the study protocol, and all research was performed in accordance with established guidelines. Written informed consent was obtained from all parents or legal guardians, and assent from all participants depending on their age and institutional guidelines.
The New York University School of Medicine Institutional Review Board deemed this project exempt from review because data collection was complete and the dataset de-identified. Longitudinally collected and stored urine samples were used for exposure assessment. Phthalic acid (PA), 21 individual phthalate metabolites, BPA, and creatinine were analyzed at the Wadsworth Center, New York State Department of Health, Albany, NY. Details on the analytic methods for the phthalates and BPA have been previously described. For all exposures, measures below the limit of detection (LOD) were imputed by the LOD divided by the square root of 2. Several correlates of renal function were examined over time: eGFR, proteinuria, SBP, and DBP. The primary outcome was eGFR, calculated using the modified Schwartz equation: eGFR (mL/min/1.73m2) = 0.413 \u00d7 height (cm)/serum creatinine. Analytical methods for these measures in the CKiD study have been previously described. All laboratory measures were conducted at the central CKiD laboratory (University of Rochester) [53]. Two biomarkers of tubular injury were measured: kidney injury molecule-1 (KIM-1) and neutrophil gelatinase-associated lipocalin (NGAL). KIM-1 and NGAL were measured in urine samples at the New York University High Throughput Biology Laboratory. They were quantified by solid phase sandwich ELISAs using the Quantikine Human TIM-1 Immunoassay and the Quantikine Human Lipocalin-2 Immunoassay, respectively, according to the manufacturer protocol. All analyses were conducted in duplicate. Intra-assay coefficients of variation (CVs) ranged from 3.6% to 3.7% for KIM-1 and from 2.3% to 3.9% for NGAL. Inter-assay CVs ranged from 0.7% to 4.3% for KIM-1 and from 0.6% to 4.8% for NGAL. Measures below the LOD were imputed by the LOD divided by the square root of 2. Two biomarkers of oxidative stress, 8-OHdG and F2-isoprostane, were measured at the New York University High Throughput Biology Laboratory. 8-OHdG was quantified by competitive ELISA using the OxiSelect Oxidative DNA Damage ELISA.
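The two calculations described above (the modified Schwartz eGFR and the LOD/√2 substitution for non-detects) are simple enough to express directly; this is an illustrative sketch with made-up values, not study code.

```python
# Sketch of the modified (bedside) Schwartz eGFR and the LOD/sqrt(2)
# imputation described above. Input values are hypothetical.
import math
from typing import Optional

def schwartz_egfr(height_cm: float, serum_creatinine: float) -> float:
    """Modified Schwartz eGFR (mL/min/1.73m2):
    0.413 x height (cm) / serum creatinine (mg/dL)."""
    return 0.413 * height_cm / serum_creatinine

def impute_below_lod(value: Optional[float], lod: float) -> float:
    """Substitute missing or below-LOD measures with LOD / sqrt(2)."""
    return value if value is not None and value >= lod else lod / math.sqrt(2)

# Hypothetical example values.
print(round(schwartz_egfr(height_cm=140.0, serum_creatinine=1.3), 1))  # 44.5
print(round(impute_below_lod(None, lod=0.4), 3))                       # 0.283
```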
Similarly, F2-isoprostane was measured with a competitive enzyme-linked immunoassay, the OxiSelect 8-iso-Prostaglandin F2\u03b1 ELISA Kit (Cell Biolabs). All analyses were conducted in duplicate as directed by the manufacturer. Intra-assay CVs ranged from 4.6% to 11.1% for 8-OHdG and from 6.8% to 9.7% for F2-isoprostane. Inter-assay CVs ranged from 3.9% to 16.6% for 8-OHdG and from 14.2% to 15.9% for F2-isoprostane. Measures below the LOD were imputed by the LOD divided by the square root of 2. In light of these reviews, I am afraid that we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to consider a revised version that addresses the reviewers' and editors' comments. Obviously we cannot make any decision about publication until we have seen the revised manuscript and your response, and we plan to seek re-review by one or more of the reviewers. Please see http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any guidelines that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (e.g., page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript. In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user.
Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org.In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements: plosmedicine@plos.org) if you have any questions or concerns.We expect to receive your revised manuscript by Jul 24 2020 11:59PM. Please email us and your responses to reviewer comments. If eligible, we will contact you to opt in or out.***http://journals.plos.org/plosmedicine/s/competing-interests.We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. You can see our competing interests policy here: Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/Your article can be found in the \"Submissions Needing Revision\" folder. http://journals.plos.org/plosmedicine/s/submission-guidelines#loc-methods.To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. 
Please ensure that the paper adheres to the PLOS Data Availability Policy (http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it. Please place the study design in the subtitle. Abstract – Please add summary demographic information, including mean age, and please ensure here and throughout that p values are provided for all quantifiable data and where 95% CIs are given. Please provide a brief outline of the study's limitations as the final sentence of the 'Methods and Findings' section of the abstract. Data – you state that some restrictions apply, then provide a link for the data. Can you clarify what restrictions there are? Is all of the data used in your analysis in the given link? (See https://journals.plos.org/plosmedicine/s/revising-your-manuscript#loc-author-summary.) At this stage, we ask that you include a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract.
Please see our author guidelines for more information. References – in the main text please use square brackets instead of rounded ones. Line 151 – "several sites": for relevance, please provide some of the cities. Was written consent provided (from parents / guardians)? Please provide a call-out to the analysis plan at the start of the Methods section (S1 Text). Please address the comments from the Academic Editor below as well as the referee's comments. Comments from an Academic Editor: The findings should be interpreted in the context of the fact that no associations were observed between the exposures and clinically relevant outcomes including eGFR (identified as the primary outcome), UPCR, SBP or DBP. Urinary NGAL and KIM-1 are recognised as markers of tubular injury but have not been shown to add additional prognostic value to traditional markers like eGFR and UPCR. In addition, the effect size was modest such that a 1 SD increase in exposure was associated with only a 9% increase in urinary NGAL or a 12% increase in urinary KIM-1. A major limitation of the study is that assessment of chemical exposure was based on assays performed on random spot urine samples. As mentioned by the authors, the half-life of these chemicals is short (3-16 hours), so it is not clear whether random urine values accurately reflect 24-hour excretion. Comments from the reviewers: Reviewer #1: Statistical review. This study is a cohort study examining associations between exposure to bisphenol A and phthalates and various measures of kidney function and damage amongst children with chronic kidney disease. I have some comments on the statistical methods and reporting, which are listed below. 1. Abstract - just for clarity, it should be stated how many separate phthalate metabolites were tested for association. 2. Abstract - if possible within space constraints, report the association between phthalic acid and eGFR over time as is done for the other associations above. 3.
Were the groupings of phthalates pre-specified? 4. Page 10 - what is the interpretation of the intra- and inter-assay CVs? They seem broadly within the rules of thumb of being below 10 and 15 respectively, but not all. What effect would higher CVs have on the analysis? 5. Page 11 - I think for some cases it would be useful to explore how well the model fits the data. The authors do provide a sensitivity analysis to explore how cumulative effects of the exposures affect the clinical variables. When the results differed from the two approaches, perhaps the AIC could be used to show which model fitted better? 6. Page 11 - was there much missing data, and how was it handled? Were there many patients who developed ESKD, and how were they accounted for in the analysis? 7. Methods/results - for biomarkers such as NGAL and KIM-1 where multiple exposures were associated, did the authors consider models with more than one exposure included? I would recommend this might be useful if the exposures are correlated, in order to determine whether they were independent associations or not. James Wason. Reviewer #2: see attachment. Any attachments provided with reviews can be seen via the following link: [LINK] 16 Jul 2020. Attachment: Review PLOS Medicine_R1.docx. 19 Aug 2020. Dear Dr. Jacobson, Thank you very much for re-submitting your manuscript "Serially assessed bisphenol A and phthalate exposure and kidney function in children with chronic kidney disease: a longitudinal cohort study" (PMEDICINE-D-20-00109R2) for review by PLOS Medicine. I have discussed the paper with my colleagues and the academic editor and it was also seen again by one reviewer. I am pleased to say that provided the remaining editorial and production issues are dealt with we are planning to accept the paper for publication in the journal. The remaining issues that need to be addressed are listed at the end of this email.
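Reviewer #1's fourth point concerns the intra- and inter-assay coefficients of variation quoted in the Methods. The CV is simply the standard deviation of replicate measurements expressed as a percentage of their mean; a minimal sketch, with hypothetical replicate values rather than study data:

```python
import statistics

def coefficient_of_variation(replicates):
    """CV (%) = sample standard deviation / mean * 100.
    Intra-assay CVs use replicates within one run; inter-assay CVs use
    the same control sample measured across runs."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

# Hypothetical duplicate readings of one urine sample
print(round(coefficient_of_variation([10.2, 10.9]), 1))  # 4.7
```

As the reviewer notes, values below roughly 10% (intra-assay) and 15% (inter-assay) are common rules of thumb; higher CVs mean noisier biomarker measurements.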
Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript:[LINK]plosmedicine@plos.org) will be in touch shortly about the production requirements for your paper, and the link and deadline for resubmission. DO NOT RESUBMIT BEFORE YOU'VE RECEIVED THE PRODUCTION REQUIREMENTS.Our publications team and your responses to reviewer comments. If eligible, we will contact you to opt in or out.***In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. A version with changes marked must also be uploaded as a marked up manuscript file.http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract.Please also check the guidelines for revised papers at plosmedicine@plos.org) if you have any questions or concerns.We expect to receive your revised manuscript within 1 week. Please email us , which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. 
PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it.Please ensure that the paper adheres to the PLOS Data Availability Policy b. Please mention any factors adjusted for in your analyses.c. Please use square brackets when nesting, eg.: \u201c(\u2026[\u2026]\u2026)\u201dd. Please quantify all results with 95% CIs and p values (eg. lines 58-59), including statistically non-significant findings mentioned here (eg. lines 60-61).e. Please remove this sentence (lines 70-73): \"This is the first study to examine environmental exposures and biomarkers of tubular injury and oxidant stress in a pediatric population with CKD, and may provide evidence for oxidant stress as a plausible biologic pathway for environmental chemicals to influence kidney function.\u201d6. Line 87: Please correct to \u201c\u2026bisphenol A, phthalates and biomarkers\u2026\u201d7. Line 93: Please correct to \"\u2026neither bisphenol A nor other...\"8. Please remove spaces from within citation callouts, eg. \u201c\u2026prenatally and in early life ,\u2026\u201d, but keep the space between the text and the callout itself as is.9. Methods:a. Please add the following statement, or similar, to the Methods: \"This study is reported as per the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline (S1 Checklist).\" b. Line 193: Please provide exact sample collection dates, including month and day.c. Line 198: Please clarify whether parental consent was written or oral.d. Line 250: Please remove trademark symbols.e. Line 299 (and 423 in the main Discussion): Please avoid causal language: \u201ceffects\u201d; please replace with \u201cassociations\u201d.10. 
Line 357: PLOS does not permit \"data not shown.\u201d Please remove this claim, or do one of the following:a) If you are the owner of the data relevant to this claim, please provide the data in accordance with the PLOS data policy, and update your Data Availability Statement as needed.b) If the data not shown refer to a study from another group that has not been published, please cite personal communication in your manuscript text (it should not be included in the reference section). Please provide the name of the individual, the affiliation, and date of communication. The individual must provide PLOS Medicine written permission to be named for this purpose.c) For any other circumstance, please contact the journal office ASAP.11. At line 361, should that be \"inversely associated\"?12. In the limitations paragraph in the Discussion (line 437 onwards), please mention the limited study size and the possibility of unmeasured confounding.13. Line 443: Please delete extra comma.14. Please provide full access details (eg. DOI or URL) for references 5, 58, and 59. Please also give author names for reference 1, and journal names for references 19-21. Please double check that all references are complete and accurate.15. Table 4 and throughout, please report \u201cp<0.0001\u201d as \u201cp<0.001\u201d instead.-----------Comments from Reviewers:Reviewer #1: Thank you to the authors for addressing my previous comments well. I have no further issues to raise.Any attachments provided with reviews can be seen via the following link:[LINK] 11 Sep 2020Dear Dr. Jacobson, On behalf of my colleagues and the academic editor, Dr. Maarten Taal, I am delighted to inform you that your manuscript entitled \"Serially assessed bisphenol A or phthalate exposure and association with kidney function in children with chronic kidney disease in the US and Canada: a longitudinal cohort study\" (PMEDICINE-D-20-00109R3) has been accepted for publication in PLOS Medicine. 
PRODUCTION PROCESS
Before publication you will see the copyedited word document (in around 1-2 weeks from now) and a PDF galley proof shortly after that. The copyeditor will be in touch shortly before sending you the copyedited Word document. We will make some revisions at the copyediting stage to conform to our general style, and for clarification. When you receive this version you should check and revise it very carefully, including figures, tables, references, and supporting information, because corrections at the next stage (proofs) will be strictly limited to (1) errors in author names or affiliations, (2) errors of scientific fact that would cause misunderstandings to readers, and (3) printer's (introduced) errors. If you are likely to be away when either this document or the proof is sent, please ensure we have contact information of a second person, as we will need you to respond quickly at each point.
PRESS
A selection of our articles each week are press released by the journal. You will be contacted nearer the time if we are press releasing your article in order to approve the content and check the contact information for journalists is correct. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact.
PROFILE INFORMATION
Now that your manuscript has been accepted, please log into EM and update your profile. Go to https://www.editorialmanager.com/pmedicine, log in, and click on the "Update My Information" link at the top of the page. Please update your user information to ensure an efficient production and billing process.
Thank you again for submitting the manuscript to PLOS Medicine. We look forward to publishing it. Best wishes, Maarten Taal, Renal Medicine, PLOS Medicine, plosmedicine.org"} +{"text": "The complex etiology of autism spectrum disorder (ASD) is still unresolved.
Preterm birth (<37 weeks of gestation) and its complications are the leading cause of death of babies in the world, and those who survive often have long-term health problems. Length of gestation, including preterm birth, has been linked to ASD risk, but robust estimates for the whole range of gestational ages (GAs) are lacking. The primary objective of this study was to provide a detailed and robust description of ASD risk across the entire range of GAs while adjusting for sex and size for GA. Our study had a multinational cohort design, using population-based data from medical registries in three Nordic countries: Sweden, Finland, and Norway. GA was estimated in whole weeks based on ultrasound. Children were prospectively followed from birth for clinical diagnosis of ASD. Relative risk (RR) of ASD was estimated using log-binomial regression. Analyses were also stratified by sex and by size for GA. The study included 3,526,174 singletons born 1995 to 2015, including 50,816 (1.44%) individuals with ASD. In the whole cohort, 165,845 (4.7%) were born preterm. RR of ASD increased by GA, from 40 to 24 weeks and from 40 to 44 weeks of gestation. The RRs of ASD in children born in weeks 22–31, 32–36, and 43–44 compared to weeks 37–42 were estimated at 2.31 (p-value < 0.001), 1.35, and 1.37, respectively. The main limitation of this study is the lack of data on potential causes of pre- or postterm birth. Also, the possibility of residual confounding should be considered. ASD is a neurodevelopmental disorder characterized by persistent impairments in social communication and restricted and repetitive behaviors. The etiology remains unresolved.
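The relative risks above were estimated with log-binomial regression; the unadjusted version of such an estimate reduces to a ratio of risks between two exposure groups, with a confidence interval computed on the log scale. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def relative_risk(cases_exposed, total_exposed, cases_unexposed, total_unexposed):
    """Unadjusted relative risk with an approximate 95% CI on the log scale."""
    rr = (cases_exposed / total_exposed) / (cases_unexposed / total_unexposed)
    # Standard error of log(RR) for cohort counts
    se_log_rr = math.sqrt(1 / cases_exposed - 1 / total_exposed
                          + 1 / cases_unexposed - 1 / total_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

# Hypothetical: ASD cases among very preterm vs. term births
rr, ci = relative_risk(330, 10000, 144, 10000)
print(round(rr, 2))  # 2.29
```

In the study the regression additionally adjusts for covariates such as sex and size for GA; with a log link, exponentiated model coefficients are interpreted as RRs.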
Length of gestation, including preterm birth, has been linked to risk of ASD, but reliable estimates of risks for the whole range of gestational ages (GAs) are lacking. The primary objective of this study was to provide a detailed and robust description of ASD risk across the entire range of GA while taking fetal sex and size at birth into account. This study was based on population-based data from national medical registries in three Nordic countries—Sweden, Finland, and Norway—and included 3,526,174 singletons born 1995 to 2015. Relative risks (RRs) of ASD by GA at birth were estimated with log-binomial regression. The RR of ASD increased by each week of GA, pre- as well as postterm, from 40 to 24 weeks of gestation and from 40 to 44 weeks of gestation, independently of sex and birth weight for GA. On a population level, the risks of ASD were increased in children born either pre- or postterm, including children born close to week 40. We found that the risk of ASD increased weekly, with each week further away from 40 weeks of gestation. Autism spectrum disorder (ASD) is a neurodevelopmental disorder, affecting 1%–2% of children worldwide. The proportion of preterm birth is rising in many parts of the world, including in the United States. The current study is, to our knowledge, the largest to date. In our cohort, comprising more than 3.5 million individuals, we investigated the association between GA and risk of ASD for children born across the GA continuum. We also investigated the potentially modifying effect of sex and size for GA (www.recap-preterm.eu). In light of these reviews, I am afraid that we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to consider a revised version that addresses the reviewers' and editors' comments.
Obviously we cannot make any decision about publication until we have seen the revised manuscript and your response, and we plan to seek re-review by one or more of the reviewers. http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (eg: page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript.In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org.In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements: plosmedicine@plos.org) if you have any questions or concerns.We expect to receive your revised manuscript by May 15 2020 11:59PM. Please email us and your responses to reviewer comments. 
If eligible, we will contact you to opt in or out.***http://journals.plos.org/plosmedicine/s/competing-interests.We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. You can see our competing interests policy here: Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/Your article can be found in the \"Submissions Needing Revision\" folder. http://journals.plos.org/plosmedicine/s/submission-guidelines#loc-methods.To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it. 
Please ensure that the paper adheres to the PLOS Data Availability Policy could contact the national databases quoted as the source of the data- many thanks.*Please revise your title according to PLOS Medicine's style. Your title must be nondeclarative and not a question. It should begin with main concept if possible. \"Effect of\" should be used only if causality can be inferred, i.e., for an RCT. Please place the study design in the title after a colon. *In the Methods section, please clarify if the analytical plan for the study was prespecified prior to collection of data , or if it was only established after data were collected. *In the last sentence of the Abstract Methods and Findings section, please describe the main limitation(s) of the study's methodology.*Please change the referencing format (this should be easy if you've used reference manager software) to call out references using numbers in square brackets rather than superscript numbers. Many thanks*As the paper reports findings from an observational study (prospective cohort), we'd advise the authors to ensure the study is reported according to the STROBE guideline, and include the completed STROBE checklist as Supporting Information. When completing the checklist, please use section and paragraph numbers, rather than page numbers. Please add the following statement, or similar, to the Methods: \"This study is reported as per the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline (S1 Checklist).\"-----------------------------------------------------------Comments from the reviewers:Reviewer #1: This is a very clear and well-designed evaluation of the relationship of gestational age to the risk of autism spectrum disorder (ASD), with analysis of whether sex modifies the gestational age - ASD risk association. The authors have succinctly described the gap in knowledge that this paper fills. 
The sample size is very large and I have no concerns about the quality of the data. This is an important contribution to the epidemiological literature on ASD. -----------------------------------------------------------Reviewer #2: The authors present their work evaluating the association between gestational age and the risk of autism using a large linked dataset. Specific comments: 1. The authors may wish to reconsider using "U-shaped" to describe the nature of the association between gestational age and ASD. Only in the Finland curve does the shape actually appear close to "U" with the risk in the post-term being as high as the risk in the very pre-term time frame. The composite curve shows a gradual decrease in risk from 24 to 40 weeks with a small rise between 40 and 44 weeks. 2. In the last paragraph of the abstract and elsewhere in the manuscript, the authors state "Even though the absolute numbers may seem low, ASD is a rare disorder." Both of these clauses seem to say the same thing - perhaps the "although" is not necessary or perhaps the authors intended to make a different statement. Please check this. 3. The authors state that there are conditions associated with preterm and postterm birth (page 9); this needs to be more fully explored in the discussion section. Intraamniotic infection is highly associated with preterm and very preterm birth, and also associated with subsequent diagnosis of ASD. Postterm birth is associated with delayed maturation of the fetal hypothalamic-pituitary-adrenal axis, and is also associated with ASD. These mechanisms are likely more common than the genetic factors related to preterm birth. The discussion section of this manuscript would benefit from consultation with an expert in mechanisms of parturition (both preterm and postterm) and also an obstetrician or maternal-fetal medicine specialist.
These consultations may also assist in further refining some of the implied statements regarding obstetric interventions to reduce the risk of ASD (i.e. induction of labor at 40 weeks). 4. -----------------------------------------------------------Reviewer #3: I confine my remarks to statistical aspects of this paper. The general approach is fine, but I have some issues to resolve before I can recommend publication. One overall issue. Since the researchers have the population, many statisticians (including me) would argue that tests of significance and confidence intervals and so on make no sense. There's no population to infer to since you have the whole population. There are some who argue that it could be a sample from a "super population" but I don't think this makes a lot of sense - how can it be a random sample from a hypothetical population? I wouldn't forbid publication for this reason (since some people do it and it's not completely wrong) but I'd prefer to have all those things removed. p. 5 - it is good that GA was treated both ways. Categorical is useful for tables, but continuous is better for analysis. - why was birth year categorized? It should be left continuous and maybe a spline effect added. - same for maternal age at birth, and size for gestational age. The former should be "week" and the latter percentile. Categorizing independent variables is a bad idea. In *Regression Modelling Strategies* Frank Harrell lists 11 problems with it and summarizes "nothing could be more disastrous". Generally, though, a good job. Peter Flom-----------------------------------------------------------Any attachments provided with reviews can be seen via the following link: [LINK] 29 May 2020Dear Dr.
Persson,Thank you very much for re-submitting your manuscript \"GESTATIONAL AGE AND THE RISK OF AUTISM ; A PROSPECTIVE COHORT STUDY\" (PMEDICINE-D-20-00561R2) for consideration at PLOS Medicine.I have discussed the paper with editorial colleagues and it was also seen again by two reviewers. I am pleased to tell you that, provided the remaining editorial and production issues are fully dealt with, we expect to be able to accept the paper for publication in the journal.The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript:[LINK]plosmedicine@plos.org) will be in touch shortly about the production requirements for your paper, and the link and deadline for resubmission. DO NOT RESUBMIT BEFORE YOU'VE RECEIVED THE PRODUCTION REQUIREMENTS.Our publications team and your responses to reviewer comments. If eligible, we will contact you to opt in or out.***In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. A version with changes marked must also be uploaded as a marked up manuscript file.http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. 
This text is subject to editorial change and should be distinct from the scientific abstract.Please also check the guidelines for revised papers at plosmedicine@plos.org) if you have any questions or concerns.We hope to receive your revised manuscript within 1 week. Please email us , which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it.Please ensure that the paper adheres to the PLOS Data Availability Policy .Please review your reference list to ensure that all citations meet journal style. Six author names should be listed (rather than 3) prior to \"et al.\"; and italics should be converted to plain text. Please ensure that journal names are abbreviated consistently .We may have missed the STROBE checklist with your submission. Please ensure that this is present as a supplementary document, referred to in your methods section . In the checklist, please ensure that individual items are referred to by section and paragraph number, not line or page numbers - the latter generally change in the event of publication. Please also supply your analysis plan as a supplementary file, referred to in your methods section . We noted some instances of \"p<0.0001\" in your supplementary files. Please ensure that all p values are quoted as \"p<0.001\" or exact values, unless there is a specific statistical rationale to the contrary. Comments from Reviewers:*** Reviewer #2: 1. The wording of the second sentence of the abstract is a bit confusing, and I think better stated elsewhere in the manuscript. 
\"RR increased from 40 weeks of gestation to 24 weeks\" would seem like an obvious error to the reader. Perhaps stating in terms of weeks before/after 40 weeks, which is done elsewhere in the paper, would be helpful.2. Page 10, 2nd paragraph. While genetics likely plays a role in timing of birth, many other mechanisms are likely more responsible than \"genetics\" for onset of labor. This paragraph should be reworked with a more rigorous discussion of the factors controlling parturition. Consultation with an expert in the mechanisms of parturition would be most helpful for this discussion.3. What are the policy differences between countries that would lead to a higher rate of birth at 42 weeks in Sweden than in the other two countries. The policy(ies) should be explained in the text.4. The lack of information on whether preterm birth was spontaneous or iatrogenic is a significant limitation and should be described similarly.5. The statement in the last paragraph \"If risks of ASD in offspring born near term could be avoided by delivery at 40 weeks gestation remains to be investigated\" should be removed. It is essentially speculative in nature, and not really supported by the findings in this study.*** Reviewer #3: On my first point regarding statistical tests applied to the population, the authors' reply is satisfactory. I would ask the authors to comment in their limitations section on categorizing variables, regarding the available data on maternal age.I accept the authors' response on the question of gestational age.I believe that the authors can proceed with minor revision.Peter Flom***Any attachments provided with reviews can be seen via the following link:[LINK] 7 Aug 2020Dear Dr. Persson, On behalf of my colleagues and the academic editor, Dr. 
Michael Fassett, I am delighted to inform you that your manuscript entitled "GESTATIONAL AGE AND THE RISK OF AUTISM SPECTRUM DISORDER IN SWEDEN, FINLAND AND NORWAY; A COHORT STUDY" (PMEDICINE-D-20-00561R3) has been accepted for publication in PLOS Medicine.

PRODUCTION PROCESS: Before publication you will see the copyedited Word document (in around 1-2 weeks from now) and a PDF galley proof shortly after that. The copyeditor will be in touch shortly before sending you the copyedited Word document. We will make some revisions at the copyediting stage to conform to our general style, and for clarification. When you receive this version you should check and revise it very carefully, including figures, tables, references, and supporting information, because corrections at the next stage (proofs) will be strictly limited to (1) errors in author names or affiliations, (2) errors of scientific fact that would cause misunderstandings to readers, and (3) printer's (introduced) errors. If you are likely to be away when either this document or the proof is sent, please ensure we have contact information for a second person, as we will need you to respond quickly at each point.

PRESS: A selection of our articles each week is press-released by the journal. You will be contacted nearer the time if we are press-releasing your article, in order to approve the content and check that the contact information for journalists is correct. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact.

PROFILE INFORMATION: Now that your manuscript has been accepted, please log into EM and update your profile. Go to https://www.editorialmanager.com/pmedicine, log in, and click on the "Update My Information" link at the top of the page. Please update your user information to ensure an efficient production and billing process. Thank you again for submitting the manuscript to PLOS Medicine.
We look forward to publishing it. Best wishes, Richard Turner, PhD, Senior Editor, PLOS Medicine, plosmedicine.org

The mortality impact of pulse oximetry use during infant and childhood pneumonia management at the primary healthcare level in low-income countries is unknown. We sought to determine mortality outcomes of infants and children diagnosed and referred using clinical guidelines with or without pulse oximetry in Malawi. We applied peripheral oxygen saturation (SpO2) thresholds and World Health Organization (WHO) and Malawi clinical guidelines for referral. We used unadjusted and adjusted regression to account for interaction between SpO2 threshold (pulse oximetry) and clinical guidelines, clustering by child, and CHW or HC catchment area. We matched CHW and HC outpatient data to hospital inpatient records to explore roles of pulse oximetry and clinical guidelines on hospital attendance after referral. From 7,358 CHW and 6,546 HC pneumonia episodes, we linked 417 CHW and 695 HC pneumonia episodes to 30-day mortality outcomes: 16 (3.8%) CHW and 13 (1.9%) HC patients died. SpO2 thresholds of <90% and <93% identified 1 (6%) of the 16 CHW deaths that were unidentified by the integrated community case management (iCCM) WHO referral protocol, and 3 (23%) and 4 (31%), respectively, of the 13 HC deaths that were unidentified by the integrated management of childhood illness (IMCI) WHO protocol. The Malawi IMCI referral protocol, which differs from the WHO protocol at the HC level and includes chest indrawing, identified all but one of these deaths. SpO2 < 90% predicted death independently of WHO danger signs compared with SpO2 >= 90%: HC risk ratio (RR), 9.37; CHW RR, 6.85. SpO2 < 93% was also predictive versus SpO2 >= 93% at HC level: RR, 6.68. Hospital referrals and outpatient episodes with referral decision indications were associated with mortality. A substantial proportion of those referred were not found among hospital inpatients within 7 days of referral advice.
All 12 deaths in 73 hospitalised children occurred within 24 hours of arrival in the hospital, which highlights delay in appropriate care seeking. The main limitation of our study was our ability to match only 6% of CHW episodes and 11% of HC episodes to mortality outcome data. We conducted a data linkage study of prospective health facility and community case and mortality data. We matched prospectively collected community health worker (CHW) and health centre (HC) outpatient data to prospectively collected hospital and community-based mortality surveillance outcome data, including episodes followed up to, and deaths within, 30 days of pneumonia diagnosis amongst children 0-59 months old. All data were collected in Lilongwe and Mchinji districts, Malawi, from January 2012 to June 2014. We determined differences in mortality rates using <90% and <93% oxygen saturation thresholds. Pulse oximetry readings of less than 90% oxygen saturation identified 6% of deaths at community health worker level (1/16) and 23% of deaths at health centre level (3/13) not identified by clinical signs. Increasing the threshold to less than 93% SpO2 identified additional deaths not identified by clinical signs at the health centre level only. All of the health centre deaths identified by pulse oximetry except one were also identified by chest indrawing in this high-mortality setting. Our findings suggest that pulse oximetry could be beneficial in supplementing clinical signs to identify children with pneumonia at high risk of mortality in the outpatient setting in health centres, for referral to a hospital for appropriate management. In high-mortality settings in low- and middle-income countries, in the absence of pulse oximetry, presence of chest indrawing could potentially be explored as a referral sign to a hospital, but this needs further research in routine settings.
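The unadjusted risk ratios quoted in the abstract (e.g., HC RR 9.37 for SpO2 < 90% versus >= 90%) are ratios of mortality proportions between exposure groups. As a minimal sketch, using made-up counts rather than the study's actual data, the point estimate and a log-scale 95% confidence interval can be computed as:

```python
import math

def risk_ratio(a, n1, c, n2):
    """Unadjusted risk ratio of group 1 vs. group 2, with a 95% CI
    computed on the log scale (Katz method).

    a, n1: deaths and total episodes in the exposed group (e.g., SpO2 < 90%)
    c, n2: deaths and total episodes in the unexposed group (SpO2 >= 90%)
    """
    rr = (a / n1) / (c / n2)
    # Standard error of log(RR)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical example: 5/50 deaths among hypoxaemic episodes
# vs. 10/500 among non-hypoxaemic episodes.
rr, lo, hi = risk_ratio(5, 50, 10, 500)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these hypothetical counts the sketch prints RR = 5.00 (95% CI 1.78-14.05). The study itself reports adjusted estimates from mixed regression models accounting for clustering, which this simple formula does not reproduce.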
Pneumonia remains a leading cause of death in children under 5, especially in low- and middle-income countries (LMICs), with around 800,000 pneumonia-related deaths a year globally. Early identification and action are required to prevent the many pneumonia-related deaths that currently occur in hospital, often because of late presentation. Peripheral oxygen saturation (SpO2) measurement by pulse oximeters at outpatient primary care and first-level health facilities has a potential role in aiding early recognition and referral of severe pneumonia episodes for oxygen and injectable antibiotic treatment. SpO2 measurements were taken using the Lifebox device, with a universal adult clip probe applied to the child's big toe if less than 2 years of age or below 10 kg. Otherwise, for older or heavier children, providers were instructed to use either the big toe or an appropriately sized finger. A paediatric probe was not available during this time period. CHW and HC workers were trained and retrained in the use of pulse oximetry by a paediatric pulmonologist (EDM) as described by McCollum and colleagues. The <90% and <93% (RR: 6.68 [95% CI: 1.52-29.4]) thresholds are significantly associated with mortality in both the Malawi and WHO IMCI guideline models. Of the 417 CHW episodes, outpatient referral decisions were more common in patients who were clinically or SpO2 eligible (33.3%-100%) than in those who were not (3.6%-7.0%). Only 0.7% (3) of the CHW episodes were found to be hospital inpatients within 7 days of outpatient diagnosis. At HC level, healthcare workers referred 90.6% of those who were eligible by both criteria, 62.3% of those who were clinically eligible only, and only 20.0% of those eligible because of SpO2 < 90% only. Seventy (10.1%) of the 695 HC episodes were hospital inpatients within 7 days; 60 of these followed an outpatient referral decision. At CHW level, neither SpO2 alone nor iCCM clinical signs alone (DOR: 1.33 [95% CI: 0.29-6.05]) accurately identified those who died.
At the HC level, pulse oximetry combined with both Malawi and WHO IMCI clinical signs was able to accurately identify those who died. Although the point estimates for these DORs are higher than those for when Malawi and WHO IMCI clinical signs alone are used without pulse oximetry (WHO: DOR 3.43 [95% CI: 1.10-10.7]), the confidence intervals are wide and overlapping, indicating these differences are not statistically significant. Referral is recommended when SpO2 is less than 90%; however, pulse oximetry is not frequently available at the outpatient level. Low-cost portable pulse oximeters are becoming much more available compared to 2014, especially in the context of the ongoing COVID pandemic.

In light of these reviews, I am afraid that we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to consider a revised version that addresses the reviewers' and editors' comments. Obviously we cannot make any decision about publication until we have seen the revised manuscript and your response, and we plan to seek re-review by one or more of the reviewers. Please check http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any guidelines that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (e.g., page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked-up manuscript. In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers. For figures, see http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/.
PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org. In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300 dpi resolution at resubmission; please read our figure guidelines for more information on our requirements. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We expect to receive your revised manuscript by Apr 09 2020 11:59PM. Please email us your revised manuscript and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

*** We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement, we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. You can see our competing interests policy here: http://journals.plos.org/plosmedicine/s/competing-interests. Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/ Your article can be found in the "Submissions Needing Revision" folder.
To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/plosmedicine/s/submission-guidelines#loc-methods. Please ensure that the paper adheres to the PLOS Data Availability Policy (http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it. Please include the study design in the subtitle. Abstract: Please structure your abstract using the PLOS Medicine headings. Funding is not required here. Abstract methods and findings: please provide demographics and the places where this study took place in Malawi, along with dates. Abstract Methods and Findings: * Please ensure that all numbers presented in the abstract are present and identical to numbers presented in the main manuscript text. * Please include the study design, population and setting, number of participants, years during which the study took place, length of follow up, and main outcome measures. * Please quantify the main results. * Please include the important dependent variables that are adjusted for in the analyses. Abstract methods and findings: the last sentence should include a limitation of your study design. Abstract conclusions: * Please address the study implications without overreaching what can be concluded from the data; the phrase "In this study, we observed ..." may be useful.
* Please interpret the study based on the results presented in the abstract, emphasizing what is new without overstating your conclusions. * Please avoid vague statements such as "these results have major implications for policy/clinical care". Mention only specific implications substantiated by the results. * Please avoid assertions of primacy ("We report for the first time...."). Please remove the "research in context" section. At this stage, we ask that you include a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract. Please see our author guidelines for more information: https://journals.plos.org/plosmedicine/s/revising-your-manuscript#loc-author-summary. The Data Availability Statement (DAS) requires revision. For each data source used in your study: a) If the data are freely or publicly available, note this and state the location of the data: within the paper, in Supporting Information files, or in a public repository (include the DOI or accession number). b) If the data are owned by a third party but freely available upon request, please note this and state the owner of the data set and contact information for data requests (web or email address). Note that a study author cannot be the contact person for the data. c) If the data are not freely available, please describe briefly the ethical, legal, or contractual restriction that prevents you from sharing it. Please also include an appropriate contact (web or email address) for inquiries. Square brackets placement: please add a space between text and square brackets, followed by a full stop. For example: xxxxxx [3-5]. Did your study have a prospective protocol or analysis plan?
Please state this (either way) early in the Methods section. a) If a prospective analysis plan was used in designing the study, please include the relevant prospectively written document with your revised manuscript as a Supporting Information file to be published alongside your study, and cite it in the Methods section. A legend for this file should be included at the end of your manuscript. b) If no such document exists, please make sure that the Methods section transparently describes when analyses were planned, and when/why any data-driven changes to analyses took place. c) In either case, changes in the analysis -- including those made in response to peer review comments -- should be identified as such in the Methods section of the paper, with rationale. Please ensure that the study is reported according to the STROBE guideline, and include the completed STROBE checklist as Supporting Information. Please add the following statement, or similar, to the Methods: "This study is reported as per the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline (S1 Checklist)." The STROBE guideline can be found here: http://www.equator-network.org/reporting-guidelines/strobe/ When completing the checklist, please use section and paragraph numbers, rather than page numbers. For all observational studies, in the manuscript text, please indicate: (1) the specific hypotheses you intended to test, (2) the analytical methods by which you planned to test them, (3) the analyses you actually performed, and (4) when reported analyses differ from those that were planned, transparent explanations for differences that affect the reliability of the study's results.
If a reported analysis was performed based on an interesting but unanticipated pattern in the data, please be clear that the analysis was data-driven. The role of the funding source should be added into the financial statement within the article metadata and removed from the main text. Please conclude the Introduction with a clear description of the study question or hypothesis. Conclusions must be toned down, since this is an observational study. Please present and organize the Discussion as follows: a short, clear summary of the article's findings; what the study adds to existing research and where and why the results may differ from previous research; strengths and limitations of the study; implications and next steps for research, clinical practice, and/or public policy; one-paragraph conclusion. S1 Appendix, Table A1: is the date seen relevant? I imagine this is identifying, along with the number of episodes, age, and whether they were seen by a CHW or at an HC. Please amend this table. Manuscripts submitted to PLOS should not contain research participants' personally identifying information. In rare exceptions where this is unavoidable and a manuscript does contain PII, the authors should be willing and able to provide PLOS with confirmation of GDPR compliance upon request. Map of Malawi: you may consider adding this into the main text, as it gives an immediate visualisation of where the data were collected from.

Comments from the reviewers:

Reviewer #1: This is a very important piece of work that merits consideration by the journal as it tries to assess the "... mortality outcomes of infants and children diagnosed and referred using clinical guidelines with or without pulse oximetry in Malawi". Recent advances in the wider scale-up and implementation of pulse oximetry need to go hand in hand with data like those presented in this manuscript, which are clear-cut. Pulse oximetry helps identify children at risk of dying.
Authors conclude that "Pulse oximetry identified fatal pneumonia episodes at HCs in Malawi that would otherwise have been missed by WHO referral guidelines alone", which is a statement with which I do agree. I believe the journal should consider the inclusion of a comment to go hand in hand with the manuscript. I only have a few minor comments to add: * The word data should be used in the plural. * Current global estimates for pneumonia mortality are closer to 800,000 than to 900,000/year. * The fact that mortality as an endpoint was assessed at day 7 but also at day 30 means that many of these deaths may not (or may indeed) be related to the initial episode that took the patient to the health system (particularly for those at day 30). This should be further discussed, but it doesn't eliminate the validity of the association, and therefore of the use of oxygen saturation as a risk stratification tool. * In this respect, is there any information regarding the potential underlying causes of those deaths? It would be very helpful to be able to state that an elevated proportion of those deaths were secondary to respiratory problems. * I understand vaccination data were also collected. Was there any association between mortality and lack of adequate vaccination against Hib or pneumococcus? * The finding that the association with mortality is maintained with the 93% threshold is also very important, but it risks overloading the system if all patients fulfilling this criterion are recommended for a transfer. Have you been able to conduct any economic modeling of the costs associated with the two scenarios? This would be super helpful, as I predict that costs to the system would be massively increased (and be unaffordable). Were there other specific variables within the specific group with saturations between 90-93% which were also associated with mortality?
Could the recommendation include a need to fulfil at least two of the risk factors, rather than one? * Nothing is mentioned on the absence/availability of emergency oxygen/systems for transfers. Recognizing this is a major deficit in the health systems of LMICs, it may be worth stating it as an important consideration for the future. There is an ethical dilemma in measuring oxygen saturation but not being able to provide life-saving oxygen, at least for the transport. This is perhaps the most neglected field in pneumonia research: how to ensure cheap and durable availability of oxygen for emergency transport. * Pulse oximeter readings are very variable and can often produce false positive results of hypoxemia values which are not real. I understand that specific training was conducted, but perhaps it may be useful to add in the methods section how values obtained in the peripheral health system were considered "reliable" and robust. This is particularly important for one of your conclusions in relation to the association with mortality: "the failure to obtain an SpO2 measure using pulse oximeters in identifying otherwise unrecognized fatal childhood pneumonia cases accessing primary care". Could some of those cases be failures of the measurement technique? Of the devices used? It may seem very obvious, but this appears important to me.

Reviewer #2: PLOS Medicine Colbourn SpO2 Malawi review. The lowest denomination of low-cost factors to determine poor outcome is required. Oxygen saturation monitors will be increasingly available for use in developing health systems, and an assessment of their impact is required. This first report is very useful for informing this field and provides structure to the reporting of future studies.

Comments: Methods: Describe the process for patients to access and move through the HC system. Results: P10 L227.
This manuscript would be helped by a figure or initial text in the results that provides cascade detail of the patients, matching, and deaths, to give better context to what is reported; i.e., N = 13814 pneumonia episodes, 6941 CHW + 5761 HC, in which SpO2 <90% was found in 86/6941 (1.2%) and 608/5761 (10.5%) respectively. There was matching of pneumonia episodes to the mortality cohort in N = 1112, 417/6941 CHW and 695/5761 HC, of which there were 29 deaths, representing 2.6% of the matched episodes (16/417 CHW and 13/608). P10 L252: The case for chest indrawing is well made. Page 11 L259: In light of the finding of a discrepancy between SpO2 <90% and deaths in the CHW cohort (missing 75% of deaths), please provide discussion on the use of an adult probe in a paediatric community setting, and the potential for error from extraneous light and the difficulty in positioning. Please also consider discussion of Table 1. The results identify that an SpO2 threshold of 93% would result in a 6 times higher referral rate by CHWs and a doubled referral rate by HCs. Please discuss the potential impact of this on the ability to deliver safe and effective care to children with pneumonia: increased sensitivity to identify severe disease, but reduced specificity having an economic and hospital bed impact that could reduce the ability to prioritise services for those with severe disease.

Discussion: A key finding is that clinical criteria including chest indrawing, together with SpO2 <90%, fail to identify 75% of deaths from pneumonia in children. It is difficult to understand that 75% of deaths occurred in children whose parents were concerned enough to attend services so early that they had no severe signs and no desaturation, but then did not re-attend when things got worse. As this represents such a large proportion of 'inaccessible pneumonia deaths', it would be useful to consider how this may be resolved in a future study, i.e., red flag information.
Please discuss that health-seeking behaviours are orientated to younger children, and that it is older children who do not attend outpatient services after referral. Whilst this is regrettable, parents do appear to understand that younger children are at greater risk. Overall the study is of value for providing an indication of the potential value of pulse oximetry in childhood pneumonia, but large missing data and relatively few deaths limit the strength of conclusions despite rigorous analysis. The conclusions could be more circumspect. Line 300: This evidence 'could' support... The evidence in itself is not strong enough to support the inclusion at this time. Line 420: 'should' is inappropriate based on this evidence alone.

Reviewer #3: Alex McConnachie, Statistical Review. Colbourn et al consider the potential for the use of pulse oximetry for identifying children likely to die from pneumonia in Malawi. This review looks at the use of statistics in the paper. Unfortunately, I have a number of concerns. A major problem is the very poor linkage rate, which casts doubt on the validity of the results. The authors comment that there were some differences between those matched and not matched, with reference to the tables and appendix, but should perhaps expand on what these differences were in the main text. My main concern with the analysis is the regression modelling. There were problems with convergence, and the models that it had been intended to fit were not possible. There were generally few events, and the models that were fitted produced estimates with very wide confidence intervals. Some models reported in the supplement only worked using a logit link. All of these factors suggest that these models are unstable, and I would not trust these results. Note that the description of the model is not entirely correct.
On line 184, mu.i is not death for subject i; it is the probability of death. Yi is not the log risk of death; it is the outcome, death, for subject i. Nevertheless, there may be valuable data here, but the tables as presented are quite complex and are not easy to follow. A simpler approach might be to report more standard measures of diagnostic performance, i.e. sensitivity, specificity, PPV, NPV. That might show, in terms that are easily understood, which combinations of screening criteria worked best. I doubt that the sample size is enough to test, say, whether any differences are statistically significant, but it might make the case for additional research. Overall, the conclusions of the paper, that SpO2 measurements (or lack of) and chest indrawing should be included in screening criteria, seem premature given these data. Minor points on Table 1: p values of "0.000" should be reported as "<0.001", and there is no need to add asterisks - the actual p-values are reported. Table 2 is confusing - some percentages are in columns, some in rows - but as stated above, if the focus were changed to measures of diagnostic performance, that might help. Tables 3 and 5 mix up risks and odds, with the constant term referred to as baseline odds. I note that individual consent was not obtained for this study, but individual patient data are reported in the supplement, which could potentially identify individuals.

Reviewer #4: Page 6: the phrase "chest indrawing" is likely the same as "retractions", a term more familiar to North Americans. Please add this. Page 7: it is customary to add the name, address, and even model number of a device used in research. Please add these. Also, please detail why an adult universal probe was used in young children. Did it work better than a pediatric one, or was that all there was? Thank you. Page 10: Please define what "SpO2 eligible" means. Page 13: Please expand the discussion about specific limitations of the SpO2 measure in your specific setting.
It's what people would likely want to read in the paper.

Any attachments provided with reviews can be seen via the following link: [LINK] 29 May 2020. Attachment: Response_to_reviewers_Outpatient pulse oximetry mortality.docx

17 Jul 2020 Dear Dr. Colbourn, Thank you very much for submitting your manuscript "Predictive value of pulse oximetry for mortality in infants and children presenting to primary care with clinical pneumonia in rural Malawi: a data linkage study" (PMEDICINE-D-20-00470R2) for consideration at PLOS Medicine. Your paper was evaluated by a senior editor and discussed among all the editors here. It was also discussed with an academic editor with relevant expertise, and sent to independent reviewers, including a statistical reviewer. The reviews are appended at the bottom of this email and any accompanying reviewer attachments can be seen via the link below: [LINK] In light of these reviews, I am afraid that we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to consider a revised version that addresses the reviewers' and editors' comments. Obviously we cannot make any decision about publication until we have seen the revised manuscript and your response, and we plan to seek re-review by one or more of the reviewers. Please check http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any guidelines that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (e.g., page and line number) where each change can be found.
Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked-up manuscript. In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers. For figures, see http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org. In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300 dpi resolution at resubmission; please read our figure guidelines for more information on our requirements. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We expect to receive your revised manuscript by Jul 27 2020 11:59PM. Please email us your revised manuscript and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

*** You can see our competing interests policy at http://journals.plos.org/plosmedicine/s/competing-interests. We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement, we will be in contact.
YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/. Your article can be found in the \"Submissions Needing Revision\" folder. To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/plosmedicine/s/submission-guidelines#loc-methods. Please ensure that the paper adheres to the PLOS Data Availability Policy (http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it. Ref 3 is our statistician and we see these points as important. I realise in some instances toning down will be a compromise. Comments from the reviewers: Reviewer #1: All my comments have been addressed, and I'm pleased with how the authors have rewritten the manuscript. I'm also happy with how they have dealt with the other reviewers' comments. I still think, however, that this manuscript would benefit from an accompanying commentary. Reviewer #2: Thank you for responding to the primary review. The responses were very helpful and provide clarity. I have one remaining comment. 
Whilst I note that the authors have provided detail on the training of CHWs in pulse oximetry, the statement that for '12 (75% of the 16 deaths)' the 'CHW may have missed danger signs, meaning further support for training...' could be read as implying that CHWs are somehow responsible for these omissions, as the children were not 'hypoxemic'. I would ask that the authors also acknowledge within their limitations that the use of adult probes on pulse oximeters was a limitation and that some of these children may have been hypoxemic at the time of review; we would not use adult probes where a paediatric probe is available, as is increasingly the case globally. The use of adult probes in young children has documented limitations. This potential technological failure to detect hypoxemia does not detract from their core message; to me it adds to it, as it implies that the use of age-appropriate sensors may provide even greater sensitivity to detect children who may die from lower respiratory tract infection. Reviewer #3: Alex McConnachie, Statistical Review. I thank Colbourn and colleagues for their responses to my original comments. Seeing Table 6 (sensitivity etc.) helps me to understand the data a little better. Pulse oximetry adds sensitivity, at the expense of a loss of specificity. The results section of the paper concentrates on sensitivity, but the loss of specificity is touched on in the discussion. No confidence intervals are provided for any of the diagnostic measures, but the authors do note that none of the differences are statistically significant. The diagnostic odds ratios in Table 6 are: A1: 1.26, A2: 1.17, A3: 1.32; B1: 7.13, B2: 6.19, B3: 5.24; B4: 6.57, B5: 6.82, B6: 3.43. This suggests that pulse oximetry may be of no real value in the community setting, but might be in a HC setting, particularly when added to the WHO clinical criteria. 
It also suggests that the Malawi danger signs may perform better than the WHO danger signs on their own. These data are limited, though, because they are derived from real-world data in which children are being assessed and treated. The aim of the assessments and subsequent treatment and referrals is to prevent adverse outcomes. It cannot be known how many of these children might have died in the absence of treatment. So, the actual number of true positives will be higher, and the number of false negatives lower, than suggested by simply looking at death as the outcome. Using referral and hospitalisation data as in Tables 4 and 5 does not really help, because these decisions are being made in the light of the assessments being done; in this study, there is no \"gold standard\" against which to assess the alternative screening criteria. I still feel the logistic regression models are pushing the data too far. For death as the outcome, there are only 13 or 16 events being analysed, depending on the dataset, and models are being fitted (or are being attempted) with a total of 6 parameters (including clinical danger signs and an interaction (2 df)), so the events-per-variable ratio is very low. Some of the interaction terms are not estimable due to the lack of events in some subgroups. When looking at referral and hospitalisation as the outcome, there are more events, but still very few in some subgroups. The text of the paper mentions associations with low SpO2 and with clinical assessments, but these are the main effects, and each applies only in the absence of the other. The interaction terms are not taken into account, and these are generally below 1. 
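The diagnostic odds ratios quoted above combine sensitivity and specificity into a single number. A minimal sketch of the computation, using hypothetical counts (the study's underlying 2x2 counts are not reproduced here):

```python
# Diagnostic odds ratio (DOR) from a 2x2 screening table.
# All counts below are hypothetical, for illustration only.
def diagnostic_odds_ratio(tp, fn, fp, tn):
    """DOR = (TP/FN) / (FP/TN) = (TP*TN) / (FP*FN)."""
    return (tp * tn) / (fp * fn)

def dor_from_sens_spec(sens, spec):
    """Equivalent form using sensitivity and specificity."""
    return (sens / (1 - sens)) / ((1 - spec) / spec)

# Hypothetical: 10 of 16 deaths flagged by the criteria (sensitivity 0.625),
# 200 of 1000 survivors also flagged (specificity 0.8).
print(round(diagnostic_odds_ratio(tp=10, fn=6, fp=200, tn=800), 2))  # 6.67
print(round(dor_from_sens_spec(0.625, 0.8), 2))                      # 6.67
```

A DOR near 1, as in the community (A) settings above, indicates essentially no discriminative value; values in the 3-7 range, as in the health-centre (B) settings, indicate modest discrimination.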
I think these data are interesting, and suggest that chest in-drawing and SpO2 measurements may be of value in some settings, but these are quite messy data, which were not collected with this analysis in mind. The linkage may be highly novel, but the success rate was very low, with many differences between those linked and not linked. Any attachments provided with reviews can be seen via the following link: [LINK] 27 Jul 2020. Attachment: PMEDICINE-D-20-00470R2 Response to review.docx. Submitted filename: Click here for additional data file. 11 Aug 2020. Dear Dr. Colbourn, Thank you very much for re-submitting your manuscript \"Predictive value of pulse oximetry for mortality in infants and children presenting to primary care with clinical pneumonia in rural Malawi: a data linkage study\" (PMEDICINE-D-20-00470R3) for review by PLOS Medicine. I have discussed the paper with my colleagues and the academic editor. I am pleased to say that provided the remaining editorial and production issues are dealt with we are planning to accept the paper for publication in the journal. The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript: [LINK] Our publications team (plosmedicine@plos.org) will be in touch shortly about the production requirements for your paper, and the link and deadline for resubmission. DO NOT RESUBMIT BEFORE YOU'VE RECEIVED THE PRODUCTION REQUIREMENTS. If your article is eligible for a press release, we will contact you to opt in or out.***In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. 
A version with changes marked must also be uploaded as a marked up manuscript file. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We expect to receive your revised manuscript within 1 week. Please ensure that the paper adheres to the PLOS Data Availability Policy, which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it. \u201cWe could upload that to a free to access public repository such as the one held by the corresponding author's university, UCL. Please let us know if this is acceptable.\u201d Comments from Reviewers: Any attachments provided with reviews can be seen via the following link: [LINK] 1 Sep 2020. Attachment: response to editorial and production requirements.docx. Submitted filename: Click here for additional data file. 11 Sep 2020. Dear Dr. Colbourn, On behalf of my colleagues and the academic editor, Dr. 
Quique Bassat, I am delighted to inform you that your manuscript entitled \"Predictive value of pulse oximetry for mortality in infants and children presenting to primary care with clinical pneumonia in rural Malawi: a data linkage study\" (PMEDICINE-D-20-00470R4) has been accepted for publication in PLOS Medicine. PRODUCTION PROCESS: Before publication you will see the copyedited Word document (in around 1-2 weeks from now) and a PDF galley proof shortly after that. The copyeditor will be in touch shortly before sending you the copyedited Word document. We will make some revisions at the copyediting stage to conform to our general style, and for clarification. When you receive this version you should check and revise it very carefully, including figures, tables, references, and supporting information, because corrections at the next stage (proofs) will be strictly limited to (1) errors in author names or affiliations, (2) errors of scientific fact that would cause misunderstandings to readers, and (3) printer's (introduced) errors. If you are likely to be away when either this document or the proof is sent, please ensure we have contact information of a second person, as we will need you to respond quickly at each point. PRESS: A selection of our articles each week are press released by the journal. You will be contacted nearer the time if we are press releasing your article in order to approve the content and check that the contact information for journalists is correct. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. PROFILE INFORMATION: Now that your manuscript has been accepted, please log into EM and update your profile. Go to https://www.editorialmanager.com/pmedicine, log in, and click on the \"Update My Information\" link at the top of the page. Please update your user information to ensure an efficient production and billing process. 
Thank you again for submitting the manuscript to PLOS Medicine. We look forward to publishing it. Best wishes, Clare Stone, PhD, Managing Editor, PLOS Medicine, plosmedicine.org"} +{"text": "Chronic use of proton-pump inhibitors (PPIs) is common in kidney transplant recipients (KTRs). However, concerns are emerging about the potential long-term complications of PPI therapy. We aimed to investigate whether PPI use is associated with excess mortality risk in KTRs. PPI use was associated with higher mortality (P < 0.001) compared with no use. After adjustment for potential confounders, PPI use remained independently associated with mortality. Moreover, the HR for mortality risk in KTRs taking a high PPI dose compared with patients taking no PPIs was higher than in KTRs taking a low PPI dose. These findings were replicated in the Leuven Renal Transplant Cohort. The main limitation of this study is its observational design, which precludes conclusions about causation. We investigated the association of PPI use with mortality risk using multivariable Cox proportional hazard regression analyses in a single-center prospective cohort of 703 stable outpatient KTRs, who visited the outpatient clinic of the University Medical Center Groningen (UMCG) between November 2008 and March 2011. Independent replication of the results was performed in a prospective cohort of 656 KTRs from the University Hospitals Leuven (NCT01331668). Mean age was 53 \u00b1 13 years, 57% were male, and 56.6% used PPIs. During a median follow-up of 8.2 (4.7\u20139.0) years, 194 KTRs died. 
In univariable Cox regression analyses, PPI use was associated with an almost 2 times higher mortality risk compared with no use. Proton-pump inhibitors (PPIs) are commonly prescribed to prevent gastrointestinal side effects of immunosuppressive medication after kidney transplantation, and there is little incentive to discontinue use of PPIs in the long term. Several observational studies among individuals from the general population and among patients on hemodialysis have found that PPI use is associated with a higher mortality risk. Long-term mortality rates in kidney transplant recipients (KTRs) are high. Therefore, we aimed to investigate whether PPI use is associated with increased mortality risk in KTRs. We performed a post hoc analysis using data from the TransplantLines Food and Nutrition Biobank and Cohort Study, a prospective cohort study in 703 KTRs, with baseline assessments performed between November 2008 and March 2011. Follow-up was performed for a median of 8.2 years. We found that PPI users had an almost 2-fold increased mortality risk compared with nonusers. When we looked at the cause of death, we found that PPI use was particularly associated with mortality due to cardiovascular diseases and infectious diseases. We also demonstrated that mortality risk is highest among KTRs taking high PPI dosages. These findings were replicated in an independent cohort of 656 KTRs from the University Hospitals Leuven, which strengthens the evidence for an association between PPI use and mortality risk in KTRs. Results of this study suggest that PPI use is associated with mortality risk in KTRs, independent of potential confounders. The current study highlights the importance of an evidence-based indication for PPI treatment and provides a rationale to perform a randomized controlled trial on chronic PPI therapy in KTRs. 
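As a rough illustration of the unadjusted, almost 2-fold risk described above, a crude mortality rate ratio (deaths per person-year, users vs nonusers) can be computed. This is a simple analogue of a univariable hazard ratio, not the multivariable Cox model the study actually used, and all counts and person-years below are hypothetical:

```python
import math

# Crude mortality rate ratio with a 95% CI (log-normal approximation).
# All counts and person-years are hypothetical, for illustration only.
def rate_ratio(d1, py1, d0, py0):
    rr = (d1 / py1) / (d0 / py0)         # deaths per person-year, group 1 vs 0
    se = math.sqrt(1 / d1 + 1 / d0)      # SE of the log rate ratio
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical: 130 deaths over 3000 person-years among PPI users,
# 64 deaths over 3200 person-years among nonusers.
rr, lo, hi = rate_ratio(130, 3000, 64, 3200)
print(f"rate ratio {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval excluding 1, as here, corresponds to the reported P < 0.001 pattern; the study's adjusted estimates additionally control for confounders, which this crude ratio cannot do.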
Renal transplantation is considered the preferred treatment for patients with end-stage renal disease, providing improved prognosis and quality of life at lower cost compared with dialysis treatment \u20133. Although generally considered safe, PPI use has been associated with complications such as Clostridium difficile infections. In light of these reviews, I am afraid that we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to consider a revised version that addresses the reviewers' and editors' comments. Obviously we cannot make any decision about publication until we have seen the revised manuscript and your response, and we plan to seek re-review by one or more of the reviewers. Please check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (eg: page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript. In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check our figure guidelines at http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. 
If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org. In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We expect to receive your revised manuscript by Feb 11 2020 11:59PM, along with your responses to reviewer comments. If your article is eligible for a press release, we will contact you to opt in or out.***You can see our competing interests policy here: http://journals.plos.org/plosmedicine/s/competing-interests. We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. Please use the following link to submit the revised manuscript: https://www.editorialmanager.com/pmedicine/. Your article can be found in the \"Submissions Needing Revision\" folder. To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/plosmedicine/s/submission-guidelines#loc-methods. 
Please ensure that the paper adheres to the PLOS Data Availability Policy (http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it. Please state early in the Methods section whether a prospective analysis plan was used in designing the study. a) If a prospective analysis plan was used, please include the relevant prospectively written document with your revised manuscript as a Supporting Information file to be published alongside your study, and cite it in the Methods section. A legend for this file should be included at the end of your manuscript. b) If no such document exists, please make sure that the Methods section transparently describes when analyses were planned, and when/why any data-driven changes to analyses took place. c) In either case, changes in the analysis\u2014including those made in response to peer review comments\u2014should be identified as such in the Methods section of the paper, with rationale. Please ensure that the study is reported according to the STROBE guideline, and include the completed checklist as Supporting Information. When completing the checklist, please use section and paragraph numbers, rather than page numbers. 
Please add the following statement, or similar, to the Methods: \"This study is reported as per the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline (S1 Checklist).\" Please report your study according to the relevant guideline, which can be found at http://www.equator-network.org/ --> transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) guideline (S1 Checklist). Comments from the reviewers: Reviewer #1: I liked this paper. Very well thought out and well controlled for distracting variables. We wrote on this: \"Concomitant Proton Pump Inhibitors with Mycophenolate Mofetil and the Risk of Rejection in Kidney Transplant Recipients. J. Knorr, M. Sjeime, L. Braitman, P. Jawa, R. Zaki, J. Ortiz. Transplantation, March 15 2014, Vol 97, issue 5\". Your team did not use it. I like it. Well written. Good conclusion. Reviewer #2: The investigators conducted this study to evaluate whether PPI use is associated with mortality risk in kidney transplant recipients. While this is an important study, the findings of this study alone, evaluating the mortality outcome, might not be adequate to warrant publication in this high-impact journal without significant modifications/revisions. There are many factors that can affect mortality risk in kidney transplant patients, such as cardiovascular disease, infection, rejection, and allograft outcomes. However, the investigators barely look into this. It is possible that the investigators plan to separate these outcomes into several publications, since the investigators have published works using this cohort to evaluate several outcomes, e.g. hypomagnesemia (PMID: 31817776) and iron status (PMID: 31484461). Without knowing the causes of mortality, and given the lack of granularity, the findings of this study do not add much to the field of transplantation. Reviewer #3: I applaud the authors on this significant work. 
This is an important study as it adds to the growing literature supporting an association between PPI use and increased mortality, specifically involving kidney transplant recipients. The authors have provided two different cohorts demonstrating an association between PPI use and increased mortality. Based on current knowledge, I would not be surprised to find that such a positive correlation does exist between PPI use and mortality. However, I question whether the reported risk is as great as presented, as noted below. Page 7, line 107: Looking at their prior described cohort (reference 28), this study appears to be a retrospective rather than prospective study, as the prior cohort was established with an aim that would not have accrued the same information as presented in this study. If the authors can confirm this is accurate, the manuscript should be adjusted accordingly. Page 7, line 112: What other KTR patients were excluded as noted for \"concurrent systemic illnesses\", aside from the named malignancies other than cured skin cancer, opportunistic infections, and overt congestive heart failure? It should be noted that this study may not be generalizable to KTR patients who had these systemic illnesses. As opportunistic infections can be an acute process rather than chronic, it would have been helpful to include these patients. In addition, it would be interesting to know the results for patients with overt CHF, as these patients may have an increased atherosclerotic burden compared to non-CHF patients. The patients in the non-PPI group had a significantly longer time since transplantation compared to the PPI group. This would seem to cause a significant survival selection bias. This may be due to the retrospective nature (see above regarding the question of retrospective vs prospective) of this study. 
If this was a prospective study as the authors stated, then performing a study where patients were at the same timepoint after kidney transplant, with or without PPI, would have helped avoid this significant survival bias. I appreciate that the authors adjusted for this factor, but, as we know, there are many unknown confounders that may be present in an individual who has survived with a functional graft for 9 years vs an individual at 4 years. Table 1: It would have been interesting to include antiplatelet use. Previously, there was some concern that PPIs may interfere with clopidogrel efficacy, and KTRs are at high risk for CV complications and mortality. Furthermore, since patients on dual antiplatelet medications for coronary atherosclerosis, either before or after stenting, tend to also be on PPIs, these patients are likely at higher risk for CV complications and mortality. The omission of patients with overt heart failure and the lack of adjustment for antiplatelet agents may influence the results of this study. Page 11, line 220: What percentage of patients from the PPI vs non-PPI group developed graft failure? Along this line, was the eGFR only determined at the initial baseline measurement, or was it determined closer to patient death? Since tubulointerstitial nephritis (acute or chronic) is associated with PPIs, it would have been useful to see if PPI patients also had a decreased eGFR at the time of death. Since renal function is associated with mortality, this would be another important covariate to control for. What was the result when these were combined? I appreciate the authors providing several mechanisms for PPI use and increased mortality. Minor English revisions are recommended to help improve readability. Reviewer #4: The authors have attempted to investigate the effect of PPI therapy on mortality in a large single-center cohort of stable outpatient KTRs, and performed independent replication of the results in a second cohort. 
Cohort demographics and characteristics: is the sample representative of the wider population? It is noted from the discussion on limitations that subjects were predominantly Caucasian; what other comparisons can be made to better understand the generalisability of the results? It is also noted that the data used are from a previously described cohort, but a brief overview or summary within this manuscript would be helpful to the reader in their assessment of how robust the findings and conclusions are. Confounding has been accounted for adequately, given the limitations of the data, and the description, production and presentation of all 6 models is very useful. Under methods: \"Follow-up was recorded until September 2015.\" Can follow-up now be extended? The current data have a median follow-up of 5.3 [4.5-6.0] years, which could almost be doubled in length, adding great value to the longitudinal aspects of this study. The statistical techniques appear to be appropriate for the data and the research question in hand. It is also good to see that a sensitivity analysis of missing data has been performed, by comparing multiple imputation results to no imputation. Did the authors undertake any sub-group analyses? Was there an association between outcome and age, for instance, as suggested by the literature [23-25]? The conclusions are fair given the study design undertaken, and the manuscript as a whole is well written, clear and concise. The Tables and Figures are also clear and informative. Any attachments provided with reviews can be seen via the following link: [LINK] 17 Mar 2020. Attachment: Response to the reviewers_final.docx. Submitted filename: Click here for additional data file. 22 Apr 2020. Dear Dr. 
Douwes, Thank you very much for re-submitting your manuscript \"The Association between Use of Proton-Pump Inhibitors and Excess Mortality after Kidney Transplantation: A Prospective Cohort Study\" (PMEDICINE-D-20-00132R2) for review by PLOS Medicine. I have discussed the paper with my colleagues and the academic editor, and it was also seen again by xxx reviewers. I am pleased to say that provided the remaining editorial and production issues are dealt with we are planning to accept the paper for publication in the journal. The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript: [LINK] Our publications team (plosmedicine@plos.org) will be in touch shortly about the production requirements for your paper, and the link and deadline for resubmission. DO NOT RESUBMIT BEFORE YOU'VE RECEIVED THE PRODUCTION REQUIREMENTS. If your article is eligible for a press release, we will contact you to opt in or out.***In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. A version with changes marked must also be uploaded as a marked up manuscript file. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. 
This text is subject to editorial change and should be distinct from the scientific abstract. Please email us (plosmedicine@plos.org) if you have any questions or concerns. We expect to receive your revised manuscript within 1 week. Please ensure that the paper adheres to the PLOS Data Availability Policy, which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by \"data not shown\" or \"unpublished results.\" For such statements, authors must provide supporting data or cite public sources that include it. Your manuscript has been accepted for publication in PLOS Medicine. PRODUCTION PROCESS: Before publication you will see the copyedited Word document (in around 1-2 weeks from now) and a PDF galley proof shortly after that. The copyeditor will be in touch shortly before sending you the copyedited Word document. We will make some revisions at the copyediting stage to conform to our general style, and for clarification. When you receive this version you should check and revise it very carefully, including figures, tables, references, and supporting information, because corrections at the next stage (proofs) will be strictly limited to (1) errors in author names or affiliations, (2) errors of scientific fact that would cause misunderstandings to readers, and (3) printer's (introduced) errors. If you are likely to be away when either this document or the proof is sent, please ensure we have contact information of a second person, as we will need you to respond quickly at each point. PRESS: A selection of our articles each week are press released by the journal. 
You will be contacted nearer the time if we are press releasing your article in order to approve the content and check that the contact information for journalists is correct. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. PROFILE INFORMATION: Now that your manuscript has been accepted, please log into EM and update your profile. Go to https://www.editorialmanager.com/pmedicine, log in, and click on the \"Update My Information\" link at the top of the page. Please update your user information to ensure an efficient production and billing process. Thank you again for submitting the manuscript to PLOS Medicine. We look forward to publishing it. Best wishes, Adya Misra, PhD, Senior Editor, PLOS Medicine, plosmedicine.org"} +{"text": "P\u2009<\u20090.05). At the last follow-up, the limb length difference in the modular group (2.3\u2009\u00b1\u20092.7\u00a0mm) was significantly lower than that in the nonmodular group, and the postoperative prosthesis subsidence in the modular group was significantly less than that in the nonmodular group. Both modular and nonmodular tapered fluted titanium stems can achieve satisfactory mid-term clinical and imaging results in patients who underwent femoral revision. The modular stems offer good control of lower limb length and a low incidence of prosthesis subsidence. Both modular and nonmodular tapered fluted titanium stems are commonly used in revision total hip arthroplasty (THA). However, which type of femoral stem is superior remains controversial. The purpose of this study was to assess the clinical and radiographic outcomes of modular and nonmodular tapered fluted titanium stems. The clinical data of patients undergoing primary revision THA from January 2009 to January 2013 in two institutions were retrospectively analyzed. 
According to the type of prosthesis used on the femoral side, the patients were divided into the modular group and nonmodular group. The operative time, hospital stay, blood loss, blood transfusion volume, hip function, hip pain, limb length discrepancy, imaging data, and complications were compared between the two groups. A total of 218 patients were followed up for 78\u2013124\u00a0months, with an average of 101.5\u00a0months. The incidence of intraoperative fracture in the modular group (16.7%) was significantly higher than that in the nonmodular group (4.5%). Total hip arthroplasty (THA) is an effective treatment for end-stage hip disease. It not only relieves pain caused by hip disease but also improves hip function. With the development of materials science, prosthesis design, and surgical techniques, the number of primary THAs performed worldwide has increased year by year. Moreover, patients undergoing THA are increasingly younger. Therefore, the number of revision surgeries is also increasing yearly. In 2005, a total of 40,800 patients underwent hip revision surgery in the United States. These cases account for approximately 17.5% of all hip arthroplasties, and this number is expected to increase by 137% by 2030 [3]. In the UK, the hip revision rate exceeded 10% in 2015 and is expected to increase by 31% by 2030 [2]. In hip revision, 40% of patients require only acetabular revision, and 60% require femoral revision [3]. The loosening rate of cemented prostheses after revision is high [5]. In view of this high loosening rate, some scholars proposed the use of a long cementless (biologically fixed) revision stem on the femoral side [6]. In North America, extensively coated cylindrical stems have been widely used for many years. However, concerns regarding severe postoperative thigh pain (8\u20139%), severe stress shielding of the proximal femur (6\u20137.6%), and a high failure rate in patients with Paprosky type III femoral defects remain [9].
Many scholars [10] believe that tapered stems with ridges are more suitable than cylindrical stems when bone defects compromise the press-fit of the prosthesis. During femoral revision, tapered stems can deal well with various bone defects. Cemented revision leads to a decrease in the cement\u2013bone interface bonding strength. This reduction can affect the stability of the prosthesis, thereby influencing the long-term efficacy. Studies have shown that the cement\u2013bone interface\u2019s bonding strength at revision is only 20.6% of that at the initial replacement; if the hip joint is subjected to re-revision, the strength is only 6.8% of that at the initial replacement [13]. Modular stems also have some disadvantages, such as a high incidence of intraoperative fracture, and corrosion and fracture at the junction of the proximal and distal parts of the prosthesis [15]. Some scholars therefore suggested the use of nonmodular tapered fluted titanium stems in femoral revision. They believe that the implantation of nonmodular tapered fluted titanium stems is simple and that the prosthesis does not have the disadvantages of corrosion and fracture. However, nonmodular tapered fluted titanium stems have the disadvantages of postoperative dislocation and a high incidence of prosthesis subsidence [16]. Previous studies have shown that modular tapered fluted titanium stems have the advantages of easy adjustment of lower limb length, forward inclination, and eccentricity [18], but they only explored the early clinical effects of the two stems. The mid- and long-term efficacy of modular and nonmodular tapered fluted titanium stems remains uncertain. The purpose of this study was to assess the clinical and radiographic outcomes of modular and nonmodular tapered fluted titanium stems. At present, distally fixed femoral revision prostheses are mainly divided into modular and nonmodular tapered fluted titanium stems, and both types have been widely used. There is no consensus in the academic community on which prosthetic design is presently appropriate for femoral revision.
However, in revision, there is no theoretical basis for choosing modular or nonmodular tapered fluted titanium stems. The criteria for clinical selection of modular or nonmodular tapered fluted titanium stems are often based on the preferences and experience of the operators. The selection of a suitable prosthesis can not only improve the success rate of revision surgery but also improve the prognosis of patients. Thus, the design characteristics, clinical efficacy, and radiographic results of the two kinds of prostheses should be compared to provide a basis for the clinical selection of revision prostheses. Patients who underwent revision THA with modular or nonmodular tapered fluted titanium stems in the Affiliated Hospital of Xuzhou Medical University and Hyogo College of Medicine Hospital from January 2009 to January 2013 were reviewed. This retrospective study was approved by the local Ethical Committee. All methods were performed in accordance with the relevant guidelines and regulations. Informed consent was obtained from all patients. A total of 239 patients (239 hips) were initially identified. Twelve patients were lost to follow-up, and 9 patients died of causes unrelated to their operation. The remaining 218 hips (218 patients) were analyzed. On the basis of the type of prosthesis used on the femoral side, the patients were divided into the modular group and nonmodular group. The general information of the patients in the two groups is shown in the table. There was no significant difference in gender, age, BMI, time from initial replacement to revision, reasons for revision, type of bone defect, ASA classification, number of acetabular side revisions, preoperative Harris score, Visual Analogue Scale (VAS) score, or limb length discrepancy (LLD) between the two groups.
The patients were clinically evaluated on the basis of operation time, dominant blood loss (intraoperative blood loss\u2009+\u2009postoperative drainage volume), blood transfusion volume, hospitalization time, hip function, thigh pain, LLD, radiographic data, and complications. Hip function was assessed by the Harris hip score before surgery and during each visit. Thigh pain was assessed using the VAS score [21]. LLD was assessed via a subjective measurement method [20]. Prosthetic subsidence was assessed by the criteria set forth by Callaghan et al. [22]. Stability of the femoral prosthesis was assessed with the standard evaluation proposed by Engh et al. [23]. Heterotopic ossification was assessed using the Brooker [24] standard. The degree of the femoral defect was evaluated from the preoperative X-ray according to the Paprosky classification [19]. The analysis and production of data and charts were processed with IBM SPSS Statistics 19.0 and GraphPad Prism 6.0. Continuous variables were analyzed using Wilcoxon rank-sum tests. Categorical variables were analyzed by the Pearson chi-square or Fisher exact tests. Kaplan\u2013Meier survivorship analyses were conducted with the endpoint defined as any reoperation due to septic or aseptic complications and, separately, with the endpoint defined as any reoperation due to aseptic complications. The significance level was set at two-sided \u03b1\u2009=\u20090.05, and P\u2009<\u20090.05 was considered statistically significant. Power of the original study: the observational cohort study was powered to detect a difference in postoperative prosthetic subsidence as the minimum mean difference of significance, and the standardized difference (0.39) was calculated using the standard deviation (0.98) based on an earlier report by Huang et al. [17]. We estimated that 194 participants would be required to enable detection of a significant difference at the 5% significance level with 85% power.
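The sample-size reasoning and the categorical comparisons described above can be sketched in a short, dependency-free Python snippet. The function names below are illustrative (they are not from the original study); the standardized difference of 0.39, two-sided \u03b1 = 0.05, and 85% power are taken from the text, and the standard normal approximation used here need not reproduce the reported figure of 194 participants exactly, which depends on the method and software the authors used.

```python
import math
from statistics import NormalDist


def n_per_group(d, alpha=0.05, power=0.85):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means with standardized difference d (normal approximation)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)


def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value (1 df, no continuity
    correction) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = sum(
        (obs - r * col / n) ** 2 / (r * col / n)
        for obs, r, col in ((a, rows[0], cols[0]), (b, rows[0], cols[1]),
                            (c, rows[1], cols[0]), (d, rows[1], cols[1]))
    )
    # Survival function of the chi-square distribution with 1 df,
    # expressed through the complementary error function.
    return stat, math.erfc(math.sqrt(stat / 2))


# Standardized difference 0.39, two-sided alpha 0.05, power 85% (from the text).
print(n_per_group(0.39))  # about 119 per group under this approximation

# Intraoperative fracture counts from the Results: 18/108 modular vs 5/110 nonmodular.
stat, p = chi2_2x2(18, 108 - 18, 5, 110 - 5)
print(round(stat, 2), p < 0.05)
```

Run as written, the chi-square statistic is roughly 8.5 with p well below 0.05, consistent with the significantly higher intraoperative fracture incidence reported for the modular group.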
A comparison of intraoperative data between the two groups is shown in the table. No significant differences were found between the two groups in terms of operation time, hospitalization time, blood loss, and blood transfusion (P\u2009>\u20090.05). A total of 218 patients were followed up for an average of 101.5\u00a0months. The most recent postoperative Harris hip score increased from 40.5\u2009\u00b1\u20096.1 preoperatively to 86.4\u2009\u00b1\u20093.9 in the modular group (P\u2009<\u20090.05) and from 40.1\u2009\u00b1\u20096.6 preoperatively to 85.5\u2009\u00b1\u20093.8 in the nonmodular group (P\u2009<\u20090.05). The final follow-up VAS score decreased from 7.6\u2009\u00b1\u20091.3 preoperatively to 1.9\u2009\u00b1\u20090.5 in the modular group (P\u2009<\u20090.05) and from 7.5\u2009\u00b1\u20091.1 preoperatively to 1.8\u2009\u00b1\u20090.5 in the nonmodular group (P\u2009<\u20090.05). The modular and nonmodular groups did not significantly differ in the most recent postoperative Harris hip score and VAS scores. The leg length discrepancy of the modular group decreased from 18.7\u2009\u00b1\u20096.6\u00a0mm preoperatively to 2.3\u2009\u00b1\u20092.7\u00a0mm at the final follow-up (P\u2009<\u20090.05); that of the nonmodular group decreased from 20.3\u2009\u00b1\u20096.1\u00a0mm preoperatively to 5.6\u2009\u00b1\u20093.5\u00a0mm at the final follow-up (P\u2009<\u20090.05). At the final follow-up, the leg length discrepancy of the modular group was significantly lower than that of the nonmodular group (P\u2009<\u20090.05). A comparison of pain and hip function between the two groups is shown in the table. Among the 18 cases of fracture in the modular group, femoral trochanteric fracture occurred in 12 cases (treated with wire banding) and femoral shaft fracture in six cases (treated with wire banding), and all fractures achieved bone healing.
Intraoperative fractures occurred more frequently in the modular group than in the nonmodular group. In the nonmodular group, only five cases suffered femoral shaft fracture (treated with wire binding), and all demonstrated bone healing. Two cases (1.9%) of periprosthetic fracture occurred in the modular group, and two cases (1.8%) were noted in the nonmodular group (P\u2009>\u20090.05). Patients with periprosthetic fractures were treated with open reduction and internal fixation. No dislocation occurred in the modular group, and three cases (2.7%) of hip dislocation occurred in the nonmodular group (P\u2009>\u20090.05). The three cases of recurrent dislocation after closed reduction were treated with replacement of the liner and a change of femoral head size. Eighteen cases (16.7%) presented heterotopic ossification in the modular group (12 Brooker Degree I and 6 Degree II), whereas 20 cases (18.2%) demonstrated heterotopic ossification in the nonmodular group (16 Brooker Degree I and 4 Degree II); no significant differences were found between the two groups. The modular and nonmodular (Wagner SL and AK-SL) stems are grit-blasted prostheses with a conical groove design, which can increase the contact area with the diaphyseal cortex and achieve axial stability through the tapered geometry. Eight longitudinal ribs ensure rotational stability. Both stems have a titanium shaft with a circular cross section and a 2\u00b0 taper. The titanium alloy significantly reduces the elastic modulus of both kinds of prostheses, reducing the incidence of postoperative stress shielding and thigh pain. One design difference is at the distal end of the modular prosthesis, which has a 3\u00b0 tilt angle that matches the shape of the femur; this design is not found in the Wagner SL and AK-SL. In addition, a significant difference between the modular and nonmodular prostheses is the proximal design.
Wagner SL and AK-SL femoral prostheses have an integrated design, whereas the modular femoral prosthesis adopts a neck component design made of multiple components. A length-adjusting washer in the proximal segment allows the length of the stem to be extended by 30\u00a0mm. Surgeons can select different components depending on the actual situation to achieve the desired leg length and to facilitate the adjustment of anteversion and eccentricity. This study showed that the LLD of the modular group was significantly smaller than that of the nonmodular group. This difference was mainly because modular tapered fluted titanium stems can be used to precisely adjust the length of the lower limb through the combination of proximal components during the operation. However, due to the integrated design of nonmodular tapered fluted titanium stems, the limb length cannot be adjusted again after the prosthesis is inserted. Therefore, the length of the lower limb is better controlled with modular tapered fluted titanium stems than with nonmodular tapered fluted titanium stems. When biological prostheses are used in revision, the prostheses will exhibit different degrees of subsidence, most of which occurs in the first year after surgery. If the intraoperative fit between the prosthesis and the femoral medullary cavity is insufficient, subsidence can occur when the lower limb is loaded with weight. In this study, the modular and nonmodular stems also experienced different degrees of postoperative subsidence. At the last follow-up, the postoperative subsidence of the modular tapered fluted titanium stems was significantly lower than that of the nonmodular tapered fluted titanium stems. Park et al. [27] reported an average prosthetic subsidence of 1\u00a0mm in patients who underwent femoral revision with modular tapered fluted titanium stems; this finding was similar to the results of this study.
In this study, the average postoperative subsidence of the prosthesis in the nonmodular group was 2.20\u2009\u00b1\u20091.94\u00a0mm. Subsidence greater than 10\u00a0mm has been reported in 4% and 24% of first-generation and second-generation Wagner SL stems, respectively [31]. In view of the high postoperative subsidence of the first- and second-generation Wagner SL stems, the modified third-generation Wagner SL stem is widely used in femoral revision. Although its postoperative subsidence is lower than that of the previous two generations, it remained significantly greater than that of the modular tapered fluted titanium stems. The modular tapered fluted titanium stem is designed with a three-degree curvature to better match the femur, so the medullary cavity is better filled and the prosthesis achieves a good compression fit with the femoral medullary cavity, while the nonmodular stem is designed without curvature. Therefore, an adequate match with the femoral medullary cavity may be difficult to achieve with nonmodular tapered fluted titanium stems. Second, surgeons need to consider the soft tissue tension of the hip joint when implanting nonmodular stems. Repeated trials are often performed to reduce both the risk of dislocation and the difficulty of reduction. Moreover, the distal part of the nonmodular stem is not fully fixed, and the immediate stability of the prosthesis cannot be guaranteed, which increases the risk of later prosthesis subsidence or loosening. The distal part of the prosthesis can be firmly fixed when the modular stem is used, and the femoral head and proximal component can be used to adjust the length, eccentricity, and anteversion of the prosthesis, thereby reducing limb length discrepancy and the risk of prosthesis loosening and dislocation. Stem subsidence may have drawbacks, such as limb shortening and altered hip biomechanics [33]. Tangsataporn et al. [34] reported 13 hips (13.1%) with subsidence of at least 10\u00a0mm.
In that group, five of the 13 (38.5%) stems required repeat femoral revision because of stem aseptic loosening; no dislocation caused by subsidence was found. Concerns regarding the dislocation risk related to subsidence persist in the field. In our study, dislocation did not occur in the modular group, and three cases (2.7%) of hip dislocation occurred in the nonmodular group. Most dislocations were due to poor positioning of the prosthesis, leading to early postoperative dislocation; no dislocation caused by subsidence was found. Park et al. [35] believed that the low dislocation rate of the modular stem may be related to its ability to allow appropriate adjustment of lower limb length, eccentricity, soft tissue tension around the hip joint, and anteversion with the help of the special design of the proximal components during revision. Stem subsidence occurred early after weight bearing, but all prosthetic subsidence stopped within 1\u00a0year after the operation. The patient\u2019s soft tissue adapts to subsidence and maintains good tension; moreover, the hip joint can be stably maintained to a certain extent due to scar contracture. Previous studies have shown that notable tapered stem subsidence after surgery is uncommon and that, when it does occur, the majority of stems become stable, rarely going on to aseptic loosening. Huang et al. [25] reported an incidence of intraoperative fracture with modular tapered fluted titanium stems of 16.9%, which was similar to the results obtained in this study. Previous studies have reported that the fracture rate of modular tapered fluted titanium stems during femoral revision can reach 16\u201332% [37], and patients with bone defects of Paprosky type IIIB\u2013IV are likely to have fractures during surgery [25].
In this study, 18 cases (16.7%) presented intraoperative fracture in the modular group. Pattyn et al. [37] reported that the incidence of intraoperative fracture is as high as 32% when modular tapered fluted titanium stems are used for femoral revision, which may be correlated with the intraoperative technique. We speculate that the high incidence of intraoperative fracture of modular tapered fluted titanium stems may be related to the design of the modular prosthesis and the experience of the operator. Compared with periprosthetic fractures, dislocation is one of the more common complications in revision. In this study, the three cases of dislocation in the nonmodular group were due to extreme hip flexion, adduction, and internal rotation; the patients were treated with replacement of the inner liner and the femoral head. The advantage of modular tapered fluted titanium stems is that the anteversion, eccentricity, and length of the lower limb can be adjusted by changing the neck component, thereby reducing the incidence of dislocation. Some scholars believe that modular tapered fluted titanium stems are prone to fretting wear and corrosion fracture at the connection of the proximal components. The proximal cervical junction is cylindrical and does not have a tapered design, allowing stress concentration to increase. The proximal cervical component of the prosthesis is connected with the distal stem by a locking screw. If the locking screw fails, the teeth on the proximal and distal components will not engage each other, causing the proximal part to loosen, followed by mechanical separation. By contrast, nonmodular tapered fluted titanium stems avoid the potential risks of modular fretting corrosion and junction dissociation.
In the present study, the modular group demonstrated two cases of proximal segment dissociation, and the incidence of mechanical failure of the modular stem was similar to that in the study of Park et al. The present study has several limitations to acknowledge. First, the retrospective nature of the study makes it prone to selection bias. Although prospective randomized controlled studies can better control confounders and selection bias, they are difficult to carry out in revision surgery. Second, the sample size was limited and only a few prosthesis designs were included; therefore, the results cannot be widely generalized to all modular and nonmodular tapered fluted titanium stems. Given the low incidence of subsidence, this study is underpowered, limiting our ability to correlate subsidence with the risk of dislocation. A larger investigation with more patients experiencing subsidence of the stem would be necessary to provide more statistically robust information on the relationship between subsidence and dislocation risk. Third, the attending surgeons who performed the revisions came from two different institutions. In conclusion, both modular and nonmodular tapered fluted titanium stems can achieve satisfactory mid-term clinical and radiographic results in patients who undergo femoral revision. Furthermore, compared with nonmodular tapered fluted titanium stems, modular tapered fluted titanium stems have good control of lower limb length and a low incidence of prosthetic subsidence."} +{"text": "Orientia tsutsugamushi, obtained from patient blood, vector chiggers, and rodent reservoirs, particularly for the dominant 56-kD type-specific antigen gene (tsa56), and whole-genome sequences, which are increasing our knowledge of the diversity of this unique agent.
We explore and discuss the potential of sequencing and other effective tools to geographically trace rickettsial disease agents and to develop control strategies to better mitigate the rickettsioses. The rickettsioses of the \u201cFar East\u201d or Asia\u2013Australia\u2013Pacific region include but are not limited to endemic typhus, scrub typhus, and, more recently, tick typhus or spotted fever. These diseases embody the diversity of rickettsial disease worldwide and allow us to interconnect the various contributions to this special issue of Tropical Medicine and Infectious Disease. The impact of rickettsial diseases\u2014particularly of scrub typhus\u2014was substantial during the wars and \u201cpolice actions\u201d of the last 80 years. However, the post-World War II arrival of effective antibiotics (chloramphenicol and the tetracyclines) reduced their impact when cases were recognized and adequately treated. Presently, however, scrub typhus appears to be emerging and spreading into regions not previously reported. Better diagnostics, higher population mobility, changes in antimicrobial policies, and even global warming have been proposed as possible culprits of this phenomenon. Further, sporadic reports of possible antibiotic resistance have received the attention of clinicians and epidemiologists, raising interest in developing and testing novel diagnostics to facilitate medical diagnosis. We present a brief history of rickettsial diseases and their relative importance within the region, focusing on the so-called \u201ctsutsugamushi triangle\u201d and the past and present impact of these diseases within the region, and indicate how, historically, these often-confused diseases were ingeniously distinguished from one another. Moreover, we will discuss the importance of DNA-sequencing efforts. The rickettsioses, as this special issue of Tropical Medicine and Infectious Disease (TMID) attests, remain of international public health concern. Of all infectious diseases the world has experienced, few have truly altered human history.
In centuries past, diseases such as smallpox, plague, tuberculosis, and epidemic typhus have been responsible for millions of deaths through uncontrolled outbreaks. The military impact has been dramatic. Hans Zinsser (1878\u20131940), bacteriologist and immunologist, famous for his significant work on typhus, stated that the explosive outbreaks of epidemic typhus influenced the outcome of more wars between the 16th and the 19th century than any soldier or general [3]. The discovery of antimicrobial agents in the last century altered human perceptions of epidemics of infectious diseases and the associated effects of large outbreaks. Our wartime experiences of the past 100 years led to improved recognition and treatments for many infectious diseases not previously defined. Ironically, the interest in these diseases rapidly declined when effective antimicrobials or vaccines became available and the once devastating diseases became controllable, even as the wars themselves subsided. This poses a problem for endemic areas, where the diseases may persist. Despite available treatment, the lack of clinical awareness, often coupled with diagnostic difficulties, can lead to under-recognition with considerable morbidity and preventable mortality. The history of \u201ctyphus\u201d in the Far East and the Asia\u2013Australia\u2013Pacific region follows this pattern. The broad topic for this TMID special issue, \u201cThe Past and Present Threat of Rickettsial Diseases\u201d, leads us to focus on one region of the world that is hyperendemic, both historically and presently, for a host of rickettsial diseases. This is a vast area we are calling the \u201cAsia\u2013Australia\u2013Pacific\u201d or AAP region. For a map of the region, see the paper in this special issue by Luce-Federow et al. Indeed, epidemic typhus, transmitted by Rickettsia prowazekii-infected lice (Pediculus humanus corporis), has played a relatively minor role in modern times as a cause of fevers in the AAP.
During the Korean conflict (1950\u20131953), epidemic typhus had a significant impact on the civilian population. This was due, in part, to growing resistance of the vector body louse to dichlorodiphenyltrichloroethane (DDT). Today the reported incidences of rickettsioses in the world, and for our purposes within the AAP, are increasing. This is especially true of scrub typhus, caused by Orientia tsutsugamushi, and, to a lesser extent, murine typhus and spotted fever. Consistent with this finding, scrub typhus is part of the focus in 10 of the 19 papers within this special issue in which rickettsial diseases in the AAP are examined [13,14,15]. Scrub and murine typhus represent the two most common forms of rickettsial diseases in these regions. The other rickettsioses (Rickettsia honei, R. australis, R. japonica, R. felis), though relatively infrequent in the AAP, are, nevertheless, endemic in the region, and recent studies, including several in this special issue, suggest their growing influence in the region [18,19]. Most of these rickettsial agents are transmitted by ticks or fleas. For example, Australian cases of spotted fever group rickettsiosis (SFGR), or Queensland tick typhus (QTT), were reported in the 1940s, with agent isolation from ticks in the early 1970s, and the agent was identified as Rickettsia australis. In northern Thailand, the Thai tick typhus agent, TT-118, now called Rickettsia honei, was initially recovered in 1962 from an ixodid tick near Chiang Mai. Rickettsia japonica, briefly described above, was isolated from the blood of a patient in Japan and found to cause febrile illness. One paper in this issue tests whether cross-reaction between O. tsutsugamushi and R. japonica occurs in blood of infected patients\u2014no cross-reaction was found.
In this issue, Tshokey et al. also report on the rickettsioses. The occurrence of rare cases of epidemic typhus and the sporadic occurrence of SFGR cases throughout the AAP, however, can be viewed in stark contrast to scrub typhus. Thus, we have chosen to place our primary focus on one disease in particular, scrub typhus, as it occurs in the AAP region. To best understand the status of scrub typhus in the AAP, it is useful to review the progression of diagnosis of rickettsial diseases. Because research on disease in the AAP region has historically played a major role in redefining rickettsial diseases, we present here a brief account of the rickettsial diseases that were originally lumped under the misleading term \u201ctropical typhus\u201d and consider their relative importance within the region, focusing on the so-called \u201ctsutsugamushi triangle.\u201d Early accounts of \u201ctyphus\u201d described fevers with epidemic character and often included bubonic pest, typhus, dysentery, yellow fever, cholera, meningococcal diseases, scurvy, syphilis, and, importantly, variola. The works of Hippocrates (460 BC) allow the recognition of typhus-like cases. The earliest concise accounts consistent with classical typhus arise from the end of the 15th century and underscore its later association with times of crisis, wars, and famine. From the 16th century on, and over a further three and a half centuries, \u201ctyphus\u201d was gradually redefined as a collection of distinctive diseases that affected specific populations [34]. The description of \u201cTyphus exanth\u00e9matique\u201d by Boisier de Sauvages in Montpellier (1760) was an important attempt to distinguish the disease epidemic typhus.
As the confusion gradually lifted, the term typhus was increasingly associated with epidemic typhus fever, which over time received the appellation of \u201cclassic typhus\u201d. However, the secrets of typhus were only unraveled a few years before World War I (WWI). The years from 1910 to 1915 proved to be important for rickettsia-associated typhus research, with a number of substantial discoveries made around the world to determine the causative agents, their vectors, and the identification of hosts and reservoirs. This resulted in the recognition of rickettsial diseases as distinct diseases and the creation of \u201cRickettsiology\u201d as a discipline. The impact of this developing science on our appreciation of the impact of epidemic typhus in particular became pronounced. During the war and post-war period between 1917 and 1923, 30 million cases and 3 million deaths were recorded in European Russia alone, where combatants were returning home. Investigations into typhus and its related forms were intense in Europe and America around WWI, with the driving philosophy that typhus was contagious, epidemic, and associated with overpopulation and poverty. Due to difficulties in culturing rickettsiae, the etiologic relationships between typhus and other similar diseases were not firmly established until the 1930s, but typhus fever was regarded as a unitary disease. Rocky Mountain spotted fever (RMSF) and Japanese spotted fever (JSF) were considered as belonging to different categories of disease, although they were known to be transmitted to humans by ticks, which may act as the reservoir of the agent, or which may acquire the agent from animals such as squirrels, chipmunks, rats, or related forms. Weil and Felix had isolated Bacillus proteus, a strain that caused agglutination when mixed with sera from typhus patients; subsequent tests showed cross-reacting agglutination with whole antigen from this bacillus, which was subsequently reclassified in the genus Proteus.
A new era of rickettsiology had been triggered by the discovery of an innovative diagnostic test for typhus by Weil and Felix in 1916. The Proteus species used was termed OX19. In most geographic regions, the Weil\u2013Felix test successfully identified epidemic typhus (R. prowazekii) from other fevers. Increasingly, however, the original Weil\u2013Felix test suggested complexity for typhus diagnosis in the tropics, as did a form of typhus from Australia, described by Hone in 1922. Fletcher [38], in Malaya, found that a strain thought to be B. proteus X19 was actually a strain of P. mirabilis rather than P. vulgaris, and it was termed OXK. The accidental use of the OXK strain subsequently became the cornerstone of \u201cscrub typhus\u201d diagnostics. From the background of \u201ctropical typhus\u201d, Fletcher in 1923 was the first to show that using the Weil\u2013Felix test with antigens from X19 and OXK could subdivide \u201ctropical typhus\u201d. Epidemic typhus sera agglutinated Proteus OX19 and were negative for OXK, whereas Fletcher found two contrasting groups of cases of \u201ctropical typhus\u201d: one group strongly agglutinated Proteus OX19 but not OXK, and a second group showed the reverse. The mistake that allowed OX19 to be used for diagnosis resulted in Fletcher having the ability to distinguish two diagnostic groups associated with different epidemiologies. One group was associated with patients from rural jungle areas (hence \u201cbush\u201d or \u201cscrub\u201d typhus). This was similar to the Japanese tsutsugamushi disease, which had been considered until then to possibly represent a separate entity. The second group distinguished by Fletcher represented the discovery of the closely related \u201cshop typhus\u201d from urban areas.
In the British colonies within the AAP, expatriate military personnel, especially in India (Sir John Megaw) and the Malay states (Dr. William Fletcher), encountered a vast array of \u201ctropical fevers\u201d ,42,43,44. Due to the impressive diversity of vectors, animal reservoirs, and different geographical settings, a new era of confusion arose. In 1924, in the midst of this confusion, the term \u201ctropical typhus\u201d was coined in Malaya by William Fletcher. He states: \u201c\u2026 We call this variant \u2018Tropical Typhus\u2019 because it appears to be more common in the tropics than the epidemic form. It is necessary to distinguish it by some name\u2014to call it simply \u2018typhus\u2019 is to mislead and alarm the public who, though they may be quite ignorant of everything else about typhus, know that it is highly infectious and may spread like a wildfire.\u201d ,48. Murine typhus (whose agent is Rickettsia typhi, and whose vectors are fleas) was ultimately recognized as a separate disease from epidemic typhus by three groups working independently, including Fletcher in Malaya. Diagnostic improvements included refinement of laboratory techniques, discussed below, leading to the important discovery of antigenic differences between strains of the agent of scrub typhus. Thus, from an ill-defined mixture of fevers, the typhus fevers became clearly defined, including epidemic and murine typhus, with agents R. prowazekii and R. typhi, and scrub typhus and its agent O. tsutsugamushi representing the overwhelming proportion of cases in the AAP geographic region. The capability to diagnose rickettsial diseases has evolved dramatically since the early serendipitous discovery of the Weil\u2013Felix agglutinations in 1916 for typhus, and in 1924\u20131925 for scrub typhus. The Weil\u2013Felix test relies on Proteus, a member of the gamma proteobacteria, and is especially useful because it separates STGO from other rickettsiae. 
The Weil\u2013Felix (WF) test is the oldest assay used for the diagnosis of rickettsial diseases ,63,64. Although generally lacking sensitivity and specificity for diagnosis, this test has served its purpose well. Even with its weaknesses, the WF test is inexpensive, easy to perform, and readily accessible, which explains its continued use in certain regions of the AAP. The complement fixation test (CF), also a test from the early rickettsial diagnostics era (1940s), could be used for both serodiagnosis and serotyping of the infecting strain of bacteria ,66. The indirect immunofluorescence assay (IFA) has been used for over 50 years to quantify anti-rickettsial antibodies, including O. tsutsugamushi- and R. typhi-specific antibodies ,69. IFA protocols must account for the antigenic differences between Rickettsia japonica and O. tsutsugamushi, as well as the cross-reactivity observed between different serotypes of O. tsutsugamushi ,70. Commercial immunochromatographic rapid diagnostic tests (RDT) and immunodot assays produce rapid results, follow a simple protocol with no need for sophisticated electrical equipment, and are relatively inexpensive. They are highly attractive for point-of-care use in rural areas where the use of IFA may not be available. Moreover, the rapidity of these tests facilitates more rapid treatment, especially in rural areas with limited resources. The test(s) are based upon an enzyme-linked dot blot immunoassay, in which specific native antigens are immobilized on a membrane. Incubation of the membrane with patient sera allows for detection of antibodies to the antigen of interest on the membrane/dipstick. 
In the 1990s, many of these rapid dipstick/immunodot tests were developed through collaborations with the Department of Defense research laboratories and included assays for the detection of R. typhi and O. tsutsugamushi ,72. The basic enzyme-linked immunosorbent assay (ELISA) is a technique to detect and quantify biologic substances such as peptides, proteins, lipopolysaccharides, antibodies, and hormones. In the simplest form, the ELISA is used to detect and quantify specific patient or animal antibodies against rickettsiae. Rickettsial antigens are attached to a solid surface, a 96-well plate or membrane, for example. As with most of the earlier serological assays (IFA, IIP, CF, dot blot assays), the ELISA was initially accomplished using native antigens derived from growing rickettsiae in embryonated hen eggs, or cell culture ,74,75. The first successful molecular cloning and gene expression of rickettsial proteins involved the 110-kD and 56-kD protein products of O. tsutsugamushi ,77,78. Publication of the corresponding gene sequences introduced the field of rickettsiology to the molecular biology era and consequently to molecular diagnostics ,80. With the introduction of molecular cloning, recombinant proteins based on the published sequences of rickettsial antigen genes were developed. Ching et al. developed a rapid immunochromatographic flow assay (RFA) for the detection of immunoglobulin M (IgM) and IgG antibodies to O. tsutsugamushi. The RFA used antigens from the O. tsutsugamushi strains Karp, Kato, Gilliam, and TA716 for scrub typhus IgM and IgG detection. When validated against the IFA, a sensitivity of 84% and specificity of 98% was noted in Northern Thailand, and in a subsequent evaluation, a sensitivity of 91.5% with specificity of 92.4% was reported for Bangladesh. 
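Validation figures like the 84%/98% above come from a simple 2x2 comparison of the candidate assay against the reference method (here, IFA). A minimal sketch of the arithmetic follows; the counts are illustrative, not taken from the cited evaluations:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP),
    where the reference test (e.g., IFA) defines true status."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 100 reference-positive and 100 reference-negative sera
sens, spec = sensitivity_specificity(tp=84, fn=16, tn=98, fp=2)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # sensitivity=84.0%, specificity=98.0%
```

The same two numbers behave very differently in the field depending on disease prevalence, which is why point-of-care assays are usually re-evaluated in each endemic setting.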
A most recent and promising point-of-care (POC) commercial serodiagnostic tool, the InBios Scrub Typhus Detect\u2122 IgM ELISA, was evaluated in scrub typhus-endemic Northern Thailand and Bangladesh. Still, the InBios ELISA endpoints have not been firmly established. As noted above, PCR diagnostics were \u201coff and running\u201d in 1990 as gene sequences were being published. The initial PCR-based detection of O. tsutsugamushi targeted infected murine blood samples using primers derived from the just published O. tsutsugamushi 56-kD gene tsa56 DNA sequences of Stover et al. ,80, while R. typhi was detected in fleas by PCR in the same year. Associated with the products of PCR were various approaches to the characterization of strains, for comparison with each other and for comparison of strains within the different geographic regions, for example. The sensitive genotyping methods that became available included restriction fragment length polymorphism (RFLP), single-gene sequencing of highly variable genes, such as the 56-kD type-specific antigen (TSA56) of O. tsutsugamushi, multilocus sequence typing/analysis (MLST/MLSA), multispacer typing (MST), and whole-genome sequencing. These approaches build on the early genotyping of O. tsutsugamushi presented by Kelly et al. Quantitative real-time PCR, or qPCR ,87, is now routinely used for the detection of rickettsiae, as well as Anaplasma and Ehrlichia. Elementary protocol errors and inappropriate data analysis can occur, which can detract from the use of this powerful technique. Choice of source material is important. However, when using the specimen of choice, biopsy of rash and/or eschar, and strict adherence to guidelines, the sensitivity of the qPCR assay approaches 100%. Quantitative assays have also been applied to agents such as Anaplasma phagocytophilum and Rickettsia felis. Similarly, in their broad review of rickettsioses in Taiwan, Minahan et al. 
noted that Orientia extend beyond the AAP geographic region. In addition to some previous comments above concerning specific diagnostic tools, many of the papers included in this special issue make use of such tools in their studies. A number of papers utilized one or more of the diagnostic tools, often focusing their use to study the occurrence of rickettsial agents in particular geographic areas; Legendre and Macaluso, among others, reviewed such work. The possible combinations of diagnostic techniques that are being utilized in the analysis of scrub typhus are illustrated well in the study detailed by Naoi et al. The choice of a single gold-standard diagnostic tool remains unclear, due to the bi-phasic nature of early rickettsemia followed by a subsequent antibody response. The difficulty of diagnosis is illustrated in the paper by Salgado Lynn. The elevated caseloads and mortality rates caused by the rickettsial diseases during WWII, particularly scrub typhus in the AAP, were mission-threatening. The end of the war reduced the general concern, but investigators continued research and control efforts. Before the age of effective antibiotic therapy, scrub typhus patients suffered mortality rates of around 6\u201310%, sometimes exceeding 50% ,95. Further research produced the bacteriostatic protein synthesis inhibitor tetracycline (introduced in 1953), which proved even more effective than chloramphenicol in treating scrub typhus. Tetracycline showed fewer side effects and became the primary treatment. The case for treatment failures in scrub typhus has gained much attention in recent years ,102. Exhaustive efforts to develop an effective, long-lasting, and broadly protective scrub typhus vaccine for humans have not proven successful. 
Those efforts extend from the WWII era when two vaccines were developed; one by the British based upon the Karp strain, and a second by US researchers based upon the Volner strain. Both were evaluated in the field and were found to not be significantly better than controls in preventing disease or death ,106. Immunization studies used O. tsutsugamushi isolates that were transported from Malaya to Britain ,107. Molecular approaches to the study of rickettsia now routinely use DNA sequencing as the gold standard for diagnosis and identification of rickettsial agents. These procedures often utilize DNA sequences of specific highly variable genes, such as the 56-kD type-specific antigen (TSA56) of O. tsutsugamushi ,80, or of more broadly conserved targets such as gltA, encoding rickettsial citrate synthase, rOmpA, encoding the spotted fever group (SFG) rickettsiae-specific 190-kD outer membrane protein A, and the Rickettsia genus-specific 17-kD outer membrane antigen. Within O. tsutsugamushi, genotypic groups, such as those identified using the tsa56 gene sequence, may reflect the phylogenetic histories of the isolates being compared. However, studies of O. tsutsugamushi have suggested that the phylogenetic histories of different genes may not reflect a unified evolutionary history of the genes within the genome ,121,122. It has been suggested that some unique aspects of the genome of O. tsutsugamushi may facilitate significant horizontal gene transfer between isolates that would be reflected by incompatible phylogenetic histories for different genes. The characteristic of the genome that would potentially be most important in promoting substantial horizontal transfer is the large proportion of the O. tsutsugamushi genome that is made up of repeated sequences. The original genome sequence determined for O. tsutsugamushi, that of the Boryong strain, suggested that the sequence represented \u201cthe most highly repeated bacterial genome sequenced to date\u201d. The genomes of O. 
tsutsugamushi strains that have been determined contain a large number of copies of genes for components of conjugative type IV secretion systems, for signaling and host-cell interaction proteins, as well as more than 400 copies of transposases, 60 copies of phage integrases, and 70 copies of reverse transcriptases. These multi-copy sequences have been hypothesized to facilitate intragenomic recombination. There is also the possibility that additional acceleration of strain differentiation may be associated with population bottlenecks that are likely to be associated with the intracellular nature and transmission patterns of rickettsial bacteria ,127,128. We are still at the early stages of understanding how O. tsutsugamushi evolves. Much remains to be determined concerning the population genetic dynamics of O. tsutsugamushi, and the comparison of those dynamics with patterns occurring in other rickettsiae that contain far fewer repeated components in their genomes. Studies of gene differentiation for multiple genes have been limited to a few studies of isolates from Southeast Asia ,121,122 and Korea. In contrast, isolates that have been studied as part of genomic comparisons have been collected from a large part of the range of O. tsutsugamushi in the AAP. The most comprehensive analyses of O. tsutsugamushi phylogenetics were presented by Batty et al., using the tsa56 gene, the 47-kD htraA gene, and a seven-gene MLST panel. Differences between single-gene phylogenies and multigene phylogenies within the genome of a single species are expected to occur; the significance of this is more difficult to evaluate ,131,132. Horizontal gene transfer has been invoked to explain differences between gene trees within O. tsutsugamushi ,118. Much work remains to understand how such differences arise within O. tsutsugamushi. 
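Single-gene comparisons of the kind described above are commonly summarized as pairwise nucleotide identity between aligned sequences. A minimal sketch follows, using made-up toy fragments rather than real tsa56 data; the function name and the choice to skip gap positions are illustrative, not a published protocol:

```python
def pairwise_identity(a: str, b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences.
    Positions with a gap ('-') in either sequence are excluded."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    compared = matches = 0
    for x, y in zip(a.upper(), b.upper()):
        if x == "-" or y == "-":
            continue  # skip gapped columns
        compared += 1
        matches += (x == y)
    return 100.0 * matches / compared if compared else 0.0

# Toy aligned fragments (illustrative only, not real tsa56 sequences)
print(round(pairwise_identity("ATGGCTTACA", "ATGACTT-CA"), 1))  # prints 88.9
```

Real genotyping work would of course operate on full-length alignments of field isolates and feed distance matrices like this into tree-building or clustering software.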
The genome sequences of isolates of O. tsutsugamushi reveal a completely different picture of genome change from that seen for genomes from R. prowazekii or R. typhi. In Rickettsia, gene order is maintained over large parts of the genus. Gene order is reasonably similar in various spotted fever group species. Comparisons between typhus group species and SFG forms show slightly more structural differences, primarily because a larger number of putative genes exist in the Typhus Group (TG) species, but most of the genome shows gene order preserved, although a few large inversions exist. For rickettsial forms within the AAP, a revealing difference between genome changes in Rickettsia and in Orientia can be seen when R. typhi genomes and O. tsutsugamushi genomes are examined. Genome sequences are available for five isolates of R. typhi . Gene order and gene number differs between isolates [R. typhi genomes can easily be compared over the entire length of the genome, with gene order conserved.This is significantly different from the situation found for elements ,123. Theisolates ,118. PaiO. tsutsugamushi allow a different, but limited approach to the comparison of isolates. Much work remains to understand how the evolution of single genes corresponds to the genome evolution of isolates in O. tsutsugamushi. However, investigations of highly variable genes, especially the tsa56 gene, do allow the examination of geographic variation within the species range of O. tsutsugamushi.Single-gene sequences in tsa56 gene is considerable, both within and between geographic areas of the AAP [tsa56 gene may give us an insight only into the evolution of cell surface variation, and perhaps do not indicate how other genes would reflect evolution of the population. This caveat is inserted because it remains unclear whether the phylogenetic relationships among sequences of the tsa56 gene would accurately reflect the phylogenetic relationship of other loci in the O. 
tsutsugamushi genome, or whether natural selection acting on the tsa56 gene has altered the relationship among isolates in a different way compared with other loci in the genome. If natural selection acts specifically on the gene such as the tsa56, the phylogeny of the gene may not completely mirror the evolution of the organism, or the phylogeny of other genes in the genome.Variation of the the AAP . Diversitsa56 gene. The differentiation appears to be very significant. However, these numbers reflect the reporting of genetic sequences within the international DNA sequence databases, and do not necessarily reflect an epidemiologically appropriate sampling of isolates. Numbers in the table reflect a mixture of isolates from human cases and from mite vectors and rodent secondary hosts. Further, it is important to consider that some geographic regions in which scrub typhus is endemic are noted for the lack of information concerning genetic variability. These regions include Indonesia, Malaysia, and the Philippines and the large islands of the southwestern Pacific region. There is also minimal information concerning Myanmar, Pakistan, and associated regions at the western edge of the range of scrub typhus, and from northern Australia.Examination of the tsa56 gene has been used to study variation in O. tsutsugamushi carried by both mite vectors and in the rodent populations that represent the normal host for the mites. Evidence exists that different species of trombiculid mites may act as hosts to different genetic lineages of bacteria, and that different rodent hosts may also be nonrandomly associated with genetic lineages of O. tsutsugamushi [The ugamushi .O. tsutsugamushi and the two species in the typhus group of Rickettsia. Vertical transmission may be different for O. tsutsugamushi compared with the typhus group rickettsiae [orientiae and the vector for spread of scrub typhus [tsa56 genotype of O. tsutsugamushi [O. 
tsutsugamushi within single rodent or mite hosts in nature . Further differences among lineages of O. tsutsugamushi also exist ,142,143. The nature of transmission by vectors for scrub typhus may also contribute to the differences that are observed between O. tsutsugamushi and the typhus group rickettsiae ,137,138. There is also the possibility of differences among Leptotrombidium species affecting the differentiation of bacterial lineages that they carry and transmit. Other ecological aspects of scrub typhus could also contribute to differentiation between intraspecific lineages of bacteria. The various mite vectors can demonstrate seasonal differences in their abundance ,146,147. Ultimately, continued research efforts are required for a better understanding of scrub typhus; currently, the focus should be on addressing entomological aspects of transmission, dissecting host\u2013pathogen interactions in a clinical disease severity context, and on the development of point-of-care diagnostics and an effective vaccine. Entomology: The recent discovery of UV-based autofluorescent microscopy in morphotyping Leptotrombidium mites has enabled for the first time paired-matched morpho- and genotyping, which should enable broader characterization of mites globally and their endosymbionts, the molecular epidemiology of mites transmitting scrub typhus, as well as an improved understanding of adult mite stages. Host\u2013pathogen interactions: The availability of whole-genome sequencing provides a good platform to determine the virulence mechanisms of orientiae, as host-mediated pathogenic mechanisms and mechanisms of tissue injury remain poorly understood\u2014ideally, these virulence factors should contribute towards developing a disease severity score/prediction based on clinical features and pathophysiological markers. Diagnostics: A lot of effort has gone into improving the notoriously difficult diagnostics, but a universally useful point-of-care tool remains elusive. 
Ideally, this should comprise an antigen/nucleic acid-based component with an antibody detection step. Coupling this with the use of noninvasive or less\u2013invasive sample specimens, such as sampling eschars or saliva, would enable earlier diagnosis, broader epidemiological coverage globally, and contribute substantially to improved awareness and better management of these easily treatable diseases . Joint eVaccines: While extensive studies on features of the natural immune response were performed in the 1970\u201380s, this area has not received much attention lately\u2014especially research with a focus on immune memory and clinically relevant correlates of protection is lacking. The ongoing genome-sequencing efforts, the availability of a characterized non-human primate model and first vaccine candidates conferring sterile immunity against high-dose homologous challenge with orientiae are promising prerequisites to progress this work .AAP rickettsioses are a fascinating array of clinically relevant diseases\u2014these endemic and underappreciated diseases are associated with a large burden of disease globally, and to date, there are still no licensed vaccines, or vector control efforts in place. Despite increasing awareness in endemic regions, the public health burden and global distribution of scrub typhus remains poorly understood. Opportunities are vast for the next generation of clinicians and scientists, there is still a lot to do."} +{"text": "Orientia species, have been around for a very long time. Historical reference to the rickettsial disease scrub typhus was first described in China (313 AD) by Hong Ge in a clinical manual (Zhouhofang) and in Japan (1810 AD) when Hakuju Hashimoto described tsutsuga, a noxious harmful disease in the Niigata prefecture. Other clinicians and scientists in Indonesia, Philippines, Taiwan, Australia, Vietnam, Malaysia, and India reported on diseases most likely to have been scrub typhus in the early 1900s. 
All of these initial reports about scrub typhus were from an area later designated as the Tsutsugamushi Triangle\u2014an area encompassing Pakistan to the northwest, Japan to the northeast and northern Australia to the south. It was not until the 21st century that endemic scrub typhus occurring outside of the Tsutsugamushi Triangle was considered acceptable. This report describes the early history of scrub typhus, its distribution in and outside the Tsutsugamushi Triangle, and current knowledge of the causative agents, Orientia species. Scrub typhus, a febrile disease with mild to life-threatening manifestations, is characterized by rapid onset of fever, headache, chills, arthralgias and myalgias, and often the presentation of an eschar prior to, and a maculopapular rash following, initiation of disease ,3,4,5,6. The lack of scrub typhus-specific signs and symptoms makes the clinical diagnosis very difficult ,3,5. Moreover, within the eschar lesion, Orientia DNA is in abundance, unaffected by prior antibiotic treatment, and maintained in the lesion for the life of the lesion. Historically, scrub typhus has been around for a very long time. Human historical reference to scrub typhus was first described in China (313 AD) by Hong Ge. In Japan, in 1810, Hakuju Hashimoto described \u201ctsutsuga\u201d, a noxious harmful disease in the Niigata prefecture of the main island of Japan. Up until the 1920s, the etiology of tsutsugamushi and, therefore, scrub typhus was unknown. Proposed names for the agent included Theileria tsutsugamushi and, as reported in 1930 in the Japanese Journal of Experimental Medicine, Rickettsia orientalis, as well as Rickettsia tsutsugamushi and Rickettsia akamushi, respectively. Though much controversy was associated with the naming, including the competing R. tsutsugamushi and Nagayo\u2019s R. orientalis, R. tsutsugamushi was eventually moved out of the genus Rickettsia and into its own genus, with the new species name Orientia tsutsugamushi. 
In addition, multiple diseases which were later believed to be synonymous with tsutsugamushi were known as Japanese river fever, flood fever, island fever, Kedani (mite) disease, akamushi disease, shimamushi disease, yochubio, and shashitsu. Further, also in 1908, two cases of tsutsugamushi disease were described for two US military personnel, stationed at Camp Connell, Samar, the Philippines, based upon clinical records ,38. In 1910, Mossman fever, later described as endemic glandular fever and, subsequently, determined to be scrub typhus, was reported in North Queensland, Australia ,42,43,44. In 1915, a fever of unknown etiology among two individuals was reported in Saigon, Vietnam. Also in 1915, a \u201cmild\u201d rickettsial disease called paratyphus was reported among 15 patients (no deaths) in the spring of 1913\u20131914 from Jemulpo, Incheon, Korea by Weir, a medical missionary. The outcome of paratyphus was significantly different to the high mortality rate (~20\u201340%) seen with tsutsugamushi in Japan at this time. During the period of 1910\u20131945, there was some evidence of endemic scrub typhus in Korea. A study of mites attached to wild rats collected in Suwon found mites similar to Trombicula akamushi. In addition, rickettsial diseases with mild presentations and low OX19 titers may not have been murine typhus but scrub typhus ,61,62,63,64. In 1900, the Institute of Medical Research (IMR) in Kuala Lumpur, Malaysia, was established as the Pathological Institute with the aim to promote the health status of the local population. In 1924, the institute began research on \u201ctropical typhus\u201d in the Federated Malay States. 
In the annual IMR Bulletin in 1925, Fletcher and Lesslar described tropical typhus as containing two components\u2014an urban or shop typhus and a rural or scrub typhus. Due to the fortuitous change in the composition of the Weil\u2013Felix test, \u201ctropical typhus\u201d could be divided into two unique diseases. The Weil\u2013Felix test initially utilized the strain of Bacillus proteus X.19 (Proteus vulgaris) that was isolated from the urine of a patient with epidemic typhus. The P. vulgaris agent was not the cause of the disease but was found to be agglutinated by antibodies developed during epidemic typhus. The cross-reactivity of the antibodies to the P. vulgaris antigens (OX19) has been successfully used since 1916 to serologically diagnose epidemic typhus and murine typhus. Subsequently, another strain of P. vulgaris (OX2) was identified that reacted with the sera of spotted fever patients, as was the Proteus mirabilis Kingsbury strain, which reacted with sera from scrub typhus patients but not with sera from typhus or spotted fever patients ,69. It was subsequently shown that the urban or shop typhus was caused by Rickettsia typhi ,71. Through the IMR, work continued on O. tsutsugamushi isolations, diagnostics, treatment/prophylaxis, vaccine and immunology research, and vector and ecology research ,76,77,78. In 1932, Christian reported OXK-positive typhus cases that he believed were due to tick bites. Surveys found Trombicula deliensis on rodents and shrews in the Simla Hills, suggesting, similar to the reports from Malaya, that these mites may be the vectors of scrub typhus. The famous strain, O. tsutsugamushi Gilliam, was contracted by Dr. Gilliam in 1944 on the equally famous Stillwell Road, in Burma. 
Scrub typhus was also reported elsewhere in Burma ,91,92,93. All of these reports of scrub typhus or diseases very similar to them throughout the Asia\u2013Pacific region prior to WWII led to the assumption of a single rickettsial disease for a very large endemic region where many people were at risk of disease. Unfortunately, this assumption of a very large endemic area of scrub typhus was reinforced during WWII, where approximately 18,000 cases occurred among the allied forces and a similar number among the Japanese forces in the islands of Ceylon, Maldives, New Britain, Goodenough, and the Schouten Islands, and in the countries of China, Thailand, Japan, Australia, Lao, Cambodia, Vietnam, and Taiwan ,37,38,83. Consistent within the Tsutsugamushi Triangle has been the presence of a single species of Orientia, which was identified within human cases, vector mites, and mammalian hosts ,83,94,98. However, this single species, O. tsutsugamushi, has a diversity of antigenic phenotypes and genetic genotypes found not only between countries but within countries ,100. In an investigation of an outbreak of febrile disease among native Africans in Musha Hill, Belgian Congo, because R. orientalis (O. tsutsugamushi) antigens were available, the authors included scrub typhus in the panel of rickettsial diseases to assess. Among the ill Africans, several were positive for the various rickettsial antigens, including two individuals who reacted to the R. orientalis antigens. To confirm the skin tests, blood from the two Orientia-positive patients was tested for complement-fixing antibodies to O. tsutsugamushi and was found to be positive, with titers of 80 and 320. To assess the reactivity to the scrub typhus assays in other populations who lived closely with native Africans in Musha Hill, healthy individuals, including nine people born in Muscat, Oman, five born in Bombay, India, and two born in Africa, with parents who were born in Bombay, were tested using the same skin and blood tests for evidence of previous O. tsutsugamushi infection. 
The authors considered Muscat and Bombay as scrub typhus-endemic regions and thought that people from those areas may be antibody positive to O. tsutsugamushi and may, therefore, act as positive controls. Of the nine individuals originally from Muscat, three had strong, weak, and negative skin reactivity, and antibodies against O. tsutsugamushi were detected, with titers of 1280 and 640, in eight. Of the five people born in Bombay, three displayed positive skin reactivity to O. tsutsugamushi antigens and, interestingly, the two people whose parents were from Bombay, but who were born in and never traveled outside of eastern Africa, were also positive. These results suggested the presence of scrub typhus in eastern Africa. The authors indicated that a similar study they conducted among natives in western Africa showed that they were negative for evidence of scrub typhus. This report, published by Giroud and Jadin in 1951 and describing an outbreak of a febrile disease among native Africans from Ruanda-Urundi working on constructing a factory building in Musha Hill, Belgian Congo (now Rwanda and Burundi), thus indicated that scrub typhus occurred outside the Tsutsugamushi Triangle. In the 1990s, three case reports insinuated, but could not confirm, the presence of scrub typhus in Africa. The first described an individual from Japan visiting the Republic of Congo, who presented with fever six days after his return from Africa; the disease was serologically attributed to O. tsutsugamushi. Unfortunately, no molecular or culture evidence confirmed the case of scrub typhus. The third case was of an individual who had visited Tanzania, with titers against O. tsutsugamushi antigens rising from <16 to 1024 by an indirect immunofluorescence assay (IFA). Regrettably, none of these three cases had produced a culture of Orientia or had molecular evidence of the causative agents; thus, they were considered presumptive scrub typhus cases. 
She hadThese results collectively implied that scrub typhus was endemic to Africa and the Orientia species other than O. tsutsugamushi . Similarly, molecular assays for orientiae were not used to investigate the presence in mammalian hosts and/or arthropod vectors of orientiae outside of the Tsutsugamushi Triangle. This paradigm was changed with the two reports of scrub typhus in the United Arab Emirates and Chile. Thus, scientists included scrub typhus serological and molecular assays to conduct surveillance studies of rickettsial diseases in areas of Africa, South America, and Europe (O. tsutsugamushi that was confirmed by Western blot assays [O. tsutsugamushi ELISA antigens and 10 of these children seroconverted to O. tsutsugamushi antigens (3.6%). The seroreactivity was confirmed by Western blot analysis [O. tsutsugamushi ELISA antigens and one individual who reported a history of a febrile disease during the period of the study seroconverted to O. tsutsugamushi antigens by ELISA, IFA and Western blot tests [Following these two astounding reports, investigators looked to confirm and expand upon these results, utilizing current serological and molecular assays for evidence of d Europe . This ret assays . In an uanalysis . In Djibot tests .Orientia DNA among tissues of rodents from West Africa and Europe [Orientia spp. [O. tsutusgamushi. Trombiculid mites were collected from the trapped rodents to assess them for molecular evidence of Orientia. DNA preparations provided evidence of Orientia species from sequences of gene fragments of the rrs and htrA that were most closely aligned to but not identical with Ca. O. chuto were seropositive against d by IFA . The secd by IFA ,114. It b typhus .O. tsutsugamushi [O. 
tsutsugamushi found throughout the Tsutsugamushi Triangle. Herpetacarus mites from Chilo\u00e9 Island were found to carry the same orientiae as that associated with scrub typhus cases, Candidatus Orientia chiloensis, and serological and molecular assays based upon"} +{"text": "We propose an image-based class retrieval system for ancient Roman Republican coins that can be instrumental in various archaeological applications such as museums, Numismatics study, and even online auction websites. For such applications, the aim is not only the classification of a given coin, but also the retrieval of its information from a standard reference book. Such classification and information retrieval is performed by our proposed system via a user-friendly graphical user interface (GUI). The query coin image is matched with exemplar images of each coin class stored in the database. The retrieved coin classes are then displayed in the GUI along with their descriptions from a reference book. However, it is highly impractical to match a query image with each of the class exemplar images, as there are 10 exemplar images for each of the 60 coin classes. Similarly, displaying all the retrieved coin classes and their respective information in the GUI would cause user inconvenience. Consequently, to avoid such brute-force matching, we incrementally vary the number of matches per class to find the fewest matches attaining the maximum classification accuracy. In a similar manner, we also extend the search space for the coin class to find the minimal number of retrieved classes that achieves maximum classification accuracy. On the current dataset, our system successfully attains a classification accuracy of 99% for five matches per class when the top ten retrieved classes are considered. As a result, the computational complexity is reduced by matching the query image with only half of the exemplar images per class. 
In addition, displaying the top 10 retrieved classes is far more convenient than displaying all 60 classes. For a given coin class, this information includes the issuer, date of issuance, information about the material of the coin, and the complete description of the obverse and reverse motifs. An example of such is shown in the accompanying figure. We propose a holistic image-based class retrieval system for ancient Roman Republican coins that can be instrumental for a number of non-expert audiences such as museum visitors, Numismatics students, hobbyists, and auction websites, as well as the art market in general. For instance, most museum visitors are completely unaware of the historic context of the displayed coins. Such scholarly information about ancient coins is only available in standard reference books. Nonetheless, the classification of ancient coins based on their images is a challenging task due to variations caused by several factors. A main problem is the high intra-class and low inter-class variation due to the large number of coin classes; in the case of Roman Republican coins, there are 550 classes with more than 1000 subclasses. Due to the aforementioned challenges, the image-based methods proposed for the classification of modern-day coins are not adaptable to ancient coins. Nonetheless, all these methods use one or the other form of image classification, where the task is to classify a given coin image into one of the coin classes. We extend this concept a step further by showing the class-relevant information from expert sources in a user-friendly manner. This is because knowledge about an ancient coin is also of utmost importance for particular communities such as archaeologists, students, and researchers. 
Consequently, the proposed system has the potential to be used as an assisting or educational tool at a number of places such as museums, educational institutes, or even online auction websites. The database consists of 60 coin classes, where each class is represented by 10 exemplar images of its reverse side motifs. All the images have the same dimensions. The exemplar coin images of any given class differ from one another mainly due to the non-rigid deformations caused by wear and tear on the coins. All the coins are imaged against a homogeneous background, and the coin is depicted at the image center in almost all of the images. As the coins belong to the Roman Republican era, the scale variations in the coin images remain negligible. All the coins are imaged in their canonical orientations, resulting in negligible variations caused by rotations. The graphical user interface (GUI) of our proposed holistic system for image-based classification of Roman Republican coins is shown in the accompanying figure. We use automatic coin segmentation to extract the coin region from the background. The local image patches are densely sampled and then represented with local feature descriptors; in the proposed framework, we use the Local Image Descriptor Robust to Illumination Changes (LIDRIC). For a query image, the segmentation and feature extraction are performed as stated previously. The next step is to match the local features of the query image with those of the exemplar images in the database. The matching assigns similarity scores to each exemplar image of all the coin classes, where a higher score indicates a higher similarity. In this way, for each coin class, 10 similarity scores are determined from its exemplar images. A \u201cmax\u201d operation is then performed on these 10 values, where the maximum score is retained for that particular coin class. Consequently, for a query image, the number of similarity scores reduces to 60; one per coin class. Finally, all 60 classes are ranked in descending order based on their similarity values, and the one with the highest similarity is assigned to the query image. However, our system aims at the depiction of candidate classes for the query image along with their information from the standard Numismatics source, leading to the depiction of the top \u201cN\u201d classes in the GUI. For a query image, the relevant classes are found by matching it with exemplar images of each class. As this is an online image matching process, the cost of computation must be optimized by evaluating the number of matches per class. For instance, in the worst case, each class has 10 exemplar images and there are a total of 60 classes, resulting in 600 image matchings. This would ultimately increase the waiting time in our proposed framework, due to which we evaluate the number of matches per class. Furthermore, for the query image, all 60 classes will be ranked from top to bottom with respect to their similarity scores and consequently displayed in the GUI. However, displaying all the classes along with their information would cause inconvenience for the users. Therefore, another parameter worth investigating is the number of retrieved classes to be displayed in the GUI. For the number of matches, we start by comparing the query image with two randomly selected images of each class. We repeat this experiment by increasing the number of randomly selected images to three, four, and up to nine. Through this procedure, we find the optimal number of matches both in terms of classification accuracy and time. For instance, in the case of comparing with two random images from each class, we have a total of 120 (2 x 60) image matchings. A few examples of such rankings with query images from classes \u201c1\u201d, \u201c2\u201d, and \u201c7\u201d are shown in the accompanying figure. We carried out elaborate experiments on the relationship between the number of matches, denoted by M, and the top most relevant retrieved classes, denoted by N. For each value of M, the experiments are performed 5 times and an average classification accuracy is reported. Similarly, for a given query image, all the 60 coin classes are ranked based on their achieved similarity scores from high to low. The correctly predicted label for a query image can be at any position among those 60 labels. However, for displaying, we search for the least number of retrieval positions N at which the highest possible classification accuracy is also achieved. An overall increase in classification accuracy is observed by increasing both M and N, with the lowest accuracy obtained for the smallest values of both. The highest classification accuracy of 99% is achieved for M = 5 and N = 10. In order to reduce the computational time, the value of M should therefore be 5 and that of N should be 10. To summarize, our proposed framework retrieves the most relevant classes for a given query image by matching it with five randomly selected class exemplar images stored in the database. This results in a total of 300 (5 x 60) image matchings. We also performed experiments to show the time taken by matching a query image with various numbers of class exemplar images. The matching is performed using the parallel architecture of an Intel\u00ae Core\u2122 i7-8700K CPU at 3.70GHz with 12 threads in Matlab\u00ae R2018b. The experiments are repeated 20 times, and the average time taken by each setting along with the standard deviation is shown in the accompanying table. We proposed an image-based framework for class retrieval of the Roman Republican coins. It takes a query image as input, classifies it using exemplar-based image classification, and then displays the resulting candidate top 10 classes in the GUI along with their information from the standard reference book. 
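The scoring pipeline described above, sampling M exemplars per class, max-pooling their similarity scores, and ranking the top N classes, can be sketched as follows. This is a minimal illustration, not the authors' Matlab implementation; the function name and data layout are hypothetical:

```python
import random

def rank_classes(similarity, M=5, N=10, seed=0):
    """Rank coin classes for a query image.

    similarity: dict mapping class_id -> list of similarity scores,
    one per exemplar image of that class (higher = more similar).
    M: number of randomly selected exemplars compared per class.
    N: number of top-ranked classes to display in the GUI.
    """
    rng = random.Random(seed)
    class_scores = {}
    for class_id, scores in similarity.items():
        # Compare against only M randomly chosen exemplars,
        # then keep the maximum score ("max" operation).
        sampled = rng.sample(scores, min(M, len(scores)))
        class_scores[class_id] = max(sampled)
    # Rank all classes by similarity, descending; return the top N.
    ranked = sorted(class_scores, key=class_scores.get, reverse=True)
    return ranked[:N]
```

For example, with 60 classes of 10 exemplar scores each, `rank_classes(sim, M=5, N=10)` reduces 600 potential matchings to 300 and returns the 10 best-matching class ids.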
Our proposed framework can become an interesting application tool for a wide range of users such as museum visitors, Numismatics students and researchers, hobbyists, and online auction websites. We thoroughly evaluated various parameters of the framework, such as the number of exemplar images per class to which the query image is matched and the number of top relevant classes to be displayed in the GUI. From our experiments, we found that five images per class, with the search space for the relevant retrieved classes extended up to 10 positions, successfully classifies the query image with 99% accuracy. In the future, we plan to extend our database to span more coin classes, shift the current matching algorithm to GPU-based computations in order to meet real-time matching requirements, and include more state-of-the-art methods for image matching."} +{"text": "This cohort study evaluates the suitability of including pathogenic copy number variants associated with neuropsychiatric disorders in population screening by determining their prevalence and penetrance and exploring the personal utility of disclosing results. Are neuropsychiatric disorders that are associated with pathogenic copy number variants suitable for inclusion in population-based genomic screening programs? In this health care system population of more than 90\u2009595 participants, copy number variants associated with neuropsychiatric disorders were prevalent and penetrant. Participant responses to receiving these genomic results were overall positive, suggesting personal utility. Neuropsychiatric disorders associated with genetic variants should be considered in the design of population-based genomic screening programs. Population screening for medically relevant genomic variants that cause diseases such as hereditary cancer and cardiovascular disorders is increasing to facilitate early disease detection or prevention. 
Neuropsychiatric disorders (NPDs) are common, complex disorders with clear genetic causes; yet, access to genetic diagnosis is limited. We explored whether inclusion of NPD in population-based genomic screening programs is warranted by assessing 3 key factors: prevalence, penetrance, and personal utility. Our objective was to evaluate the suitability of including pathogenic copy number variants (CNVs) associated with NPD in population screening by determining their prevalence and penetrance and exploring the personal utility of disclosing results. In this cohort study, the frequency of 31 NPD CNVs was determined in patient-participants via exome data. Associated clinical phenotypes were assessed using linked electronic health records. Nine CNVs were selected for disclosure by licensed genetic counselors, and participants\u2019 psychosocial reactions were evaluated using a mixed-methods approach. A primarily adult population receiving medical care at Geisinger, a large integrated health care system in the United States with the only population-based genomic screening program approved for medically relevant results disclosure, was included. The cohort was identified from the Geisinger MyCode Community Health Initiative. Exome and linked electronic health record data were available for this cohort, which was recruited from February 2007 to April 2017. Data were collected for the qualitative analysis from April 2017 through February 2018. Analysis began February 2018 and ended December 2019. The planned outcomes of this study include (1) a prevalence estimate of NPD-associated CNVs in an unselected health care system population; (2) a penetrance estimate of NPD diagnoses in CNV-positive individuals; and (3) qualitative themes that describe participants\u2019 responses to receiving NPD-associated genomic results. Of 90\u2009595 participants with CNV data, a pathogenic CNV was identified in 708 (0.78%). Seventy percent (n\u2009=\u2009494) had at least 1 associated clinical symptom. Of these, 28.8% (204) of CNV-positive individuals had an NPD code in their electronic health record, compared with 13.3% (11 835 of 89 887) of CNV-negative individuals (odds ratio, 2.21; 95% CI, 1.86-2.61; P < .001); 66.4% (470) of CNV-positive individuals had a history of depression and anxiety compared with 54.6% (49 118 of 89 887) of CNV-negative individuals. 16p13.11 (71 [0.078%]) and 22q11.2 (108 [0.119%]) were the most prevalent deletions and duplications, respectively. Only 5.8% of individuals (41 of 708) had a previously known genetic diagnosis. Results disclosure was completed for 141 individuals. Positive participant responses included poignant reactions to learning a medical reason for lifelong cognitive and psychiatric disabilities. This study informs critical factors central to the development of population-based genomic screening programs and supports the inclusion of NPD in future designs to promote equitable access to clinically useful genomic information. However, comprehensive, population-based genomic screening programs, such as that carried out within the Geisinger MyCode Community Health Initiative, are quickly emerging as promising complementary public health options to proactively affect patient care.12 Several key factors have been described as critical to implement these programs, including results that inform disease prevention, early detection, or management; affect access to social services; and benefit family members.13 In addition, personal utility, equitable access to personally and medically relevant genomic information, and health care utilization effects are important factors.13 Furthermore, there is public interest in the inclusion of genomic disorders that are often not designated as medically actionable, such as neuropsychiatric disorders (NPDs).15 Genome-wide testing technologies, such as exome sequencing and chromosomal microarray, are being increasingly used across clinical settings as standard of care for identifying 
medically relevant genomic variants that cause a variety of conditions, including hereditary cancer syndromes, cardiovascular disorders, and neurodevelopmental disorders.17 Neuropsychiatric disorders, such as autism spectrum disorder, schizophrenia, and bipolar disorder, comprise an etiologically heterogeneous group of conditions affecting at least 14% to 18% of children and adults in the US. Lifetime prevalence for common psychiatric disorders, including depression and anxiety, has been estimated globally at 29%.18 Individuals with NPDs have higher than expected rates of chronic medical conditions and associated early mortality and account for a disproportionately large share of health care costs, including emergency department and inpatient visits and prescription drugs.20 Collectively, rare pathogenic copy number variants (CNVs) and single-gene sequence-level variants represent the largest proportion of known NPD causes to date, yet research into precision health strategies has lagged behind other conditions, such as cancer and cardiovascular disease.27 Rare CNVs have large, primary effects on neuronal pathways and are causative of brain dysfunction, manifesting as clinically distinct disorders that exhibit a high degree of variable expressivity.28 Additionally, subclinical learning or social differences are often present, which may affect an individual\u2019s well-being.29 Many NPD-associated genes and CNVs share biochemical and neurobiological pathways with other brain disorders, such as epilepsy and intellectual disability, that are frequent comorbidities in people with NPDs.31 The phenotypic variability of these large effects is modulated by family genomic background through the aggregate contribution of common variants of small effect size, quantified by polygenic risk scores,37 and may be further influenced by environmental experiences.43 Hundreds of distinct genetic NPD causes are now known, representing a growing subset of pediatric- and 
adult-onset developmental and psychiatric diagnoses. Genomic testing for NPDs and other brain disorders is increasingly embraced in the pediatric setting, with diagnostic yields for sequence variants and CNVs approaching 40%, and observations of NPD-associated CNVs in variably affected adults have informed our understanding of the clinical variability and natural history of these disorders.46 However, clinical genetic testing in adult NPD populations is much less common, raising concerns about equitable access to highly relevant information for these individuals, and the penetrance of NPD-associated CNVs in an unselected population is less well known. Our understanding of genomic contributors to human disease has gained momentum in recent years through analyses of large-scale research population data sets that provide a crucial counterbalance to clinically ascertained cohorts. Previous population-based studies have reported an estimated prevalence of approximately 1% for NPD-associated CNVs,48 indicating they may be as, or more, common than the hereditary cancer and cardiovascular disorders currently included in population-based genomic screening programs. While these initiatives have allowed an unbiased assessment of population prevalence for genomic variants, they often lack detailed clinical correlation, particularly for NPD phenotypes. Furthermore, because these research initiatives do not have approval to return clinically confirmed and medically relevant information back to participants, there are very limited data on the perceived utility of this information from program participants. To explore the suitability of including NPD-associated CNVs in population-based genomic screening programs, we assessed key factors incorporated into development and implementation decision-making frameworks: prevalence, penetrance, and early indications of participant response and personal utility. 
We report evidence from the first study, to our knowledge, of a large-scale, health care system-based population, using a linked electronic health record (EHR) and genomic data set, that supports inclusion of NPD-associated CNVs in population-based genomic screening and provides experience on the feasibility and logistical components of implementation. DNA sample preparation and exome sequencing were performed in collaboration with the Regeneron Genetics Center as previously described.49 We evaluated the frequency of 31 pathogenic, recurrent CNVs, requiring that CNV calls spanned at least 90% of the defined region. CNVs called from exome data were confirmed by an array-based method using PennCNV or manual inspection of the signal intensity data.54 Clinical phenotypes were assessed using International Classification of Diseases, Ninth Revision and International Statistical Classification of Diseases and Related Health Problems, Tenth Revision billing codes. A mixed-method approach was used to evaluate participants\u2019 responses to receiving results, with written consent provided for audio recording of in-person disclosure sessions. Age was calculated based on the last encounter in the EHR. Tests of association were performed with logistic regression adjusted for age and sex. Reported P values were 2-sided, and statistical significance was set at .005; reported P values are Bonferroni corrected for the 9 tests of association performed in the study. All statistical analyses were performed using R statistical software version 3.4.1. Analysis began February 2018 and ended December 2019. 
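The Bonferroni correction mentioned above is equivalent to multiplying each raw P value by the number of tests and capping at 1. A generic sketch (this is an illustration, not the study's actual R analysis code):

```python
def bonferroni(p_values):
    """Bonferroni-correct raw P values: multiply each by the number
    of tests performed, capping the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]
```

With 9 tests, a raw P value of .005 becomes a corrected value of .045, just under the conventional .05 threshold, which is why the significance threshold above is set at .005.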
A detailed description is provided in the eMethods in the online supplement. To assess the frequency of relevant CNV-related phenotypes, we compared CNV-positive individuals with CNV-negative individuals. Previous population-based estimates include the Estonian Genome Center of the University of Tartu (0.7%)46 and deCODE (1.16%).47 However, when a direct comparison was done across cohorts limiting the analysis to our 31 CNVs of interest, the DiscovEHR prevalence rates were notably higher for some CNVs, possibly due to ascertainment bias or technical differences with previous studies. When we broadened our NPD definition to include 2 common psychiatric disorders, depression and anxiety, we observed 66.4% (470 of 708) of CNV-positive individuals with this history compared with 54.6% (49\u2009118 of 89\u2009887) of CNV-negative individuals. We estimated the penetrance of NPD and congenital malformations in individuals with CNVs. We found that 28.8% (204 of 708) of CNV-positive individuals had an NPD-associated code in their EHR (eTable 3 in the online supplement; P\u2009<\u2009.001). Cardiac defects were the most common congenital malformations, observed in 6.5% (46 of 708) of CNV-positive individuals, followed by congenital malformations affecting the urinary system in 2.9% (21 of 708), the central nervous system in 2.3% (16 of 708), and the genital organs in 2.1% (15 of 708). Cleft lip and/or palate were observed in 0.99% (7 of 708) of CNV-positive individuals, a 9.84-fold higher odds than in CNV-negative individuals (0.08% [68 of 89\u2009887]). The prevalence estimates of other congenital malformations were approximately 2- to 3-fold higher in the CNV-positive group when compared with CNV-negative individuals, including cardiac, central nervous system, and genital malformations. In contrast, congenital malformations affecting the urinary system were observed at an equal prevalence of 3.0% in CNV-positive (21 of 708) and CNV-negative (2716 of 89\u2009887) individuals. 
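For illustration, an unadjusted (crude) odds ratio can be computed directly from 2x2 counts like those reported here; note that the odds ratios reported in this study were estimated by logistic regression adjusted for age and sex, so the crude figure differs from the published 2.21:

```python
def crude_odds_ratio(cases_exposed, total_exposed, cases_unexposed, total_unexposed):
    """Unadjusted odds ratio from a 2x2 contingency table.

    'Exposed' here means CNV-positive; 'cases' means individuals
    carrying the phenotype code of interest.
    """
    a = cases_exposed                      # CNV-positive with phenotype
    b = total_exposed - cases_exposed      # CNV-positive without phenotype
    c = cases_unexposed                    # CNV-negative with phenotype
    d = total_unexposed - cases_unexposed  # CNV-negative without phenotype
    return (a * d) / (b * c)

# NPD codes: 204 of 708 CNV-positive vs 11 835 of 89 887 CNV-negative
print(round(crude_odds_ratio(204, 708, 11835, 89887), 2))  # prints 2.67 (crude)
```

The gap between the crude value (about 2.67) and the adjusted value (2.21) reflects the age and sex covariates in the regression model.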
Overall, 69.8% (494 of 708) of CNV-positive individuals had NPD, including depression and anxiety, or a congenital malformation compared with 57.5% (51\u2009645 of 89\u2009887) of CNV-negative individuals. Congenital malformations in 1 of 5 categories were observed in 13.3% (94 of 708) of CNV-positive individuals. Only 5.8% (41 of 708) of CNV-positive individuals had an EHR-documented genetic diagnosis. The mean (SD) age of participants with a genetic diagnosis was 20.33 (14.53) years, reflecting a strong bias toward offering clinical genetic testing to younger individuals. For the subset of individuals with 1 of 9 CNVs prioritized for disclosure, 77.1% (216 of 280) had a relevant NPD or congenital malformation code (49.6% [139 of 280] after excluding depression and/or anxiety) and only 8.6% (24 of 280) had a known genetic diagnosis. The results disclosure process was completed for 141 DiscovEHR patient-participants with pathogenic NPD-associated CNVs. Participant perspectives on receiving CNV diagnoses were collected via genetic counselor postsession assessments. Many participants revealed learning issues, social difficulties, or hospitalizations for mental health issues that were not captured in the EHR. It was common for participants to have a perceived explanation for their personal and/or family history of NPD, often attributing it to social circumstances. Participants expressed relief and satisfaction at finally having a medical explanation for a lifelong history of learning and behavioral struggles. The CNV result reassured them that their NPD history was not their fault, and several expressed that they wished they had known this information earlier in life. Most participants observed that their genetic diagnosis fit or made sense with their personal or medical histories. Participants typically stated that they planned to share their genetic test results with family members and often brought a support person to the disclosure session. 
Concerns about the potential for NPD in their children and grandchildren were a motivation for communicating results within a family, especially given the 50% chance of inheriting the CNV. However, a subset indicated that they may not share the information broadly with family members who are less supportive of them. To date, cascade testing has been completed for 26 family members, resulting in 16 newly detected CNV diagnoses, primarily in participants\u2019 adolescent and young adult offspring, 94% of whom had clinically diagnosed NPD of previously unknown origin. Population-based genomic screening programs are a promising element of the cost-effective implementation of genomic medicine.55 These programs have the potential to have broad reach and ensure individuals with medically relevant genomic variants are identified proactively, avoiding the selection bias inherent in current clinical referral-based systems, which fail to identify a significant proportion (reported estimates up to 50%) of at-risk individuals.58 In addition to the value that population screening can have on disease management, several other concepts have been recognized as critical outcomes, including access to social services, personal utility, health care utilization optimization, equal access, and communication of results within health care systems and families.13 This study illustrates that NPD-associated CNVs are both prevalent and penetrant in a health care system-based population. Nearly 1% of MyCode participants have an NPD-associated CNV, collectively exceeding the prevalence estimates of disorders regularly included in population-based screening programs, such as familial hypercholesterolemia (1 in 200 to 250 [0.4%-0.5%]),59 Lynch syndrome (1 in 440 [0.2%]),60 or hypertrophic cardiomyopathies (1 in 500 [0.2%]).61 This population CNV prevalence is consistent with previous estimates from European populations,48 and additional study of more diverse populations is warranted. Differences between our data and previous estimates, most notably for the 15q13.3 deletion, could be explained by ascertainment bias because individuals with more severe NPD may be more likely to be included in our population.62 Furthermore, because our CNV detection used both exome and microarray methods, with secondary clinical confirmation, our true positive rate could be increased. Research exploring the inclusion of NPD-associated single-gene disorders is also recommended to more completely represent all genetic causes of NPD. The medical histories of MyCode participants with NPD-associated CNVs reflect the elevated risk for NPD and congenital malformation. Approximately 35% of all CNV-positive individuals, and 77% of those with a pathogenic CNV prioritized for participant disclosure, had documented NPD or congenital malformation in their medical records. These penetrance estimates (35%-70%) are comparable with some of the highest health risks associated with hereditary cancer and cardiovascular disorders, including BRCA1/2 (38%-87% lifetime risk for breast cancer), Lynch syndrome, and familial hypercholesterolemia (30%-50% risk for coronary event).64 Genetic causes will increasingly inform NPD care by hastening diagnosis and treatment of developmental or psychiatric concerns and by monitoring for known medical risks.65 
Additionally, our data are consistent with previous observations that individuals with rare CNVs are at increased risk for depression. There is growing recognition of the broader benefits to patients of learning genomic results, beyond the conservative and cost-centric requirement of medical actionability, including consideration of the personal utility of genomic information, which requires that genomic information be \u201cused for decisions, actions, or self-understanding which are personal in nature.\u201d67 A genomic result in a given patient could have relatively few medical management implications but be perceived as highly valuable in personal or social ways.70 These benefits can influence decisions to adhere to medical recommendations, one\u2019s ability to seek or obtain social or educational support services, family communication about NPD, and self-understanding.75 Participants in this study expressed several indications of personal utility, including the value of having a medical explanation for their disabilities. This notion was often expressed as a sense of reassurance that they are not at fault for their NPD, a validation that their learning and/or psychiatric difficulties were real, and an indication of enhanced understanding of themselves and their families. The personal utility of finding an etiological diagnosis may prove to be among the most important benefits of genomic testing for NPD. Clinically defined NPD, such as autism spectrum disorder, schizophrenia, and bipolar disorder, are intertwined at a biological level by shared genomic underpinnings that can be readily diagnosed through clinically available genetic testing.28 However, only 5.8% of individuals in our cohort had previously received a genetic diagnosis, demonstrating that the care received by most adults with NPD-associated CNVs has not been informed by a genetic cause. Instead, most individuals live with symptom-based developmental, psychiatric, and medical diagnoses without knowing the underlying genetic cause that ties these findings together. The majority of study participants were already adults when submicroscopic CNVs were first discovered and, because clinical testing for NPD-associated CNVs is not often offered to adults outside of parental testing in a pediatric setting, were unlikely to have had access to diagnostic testing that is now considered standard of care for pediatric developmental disorders and congenital malformations. Thus, adults with NPD are unlikely to have access to genomic information of high clinical and personal relevance to them and their family members. Importantly, 94% of participants\u2019 family members who were identified as having a CNV by cascade testing had clinically diagnosed NPD of unknown cause, further highlighting the unmet need for genomic testing in this population. Public views about receiving NPD-associated genomic results, as well as genomic results associated with nonactionable diseases, generally lean toward ensuring access and disclosure.76 The medical community, including psychiatry, is also beginning to respond to these public views, moving away from a historically paternalistic approach to genomic result disclosure.77 Psychosocial outcomes data also indicate that patient-participants adjust to genomic information with little long-term negative consequence.78 Identifying genetic diagnoses for individuals with NPD allows us to move beyond vague discussions about multifactorial risk to more targeted, medical explanations. Ongoing research is needed to further document clinical, psychological, and family outcomes of CNV results disclosure and to inform development of disease-specific clinical models and support tools to complement population-based genomic screening. 
Additionally, research that incorporates polygenic risk and other factors could inform CNV-specific risk estimates for particular NPD and congenital malformation diagnoses. We anticipate that medicalizing NPD through the identification of specific genetic causes, paired with targeted genetic counseling and family cascade testing, may decrease stigma, increase self-advocacy, and lead to closer engagement of NPD patients with health care clinicians. These improvements could ultimately translate into more cost-effective utilization of health care resources and improved compliance with treatment recommendations.Our prevalence and penetrance estimates of pathogenic CNVs in DiscovEHR likely underestimate the true prevalence in the general population and represent a milder range of observed clinical phenotypes. While we did not exclude individuals with more severe cognitive and psychiatric disabilities from this study, they are likely underascertained in the DiscovEHR cohort owing to difficulty ensuring informed research consent or use of specialized medical care outside the general health care system, such as residential psychiatric facilities. Furthermore, our penetrance estimates relied on structured EHR data; many participants shared additional NPD histories during disclosure sessions that were not formally documented in the EHR. The advanced age of our participants also likely introduced a survival bias due to underrepresentation of individuals with severe medical conditions, such as CNV-related congenital cardiac anomalies, that reduce life expectancy.79This study of a large, health care system population demonstrates that there is a significant proportion of our population that could benefit from including NPD in population-based genomic screening programs. We have established that NPD-associated CNVs have sufficient prevalence and penetrance to be considered in the development and clinical implementation of such programs. 
Furthermore, we conclude that identification and disclosure of causative genomic variants is clinically and personally valuable for individuals with NPD and their families, who are likely to be underserved and have more limited access to this meaningful information."} +{"text": "How land use shapes biodiversity and functional trait composition of animal communities is an important and frequently addressed question. Land-use intensification is associated with changes in abiotic and biotic conditions, including environmental homogenization, and may act as an environmental filter to shape the composition of species communities. Here, we investigated the responses of land snail assemblages to land-use intensity and abiotic soil conditions, and analyzed their trait composition. We characterized the species\u2019 responses to land use to identify \u2018winners\u2019 (species that were more common on sites with high land-use intensity than expected) or \u2018losers\u2019 of land-use intensity (more common on plots with low land-use intensity) and their niche breadth. As a proxy for the environmental \u2018niche breadth\u2019 of each snail species, based on the conditions of the sites in which it occurred, we defined a 5-dimensional niche hypervolume. We then tested whether land-use responses and niches contribute to the species\u2019 potential vulnerability suggested by the Red List status. Our results confirmed that the trait composition of snail communities was significantly altered by land-use intensity and abiotic conditions in both forests and grasslands. While only 4% of the species that occurred in forests were significant losers of intensive forest management, the proportion of losers in grasslands was much higher (21%). However, the species\u2019 response to land-use intensity and soil conditions was largely independent of specific traits and the species\u2019 Red List status (vulnerability).
Instead, vulnerability was only mirrored in the species\u2019 rarity and its niche hypervolume: threatened species were characterized by low occurrence in forests, by low occurrence and abundance in grasslands, and by a narrow niche quantified by land-use components and abiotic factors. Land-use and environmental responses of land snails were poorly predicted by specific traits or the species\u2019 vulnerability, suggesting that it is important to consider complementary risks and multiple niche dimensions. Land use disturbs natural environments, changes local geographical landscape structure and alters local biotic and abiotic conditions, e.g. microclimate. Land snails are an important macroinvertebrate group that is directly and indirectly involved in ecosystem processes such as litter decomposition or nutrient cycling. Studies on the trait composition of snail communities in Sweden pointed to the importance of the species\u2019 niche width and of local environmental conditions over spatial variables. In the present study, we investigated land snail communities at forest and grassland sites in different regions of Germany, which were characterized by different land-use types and intensities. We aimed to test whether the trait composition of the snail community is influenced by land-use intensity (and soil conditions). We then tested the responses of each snail species to land-use intensity; \u2018winners\u2019 significantly increase in abundance and occurrence with land-use intensity, whereas \u2018losers\u2019 significantly decrease compared to the null model. The trait composition of land snail communities differed strongly between forests and grasslands within regions, indicated by a strong differentiation of community-weighted mean trait values (CWMs).
Assemblages of forest species consisted of larger species and consistently showed lower light preference, higher humidity preference, lower drought resistance and mostly lower inundation tolerance than grassland assemblages; differences in the number of offspring were inconsistent among forest and grassland habitats (Fig. 1). In forests, land-use intensity and abiotic conditions significantly influenced the CWMs of all traits investigated, although often in a different way across regions; 4% of forest species were significant \u2018losers\u2019, whereas 12% were \u2018winners\u2019 and thus increased with forest management intensity. In grasslands, many species were predominantly found at low land-use intensities (LUI); 21% of all species were significant losers, and only Monacha cartusiana profited from high LUI; fertilization (4% losers and 4% winners) had very little impact compared to the combined LUI. However, in both forests and grasslands, species\u2019 land-use responses (i.e. their \u2018winner/loser\u2019 status) were independent of their traits; i.e. losers in forests or grasslands were neither characterized by a smaller or larger shell size, nor by lower or higher numbers of offspring, nor by lower or higher light preference. Grassland soils were associated with higher pH values as compared to forest soils. Neither in forests nor in grasslands was the species\u2019 land-use response associated with a high vulnerability of the species. Land snail species are slow-dispersing organisms, and historical influences are of general importance for their distribution. Our study showed that snail assemblages varied consistently in their trait composition across regions and among the two habitats, forests and grasslands.
The variation between regions is consistent with a biogeographic gradient of increasing land snail diversity from north to south caused by historical and ecological factors. In our study, 4% of all forest and 21% of all grassland snail species were significant losers concerning the compound indices of land-use intensity, including three land-use components in the forests or in the grasslands, respectively. The proportion of losers among grassland snail species was lower than the level found for grasshoppers (about 52%). While increasing land-use intensity in open habitats is known to trigger a decline of pollinator species, and such losses were associated with species-specific trait attributes such as a narrow diet breadth, climate specialization, a large body size and low fecundity, we did not find such trait associations for snails. Note that single land-use parameters and abiotic conditions are often confounded in real landscapes as in our study, and thus responses of some snail species may not always correspond to single environmental dimensions as known from their global distribution or other sources; for example, Cochlicopa lubricella is a xerophilic land snail. The range of resources and the ecological conditions generally define the niche breadth and determine the geographical area of a species at the small or large scale. Only a few snails in our study across managed forests and grasslands are considered threatened or endangered according to the national Red List. Consistent with the expectation based on their environmental niche breadth, the species\u2019 vulnerability status was significantly predicted by a particularly narrow niche hypervolume, an index that includes single land-use components as well as pH and soil moisture in each habitat. The smaller the hypervolume of a species, the higher its vulnerability according to the Red List.
In addition, rarity was important: in forests, the most important predictor for their vulnerable status was a low number of sites in which they occurred. In grasslands, both their restricted occurrence and low total abundance predicted the species\u2019 vulnerability. In summary, our results indicate that the trait composition of snail communities was significantly altered by land-use intensities and abiotic conditions, and several species, especially in grasslands, were losers of intensive land use. These land-use and environmental responses were largely independent of specific traits and the species\u2019 Red List status; this suggests that complementary risks may be important for predicting a species\u2019 vulnerability. Instead, species vulnerability was mirrored in the species\u2019 rarity and its overall niche hypervolume including single land-use components and abiotic factors. Data for this study were already part of a previous analysis of biodiversity and community composition, i.e. Wehner et al., and are available at https://www.bexis.uni-jena.de/PublicData/PublicDataSet.aspx?DatasetId=24986; the study was conducted in the framework of the Biodiversity Exploratories (http://www.biodiversity-exploratories.de). In each region, 100 experimental plots (50 in forests and 50 in grasslands) were set up in 2008 along a land-use gradient covering different management types and intensities, including mowing, grazing and fertilization in grasslands, and the proportion of non-native trees, the proportion of dead wood with saw cuts and the proportion of wood harvested in forests. In June 2017, Wehner et al. took five samples. As our current study focuses on species-level responses, only those individuals that could be assigned to the species level were used. Grassland plots in one region (Schorfheide) harbored large numbers of aquatic and semi-aquatic snails.
In contrast to our previous analysis that covered all snails recorded, we excluded these aquatic and semi-aquatic species. All statistical analyses were performed in R 3.5.2. Morphological and life-history trait values for all snail species were obtained from an established trait database by Falkner et al. For comparing snail communities among habitats and regions, the community weighted mean (CWM) of each trait was calculated per plot as CWM_p = sum_i (a_i,p / A_p) * T_i, where T_i is the trait value of species i, a_i,p is the abundance of species i in plot p, and A_p is the total abundance of all snails in plot p. We characterized the environmental conditions of each forest or grassland plot by its land-use intensity and two abiotic soil parameters, and from these we calculated each species\u2019 \u201cenvironmental niche\u201d. The method has been established in the context of the Biodiversity Exploratories and was applied to several taxa such as grasshoppers and cicadas. The null model draws Ni sites with the same probability, with Ni being the number of sites in which species i was found. The null model thus chooses values of the focal land-use parameter of Ni sites and calculates a distribution of predicted AWM and AWSD values for each species based on 10,000 iterations. The null model was restricted to the one, two, or three regions in which the species was recorded to consider potential distribution boundaries of each species in Germany that may not be related to plot conditions. In addition to the AWM as a niche optimum, we also characterized the \u201cniche breadth\u201d of each species with respect to a single environmental variable using the abundance-weighted standard deviation (AWSD). The proportion of null-model values larger or smaller than the observed value provides the p value for the significance of the deviation between observed and expected values. A \u2018winner\u2019 is defined as a species with an observed AWM larger than the upper 5% of the distribution of AWMs obtained by the null models (i.e. adapted to higher-than-average land-use intensity), whereas a \u2018loser\u2019 shows an observed AWM smaller than the lower 5%.
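As a rough illustration, the community weighted mean (CWM) calculation described above can be sketched as follows (a minimal sketch with hypothetical species, abundances, and trait values; the study itself used R, so this Python version is purely illustrative):

```python
# Minimal sketch of a community-weighted mean (CWM):
#   CWM_p = sum_i (a_i,p / A_p) * T_i
# with hypothetical abundances a_i,p and trait values T_i.

def community_weighted_mean(abundances, traits):
    """abundances: {species: abundance in plot p}; traits: {species: trait value}."""
    total = sum(abundances.values())  # A_p, total snail abundance in the plot
    if total == 0:
        raise ValueError("plot contains no individuals")
    return sum(a / total * traits[sp] for sp, a in abundances.items())

# Hypothetical plot: three species, with shell size (mm) as the trait
abundances = {"sp_A": 10, "sp_B": 5, "sp_C": 5}
shell_size = {"sp_A": 2.0, "sp_B": 4.0, "sp_C": 8.0}
print(community_weighted_mean(abundances, shell_size))  # (10*2 + 5*4 + 5*8)/20 = 4.0
```

The same weighting generalizes to any of the traits listed in the appendices (number of offspring, light preference, etc.).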
For species which could be classified neither as \u2018losers\u2019 nor as \u2018winners\u2019, we tested whether they are specialized on intermediate land-use or abiotic levels, that is, whether they have an intermediate AWM with a narrower niche than expected. We standardized the niche breadth as the weighted coefficient of variation (CV\u2009=\u2009AWSD/AWM) to account for the increase in SD with increasing mean, and compared the observed CV with the expected CV from the null models. This comparison allows us to distinguish \u2018opportunists\u2019 (observed CV\u2009\u2265\u2009expected CV) from species that are \u2018specialized\u2019 on intermediate land-use intensities (observed CV smaller than expected). As in any randomization model, the proportion of AWMs or AWSDs from 10,000 null models with greater or smaller values, respectively, than the observed value provides the significance of the deviation. Vulnerability of land snail species was obtained from the Red List 2011. To test its relation to land use and abiotic soil conditions, we used a generalized linear model with Poisson distribution including vulnerability as the response factor and the respective land-use parameter or abiotic factor, the number of plots where the species occurred, and its total abundance as explanatory factors. Values for grazing and fertilization were square-root transformed prior to statistical analyses, and data on abundances and occurrence were log transformed because of the data structure. Following the n-dimensional niche concept, we used a niche hypervolume as a proxy for the total \u2018niche breadth\u2019 of each snail species, calculated by multiplying the abundance-weighted standard deviations (AWSD) of all three single land-use components as well as of pH and soil moisture.
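The null-model test for \u2018winners\u2019 and \u2018losers\u2019 described above can be sketched as follows (hypothetical site data, simplified 5%/95% cutoffs, and Python instead of the R used in the study):

```python
import random

def abundance_weighted_mean(values, weights):
    # AWM: the niche optimum of a species along one environmental variable
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def classify_species(site_values, occupied_idx, abundances, n_iter=10_000, seed=1):
    """Compare a species' observed AWM against a null model that redraws
    the same number of sites at random (simplified illustrative sketch)."""
    rng = random.Random(seed)
    observed = abundance_weighted_mean([site_values[i] for i in occupied_idx], abundances)
    null = []
    for _ in range(n_iter):
        idx = rng.sample(range(len(site_values)), len(occupied_idx))
        null.append(abundance_weighted_mean([site_values[i] for i in idx], abundances))
    null.sort()
    lo, hi = null[int(0.05 * n_iter)], null[int(0.95 * n_iter)]
    if observed > hi:
        return "winner"   # more common at high land-use intensity than expected
    if observed < lo:
        return "loser"    # more common at low land-use intensity than expected
    return "neither"

# Hypothetical LUI gradient over 50 sites; a species restricted to low-LUI sites
lui = [i / 49 for i in range(50)]
print(classify_species(lui, occupied_idx=[0, 1, 2, 3], abundances=[5, 3, 2, 1]))  # loser
```

The same comparison applied to the AWSD (via the CV) would separate opportunists from specialists on intermediate levels.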
The five-dimensional hypervolume was defined for forests and grasslands separately. Additional file 1: Appendix 1. Influence of \u2026 in forests and grasslands on the community weighted mean of the maximum shell size, the number of offspring, light preference, humidity preference, drought resistance and inundation tolerance. * p\u2009<\u20090.05, ** p\u2009<\u20090.01, *** p\u2009<\u20090.001. \u2193 negative effect, \u2191 positive effect. Additional file 2: Appendix 2. Influence of the abundance-weighted mean (AWM) of the forest management index on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 3: Appendix 3. Influence of the abundance-weighted mean (AWM) of the proportion of non-native trees on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 4: Appendix 4. Influence of the abundance-weighted mean (AWM) of the proportion of deadwood with saw cuts on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 5: Appendix 5. Influence of the abundance-weighted mean (AWM) of the proportion of wood harvested on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 6: Appendix 6.
Influence of the abundance-weighted mean (AWM) of soil pH on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 7: Appendix 7. Influence of the abundance-weighted mean (AWM) of soil moisture on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in forests. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 8: Appendix 8. Influence of the abundance-weighted mean (AWM) of land-use intensity (LUI) on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 9: Appendix 9. Influence of the abundance-weighted mean (AWM) of mowing on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 10: Appendix 10. Influence of the abundance-weighted mean (AWM) of grazing on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 11: Appendix 11. Influence of the abundance-weighted mean (AWM) of fertilization on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands.
Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 12: Appendix 12. Influence of the abundance-weighted mean (AWM) of soil pH on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 13: Appendix 13. Influence of the abundance-weighted mean (AWM) of soil moisture on the maximum shell size, number of offspring, light preference, humidity preference, drought resistance and inundation tolerance in grasslands. Species in italics are land-use \u201cwinners\u201d, species in bold are land-use \u201closers\u201d. Additional file 14: Appendix 14. Relation of the abundance-weighted means (AWM) of the forest management index, proportion of non-native trees, proportion of dead wood with saw cuts, proportion of wood harvested, pH and soil moisture and the proportional occurrence of a certain species in forests. Additional file 15: Appendix 15. Relation of the abundance-weighted means (AWM) of the land-use intensity, mowing, grazing, fertilization, pH and soil moisture and the proportional occurrence of a certain species in grasslands."} +{"text": "Media reports that a company behaves in a socially nonresponsible manner frequently result in consumer participation in a boycott. As time goes by, however, the number of consumers participating in the boycott starts dwindling. Yet, little is known about why individual participation in a boycott declines and what type of consumer is more likely to stop boycotting earlier rather than later.
Integrating research on drivers of individual boycott participation with multi-stage models and the hot/cool cognition system suggests a \u201cheat-up\u201d phase in which boycott participation is fueled by expressive drivers, and a \u201ccool-down\u201d phase in which instrumental drivers become more influential. Using a diverse set of real contexts, four empirical studies provide evidence supporting a set of hypotheses on promoters and inhibitors of boycott participation over time. Study 1 provides initial evidence for the influence of expressive and instrumental drivers in a food services context. Extending the context to video streaming services, e-tailing, and peer-to-peer ridesharing, Study 2, Study 3, and Study 4 show that the reasons consumers stop/continue boycotting vary systematically across four distinct groups. Taken together, the findings help activists sustain boycott momentum and assist firms in dealing more effectively with boycotts. The online version contains supplementary material available at 10.1007/s10551-021-04997-9. In 2013, a TV documentary on the substandard work conditions of employees who were subcontracted by a leading e-tailer evoked strong reactions with consumers in Germany, many of whom decided to boycott the company. Consumers who initially joined the boycott (t0) may also be more likely to carry on boycotting during later stages (t1). Consistent with reports that initial egregiousness is a key driver of boycott participation in the initial phase (t0), possible changes in boycott participation should depend on perceived egregiousness at a later point in time (t1); beyond how perceived egregiousness may influence boycott participation at that point in time (t1), the decision should further depend on the consumer\u2019s initial decision to (not) join the boycott (t0).
By partially replicating studies on perceived egregiousness as a key driver of boycott participation, we model a consumer\u2019s later participation as a function of his or her initial decision and his or her current level of perceived egregiousness (t1). The previously discussed models of boycott participation have been limited to examining consumer motivations to boycott at one point in time. Suggesting that a temporal extension is needed, macro-level studies indicate that boycotts gradually lose participants and momentum over time. Perceived egregiousness will moderate the relationship between the initial boycott participation and participation at a later point in time: the higher the perceived egregiousness at t1, the stronger the influence of initial participation (t0) on later boycott participation (t1). At the macro level, boycotts can be categorized as instrumental or expressive; instrumental factors should become more influential over time, thereby leading to cognitive dissonance in the evaluation of the boycott and ultimately to changes in participation. As an expressive promoter, self-enhancement will influence boycott participation positively (i) at the initial stage (t0) and also (ii) at later stages (t1). As an expressive inhibitor, brand image will influence boycott participation negatively (i) at the initial stage (t0) and also (ii) at later stages (t1). As an instrumental promoter, perceived control will influence boycott participation positively at later stages (t1). As an instrumental inhibitor, subjective costs will influence boycott participation negatively at later stages (t1). We used measures that have been validated in previous studies of boycotting. However, because our focus is on examining boycott dynamics across a number of divergent business contexts, relevant characteristics of these contexts need to be accounted for.
For example, certain inhibitors may be particularly relevant in service contexts, especially inhibitors related to service intangibility and provider attributes. As an instrumental inhibitor, perceived service quality will have a negative effect on boycott participation at later stages (t1). As an instrumental inhibitor, the customer-friendly behavior of frontline employees will have a negative effect on boycott participation at later stages (t1). We collected data at two points in time, with a time lag of three weeks; 233 participants returned for the second set of measurements. Data sets from an additional 31 participants were subsequently dropped due to failing an attention check. We randomly recruited participants without screening for prior purchase of the company\u2019s products. A non-response analysis showed no systematic differences between participants who completed both questionnaires and those who dropped out after the first round. In the first round, we started the survey by presenting the vignette, followed by the request to list three more questionable actions attributed to the brand. Since most subjects repeated the content from the vignettes, we could verify that the participants had no doubts about the realism and credibility of the study. Furthermore, listing further immoral aspects helped us not only rule out individual differences among the participants in terms of the evaluation of the company's immoral behaviors but also reinforce the intended effect of the vignettes. In the second round, to avoid priming bias we merely stated that this study would be a follow-up to the one they had completed previously. In both rounds, we assessed boycott behavior, as well as inhibitors and promoters.
We adapted measures of boycott participation developed by Nerb and Spada, as well as established scales for the promoters and inhibitors. We ran regressions with boycott participation (t0), perceived egregiousness (t1), and the interaction term of both variables as the independent variables and boycott participation (t1) as the dependent variable. The boycott participation (t0) \u00d7 perceived egregiousness (t1) interaction was significant: initial boycott participation (t0) interacts with perceived egregiousness at t1 to affect boycott participation at t1. Effects vary systematically for expressive drivers (perceived egregiousness and brand image) and also for instrumental drivers (subjective costs and perceived control); furthermore, effects vary between the heat-up and the cool-down phases. Study 1 also highlights the role of frontline employees in dealing with boycotts. Especially in business contexts where frontline employees contribute substantially to customers\u2019 perception of the company, such as in fast food restaurants, perceived service quality overrides other influencers of boycott participation. This finding has strong practical implications, because increasing numbers of fast food chains implement self-order kiosks, thereby reducing direct contact between customers and employees. Our findings suggest that this approach can backfire in times of egregious acts because of a lack of opportunities for frontline employees to restore damaged customer relationships. Study 2 further disentangles the roles played by boycott drivers with distinct consumer groups. We explore temporal changes in a within-subjects design at two points in time with a time lag of two weeks. Study 2 examined how consumers respond to the alleged immoral behavior of a leading online movie streaming provider. Participants first read a vignette including news reports that the company offered morally questionable shows and movies. For example, a novel reality show format encouraged a group of actors to talk an unknowing participant into committing murder.
In a second case, a woman blamed the firm for her daughter\u2019s attempt to commit suicide. In a third case, a corporate comedian proudly announced that his show was the main catalyst for over 4,500 relationships being terminated, including many divorces. In light of this news, the vignette alleged that the company\u2019s customers started to boycott the firm and to switch to other providers. Again, we explained to participants that the presented media excerpts were real reports taken from actual news websites. Following the vignette, we again prompted participants to write down additional questionable behaviors of the firm. Again, subjects mostly repeated the content from the vignettes, indicating that they had no doubts about the realism and credibility of the study. The vignettes are displayed in Appendix A3. We collected data at two points in time with a time lag of two weeks. Recruited from MTurk, 533 U.S. residents took part in an online survey (Mage\u2009=\u200934.78, SDage\u2009=\u200910.31; 57% male), with 303 participants returning to continue at the second point in time. A non-response analysis indicated no significant differences between participants who completed the study at both times and those who dropped out after the first round. The overall design of the study was almost identical to that of Study 1, with all scales consisting of seven-point Likert-type ratings. Different from Study 1, we added a binary measure of boycott participation. To enhance the validity of the boycott scale, we included a behavioral measure by offering respondents an opportunity to participate in a lottery. Upon completing the survey, they could choose between four coupons, one of these valid with the video streaming company and three others valid with competitors. Choosing the coupon of the video streaming company was thought to indicate non-boycotting, while the other options were thought to indicate boycotting.
Accordingly, the coupons provide a dichotomous index of boycotting vs. non-boycotting. The chi-square test with the dichotomous boycott measure and the dichotomous coupon choice indicated a significant relation between both variables (\u03c72(1)\u2009=\u20093.51, p\u2009=\u20090.043). We ran binary logistic regressions with boycott participation (t0), perceived egregiousness (t1), and the interaction term of both variables as the independent variables, and a binary boycott participation measure as the dependent variable. All variables were mean centered, and boycott participation (t0) had a significant and positive effect on boycotting (t1), thereby supporting H1. In other words, increasing levels of perceived egregiousness in t1 enhanced the initial boycott behavior's positive influence on boycott participation in t1. Self-enhancement, the expressive promoter, influenced boycott participation significantly and positively in t0, and had a marginally significant influence in t1. Similarly, brand image, the expressive inhibitor, had significant negative effects in both t0 and t1. In contrast, perceived control, the instrumental promoter, had a marginally significant and positive effect on boycott participation only at t1. Among instrumental inhibitors, the effect of subjective costs on boycott participation was not significant at the two points in time, but perceived service quality had a significant negative influence in t1 only, as predicted in H3c. Respondents who exhibited an increase in boycott participation (Mt0\u2009=\u20092.67, Mt1\u2009=\u20093.93, t\u2009=\u200912.48, p\u2009\u2264\u20090.001) were labeled the Deliberators (\u0394boycottt1\u00a0\u2212\u00a0boycottt0\u2009>\u20090). Respondents who exhibited constant levels of boycott participation were labeled the Apathetic.
To split the large group of remaining respondents exhibiting a decrease in boycott participation (\u0394boycottt1\u00a0\u2212\u00a0boycottt0\u2009<\u20090), we additionally accounted for changes in the level of perceived egregiousness [(\u0394egregiousness\u2009<\u20090) or (\u0394egregiousness\u2009\u2265\u20090)]. Consumers who exhibited a decreasing boycott participation were assigned to one of two further types on this basis. As a check of the robustness of the results obtained with the psychometric measure, results with the behavioral measure corroborated that Apathetic consumers stay with the boycotted e-tailer more than do the other types. As with Study 2, we categorized respondents according to the four boycotter types. To examine drivers of boycotting among the four types, we again ran four regression models (one for each type), with perceived egregiousness, promoters, and inhibitors as the independent variables, and boycott participation as the dependent variable. The results again show that initial boycott participation (t0) interacts with perceived egregiousness at t1 to affect boycott participation at t1. Furthermore, boycott participation over time varies between four types of consumers due to the divergent influence of boycotting drivers. Specifically, effects vary systematically for expressive drivers (perceived egregiousness and brand image), and also for instrumental drivers (subjective costs and perceived control). In addition, they vary between the heat-up and the cool-down phases. Furthermore, the results highlight the importance of socially responsible human resource management for employee work behaviors. Study 4 examined a case in which a service of a leading peer-to-peer ridesharing provider had been suspended for trips originating at JFK Airport. This behavior caused approximately 500,000 consumers to delete the app, effectively boycotting the provider. A hashtag encouraging deletion of the app spread rapidly and widely through social media at the heart of the protest against the company. As before, we highlighted that the reports are taken from real news websites.
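The four-type categorization described above follows directly from the signs of \u0394boycott and \u0394egregiousness; a compact sketch (hypothetical 7-point ratings; function and variable names are illustrative, not from the paper):

```python
# Sketch of the four-type categorization from changes in boycott participation
# and perceived egregiousness between t0 and t1 (hypothetical respondent ratings).

def boycotter_type(boycott_t0, boycott_t1, egregious_t0, egregious_t1):
    d_boycott = boycott_t1 - boycott_t0
    d_egregious = egregious_t1 - egregious_t0
    if d_boycott > 0:
        return "Deliberator"   # starts/intensifies boycotting only later
    if d_boycott == 0:
        return "Apathetic"     # constant participation level
    # decreasing participation: split by the change in perceived egregiousness
    return "Forgetter" if d_egregious < 0 else "Capitulated"

print(boycotter_type(2, 5, 4, 4))  # Deliberator
print(boycotter_type(6, 3, 5, 2))  # Forgetter (boycott and egregiousness both fall)
print(boycotter_type(6, 3, 4, 6))  # Capitulated (boycott falls, egregiousness rises)
```

Type-specific regressions can then be run separately within each of the four groups, as described in the text.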
In our study, 283 U.S. residents took part in an MTurk survey (Mage\u2009=\u200934.81, SDage\u2009=\u20099.21; 75% male), with 220 participants completing all questions. MTurk randomly assigned participants to the study regardless of whether or not they were actual customers of the ridesharing company. Of our sample, 35 respondents had never used the ridesharing service and were therefore excluded, leaving a final sample of 185 participants for subsequent analyses. To increase validity, the questions assessing consumer knowledge of the case (65% had heard of the boycott) appeared at the beginning of the questionnaire such that respondents were not able to draw conclusions about study objectives. As with Study 3, respondents indicated their level of perceived egregiousness and boycott participation for both the time of the survey and, retrospectively, twelve months earlier (t0\u2009=\u2009the time when the boycott was initiated). Perceived egregiousness and boycott participation decrease significantly from the heat-up phase (t0) to the cool-down phase (t1). Further results indicate that perceived egregiousness (t1) interacts significantly with boycott participation (t0) to affect boycotting at t1, and that service quality influences boycott participation, while the boycotting level of the Apathetic remained constant. The Forgetters displayed a decrease in boycott participation in combination with a decrease in perceived egregiousness, whereas the Capitulated exhibited a decrease in boycott participation paired with an increase in perceived egregiousness. Again, self-enhancement was a significant driver across groups, contexts, and study designs. Boycotts rarely maintain their level of intensity over time. Activists should therefore try to decrease switching costs and to ease the switch to competitors. On the other hand, to reduce the likelihood of extended boycotting, managers should consider raising boycott-related barriers, such as the (non-)monetary value of seeking alternatives.
Third, we found that a high service quality can serve as an effective buffer against calls for boycotting. Companies should, therefore, maintain sufficiently high levels of service quality as a means to communicate more directly and competently with their customers. Similarly, activists should be aware of the high importance of service quality; to counter it, they could emphasize the better service of a competitor as a motivation to switch. Fourth, in many cases frontline employees are the main touch points linking the company with its customers. Others maintain higher levels of perceived egregiousness over time and stop boycotting only because of individual cost–benefit considerations (the Capitulated). Yet another consumer type starts boycotting only after a certain amount of time has passed (the Deliberators). Our findings can help boycott activists increase the success of their boycott in the long term. On the other hand, companies could use our findings to soften the negative boycott dynamics to avoid damage in the short term. In response to the boycott call, the company might address the cause and remedy the unethical behavior that initially triggered the boycotts. This study has a few limitations, offering opportunities for further research. Conceptually, we make use of established theories to put forward the two-stage model of emotional heating and cognitive cooling. We caution readers, however, that a hard distinction between the two stages may be misleading and a more nuanced view may be useful for obtaining further insights. Methodologically, our measurement of boycott participation is based on self-reported scales and past behavior. We validated these measures with a behavioral variable (lottery) in one of our studies. A few other avenues for future research need mentioning. From a conceptual perspective, our research is based on a cost–benefit model, and alternative models should be examined.
Prior research indicates that stakeholder attention after a company's misconduct varies, often failing to result in retribution. Fourth, our study did not fully explore the emotional costs associated with boycotts. For example, future studies could examine the influence of related concepts, such as brand attachment. Below is the link to the electronic supplementary material."} +{"text": "Objectives: To study the frequency of risk factors affecting the development of parastomal hernias in patients undergoing stoma formation.Study Design: A retrospective descriptive cross-sectional study.Duration of Study: This study was conducted at the Department of General Surgery between January 2017 and December 2020.Methodology: A total of 163 patients aged between 20 and 100 years who required stoma formation were included in the study. Patients with incomplete data and those lacking post-operative imaging were excluded. According to these selection criteria, 80 patients were excluded. The data were collected for all patients from the hospital database. This included each patient's demographic information, co-morbidities, pre-surgery patient characteristics, the indication for stoma formation, the location of stoma exit, type of surgery, subcutaneous fat thickness, and type of stoma formed. Data were analyzed using IBM SPSS Statistics for Windows, Version 26.0 (IBM Corp., Armonk, NY).Results: The mean age was 68.46 ± 16.50 years, with males in the majority: 48 (57.8%). Most of the patients, 53 (63.8%), had malignant disease. Post-stoma formation, a total of 38 (45.9%) patients developed parastomal hernias, mostly involving the sigmoid colon. However, there was a statistically significant relationship between parastomal hernia (PSH) incidence and non-trans-rectus stomas. Furthermore, malignancy was not an independent predictor of PSH.
All other risk factors included in this study were nonsignificant.Conclusion: Our study shows that the incidence of parastomal hernias is rising, with a high rate demonstrated in our patients. There was no statistically significant association between patient-related preoperative and operative factors and an increased risk of parastomal hernias in our population, except for a non-trans-rectus stoma, which was identified as an independent risk factor for parastomal hernias. Based on our findings, we would recommend a trans-rectus stoma over all other stoma sites. However, a much larger study is needed to validate this finding further. Stoma formation is among the most commonly performed surgical procedures to redirect gut contents for various reasons . It is reasonable to recognize the risk factors for parastomal hernias early and reduce the morbidity and mortality associated with this complication in colorectal surgeries. Hence, we conducted a retrospective study to investigate the frequency of occurrence of parastomal hernias in our center, along with the possible risk factors in our community.This was a retrospective, descriptive cross-sectional study conducted from January 2017 to December 2020 (four years) in the Department of General Surgery, Russells Hall Hospital, Dudley. All patients between 20 and 100 years of age, with at least one year of completed follow-up, who required stoma formation as part of the first operation were included in the study. A total of 163 surgeries for the construction of intestinal stomas were performed during the study period. Of the 163 patients, 29 did not have complete information and 51 had no post-operative imaging, so these were excluded. The remaining 83 patients, with complete records and post-operative imaging, were included in the study.
Pre-surgery features included co-morbidities, the indication for surgery, the etiology , and the confidential enquiry into perioperative deaths (CEPOD) level of intervention. Operative characteristics included the location of stoma exit, type of surgery, subcutaneous fat thickness, muscle thickness, and type of stoma formed. The primary outcome measure was to determine whether the site of stoma formation has any impact on parastomal hernias. CT scan was used as an adjunct to establish the relation, if any, between the fat thickness and the muscle thickness measured at the site of parastomal hernia. Data were analyzed using IBM SPSS Statistics for Windows, Version 26.0 (IBM Corp., Armonk, NY). Mean and SD were calculated for quantitative variables, while qualitative variables were recorded as frequency and percentage. An independent sample t-test was applied for comparison between the groups, and the significance level was set at p < 0.05.We studied 83 patients, with a mean age of 68.46 ± 16.50 years. Males accounted for 57.8% (n=48) of these patients. The majority of the patients were white (96.4%, n=80), and 48.2% (n=40) suffered from diabetes mellitus. A total of 22 (26.5%) were ASA class I, while classes II and III comprised 36 (43.3%) and 25 (30.2%), respectively.The majority of cases were performed electively: 57 (68.6%). The most frequently performed procedure was Hartmann's procedure: 34 (41%), followed by abdominoperineal resection, which accounted for 32 (38.6%) cases. The colon was used to form the stoma in 61 (73.5%). In 49 (59.1%), end stomas were formed, while the rest were loop ostomies. A majority of the stomas were trans-rectus: 48 (57.7%).
A total of 38 (45.9%) patients developed parastomal hernias, most of which involved the sigmoid colon. Increasing age, gender, diabetes mellitus, etiology, CEPOD level, operative time, and type of stoma did not correlate with an increased risk of parastomal hernia formation, as shown in Tables . We found no statistically significant association for fat thickness and rectus muscle thickness calculated from the CT scan , respectively, as shown in Table . The incidence of parastomal hernias has been on the rise in recent decades, not only due to increased surgeries resulting in the formation of stomas but also due to the frequent employment of imaging, such as computed tomography, for diagnosis of even small-sized hernias . Our study showed a positive relationship between the development of parastomal hernias and the location of the exit of the stoma, with a lower risk of occurrence if it was trans-rectus compared to trans-oblique or trans-oblique junctional stomas. Previous studies have shown that siting the stoma outside the rectus muscle is associated with an increased risk of developing hernias . It is highly likely that, due to the limited number of patients and the retrospective study design, we could not successfully identify any causal associations. However, it is essential to be aware of the findings of this study. We recommend a prospective study with a larger cohort of patients to ensure extra precautions can be taken to identify and address the patient-related risk factors and improve outcomes for patients who already have to bear the burden of reduced quality of life secondary to stoma formation.Our study shows that the incidence of parastomal hernias is on the rise, with a high rate shown in our patients as well.
There was no statistically significant association found between patient-related preoperative and operative factors and an increased risk of parastomal hernias in our population, except for a non-trans-rectus stoma, which was identified as an independent risk factor for parastomal hernias. Based on our findings, we would recommend a trans-rectus stoma over all other stoma sites; however, a much larger study is needed to further validate this finding."} +{"text": "Despite the ample progress made toward faster and more accurate Monte Carlo (MC) simulation tools over the past decade, the limited usability and accessibility of these advanced modeling tools remain key barriers to widespread use among the broad user community.An open-source, high-performance, web-based MC simulator that builds upon modern cloud computing architectures is highly desirable to deliver state-of-the-art MC simulations and hardware acceleration to general users without the need for special hardware installation and optimization.We have developed a configuration-free, in-browser 3D MC simulation platform—Monte Carlo eXtreme (MCX) Cloud—built upon an array of robust and modern technologies, including a Docker Swarm-based cloud-computing backend and a web-based graphical user interface (GUI) that supports in-browser 3D visualization, asynchronous data communication, and automatic data validation via JavaScript Object Notation (JSON) schemas.The front-end of the MCX Cloud platform offers an intuitive simulation design, fast 3D data rendering, and convenient simulation sharing. The Docker Swarm container orchestration backend is highly scalable and can support high-demand GPU MC simulations using MCX over a dynamically expandable virtual cluster.MCX Cloud makes fast, scalable, and feature-rich MC simulations readily available to all biophotonics researchers without overhead, and is freely accessible at http://mcx.space/cloud.
It is fully open-source and can be freely accessed at http://mcx.space/cloud. First, the adoption of massively parallel computing and graphics processing units (GPUs) has greatly improved the computational efficiency of conventional MC simulations, shortening the simulation run-time by tens- to hundreds-fold on a modern GPU. Compared to many published traditional research codes that were developed as single-release static software, an increasing number of new MC software packages have started tackling the challenges of usability and long-term maintainability. Many of these projects openly embrace state-of-the-art software engineering best practices and offer the software as a vibrantly growing platform via continuous enhancements, timely bug fixes, and active user support via flexible feedback channels. Ease-of-use has also become the focus of a number of recently published MC toolkits, where MATLAB-based dynamic library (MEX) interfaces and graphical user interfaces (GUIs) have been reported. With the exciting progress in developing open-source MC simulators with increasing speed, functionality, accuracy, and user-friendliness, we would like to tackle here the next major challenge in high-performance, general-purpose MC photon simulation software, namely scalability and availability. A number of previous publications, including several from our group, have addressed the challenges in creating scalable simulations that can utilize more than one GPU or run simulations across CPUs/GPUs of multiple vendors. In particular, a number of previous papers reported OpenCL-based MC implementations. In this work, we report a modern, scalable, high-performance, and fully open-source in-browser MC simulation platform—MCX Cloud—to bring state-of-the-art GPU hardware and our extensively optimized and feature-rich MCX simulator software to the rapidly growing biophotonics research community. Our MCX Cloud platform embraces an array of modern and standardized cloud-computing techniques.
In the backend, it utilizes Docker. A key advancement that enables us to develop such a compact, scalable and portable software/hardware platform is the adoption of the JavaScript Object Notation (JSON) data exchange format, which is at the core of most of today's web-based applications. Compared to XML, JSON is extremely lightweight and fast to parse, yet it is capable of storing complex hierarchical data. Numerous free and lightweight JSON parsers are available today for nearly all existing programming languages, permitting plug-and-play implementation of JSON data support in most applications.Despite these aforementioned advantages, the adoption of JSON for storing scientific data is largely limited to handling lightweight metadata. This is because JSON does not have explicit rules on how to serialize common scientific data structures such as N-dimensional (N-D) arrays, complex and sparse arrays, tables, graphs, trees, etc. Additionally, JSON does not directly support storage of strongly-typed binary data. To bridge this gap, our group published an open standard—the JData Specification. In addition to using JSON to encode input data, we have also completed the migration of MCX volumetric output data as of 2020, converting from the NIfTI data format. A key benefit of adopting JSON-based data formats is to enable machine-automatable data validation. This can be readily achieved using the JSON Schema framework. JSON Schema is a systematic approach to defining data types, formats, and properties for each data entry in a JSON data structure, and is currently a proposed Internet standard by the Internet Engineering Task Force (IETF).The front-end, i.e., the web GUI, of MCX Cloud consists of two major components—an in-browser JSON data editor to create JSON-formatted input data for MCX simulations and a 3D data rendering module based on WebGL (see below).
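To make the schema-driven validation idea concrete, the sketch below implements a deliberately simplified stand-in for a JSON Schema validator in Python. The field names (`Session`, `Domain`, `Photons`) are illustrative only, not MCX's actual input format, and a real deployment would use a full JSON Schema implementation (e.g. the `jsonschema` package or a browser-side validator) rather than this toy checker:

```python
import json

# Hypothetical, much-simplified "schema": each entry names a required flag
# and an expected Python type. Real JSON Schema is far richer than this.
SCHEMA = {
    "Session": {"type": dict, "required": True},
    "Domain":  {"type": dict, "required": True},
    "Photons": {"type": int,  "required": False},
}

def validate(doc, schema=SCHEMA):
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    for key, rule in schema.items():
        if key not in doc:
            if rule["required"]:
                errors.append(f"missing required field: {key}")
        elif not isinstance(doc[key], rule["type"]):
            errors.append(f"field {key!r} must be {rule['type'].__name__}")
    return errors

good = json.loads('{"Session": {"ID": "demo"}, "Domain": {"Dim": [60, 60, 60]}}')
bad  = json.loads('{"Session": {"ID": "demo"}, "Photons": "1e7"}')

print(validate(good))  # -> []
print(validate(bad))   # -> ['missing required field: Domain', "field 'Photons' must be int"]
```

The point mirrors the text: because the rules live in data rather than code, a submission can be rejected with a precise error message before any GPU time is spent.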
The web-based MCX JSON input editor was derived by combining an open-source general-purpose JSON editor developed by Jeremy Dorn et al. with our JSON schema for the MCX input data format. The JSON editor module is a lightweight (73 kB in size) JavaScript library that enables the creation and editing of arbitrary JSON-formatted data using a user-defined schema. It also simultaneously supports a number of popular web GUI frameworks and icon libraries to improve customizability.A minimalistic design style is used to provide users with a clean and streamlined environment to create, preview, execute, render, and easily share MCX simulations. All front-end functionalities are achieved using a combination of HTML5 and JavaScript programming. Notably, the use of the jQuery library makes the front-end compact and easy to maintain.In the front-end of MCX Cloud, we have developed fully featured 3D shape and volumetric data rendering and download functionalities. In comparison, the web GUI of MCOnline only provides rendering and data downloading for a particular case. Our MCX JSON input file accepts two methods for defining a heterogeneous simulation domain; the first is a constructive solid geometry (CSG) approach using a list of shape primitive constructs such as spheres, boxes, and cylinders. The client and the server communicate via asynchronous data communication, known as AJAX (asynchronous JavaScript And XML). Despite the name, JSON, instead of XML, has been predominantly used for data exchange in today's web applications.
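Although MCX Cloud's own server is a Perl CGI script, the JSONP convention it relies on is language-agnostic. As a minimal sketch of the server-side half of the exchange (the callback name and response fields here are illustrative, not the platform's actual protocol):

```python
import json

def jsonp_wrap(payload, callback="handleResponse"):
    """Wrap a JSON-serializable payload in a JSONP callback invocation.

    The browser loads the returned string as a <script>, which calls the
    named JavaScript function with the decoded object as its argument.
    """
    return f"{callback}({json.dumps(payload)});"

# A toy job-status response such as a server might return (illustrative fields).
reply = {"status": "done", "jobid": "a1b2c3"}
print(jsonp_wrap(reply))
# -> handleResponse({"status": "done", "jobid": "a1b2c3"});
```

On the client side, the named function simply receives the object and updates the page in place, which is what allows the GUI to refresh without a full page reload.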
User inputs are encoded as lightweight JSONP (JSON with Padding) data packets and sent to the server; the server sends back the response, also encoded as JSON packets, and informs the JavaScript on the web GUI to update the web page content dynamically without needing to reload the entire web page.To facilitate the processing of user submissions and the management of Docker Swarm jobs, we developed an ultra-compact common gateway interface (CGI) script, named "mcxserver," written in the Perl programming language to handle user-submitted job requests. These submitted simulation data are stored in a database using SQLite. To optimize server disk usage, we define a job expiration time window (currently set to 1 h) and configure another recurrent process (known as a cron job) to automatically clean the expired job folders to save space. If a simulation is frequently executed by users, such as the default simulation or built-in examples, we keep the simulation output folder in a cache folder to avoid repeated computation.Guided by the FAIR principle, and following the methodologies discussed above, we have created a preview version of the MCX Cloud simulation platform. The GUI design of MCX Cloud's front-end and the 3D domain rendering functions of the web GUI are demonstrated in the accompanying figures. Our 3D in-browser rendering tool also automatically renders MCX-computed fluence maps, also encoded in the JSON/JNIfTI format, returned by the server after the computation is completed. To demonstrate that one can use MCX Cloud to distribute a large simulation across multiple GPU devices installed in the Docker Swarm, we launched the digimouse benchmark simultaneously on 10 GPUs installed on the backend. Over the past decade, MC-based photon transport simulation has gained ample progress in terms of speed and accuracy in modeling increasingly complex anatomical structures.
A number of free and open-source MC simulators with various levels of functionality have been developed, published, and actively maintained by a number of research groups. While some of these open-source toolkits have successfully attracted a sizable user community, most of these tools were disseminated using a conventional download-and-install approach. In addition, many high-performance MC simulators require purchasing and installing high-end graphics cards on users' own computers to maximize efficiency. For less-experienced users, properly configuring and using these specialized simulation tools can be key barriers.This work specifically addresses challenges regarding the usability and availability of MC simulators as mentioned above. In particular, we described an in-browser GPU-accelerated MC simulator and cloud-based service that can be launched anywhere a browser is available, including mobile devices such as a smartphone or a tablet. This system combines our decade-long, continual development of the MCX light transport simulation software with state-of-the-art cloud-computing platforms, and offers a robust, scalable and forward-looking framework for a standardized, high-demand, high-throughput and community-focused MC modeling platform. Compared to the previously published online MC simulator, this new platform embraces the latest technologies in microservices, cloud computing (containerization and orchestration), and web-based GUI design, and demonstrates high flexibility and scalability that were not previously available.We cannot emphasize enough how adopting a standardized and web-friendly input/output data format in JSON/JData greatly simplified or even directly enabled the implementation of this lightweight yet highly versatile web-based platform. To be more specific, utilizing JSON to encode MCX's input/output data allowed us to seamlessly integrate them with JavaScript and a web environment.
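As one concrete illustration of such JSON-based encoding, an N-D array can be flattened and annotated with JData-style metadata so that plain JSON carries its shape and type. The sketch below is a simplified reading of that idea: the `_ArrayType_`/`_ArraySize_`/`_ArrayData_` keys follow the JData convention, but this is an illustration under stated assumptions, not a complete implementation of the specification:

```python
import json

def pack_array(nested, dtype="double"):
    """Annotate a nested-list N-D array with JData-style metadata keys."""
    # Infer the array shape by probing the first element at each level.
    size = []
    probe = nested
    while isinstance(probe, list):
        size.append(len(probe))
        probe = probe[0]
    # Flatten the nested lists into a single row-major data vector.
    flat = []
    def _flatten(x):
        if isinstance(x, list):
            for item in x:
                _flatten(item)
        else:
            flat.append(x)
    _flatten(nested)
    return {"_ArrayType_": dtype, "_ArraySize_": size, "_ArrayData_": flat}

fluence = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]   # a toy 2x3 "volume"
encoded = json.dumps(pack_array(fluence))
print(encoded)
```

Because the result is ordinary JSON, any browser-side script can parse it with `JSON.parse` and reconstruct the array from the recorded shape, which is what makes in-browser rendering of simulation output straightforward.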
Also, defining MCX's input data using a JSON schema allows the JSON Editor library to automatically create the JSON editing interface in our front-end. This in-browser JSON editor is not only intuitive to use, but also generates JSON data that automatically satisfies the specified schema. Similarly, adopting JSON and JData data annotations also allows MCX to store complex output data records, including volumetric fluence rate, partial pathlengths, and various lightweight metadata in a unified, easy-to-read JSON format that can be readily transmitted, parsed and rendered inside a browser.Although we use MCX at the backend to perform the underlying MC computation, our cloud computing system can be readily adapted to use any other MC simulator, as long as the alternative simulator also supports JSON/JData as the input/output data format and provides the corresponding JSON schema of the desired input JSON data structure (which can be entirely different from that of MCX). For the same reason, our current web GUI can be directly used in combination with MCX-CL. Moving forward, we aim to complete the migration of our MMC simulator. The next step of our project also includes further solidification and dissemination of the JData specification. The platform is freely accessible at http://mcx.space/cloud.In summary, we report a highly scalable, easy-to-use, and cloud-computing-based in-browser MC simulation platform—MCX Cloud. This platform was built upon an array of modern open-source technologies, including the use of Docker containers and container orchestration to run GPU-based MC simulations across a robust, elastic, scalable, and distributed virtual GPU cluster. It also leverages the latest web-based technologies, such as JSON, JSON schema, AJAX, and WebGL, to create an intuitive, easily expandable, and responsive web GUI.
At the core of this cloud computing platform is our significantly improved MCX photon transport simulator, packaging numerous enhancements in GPU optimization and algorithmic features that we have developed and integrated over the past decade. We want to particularly highlight that this platform is fully open-source—we not only provide the source code for the MCX simulator, but also that for the web GUI and server-side scripts—so that anyone can build a private cloud for internal use or modify these scripts to accommodate other similar solvers. In the meantime, we have built an initial GPU cloud containing"} +{"text": "Correction to: Blood Cancer Journal 10.1038/s41408-021-00468-6, published online 19 April 2021. After the publication of the original article, the author list was updated as given here: Laura Duran-Lozano1, Gudmar Thorleifsson2, Aitzkoa Lopez de Lapuente Portilla1, Abhishek Niroula1,3, Molly Went4, Malte Thodberg1, Maroulio Pertesi1, Ram Ajore1, Caterina Cafaro1, Pall I. Olason2, Lilja Stefansdottir2, G. Bragi Walters2, Gisli H. Halldorsson2, Ingemar Turesson5, Martin F. Kaiser4, Asta Försti6, Hartmut Goldschmidt7, Kari Hemminki6, Niels Weinhold8, Niels Abildgaard9, Niels Frost Andersen10, Ulf-Henrik Mellqvist11, Anders Waage12, Annette Juul-Vangsted13, Unnur Thorsteinsdottir2,14, Markus Hansson1,5, Richard Houlston4, Thorunn Rafnar2, Kari Stefansson2,14, Björn Nilsson1,3. Affiliations: 1Hematology and Transfusion Medicine, Department of Laboratory Medicine, 221 84 Lund, Sweden. 2deCODE genetics, Sturlugata 8, IS-101 Reykjavik, Iceland. 3Broad Institute, 415 Main Street, Cambridge MA 02124, USA. 4Division of Genetics and Epidemiology, The Institute of Cancer Research, 123 Old Brompton Road, London SW7 3RP, United Kingdom. 5Hematology Clinic, Lund University Hospital, 221 85 Lund, Sweden. 6German Cancer Research Center (DKFZ), Im Neuenheimer Feld 580, D-69120, Heidelberg, Germany.
7Department of Internal Medicine V, University of Heidelberg, Heidelberg, Germany. 8Department of Internal Medicine V, University Hospital of Heidelberg, 69120 Heidelberg, Germany. 9Hematology Research Unit, Department of Clinical Research, University of Southern Denmark and Department of Hematology, Odense University Hospital, Denmark. 10Department of Haematology, Aarhus University Hospital, 8200 Aarhus N, Denmark. 11Södra Älvsborgs Sjukhus Borås, Sweden. 12Institute of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Department of Hematology, and Biobank1, St Olavs hospital, Trondheim, Norway. 13Department of Haematology, University Hospital of Copenhagen at Rigshospitalet, Blegdamsvej 9, DK-2100 Copenhagen, Denmark. 14Faculty of Medicine, University of Iceland, Reykjavik, Iceland."} +{"text": "The term heterotopic pregnancy is defined as a uterine pregnancy coexisting with a second pregnancy in an extrauterine location. Spontaneous, full-term heterotopic pregnancy with a live birth is very rare. The diagnosis and management of such an exceptionally unique case are difficult. When the patient presents in advanced labor with no antenatal care follow-up and with no risk factors, the case is even more challenging for poorly equipped facilities like ours.A 25-year-old gravida 3, para 2 mother presented to the labor and delivery ward of Bele Primary Hospital, Southern Ethiopia with the complaint of pushing-down pain of 18 h duration. Immediately after arrival, she gave birth to a 3300 g female neonate spontaneously. After delivery, an abdominal mass was recognized and manual exploration of the uterus was done to look for the presence of an after-coming second twin, but the uterus was empty. On ultrasound examination, there was a live fetus in transverse lie outside the uterus. With the impression of a second twin in a separate horn of a bicornuate uterus and to rule out abdominal pregnancy, laparotomy was done.
On laparotomy, there was an abdominal pregnancy in the Pouch of Douglas with an intact amniotic sac. The sac was attached to the left broad ligament, left ovary, small bowel mesentery, and posterior wall of the uterus. The sac was opened, and a 1600 g live female neonate with features of fetal growth restriction and a left club foot was delivered. The placenta detached spontaneously and was removed without any complication.The coexistence of a spontaneous full-term intrauterine pregnancy with an advanced abdominal ectopic pregnancy is one of the rarest forms of heterotopic pregnancy. Every health professional should bear in mind that intrauterine and extrauterine pregnancy may happen simultaneously and can progress to term without any symptoms. Ultrasound is the diagnostic method of choice, but the existence of an intra-uterine pregnancy cannot rule out ectopic pregnancy. The life-threatening complication of abdominal ectopic pregnancy is bleeding from the detached placental site. Therefore, the decision to remove the placenta should be individualized. The word heterotopic pregnancy is used in place of the older term combined pregnancy . Risk factors for the development of heterotopic pregnancy are any events that can lead to scarring of the fallopian tube . Because of the rare occurrence of heterotopic pregnancy, there is little agreement on the optimal surgical management . Here we present an exceptional case of spontaneous heterotopic pregnancy which progressed to full term with good perinatal outcomes for both the intrauterine and extrauterine fetuses.A 25-year-old gravida 3, para 2 mother presented with the complaint of advanced labor pain of 18 h duration. She came by ambulance from a rural health center 35 km away to Bele Primary Hospital, Wolaita Zone, Southern Ethiopia. The mother did not remember her last normal menstrual period but claimed to have been amenorrheic for the last 9 months.
During the current pregnancy, she had no antenatal care visit, no history of vaginal bleeding, no abdominal pain, and no other danger signs of pregnancy. She had no previous history of pelvic inflammatory disease (PID) or pelvic surgery. She also had no history of contraceptive use. Both her last deliveries were at home with no complications. During the physical examination, her vital signs were in the normal range, with pink conjunctivae and non-icteric sclerae. On abdominal examination: a 38-weeks-sized uterus, fetal heart beat of 148 bpm, cephalic presentation, longitudinal lie; multiple fetal poles were not appreciated; there were 3 uterine contractions in 10 min with moderate strength, and the bladder was not distended. On the genito-urinary examination (per vagina), the cervix was fully dilated, vertex presentation, fetal head visible at the vulva, normal position, no sign of caput or molding; the membrane was ruptured with clean amniotic fluid.Basic laboratory investigations were done: her hematocrit level was 35%, and her blood group was "O positive". Serologic tests for HIV, hepatitis, and syphilis were all negative, and urinalysis was also negative on microscopic exam.Vaginal delivery summary: this mother gave birth to a live female neonate weighing 3300 g, with Apgar scores of 8 and 9 at the 1st and 5th minutes respectively, by spontaneous vaginal delivery, and the 3rd stage of labor was managed actively. Manual exploration was done to look for a second twin, and speculum examination was also performed to explore the presence of an additional cervical canal and double uterus, but only one cervical opening was appreciated. The posterior fornix was bulged. But after the delivery of the neonate, her abdomen showed three tumor features, i.e., a contracted 20-weeks-sized uterus and palpable masses at both the left and right upper quadrants. The mass was non-tender, slightly hard and smooth on the left side posterior to the uterus, and irregular on the right side (Fig. ).
On ultrasound examination, a fetus was seen in an intact amniotic sac with scanty fluid posterior to the empty uterus. The fetus was in a transverse lie; the head and placenta were at the left upper quadrant below the spleen, and its abdomen and extremities were towards the right upper quadrant of her abdomen. The fetal heartbeat was normal and no gross congenital anomaly was identified. Referral was planned for the impression of a second twin in a separate horn of the bicornuate uterus and to rule out abdominal pregnancy. But due to economic reasons, the patient refused referral. Then, after obtaining informed written consent and preparing two units of cross-matched whole blood, the patient was taken to the operating room. The abdomen was entered through a midline vertical skin incision. Apgar scores were assessed at the 1st and 5th minutes respectively. The placenta was delivered spontaneously without resistance from its site of attachment. Small bleeders from the placental detachment site were controlled by multiple ligations. The normal anatomy of the left adnexa was distorted and it was difficult to identify the ovary . In most reported cases, the heterotopic pregnancy was diagnosed during laparotomy or laparoscopy. Abdominal pregnancy is an alarming obstetric phenomenon . The most frequent site of EP implantation in heterotopic pregnancy is the tube (89.2%), and abdominal heterotopic pregnancy is one of the rarest types; Tal et al. reported that out of 139 heterotopic pregnancies conceived by ART, 3 were abdominal. The diagnosis of abdominal-heterotopic pregnancy is a more complicated task . These are consistent with our sonographic findings, except that the malformation was not identified. Otherwise, there was an empty uterus, and the fetal pole was in a transverse lie outside the uterus posteriorly. There was also oligohydramnios with a positive fetal heartbeat.
Therefore, the diagnosis of abdominal heterotopic pregnancy was suspected by ultrasound examination and confirmed by laparotomy. The most important issue in managing advanced abdominal pregnancy is placental management. This is a rare case of spontaneous heterotopic pregnancy with an advanced abdominal ectopic in which both the intrauterine and the extra-uterine pregnancies survived. This case was also diagnosed and managed in a rural district hospital by mid-level professionals (non-physician surgeons). Our patient had two home deliveries and had no ANC follow-up during the current pregnancy. She has a low socioeconomic status and could not afford referral to a higher institution for better management of both herself and the low-birth-weight baby. Based on the findings of this case and our literature review, the following conclusions can be made. Every health professional should bear in mind that intrauterine and extrauterine pregnancies may happen simultaneously and can progress to term without any symptoms. Therefore, a high degree of suspicion is needed when an abdominal mass is encountered after delivery of the IUP. Abdominal ectopic pregnancy is a grave obstetric condition that needs early diagnosis and prompt management. Ultrasound is the diagnostic method of choice, but the existence of an IUP cannot rule out ectopic pregnancy; therefore, the adnexa should be routinely examined during the first-trimester scan. The life-threatening complication of abdominal ectopic pregnancy is bleeding from the detached placental site. Therefore, the decision to remove the placenta should be individualized."} +{"text": "Yet, despite over 30 years of study, the exact role(s) mtDNA mutations play in driving aging and its associated pathologies remains under considerable debate.
Furthermore, even fundamental aspects of age-related mtDNA mutagenesis, such as when mutations arise during aging, where and how often they occur across tissues, and the specific mechanisms that give rise to them, remain poorly understood. In this review, we address the current understanding of somatic mtDNA mutations, with an emphasis on when, where, and how these mutations arise during aging. Additionally, we highlight current limitations in our knowledge and critically evaluate the controversies stemming from these limitations. Lastly, we highlight new and emerging technologies that offer potential ways forward in increasing our understanding of somatic mtDNA mutagenesis in the aging process. Mitochondria are the main source of energy used to maintain cellular homeostasis. This aspect of mitochondrial biology underlies their putative role in age-associated tissue dysfunction. Proper functioning of the electron transport chain (ETC), which is partially encoded by the extra-nuclear mitochondrial genome (mtDNA), is key to maintaining this energy production. Aging is broadly defined as the progressive loss of physiological homeostasis over time and is marked by significant alterations at both the molecular and cellular levels that are associated with an ever-increasing probability of pathology and death. These changes have been broadly categorized into nine types or hallmarks. With only a few noted exceptions, mitochondria are the main source of cellular energy in eukaryotes. These organelles process dietary reducing equivalents and oxygen through the electron transport chain (ETC) to produce ATP via oxidative phosphorylation (OXPHOS). Mitochondria are also involved in other important cellular functions such as calcium signaling, iron-sulfur cluster biosynthesis, lipid biosynthesis, and apoptosis.
As a consequence of an endosymbiotic event ∼2 billion years ago that gave rise to mitochondria, these organelles have retained a small, rudimentary genome that, in animals, is comprised of a circular double-stranded DNA molecule present in dozens to thousands of copies per cell. mtDNA also exhibits an elevated de novo mutation rate compared to nDNA. A significant body of observational data suggests that the genetic instability of mtDNA in somatic cells is a fundamental phenotype of aging. Numerous studies have shown that mtDNA deletions increase across human tissues during aging. It has also been reported that OXPHOS-deficient cells harbor more somatic mtDNA deletions in Parkinson's disease (PD). Somatic mtDNA mutations have likewise been linked to other age-related conditions, including sarcopenia, macular degeneration, and heart disease, suggesting a potential role in their pathogenesis. Collectively, these studies have provided evidence that the accumulation of somatic mtDNA mutations is a phenotype of aging and a potential causative process in several age-related diseases, but questions still remain about the true prevalence of somatic mtDNA mutations in the context of normal aging. This is especially pertinent given the heterogeneous nature of tissue decline during aging; to date, no such detailed survey has been conducted, especially with the newest generation of high-accuracy methods. In mice, the original Pol-γ mutator model exhibited a dramatically accelerated aging phenotype across organ systems and a ∼50% reduction in lifespan. A second model, assessed with a method having a significantly reduced background compared to "clone and sequence" approaches, showed evidence of mitochondrial dysfunction but no apparent effect on physical performance.
Given that this second model exhibited no apparent effect on mtDNA copy number or PolG expression level, the reason for the discrepancy in lifespan and aging phenotypes between the two models is unclear. Dissecting the importance of somatic mtDNA mutations in invertebrates has proven to be equally complicated, if for differing reasons. As originally noted in studies statistically modeling the accumulation of mutations during aging, short-lived and post-mitotic organisms are unlikely to generate and clonally expand mutations to a level that is expected to interfere with tissue function. In addition to D. melanogaster, a heterozygous C. elegans Pol-γ exo− mutator model has also been developed, with phenotypes similar to those of the D. melanogaster and mouse Pol-γ exo− mutator models. These include a ∼70-fold increase in mtDNA mutations (determined by RMC), an elevation in mitochondrial dysfunction, and shortened lifespan. Mutator models based on the cytidine deaminase APOBEC1 have also been described. A general caveat of Pol-γ mutator models in both vertebrates and invertebrates is that the loss of exonuclease activity affects polymerase processivity and likely interferes with the final processing steps of replication. An alternative approach uses Pol-γ variants with single amino acid changes (either Q1009A or H1038A) corresponding to highly conserved amino acids that confer an anti-mutator phenotype in the evolutionarily related E. coli DNA polymerase I. Screens of E. coli DNA polymerase I have demonstrated a number of mutants with increased fidelity of base selectivity. Similarly, a screen of the yeast Pol-γ homolog, Mip1, identified A256T (A300 in human Pol-γ) as a moderate anti-mutator without any apparent loss in polymerase activity, suggesting that such screening would likely be fruitful in identifying other variants with substantially reduced mutation rates that could be used to directly test whether lowering mtDNA mutagenesis extends lifespan.
This phenomenon refers to genetic heterogeneity that arises from post-zygotic mutations and propagates within tissues over time due to cell division and tissue remodeling. While the biology underlying this phenomenon is poorly understood, specific mutations are frequently observed to accumulate to high levels within individual cells and tissues and are frequently associated with defects in OXPHOS. These mosaic fields have been observed in numerous different tissue types in both aging and disease and appear to have some level of tissue specificity. Phenotypically, clonal fields are largely characterized by patches of tissue, some quite large, exhibiting a loss of OXPHOS activity. For example, work on colonic epithelium has shown that normal aging is associated with an increase in clonal fields lacking OXPHOS activity that are the result of mtDNA mutations. These observations intersect with the free radical theory of aging, which proposes that aging is the result of accumulating molecular damage caused by free radical species such as ROS that are a normal by-product of cellular metabolism. Oxidation of deoxyguanosine to the 8-oxo-7,8-dihydroguanine (8-oxo-dG) lesion in mtDNA has been found to accompany normal aging in multiple tissues. Mitochondria have several antioxidant defenses thought to prevent or repair oxidative damage to their genome, and several glycosylases with overlapping and complementary activities are active on mtDNA to remove ROS-induced damage to deoxyguanosine. Oxidative damage may also be limited in vivo by lower oxygen concentrations and the presence of reducing equivalents such as thiols in the cellular environment. Taken together, growing evidence argues that base misincorporation errors by Pol-γ, rather than oxidative lesions, are the primary driver of point mutations seen during aging, with mutations distributed across the genome and the control region (CR). This leaves either replication errors by Pol-γ or spontaneous deamination of cytidine and adenosine as the leading candidate mechanisms.
However, the possibility remains that ROS may be a driver of deamination itself or an indirect mutagen arising from damage to Pol-γ. One aspect that has received only minor attention is the interplay between DNA damage, repair pathways, and replication, and the possibility that no single mechanism is the driver of mutagenesis. Experiments that attempt to integrate these different aspects of mtDNA biology are likely to be informative in teasing out the main source(s) of mutations. Much like the experimental work aimed at teasing out the role of mtDNA mutations in driving aging, the molecular source of endogenous mutations has proved equally enigmatic. Results from several different methods have firmly established that the main source of mutation is unlikely to be 8-oxo-dG, contrary to the long-standing free radical theory of aging. Instead, focus has shifted to either base misincorporation errors by Pol-γ or spontaneous deamination. The invention of PCR, along with its continued refinement as a key technology, has steadily increased the ability to detect rare mutations down to the level of individual cells or even individual mtDNA molecules. However, conventional PCR-based assays remain limited to relatively high variant frequencies, making the detection of mutations occurring below ∼10⁻³ difficult. Another significant limitation is that PCR assays must be optimized for each new genome or target area. The advent of NGS allows for the digital tabulation of many individual DNA fragments in parallel, offering the unique ability to detect low-level nucleic acid species within heterogeneous mixtures. This has led to routine sequencing of the entire mtDNA molecule with more accuracy and dramatically increased throughput, resulting in a significant increase in our understanding of mitochondrial genetics in disease. Error-corrected approaches have since lowered sequencing backgrounds to as little as ∼10⁻⁸. NGS is typically performed on bulk DNA from thousands to millions of cells, which necessarily decouples variant phasing information between any two reads, resulting in a loss of important biological information, such as whether a variant is homoplasmic or heteroplasmic within any given cell.
This limitation is important given the phenotypic threshold effect imparted by different heteroplasmy levels (Wallace). The deployment of these technologies to study de novo mtDNA mutations has, so far, been limited. Due to the much higher mutation rate of mtDNA, most applications have instead used de novo mutations as a type of barcode to perform cell lineage tracing. To date, most single-cell approaches involve the dissociation of cells, which results in the loss of spatial information regarding how mtDNA mutations are distributed within a tissue. The lack of this information is likely an important aspect that hinders the ability to interpret their potential physiological impact. Spatial genomics is a rapidly emerging field of research that aims to layer spatial information onto sequencing data at or near single-cell resolution. Current technologies have largely focused on obtaining the distribution of transcripts within tissues, which, much like single-cell approaches, could be harnessed to obtain the spatial distribution of mtDNA heteroplasmies. To date, two main approaches have been developed that are likely amenable to the study of mtDNA mutations in situ with only minor modifications: spatially-aware RNA-Seq and direct in situ sequencing. Separating the effects of aging-linked de novo mutations from inherited and early-arising mutations, a confounder in many current models, is also an important question in need of attention. Regardless of the role mtDNA mutations play in driving aging directly, understanding their etiology and biology will likely be important in our understanding of why certain age-related diseases, such as AD or PD, frequently exhibit elevated mutations.
In addition, the etiology of mutations could provide clues to more direct drivers of aging, as well as to inherited disease-causing variants. The accumulation of somatic mtDNA mutations over time is undoubtedly a phenotype of aging. However, establishing the physiological impact of these mutations, especially as it pertains to aging, has proved difficult. A number of animal models have been developed with the intention of directly testing the impact of elevated mutagenesis in driving aging phenotypes. Variability among study designs and the techniques used to assess somatic mtDNA mutation has made initially strong conclusions more nuanced and, in some cases, has led to apparently conflicting results. The major differences among studies, related to the type of sample, the technique chosen to detect somatic mutation, and the analysis, have all played a part in adding to the confusion. More rigorous study designs looking at not just lifespan, but also molecular and physiological phenotypes across many tissue and cell types, will be important in more firmly establishing a role of mtDNA mutations in aging. In addition, conventional sequencing platforms are affected by ex vivo DNA oxidation and base-calling errors, limiting their ability to detect low-level mtDNA mutations, which occur at frequencies ∼100-fold below the error background of these platforms. Error-corrected NGS methods have been developed and are increasingly being used, but conventional NGS is, unfortunately, still the norm. These more modern high-accuracy methods have changed our view of the mutagenic processes that act on mtDNA and need to be more frequently used in studies related to somatic mutagenesis. Some missing aspects in our understanding of the biology of mtDNA mutations will undoubtedly be helped by the use of ever-improving technologies. Previously used techniques to detect mtDNA mutations, such as standard or long-range PCR, have been slowly replaced by NGS.
NGS has substantially improved the accuracy of mtDNA variant detection, and it is now relatively easy to sequence the entire mtDNA in a population. However, NGS is strongly affected by PCR-induced errors, pseudogene artifacts, ex vivo DNA oxidation, and base-calling errors. While an important step forward, most NGS studies are limited by their frequent use of tissue homogenates as a source of DNA, which necessarily removes cell- and tissue-specific relationships that could provide important information regarding phenotypic thresholds, clonality, and heteroplasmy, all important aspects of establishing the impact of potentially pathogenic mtDNA mutations. Laser capture microscopy has partially addressed these issues and has established that clonal expansion of mtDNA mutations is a biological phenomenon. However, the approach is likely too low-throughput to deploy at the scale needed to better understand the true burden of clones within a tissue. Embracing emerging single-cell and spatial technologies will allow for a fuller understanding of how mtDNA mutations impact cell and organ function in a more proper biological context. Taken together, it is clear that much remains to be done in order to more fully understand mitochondrial mutagenesis. With the emergence of new capabilities, the future remains bright that a number of long-proposed hypotheses related to when, where, and how somatic mtDNA mutations influence aging and disease will soon be answerable."} +{"text": "Talaromyces marneffei causes life-threatening opportunistic fungal infections in immunocompromised patients. It often has a poorer prognosis in non-human immunodeficiency virus (HIV)-infected than in HIV-infected individuals because of delayed diagnosis and improper treatment. A 51-year-old man presented with complaints of pyrexia, cough, and expectoration that had lasted for 15 days.
The patient had been taking anti-rejection medication since a kidney transplant in 2011. Diagnoses: T marneffei pneumonia; post renal transplantation; renal insufficiency; hypertension. Intravenous moxifloxacin was administered on admission. After the etiology was established, moxifloxacin was discontinued and replaced with voriconazole. The tacrolimus dose was adjusted based on the blood concentrations of tacrolimus and voriconazole. The patient was successfully treated and followed up without recurrence for 1 year. A high degree of caution should be maintained for the possibility of T marneffei infection in immunodeficient non-HIV patients who live in or have traveled to T marneffei endemic areas. Early diagnosis and appropriate treatment can prevent progression of T marneffei infection and achieve a cure. Metagenomic next-generation sequencing (mNGS) can aid the physician in reaching an early pathogenic diagnosis. Close monitoring of tacrolimus and voriconazole blood levels during treatment remains a practical approach at this time. Talaromyces marneffei is responsible for invasive fungal infections, with the highest prevalence in Southeast Asia and southern China. It can cause both local and disseminated infections. The latter are more common, involve the monocyte-macrophage system, and often cause pneumonia before spreading to the liver, spleen, bone marrow, lymph nodes, skin, and other organs via the lymph and blood circulation. The symptoms reflect the corresponding multi-organ damage. With an increasing occurrence in human immunodeficiency virus (HIV)-negative hosts, talaromycosis has emerged as a serious public health concern. This case of T marneffei pneumonia occurred in an HIV-negative kidney transplantation patient. A 51-year-old man was admitted to our hospital on November 30, 2020 with complaints of pyrexia, cough, and expectoration that had lasted for 15 days.
Two weeks before admission, the patient had experienced night sweats and fever below 37.5 ºC without cough, stuffy or runny nose, or other respiratory symptoms. He had sought treatment at a local hospital, but his symptoms did not improve after 10 days of intravenous piperacillin-tazobactam and azithromycin; in addition, he developed a cough with yellow, purulent sputum. The patient had been taking immunosuppressants following a kidney transplantation carried out in 2011 for renal failure. His most recent medications were mycophenolate mofetil, tacrolimus, and prednisone. His most recently recorded creatinine levels were maintained between 145 µmol/L and 175 µmol/L before admission. The patient denied a history of chronic diseases, such as hypertension or diabetes, as well as of infectious diseases, such as tuberculosis or hepatitis. The patient had been living in Lushan City in Jiangxi Province, southern China, with no history of tobacco or alcohol abuse and no relevant family history. He was a cadre and usually worked in an office. Physical examination at admission showed a body temperature of 37.9 °C, a respiratory rate of 20 breaths/minute, a heart rate of 90 beats/minute, and a blood pressure of 145/78 mm Hg. There was no enlargement of superficial lymph nodes throughout the body. There were no skin lesions or joint abnormalities. A few wet rales could be heard in the right lower lung. The liver and spleen were not palpable below the ribs on abdominal examination. Laboratory testing showed a high neutrophil percentage of 80%, low hemoglobin of 90 g/L, and a normal platelet count. Hypoproteinemia and elevated creatinine, blood 1-3-β-D-glucan, blood galactomannan, and C-reactive protein were observed. Liver function, electrolytes, arterial blood gases, procalcitonin, T-spot tuberculosis test, HIV antibody, and blood cryptococcus capsular antigen levels were all normal or negative.
Blood T cell subsets included low lymphocytes of 12.33%, high T lymphocytes of 96.48%, normal CD4+ T cells of 618.48/µL, and low CD8+ T cells of 351.23/µL (reference 404–754/µL). Routine blood tests revealed a normal white cell count of 9.22 × 10⁹/L. After the patient was transferred to our hospital, intravenous moxifloxacin was administered and the mycophenolate mofetil dose was decreased to 250 mg twice daily. mNGS of bronchoalveolar lavage fluid (BALF) identified T marneffei, and a 7-day culture of BALF yielded biphasic T marneffei; the smear showed sausage-like spores with a central septum (Fig.). Chest computed tomography (CT) at the local hospital showed a solid lesion in the right lower lung. A repeat chest CT at our hospital showed an enlarged lesion (Fig. A). We did not find other disseminated lesions after a thorough evaluation, including of the abdomen, brain, lymph nodes, skin, and joints. The patient was diagnosed with T marneffei pneumonia. On December 4, 2020, moxifloxacin was discontinued and replaced with voriconazole because the patient's creatinine clearance was less than 50 mL/min. The tacrolimus dose was adjusted frequently, especially in the first month, to maintain the blood tacrolimus and voriconazole concentrations within the proper ranges. T marneffei has become one of the most common invasive fungal infections among HIV-infected patients, following tuberculosis and cryptococcosis. Highly active antiretroviral therapy has led to a decline in HIV-related T marneffei infections, while infections in HIV-negative patients have increased. The comorbidities present in patients with T marneffei infections include malignancies, organ transplantation, autoimmune diseases, and some emerging conditions such as adult-onset immunodeficiency associated with anti-interferon gamma antibodies, T lymphocyte-depleting immunosuppressive drugs, and novel targeted anticancer agents such as anti-CD20 monoclonal antibodies and kinase inhibitors.
Because of its insidious onset and rapid progression, the diagnosis of talaromycosis is often delayed, which leads to a high fatality rate. Furthermore, talaromycosis mortality is higher in HIV-negative than in HIV-positive patients. Early diagnosis of talaromycosis is both crucial and challenging. T marneffei is the only thermally dimorphic penicillium: at 25 to 28 ºC it grows as a mold, producing a diffusible wine-red pigment and brush-shaped hyphae microscopically, while at 37 ºC it grows as a nonpigmented pathogenic yeast. Confirmation of talaromycosis requires the isolation of biphasic penicillium from specimen cultures. In this case, the clinician communicated with the microbiology technologist about a suspected fungal etiology given the patient's immunocompromised condition. After fluorescence staining revealed spores with transverse septa, the possibility of T marneffei was considered. Hence, the specimen was incubated at both 26 and 37 ºC. It was not until the 7th day after the lavage fluid specimen was collected that the causative organism of the patient's infection was confirmed to be T marneffei. Hien et al indicated that culture may even take up to 14 days for identification. In comparison, the actual testing time for mNGS was only 24 hours, excluding the logistical time for specimen delivery. It was this valuable time saving that allowed clinicians to quickly target the pathogen and adjust the treatment plan in time so that the disease did not spread further. In addition to the time-consuming nature of culture, its positive rate is not as good as it could be. Li et al found that mNGS had a diagnostic sensitivity of 88.89% and a specificity of 74.07%, with an agreement rate of 77.78%, compared to specimen culture. Compared with culture smears and polymerase chain reaction, mNGS had a diagnostic sensitivity of 77.78% and a specificity of 70.00%.
Liu et al confirmed a higher positive rate for BALF mNGS (64%) than for BALF culture (28%). Because traditional culture is slow and has relatively low accuracy and positive rates, it often leads to delayed diagnosis and inappropriate use of antibiotics. Although mNGS has some inadequacies, such as interference from human-derived nucleic acids, difficulty of report interpretation, and high cost, it is still increasingly used in clinical practice because of its high throughput, timeliness, accuracy, and broad coverage. Furthermore, BALF-based mNGS is recommended for diagnosing pulmonary fungal infections because of its diagnostic advantages over conventional tests. In this case, fluorescent staining of BALF smears suggested that the pathogen was T marneffei. Fluorescent dyes that specifically bind to chitin and dextran in the fungal cell wall can be used to label fungi, allowing observation of cellular morphology by fluorescence microscopy. Because it is rapid, economical, and direct, fluorescence staining is now widely used in clinical practice. As the patient was found to have renal insufficiency before the onset of infection, oral voriconazole was chosen. When voriconazole is used in combination with tacrolimus, which is also metabolized by CYP3A, the blood concentration of tacrolimus can quickly rise to very high levels. The drug manufacturer suggests that the concurrent tacrolimus dose be reduced by one-third; other case reports described an 80% to 90% reduction. One report noted that the voriconazole dose was also reduced as appropriate. Nevertheless, because of individual differences, genotype polymorphisms, and differences in pharmacokinetics and clinical backgrounds, there are no consistent criteria or recommendations for adjusting the blood tacrolimus concentration in patients with invasive fungal infection after organ transplantation.
Fortunately, in our case, with intensive monitoring, the tacrolimus dose was adjusted to maintain a blood concentration of 3 to 6 ng/mL. The voriconazole blood concentration also remained within the effective range. The patient was finally cured and renal function was maintained at the pre-onset level. The optimal therapeutic windows for tacrolimus in patients with renal transplants are 5 to 15 µg/L at 1 to 3 months after surgery and 3 to 8 µg/L beginning at 12 months after surgery. In conclusion, clinicians should be alert to the possibility of T marneffei infection in HIV-negative immunocompromised patients with a history of tourism or residence in T marneffei endemic regions. mNGS can powerfully assist physicians in quickly and accurately targeting the pathogen, preventing further spread and worsening of the infection. Fluorescent staining is a diagnostic hint for suspected fungal infections. Individual differences and the narrow therapeutic window of tacrolimus require close monitoring of its blood concentration during the administration of antifungal treatment with voriconazole or other azole drugs. The authors are grateful to Dr Hui Chen for providing photos of the microorganism. Fang X-L conceived the study and developed the search strategy, supervised the patient's diagnosis and treatment during the entire process, put forward valuable opinions, and edited the manuscript; Cai D-H and Wang J conducted the literature review and participated in the patient's diagnosis and treatment; Cai D-H produced the draft of the manuscript; all authors contributed to the final manuscript. Conceptualization: Xiao-Lin Fang. Formal analysis: Xiao-Lin Fang. Methodology: Jun Wang. Writing – original draft: De-Han Cai. Writing – review & editing: Xiao-Lin Fang."} +{"text": "The pDTT derivatives studied include polymers with simple thiohexyl end-caps or modified with AQ or methyl groups by Steglich esterification.
We report the use of the electrogenerated anthraquinone radical anion (AQ•−) to trigger depolymerization of these disulfide polymers. All polymers were shown to be depolymerized using catalytic amounts of electrons delivered by AQ•−. For pDTT, as little as 0.2 electrons per polymer chain was needed to achieve complete depolymerization. We hypothesize that the reaction proceeds with AQ•− as an electron carrier (either molecularly or as a pendant group), which transfers an electron to a disulfide bond in the polymer in a dissociative manner, generating a thiyl radical and a thiolate. The rapid and catalytic depolymerization is driven by thiyl radicals attacking other disulfide bonds internally or between pDTT chains in a chain reaction. Electrochemical triggering works as a general method for initiating depolymerization of pDTT derivatives and may likely also be used for depolymerization of other disulfide polymers. First, 55 μM AQ•− was electrogenerated by applying an electrolysis potential of −1.07 V. The stability of AQ•− is relatively good until a degassed solution of pDTT, HTpDTT, or (Me)pDTT (200 μM resulting concentration) is added (defined as time zero). For pDTT and HTpDTT, AQ•− is consumed within 1–2 s, while, for (Me)pDTT, a sharp decrease in [AQ•−] to ~25 μM is detected within seconds, whereafter a slow decrease is observed, reaching ~15 μM after 20 s and zero after 70 s. Complementary cyclic voltammetric experiments were carried out with the model compounds cDTT, dimethyldisulfide (DMDS), and 2,2′-dithiodipyridine (DTDP) in the presence of AQ. These compounds nicely correspond to or mimic the end product, the backbone, and the end-caps of the polymers. In the presence of (Me)pDTT, the first reduction wave (AQ to AQ•−) still appears Nernstian.
In contrast, the second wave at −1.7 V (AQ•− to AQ2−) exhibits a noticeable increase in the peak current, along with a diminishing current of the oxidation wave at −1.5 V (AQ2− to AQ•−) on the anodic sweep. The anodic peak at −0.8 V (AQ•− to AQ) is of similar size as before but begins to overlap with the oxidation of the thiolates at −0.2 V. This suggests that transfer of electrons to (Me)pDTT from the more potent electron donor, AQ2−, can take place, while AQ•− has no such ability on the time scale of a voltammetric cycle. Furthermore, the additional reduction peaks observed at −2.7 V indicate that, besides direct reduction of the polymer itself, other disulfide-containing species originating from the depolymerization may contribute to these signals. Reduction of AQ takes place as two Nernstian one-electron transfer processes at −0.9 V vs. SCE (AQ to AQ•−) and −1.7 V vs. SCE (AQ•− to AQ2−), respectively. Cyclic voltammograms of pDTT and HTpDTT with and without AQ were also recorded. For pDTT, the first reduction wave (AQ to AQ•−) looks similar to that of AQ alone, while the second reduction wave, comprising two peaks, shifts in a positive direction. The reduction wave of the polymer itself shifts towards a more negative potential and becomes larger in the presence of AQ. For HTpDTT, the electron transfer pertaining to the first reduction peak at −0.9 V (AQ to AQ•−) is Nernstian, while the second reduction peak shifts to a less negative potential. The reduction peak for the polymer itself shifts to a more negative potential (−2.7 V vs. SCE) and becomes broader, but with little change to the peak current.
Earlier studies on indirect reduction of simple disulfides demonstrated that electrogenerated AQ•− can be used to reduce disulfides to thiolates on a time scale of minutes to hours. The polymers pDTT and HTpDTT, together with cDTT, contain hydroxyl groups, which may protonate AQ•−. Regarding the three model compounds, both DMDS and cDTT comprise aliphatic disulfide bonds, thus mimicking the end-caps of HTpDTT as well as the dialkyl disulfide bonds along the backbone of all polymers. Note that cDTT also contains hydroxyl groups and is cyclic, yet without ring tension. Likewise, DTDP is meant to serve as a model compound mimicking pyridyl disulfide end-caps. Acknowledging that the four polymers, pDTT, (Me)pDTT, (AQ)pDTT, and (AQ)(Me)pDTT, contain a mixed alkyl pyridyl disulfide end-cap, the reduction potential of this end-cap should thus be located somewhere between those of DMDS and DTDP. The rate constant for reduction of DTDP is considerably larger than that of DMDS (kDMDS = 2 × 10² M⁻¹ s⁻¹, kDTDP = 10⁵ M⁻¹ s⁻¹) because of a greater extent of electron delocalization onto the pyridyl groups, which significantly decreases the reorganization energy of the stepwise dissociative electron transfer. Chemicals were purchased from Fisher Scientific, and EDC·HCl from Fluorochem; all chemicals were used without further purification. The polymers pDTT, (Me)pDTT, (AQ)pDTT, and (AQ)(Me)pDTT were synthesized following previously established protocols with a few modifications. Voltammetric measurements used [RSSR] = 2 mM or [DMDS]/[DTDP]/[RSSR] = 5/5/3.2 mM, with the half peak potentials of AQ locked at −0.87 and −1.61 V vs. SCE and a sweep rate of 1 V s⁻¹; a further set used [DMDS]/[DTDP] = 24 mM with the half peak potential of AQ locked at −0.87 V vs. SCE at the same sweep rate. For electrolysis, [AQ] = 3 mM and ~1.5 mg mL⁻¹ of polymer were added, providing polymer concentrations of pDTT (0.60 mM), HTpDTT (0.41 mM), (Me)pDTT (0.39 mM), (AQ)pDTT (0.39 mM), and (AQ)(Me)pDTT (0.31 mM) as listed.
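The pairing of the ~1.5 mg mL⁻¹ loading with the molar concentrations listed above implies a number-average molar mass (Mn) for each polymer; a minimal sketch of that back-calculation is below. Note that the Mn values are not stated in the text — they are inferred here by assuming exactly 1.5 mg/mL was weighed in, so they are illustrative only.

```python
# Sketch: back-calculating polymer Mn from the quoted mass loading and
# molar concentrations, then converting mass loading back to mM.
# Assumption (not from the text): the loading was exactly 1.5 mg/mL.

def molar_conc_mM(mass_mg_per_mL, Mn_g_per_mol):
    """Molar concentration (mM) from mass loading and molar mass."""
    # mg/mL equals g/L, so (g/L) / (g/mol) = mol/L; x1000 gives mmol/L (mM)
    return mass_mg_per_mL / Mn_g_per_mol * 1000.0

# Implied Mn (g/mol) for each polymer, from the quoted concentrations:
implied_Mn = {name: 1.5 / (c_mM * 1e-3)
              for name, c_mM in [("pDTT", 0.60), ("HTpDTT", 0.41),
                                 ("(Me)pDTT", 0.39), ("(AQ)pDTT", 0.39),
                                 ("(AQ)(Me)pDTT", 0.31)]}

for name, Mn in implied_Mn.items():
    print(f"{name:>13}: Mn ~ {Mn:5.0f} g/mol -> {molar_conc_mM(1.5, Mn):.2f} mM")
```

For pDTT, for example, this gives an implied Mn of about 2500 g/mol, consistent with the short oligomeric chains the catalytic-electron stoichiometry suggests.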
The catholyte solution was purged with argon for at least 10 min prior to recordings. A cyclic voltammogram was recorded just prior to electrolysis to determine the cathodic peak potential of AQ (or pendant AQ); electrolysis was conducted at a potential 150 mV more negative than this potential and stopped when a specific charge in the range of 0.50\u20130.96 C was consumed (corresponding to ~3 electrons per polymer chain in each case). Electrolysis was conducted using a CH Instruments (601D) potentiostat in a two-necked two-chamber H-cell with 0.03 M Bu4NBF4/DMF purged with argon. Likewise, electrolyses passing 0.5 electrons per polymer chain were conducted using 1 mg mL\u22121 of polymer, i.e., HTpDTT (0.27 mM), (Me)pDTT (0.26 mM), (AQ)pDTT (0.26 mM), and (AQ)(Me)pDTT (0.21 mM), and stopped after consumption of 0.055\u20130.072 C. Finally, electrolysis of pDTT passing 0, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, and 3.0 equiv. of electrons per polymer chain was conducted with pDTT (0.40 mM); the volume of catholyte was 15 mL and [AQ] = 3 mM. Aliquots for SEC were collected after the specific amount of charge was consumed. With a dip-in probe inserted in the catholyte, spectra of AQ\u2022\u2212 were recorded and compared with those reported in the literature. The absorbance of AQ\u2022\u2212 was monitored at the wavelength 556 nm with a time resolution of 0.1 s, and the Lambert\u2013Beer law was used to calculate [AQ\u2022\u2212] from the absorbance at 556 nm using the known molar absorptivity. Solutions of AQ\u2022\u2212 were prepared by electrochemical reduction of 2 mM AQ in 0.1 M Bu4NBF4/DMF. The electrolysis was stopped once the absorbance at 556 nm reached 0.6, corresponding to [AQ\u2022\u2212] = 55 \u03bcM. While recording the absorbance at 556 nm, 100 \u03bcL of a degassed polymer solution of pDTT, HTpDTT, or (Me)pDTT was injected to achieve [polymer] \u2248 0.2 mM.
The decay of [AQ\u2022\u2212] was traced from this point on, defined as t = 0. UV\u2013Vis spectra were collected from the cathode chamber of an H-cell using an Agilent Cary-60 UV\u2013Vis absorption spectrometer with a dip probe from C-Tech with a pathlength of 1 cm. The cathodic compartment was filled with 3 mL of solution containing 2 mM AQ in 0.1 M Bu4NBF4/DMF. We successfully synthesized poly(dithiothreitol) derivatives by exchanging end-caps and/or introducing various pendant groups via Steglich esterification. All polymers were shown to be depolymerized once triggered by electrogenerated AQ\u2022\u2212. Only a catalytic amount of electrons was needed to achieve full depolymerization, which would be in line with an intermolecular chain reaction from a mechanistic point of view. The reaction is rapid (within seconds) and applicable to all derivatives regardless of end-cap and whether they contain an internal proton donor or not. Triggering can be completed using either AQ in solution (intermolecular) or pendant AQ moieties (intramolecular). We propose a mechanism where dissociative electron transfer from AQ\u2022\u2212 to a polymer disulfide bond generates a thiyl radical and a thiolate. The former may be engaged in reaction with other disulfide bonds, creating a new disulfide and a thiyl radical, eventually leading to smaller disulfides. The proposed mechanism explains both the catalytic nature and rapidity of these depolymerizations."} +{"text": "This study evaluates flavonoid extracts from Cyclocarya paliurus as a natural antioxidant for meat products (Frankfurters). The results showed that flavonoid extracts from C. paliurus had strong antioxidant and antibacterial activity, proportional to concentration, and the addition of extracts could significantly (p < 0.05) delay lipid oxidation in the samples. In addition, we did not observe hazardous effects on the samples\u2019 pH and texture as a result of adding flavonoid extracts. We observed that flavonoid extracts from C.
paliurus at concentrations of 0.06% and 0.12% did not affect the color and sensory evaluation of the samples. At concentrations of 0.18% and 0.24%, the flavonoid extracts had a negative impact on the color and sensory evaluation of the samples, likely due to the yellow-brown color of the extract itself. The findings showed that a low concentration (0.12%) of flavonoid extracts from C. paliurus in meat products could effectively prevent lipid oxidation without affecting the sensory quality. Oxidation is one of the most common causes of the deterioration of meat and meat products. At the same time, synthetic antioxidants are becoming less accepted by consumers due to the potential health hazards they might cause. Therefore, a new trend to substitute these synthetic antioxidants with natural antioxidants has emerged. Lipid oxidation has long been recognized as the leading cause of undesirable effects on meat and meat products\u2019 quality, acceptability, and shelf-life. Lipid oxidation results in oxidative off-flavor, discoloration, and spoilage of meat and meat products. Natural antioxidants are mainly derived from spices, plants, fruits, and vegetable skin residue extracts. Numerous studies have evaluated natural substances as antioxidant additives in meat products and proved their effectiveness. Estevez et al. found that adding rosemary essential oil to frankfurters delayed oxidation problems and reduced tenderness during refrigeration. Cyclocarya paliurus Iljinskaja, commonly known as the \u201csweet tea tree\u201d, is a well-known edible and medicinal plant cultivated in the misty highlands of southern China. In recent years, C. paliurus has gained increasing interest due to its wide range of biological activities and antioxidant effects, such as antihypertensive, hypolipidemic, and hypoglycemic activity, enhancement of mental efficiency, and antioxidant activity. A variety of bioactive compounds have been identified in C.
paliurus leaves, such as flavonoids, polysaccharides, triterpenoids, steroids, saponins, and phenolic acid compounds. Xie et al. found that the DPPH radical scavenging ability of flavonoids (at concentrations of 0.1\u20130.8 mg/mL) from C. paliurus was always better than that of butylated hydroxytoluene (BHT). Flavonoids from C. paliurus also had inhibitory effects on Staphylococcus aureus, Salmonella, and Escherichia coli. Consumers expect meat products to be nutritious, safe, convenient, and of good sensory quality. In the meat industry, there is growing interest in using innovative processing methods to reformulate products and replace synthetic additives with natural bioactive compounds to minimize health concerns and improve the overall organoleptic, nutritional, and health properties of processed meats. Frankfurters are an emulsion-type meat product containing 20~30% fat, unfermented, and consumed worldwide. They have a short shelf life, as lipids are vulnerable to oxidative damage from reactive oxygen species (ROS), particularly when exposed to light. The flavonoid extracts from C. paliurus have specific antioxidant and antibacterial properties that could be used in the meat processing industry as a natural functional antioxidant. This study aimed to assess the possibility of adding flavonoid extracts from C. paliurus at different concentrations as a natural antioxidant to improve the physicochemical and sensory properties of a cooked meat product (Frankfurters) over a refrigerated storage period of 21 days. The flavonoid extracts from C. paliurus, a brown-yellow fine powder soluble in water with the unique flavor of C. paliurus, were purchased from Lanzhou Waters Biotechnology Co., Ltd.; sodium D-isoascorbate was purchased from Shiyao Group Weisheng Pharmaceutical (Shijiazhuang) Co., Ltd.; pig hind leg and pig back fat were purchased from Jiangsu Nanjing Yurun Group.
White granulated sugar, salt, white pepper powder, and nutmeg powder were all food-grade and commercially available; sodium tripolyphosphate was purchased from Shanghai Taixin Industry Co., Ltd., and collagen casing (22 mm) was purchased from Liu Zhou Honsen Collagen Casing Co., Ltd. Anhydrous ethanol and methanol were purchased from Sinopharm Chemical Reagent Co., Ltd. The DPPH free radical scavenging rate was determined by following the instructions of the DPPH free radical scavenging capacity kit (Nanjing Jiancheng Bioengineering Research Institute). First, we took one working solution powder, added 40 mL of pure ethanol, shook it well, and stored it at 4 \u00b0C away from the light. Then, we weighed precise amounts of the C. paliurus flavonoid extracts and prepared liquid samples at concentrations of 0.6 mg/mL, 1.2 mg/mL, 1.8 mg/mL, and 2.4 mg/mL with ultrapure water, and a 0.025% sodium D-isoascorbate solution was prepared. Following that, we took two 1 mL centrifuge tubes for each group and added 400 \u03bcL of liquid sample to each, with 600 \u03bcL of 80% methanol for one as the control group and 600 \u03bcL of working solution for the other as the test group. We also added 400 \u03bcL of 80% methanol and 600 \u03bcL of working solution into the blank tube as the blank control. All treatment groups were covered to avoid light for 30 min, then 200 \u03bcL were transferred into 96-well enzyme standard plates, and the absorbance was measured at 517 nm. The DPPH radical scavenging rate (%) was calculated using Equation (1), where A1 is the absorbance of the test group, A2 is the absorbance of the control group, and A0 is the absorbance of the blank group. All measurements were performed in triplicate. After the antioxidants (sodium D-isoascorbate or flavonoid extracts from C. paliurus) had been added, all the elements were chopped with a chopping machine, poured into 22 mm collagen casings with a sausage machine, tied every 10 cm, and steamed in a fuming and boiling machine to maturity.
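The test/control/blank grouping above maps onto the usual kit-style calculation. The sketch below assumes Equation (1) has the common form scavenging % = [1 - (A1 - A2)/A0] x 100; the function name and the example absorbances are hypothetical.

```python
def dpph_scavenging_rate(a1_test, a2_control, a0_blank):
    """DPPH radical scavenging rate (%), assumed form of Equation (1).

    a1_test:    absorbance of sample + DPPH working solution (A1)
    a2_control: absorbance of sample + 80% methanol, correcting for
                the sample's own color (A2)
    a0_blank:   absorbance of solvent + DPPH working solution (A0)
    """
    return (1 - (a1_test - a2_control) / a0_blank) * 100

# Hypothetical 517 nm readings where half of the DPPH signal is quenched.
print(dpph_scavenging_rate(0.30, 0.05, 0.50))  # -> 50.0
```

Subtracting the methanol control (A2) prevents a colored extract from inflating its apparent scavenging rate.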
The steamed frankfurters were cooled to below 15 \u00b0C, then vacuum packed in a vacuum packaging machine, sterilized, and stored in a 4 \u00b0C refrigerated warehouse. To prepare the cooked meat product (frankfurters), pork hind leg and pig back fat were purchased, the fascia was removed, and the meat and fat were separately ground in a meat grinder and organized into six treatment groups. Group C contained no antioxidants as a negative control. Group VC contained 0.025% sodium D-isoascorbate as a positive control. Groups T1, T2, T3, and T4 contained 0.06%, 0.12%, 0.18%, and 0.24% flavonoid extracts from C. paliurus as different treatments, respectively. All samples were evaluated on the 1st, 7th, 14th, and 21st days. Thiobarbituric acid reactive substances are one of the most intuitive indicators of lipid oxidation. The oxidative stability of the frankfurters was based on measurements of the malondialdehyde (MDA) concentration using a malondialdehyde detection kit. The content of MDA was determined following the steps of the detection kit manual: \u2460 weigh 0.2 g of meat sample into a 5 mL centrifuge tube, add 2 mL of extract, and homogenize in an ice water bath; \u2461 centrifuge the homogenate at 8000 g for 10 min at 4 \u00b0C, take the supernatant, and place it on ice until tested; \u2462 take a 1.5 mL centrifuge tube, accurately pipette 0.3 mL of reagent 1 and 0.1 mL of supernatant into the tube, mix well, and hold in a water bath at 95 \u00b0C for 30 min; \u2463 place the mixed liquid in ice water to cool down, and centrifuge at 10,000 g for 10 min; \u2464 put 200 \u03bcL of the supernatant into a 96-well plate, and measure the absorbance at 532 nm and 600 nm. An automatic biochemical analyzer was used to detect the total protein concentration of the sample. MDA content was calculated using Equation (2), where \u0394A = A532 \u2212 A600 and Cpr is the sample protein concentration (mg protein/mL). All measurements were performed in triplicate. The total viable counts (TVC) were determined per the national food safety standard GB4789.2-2016 using the aerobic plate counting method. Escherichia coli was determined following the corresponding national food safety standard: Escherichia coli was inoculated on the respective nutrient medium and incubated at 37 \u00b0C for 24 h. Microbiological tests were performed on frankfurter samples with different antioxidants on days 0, 7, 14, and 21. All measurements were performed in triplicate. A portable digital pH meter was used to detect the pH of the frankfurters. Before measurements, a standard buffer solution was used for a two-point calibration. The pH value was measured on the 1st, 7th, 14th, and 21st days. All measurements were performed in triplicate. The color difference was measured by a portable color difference meter with an 8 mm aperture and D65 light source. The colorimeter was calibrated with a standard whiteboard before use, and the frankfurter was cut into columns 1 cm high and 22 mm in diameter. Each frankfurter was measured twice, at the front and rear, and L*, a*, and b* were recorded on the 1st, 7th, 14th, and 21st days. A previous study has indicated that a total color difference (\u0394E) of approximately 1 is discriminable by consumers. \u0394L*, \u0394a*, and \u0394b* were calculated as the differences between stored samples and 1st-day samples. All measurements were performed in triplicate. The determination of texture referred to the method of Zhou et al. with appropriate modification. The sensory evaluation method was amended appropriately from our previous experiment.
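Equation (2) is only partially reproduced in the text (delta-A = A532 - A600, normalized by the protein concentration Cpr). A hedged sketch, assuming the common TBARS convention of a TBA-MDA adduct molar absorptivity of 155 mM^-1 cm^-1 at 532 nm and a 1 cm path length; the kit's actual constants and any dilution factor may differ.

```python
def mda_nmol_per_mg_protein(a532, a600, c_protein_mg_per_ml,
                            epsilon_per_uM_cm=0.155, path_cm=1.0):
    """MDA content normalized to protein, following Equation (2)'s structure.

    delta_a = A532 - A600 corrects for non-specific turbidity at 600 nm.
    epsilon_per_uM_cm is an assumed absorptivity of the TBA-MDA adduct
    (155 mM^-1 cm^-1 = 0.155 uM^-1 cm^-1); the kit may supply its own factor.
    """
    delta_a = a532 - a600
    mda_uM = delta_a / (epsilon_per_uM_cm * path_cm)  # 1 uM = 1 nmol/mL
    return mda_uM / c_protein_mg_per_ml               # nmol per mg protein

# Hypothetical readings: delta_A = 0.155 and 2.0 mg/mL protein -> 0.5 nmol/mg.
print(mda_nmol_per_mg_protein(0.255, 0.100, 2.0))
```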
A panel of trained panelists conducted the sensory evaluation. SAS 9.4 software was used for statistical analysis of the collected data, and one-way ANOVA with Duncan\u2019s multiple comparisons was used to test the significance of the differences. The results are presented as mean \u00b1 standard deviation. To investigate the effects of different antioxidants on meat products, the DPPH scavenging activity of the different antioxidants was determined. The DPPH free radical scavenging rate can effectively show the total antioxidant activity of natural products. Flavonoid extracts from C. paliurus are a natural extract that has been proven to have certain antioxidant properties. The higher the concentration of flavonoid extracts from C. paliurus, the more efficient the DPPH scavenging activity. The DPPH scavenging ability of the extract was significantly increased (p < 0.05) and enhanced rapidly when the concentration increased from 0.06% to 0.12%; the growth of the DPPH scavenging ability slowed down when the concentration rose from 0.12% to 0.24%, with no difference between T3 and T4 (p > 0.05). Sodium D-isoascorbate has an excellent antioxidant effect and is widely used in meat products; nevertheless, flavonoid extract from C. paliurus was more effective at DPPH scavenging than 0.025% sodium D-isoascorbate. \u0394E reflects the changes in the overall color of frankfurters during storage. L* exhibits the degree of lightness and represents the brightness intensity, a* exhibits the degree of redness and represents the redness intensity, and b* exhibits the degree of yellowness and represents the yellowness intensity. L*, a*, and b* of frankfurters were recorded on the 1st, 7th, 14th, and 21st days, and \u0394E was calculated on the 7th, 14th, and 21st days of storage in the various test groups. Over the storage period, \u0394E increased and then declined, apart from the C group. On the 7th day, \u0394E was more than 1 in groups C and T4 but lower than 1 in groups VC, T1, T2, and T3.
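The total color difference used above is, in all likelihood, the standard CIE76 Euclidean distance in L*a*b* space; a minimal sketch under that assumption (the readings are hypothetical, and the 1.0 threshold mirrors the "discriminable by consumers" criterion):

```python
import math

def delta_e_cie76(lab_stored, lab_day1):
    """CIE76 total color difference between a stored sample and its day-1 reading."""
    dl, da, db = (s - d for s, d in zip(lab_stored, lab_day1))
    return math.sqrt(dl ** 2 + da ** 2 + db ** 2)

# Hypothetical L*, a*, b* readings for one frankfurter on day 7 vs. day 1.
de = delta_e_cie76((53.0, 14.0, 10.0), (50.0, 10.0, 10.0))
print(de, de > 1.0)  # -> 5.0 True
```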
The result showed that adding antioxidants delayed the color change of frankfurters in the early storage period. On the 14th day, \u0394E was more than 1 in all six groups, with groups C and T4 showing the lowest \u0394E, followed by groups T2, VC, T3, and T1, which may be caused by fat oxidation and the dark brown color of the extract. On the 21st day, \u0394E showed a downward trend in five treatment groups. Over the storage period, L* increased initially and then declined, apart from the T2 group. Compared to group T1, groups T2, T3, and T4 maintained low L*, possibly due to the enhanced water-holding capacity (WHC) of the frankfurters, which reduces light reflection. Meanwhile, fat oxidation also adversely affects meat color. Throughout the storage period, group VC consistently maintained the highest a* value, significantly different (p < 0.05) from the other test groups except on day 1. The result indicated that adding sodium D-isoascorbate maintained the color stability of meat products better than the other treatments. Among groups T1, T2, T3, and T4, a* initially increased and then declined as the concentration of flavonoid extracts increased; group T2 had the highest a* value, indicating that the flavone extract may make a significant contribution to color protection; however, this color protection did not increase with the concentration of flavonoid extracts. Except for group T4, the higher the extract concentration, the greater the b* value. The b* value for all treatment groups increased throughout the storage period. On the 1st day, b* of the frankfurters showed a trend of increasing with the concentration of flavonoid extracts, due to the influence of the extracts\u2019 color. On the 14th day, the b* value of group T1 was lower than that of group C but gradually increased with the added extract concentration and eventually exceeded group C.
Compared to the other groups, excluding group VC, b* in group T1 was the lowest, possibly due to the higher antioxidant effect of the extracts. Flavonoid extracts inhibited fat and protein oxidation, maintaining the color stability of the frankfurters. The lower a* and higher b* in the test groups may be due to the yellow-brown color of the flavone extract affecting the color of the frankfurters. Color is one of the most important meat quality attributes where consumers are concerned. The pH of the group VC samples was always the highest over the storage period, significantly higher (p < 0.05) than in the other five test groups. On the first day, all treatment groups showed a higher pH than group C except group T4, which may be related to the higher concentration of polyphenols contained in the extract, which may decrease the pH of frankfurters. Meanwhile, microbial fermentation will also reduce the pH due to the presence of microorganisms in group T4. Compared with the 1st day, the pH of the samples on the 7th day declined; the pH of the group VC samples was the highest, with an average value of 6.34. In addition, the pH of group C was significantly different from that of the samples from the groups treated with flavone extract, except for group T1 (p < 0.05). On the 14th day, the pH of the group T1 and T4 samples was lower, averaging 6.25 and 6.24, respectively. On the 21st day, all treatment groups showed little change in pH compared with the 14th day except group T4. In addition, as storage time increased, the pH value of each test group gradually decreased and then increased, except for group T4, which may be because the higher antioxidant effect of the extract led to higher oxidation stability of the frankfurter samples and reduced oxidation. Overall, the flavonoid extracts from C. paliurus had no significant (p > 0.05) effect on pH.
Rapidly reproducing micro-organisms can increase the pH of the samples at the later stage of storage. Textural analysis was performed throughout the storage period to determine whether the addition of flavonoid extracts affected the frankfurters\u2019 final firmness, springiness, and chewiness. Springiness and chewiness showed no significant differences (p > 0.05), indicating that antioxidant supplementation had no significant impact on springiness or chewiness except for the samples in group T4. There were significant differences in firmness between the test groups after 7 days of storage, and group C always had the highest firmness during the storage time. A previous study showed that phenolic compounds extracted from plants could reduce the firmness of cooked meat products during cold storage; this allows the meat to obtain better emulsion stability through the antioxidant effects of the plant extracts on the lipids and proteins. MDA analysis was performed throughout the whole storage period to determine whether adding flavonoid extracts impacts lipid oxidation. On the 7th day of storage, the MDA content in group C was drastically increased and considerably higher than in the four test groups with different concentrations of C. paliurus extracts (p < 0.05). Compared to the other treatment groups, groups T3 and T4 always had the lower MDA content, which showed that the higher the concentration of flavonoid extracts, the more effective the antioxidant activity in the meat products. On the other hand, the MDA content also increased in all the test groups with different concentrations of flavonoid extracts during storage, which is consistent with the results of the Zhang et al. study. Escherichia coli (E. coli) was not observed in the analyzed samples during the entire testing period, indicating no secondary contamination by E. coli during the experiment.
The total number of colonies increased over time in storage in all treatment groups. The microbiological test was performed on all samples on the 1st, 7th, 14th, and 21st days. Some colonies were detected in groups T3 and T4 on the first day, which may have been caused by incomplete sterilization. On the 7th and 14th days, no colonies were detected in group VC; group C had the highest mean values for TVC, and the TVC values in groups T3 and T4 were significantly higher than in groups T1 and T2 (p < 0.05), which might be explained by the initial colonies. In contrast, although the total numbers of initial colonies in groups T3 and T4 were higher, their TVC values increased more slowly than those of groups T1 and T2 over the storage period, suggesting that the higher the concentration of extracts, the better the antimicrobial ability. The growth rate of the TVC values in the treatment groups with added extracts and antioxidants was slower than that in group C, indicating that the antioxidant and the flavonoid extract from C. paliurus had specific antibacterial activity. It can be concluded that flavonoid extracts from C. paliurus can be used as natural antibacterial agents in meat products. The sensory quality of the product is one of the most important considerations for consumers. Adding additives may change the flavor, color, and taste of products, affecting customer satisfaction. Frankfurters were evaluated on days 1, 7, 14, and 21 of storage. Our study observed how different concentrations of flavonoid extracts from C. paliurus affect the overall quality, fat oxidation, TVC, and sensory properties of vacuum-packed cooked meat products (Frankfurters) over 21 days of storage. The DPPH scavenging activity showed that the flavonoid extracts were an effective antioxidant, and the greater the concentration, the greater the antioxidant capacity. Compared to the negative control group C, the four different extract concentrations significantly reduced the MDA concentration in the samples (p < 0.05). The antioxidant effect of the low-concentration extract (0.06% and 0.12%) is equivalent to that of sodium D-isoascorbate, whereas the antioxidant effect of the high-concentration extract (0.18% and 0.24%) is significantly higher than that of sodium D-isoascorbate, indicating that the antioxidant capacity of the flavonoid extracts is related to the concentration. The flavonoid extracts from C. paliurus had a certain antibacterial effect, and the higher the concentration of extracts, the better the antimicrobial ability. Adding flavonoid extracts helped to improve the tenderness of the frankfurters during storage. When extracts were added, no harmful effects were observed on the quality and pH value. In the sensory evaluation over the storage period, samples with low extract concentrations had better color and appearance scores. Samples with flavonoid extracts at concentrations of 0.18% and 0.24% had lower color and sensory evaluation scores, which might be caused by the yellow-brown color of the extract itself. The findings conclude that a low concentration (0.12%) of flavonoid extracts from C. paliurus in meat products could effectively inhibit lipid oxidation without affecting sensory quality."} +{"text": "The re-identification of animals can distinguish different individuals and is regarded as the premise of modern animal protection and management. The re-identification of wild animals can be inferred and judged from the differences in their coat colors and facial features. Due to the limitation of long-distance feature extraction, a CNN is not conducive to mining the relationships among local features. Therefore, this paper proposes a transformer network structure with a cross-attention block (CAB) and local awareness (CATLA transformer) for the re-identification of wild animals.
We replace the self-attention module of the LA transformer with the CAB to better capture the global information of the animal body and the differences in facial features or local fur colors and textures. According to the distribution of body parts in the animal standing posture, we redesigned the layer structure of the local aware network to fuse the local and global features. Camera-trap-based wildlife re-identification methods identify different individuals of the same species using the fur, stripes, facial features, and other features of the animal body surfaces in images, which is an important way to count the individuals of a species. Re-identification of wild animals can provide solid technical support for the in-depth study of the number of individuals and living conditions of rare wild animals, as well as provide accurate and timely data support for population ecology and conservation biology research. However, due to the difficulty of recording shy wild animals and of distinguishing the similar fur of different individuals, only a few papers have focused on the re-identification of wild animals. In order to fill this gap, we improved the locally aware transformer (LA transformer) network structure for the re-identification of wild terrestrial animals. First of all, at the stage of feature extraction, we replaced the self-attention module of the LA transformer with a cross-attention block (CAB) in order to calculate the inner-patch attention and cross-patch attention, so that we could efficiently capture the global information of the animal body\u2019s surface and local feature differences of fur, colors, textures, or faces. Then, the locally aware network of the LA transformer was used to fuse the local and global features. Finally, the classification layer of the network realized wildlife individual recognition.
In order to evaluate the performance of the model, we tested it on a dataset of Amur tiger torsos and the face datasets of six different species, including lions, golden monkeys, meerkats, red pandas, tigers, and chimpanzees. The experimental results showed that our wildlife re-identification model has good generalization ability, is superior to existing methods in mAP (mean average precision), and obtains comparable results in the metrics Rank 1 and Rank 5. With human society\u2019s industrial and agricultural development, wild animals and plants have gradually lost their living spaces. The Convention on Biological Diversity was proposed to protect global biodiversity. In order to change the living status of animal species, we need to know their populations, distributions, and behavior. The re-identification of animals can distinguish different individuals and is regarded as the premise of modern animal protection and management. Meanwhile, for animals in the zoo, re-identification can help staff to establish their archives and analyze their growth and breeding behaviors, in order to reasonably plan their daily lives. For animals in the wild, individual identification can assist researchers in knowing their health status and studying their lifestyle and distribution, which provides a factual basis for making appropriate protection measures. Re-identification methods have been used to assess animal population size and density; for example, Shannon Gowans recorded individual animals to assess population size. Over the years, camera traps have been widely used for wildlife monitoring. In the early years, Meek et al.
once investigated the use of camera traps for wildlife surveys. With the wide application of deep learning technology in the field of image processing, researchers have begun to apply it to animal re-identification. Our contributions are as follows. In order to better extract and fuse global and local information on wildlife, we first applied the transformer network structure to the re-identification of wildlife and proposed a transformer network based on a cross-attention mechanism for the re-identification of wild terrestrial animals. After partitioning the whole image into patches, in order to extract the local features of the patches and the global correlation between patches, we replaced the self-attention module of the LA transformer with the CAB, which captures the global information of the animal appearance and the differences in local features, such as local fur colors and textures. At the stage of feature fusion, the hierarchical structure of the locally aware network was redesigned according to the distribution of animal body parts in the standing posture; after fusing the weighted-average values of the global and local tokens, we obtained globally enhanced local tokens, where the fused features were arranged into a 7 \u00d7 28 2D distribution. To validate the generalization ability of the model, we tested different types of datasets, such as an animal trunk dataset and animal face datasets including those of tigers, lions, golden monkeys, and other common species. Due to the limitations of long-distance feature extraction, a CNN is not conducive to mining the relationships among local features. In addition, due to the shy characteristics of wild animals and the complex field environment, it is difficult to capture the whole bodies of animals from multiple angles, which also poses a challenge to the re-identification of wild animals. Transformer, as an emerging deep learning structure, can utilize multi-head self-attention to capture long-distance relationships and pay attention to local features, so it has already shone in the field of human re-identification.
In this section, we mainly introduce the datasets used in the experiments and the division of the training set and the testing set; then we give the pipeline of the wildlife recognition network. In order to verify the performance of our proposed model from different perspectives, we chose three public datasets that are widely used for animal re-identification. These datasets include animal trunk and face images, and both RGB and gray images. The dataset ATRW was first released for Amur tiger re-identification in the wild. The details of the datasets are shown in the corresponding table. As shown in the pipeline figure, P is the size of the image patch and N is the number of image blocks, which are added with the position information and input into the transformer encoder module. The cross-attention block in the transformer encoder module combines the inner-patch attention with the cross-patch attention: it can not only capture the local differences in animal fur or facial features, but also obtain global information on the animal body appearance. The multilayer perceptron block then fuses these features. Cross-attention blocks (CAB) are the key components of the transformer encoder. IPSA and CPSA build cross-attention by stacking basic modules; the stacking mode is shown in the corresponding figure. We calculated the weighted average of the local features and global features based on the locally aware network. Compared with the human body, the movement of animal body parts is relatively simple. Therefore, we redesigned the layer structure of the locally aware network to reduce the intra-layer difference and broaden the inter-layer difference. After the weighted-average fusion of the global and local tokens, the fused feature elements were arranged into a 7 \u00d7 28 2D distribution, and the number of layers was reduced. The program used in this study was written in Pytorch and trained on a computer configured with an Intel i7-11800H CPU and an NVIDIA GeForce RTX 3060 GPU.
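The inner-patch/cross-patch split described above can be sketched with plain scaled dot-product attention. This is a minimal NumPy illustration, not the authors' implementation: the per-patch mean used as the cross-patch summary token, the shapes, and all names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
# tokens: (num_patches=8, tokens_per_patch=4, dim=16)
tokens = rng.standard_normal((8, 4, 16))

# IPSA: attention restricted to tokens inside each patch (batched over
# the patch axis), capturing local fur/face detail.
ipsa_out = attention(tokens, tokens, tokens)

# CPSA: one summary token per patch attends across all patches,
# capturing the global body appearance.
summary = tokens.mean(axis=1)                  # (8, 16)
cpsa_out = attention(summary, summary, summary)

print(ipsa_out.shape, cpsa_out.shape)  # -> (8, 4, 16) (8, 16)
```

Stacking the two scopes lets local detail and cross-patch context inform each other, which is the role the CAB plays in the encoder.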
The numbers of images in the training subset and the testing subset are shown in . Re-identification is generally evaluated by two metrics, the cumulative matching characteristic (CMC) and the mean average precision (mAP). When calculating these evaluation indices, it is necessary to divide the test set into a Query image set and a Gallery image set. The re-ID methods select an image in the Query image set and retrieve the images of the same individual from the Gallery images. We utilized CMC and mAP to evaluate our experimental results. Rank 1 and Rank 5 are often used as two standards of CMC. Take Rank 1 as an example for the explanation of CMC. Given an animal image, we compare the similarity of the Query image with the images in the Gallery image set and rank the Gallery images according to their similarity. A function is defined to judge whether two images show the same individual. Then, when calculating Rank 1, it is only necessary to count, over all Query images, whether each first returned result shows the same individual as its Query. We replaced the original ViT attention module with the CAB to form a new transformer model. The CAB pays more attention to the differences in animal fur patterns and facial features. Since our model can re-identify the body appearances of animals, it is not limited to re-identification based on animal faces. The model proposed by Yu et al. is based . The layer number of the locally aware network directly affects the feature extraction performance of the model. More layers mean fewer differences between features, which will reduce the classification accuracy of the model. As shown in . The datasets mainly included four categories of images: body appearance, color face, gray face, and complex face datasets. Since existing studies have conducted experiments on these datasets, in order to fairly evaluate our model from multiple perspectives, we chose the state-of-the-art models corresponding to each type of dataset for the comparison: the Yu et al. 
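The Rank-k CMC metric explained above can be sketched as follows; the similarity matrix and identity labels are toy values, not data from the datasets in the text:

```python
import numpy as np

def cmc_rank_k(similarity, query_ids, gallery_ids, k=1):
    """Fraction of queries whose true identity appears among the k
    gallery images ranked most similar (CMC Rank-k)."""
    order = np.argsort(-similarity, axis=1)  # most similar first
    hits = sum(q in gallery_ids[order[i, :k]]
               for i, q in enumerate(query_ids))
    return hits / len(query_ids)

# Toy example: 2 query images matched against a 3-image gallery
sim = np.array([[0.9, 0.1, 0.3],
                [0.2, 0.8, 0.7]])
q_ids = np.array([0, 1])
g_ids = np.array([0, 2, 1])
rank1 = cmc_rank_k(sim, q_ids, g_ids, k=1)  # only query 0 is matched first
rank2 = cmc_rank_k(sim, q_ids, g_ids, k=2)
```

Here query 0's most similar gallery image shares its identity while query 1's does not, so Rank 1 is 0.5; widening to the top two matches recovers query 1 as well.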
model for the ATRW dataset; the Tri-AI model for the color face and gray face datasets; and PrimNet for C-Zoo. As the testing set for the ATRW dataset was not yet released, Yu et al. also used the images in the training set for their experiments, which was consistent with ours. Rank 1, Rank 5, and mAP are widely recognized as standard evaluation metrics in the existing literature. We used Rank 1, Rank 5, and mAP for the ATRW dataset, and mAP and Rank 1 for the Tri-AI dataset and the PrimNet dataset, respectively. The metrics and the proportions of the training set and testing set were consistent with these models\u2019 original studies. The experimental results are shown in . Through the experiments on the above datasets, we proved that our network has good generalization ability and can be well applied to various re-identification tasks with wild terrestrial animals. The reason for this is that our model extracts the local features of small image patches and uses the cross-attention module to calculate the correlation characteristics between patches in order to construct the global information. For some wild animal species, the fur patterns and facial features have distinctive differences, so the prediction results are satisfactory. However, the faces of different chimpanzee individuals are almost the same except for their lips and ears, which increases the difficulty of identifying different individuals. With the encroachment of human society on nature, the living space of wild animals is gradually decreasing. To help understand the living conditions of wild animals, we designed a novel transformer-based re-identification method for wild terrestrial animals which incorporates the cross-attention mechanism and the locally aware transformer. In addition, we fine-tuned the hierarchical structure of the locally aware network to accommodate wildlife body structures. We evaluated the model on various datasets. 
The experimental results showed that our model outperformed the existing state-of-the-art methods in identifying animal individuals, even on challenging datasets. In general, the performance of wildlife re-identification using the proposed methods significantly improved. However, not all animals\u2019 coats have stripes or spots, which greatly increases the difficulty of animal re-identification. Similar to other existing methods, our model also suffered from the problem of being unable to extract subtle differences in the faces of species with a solid fur color and similar faces, such as chimpanzees. In addition, due to the occlusion of wild vegetation, the camera can often only capture a part of the animal\u2019s body. The images can also be blurred by the influence of noise and bad weather, such as strong winds, heavy fog and rain. Therefore, more in-depth research is needed for animal re-identification. In future research, we will try to use counterfactual attention learning to learn"} +{"text": "Parkinson\u2019s disease (PD) is increasingly being studied using science-intensive methods due to economic, medical, rehabilitation and social reasons. Wearable sensors and Internet of Things-enabled technologies look promising for monitoring motor activity and gait in PD patients. In this study, we sought to evaluate gait characteristics by analyzing the accelerometer signal received from a smartphone attached to the head during an extended TUG test, before and after single and repeated sessions of terrestrial microgravity modeled with the condition of \u201cdry\u201d immersion (DI) in five subjects with PD. The accelerometer signal from the IMU during the walking phases of the TUG test allowed for the recognition and characterization of up to 35 steps. In some patients with PD, unusually long steps were identified, which could potentially have diagnostic value. 
It was found that after one DI session, stepping did not change, though in one subject it significantly improved. After a course of DI sessions, some characteristics of the TUG test improved significantly. In conclusion, the use of accelerometer signals received from a smartphone IMU looks promising for the creation of an IoT-enabled system to monitor gait in subjects with PD. Parkinson\u2019s disease (PD) is very suitable for the application of science-intensive instrumental research methods. PD is gradually becoming a kind of \u201cmodel disease\u201d for the testing of new technologies for PD diagnostics and the escorting of PD subjects. Earlier, we showed in subjects with PD that both a single session of Earth-based microgravity, modeled with \u201cdry\u201d immersion (DI) conditions, and a program of repeated sessions influence motor characteristics. The Timed Up-and-Go (TUG) test has proven to be reliable in many domains of neuromuscular and orthopedic pathology for assessing gait, basic mobility skill, strength, agility and balance. It consists of standing up, walking, turning and sitting down. Throughout the last decade, instrumented versions of the TUG test (iTUG) were increasingly invented. In most of these versions, varied numbers and positions of miscellaneous inertial measurement unit (IMU)-based wearable sensors (accelerometers) were used to discriminate between the phases of the TUG test [16]. There is a multitude of technological approaches for studying PD. Among them are optical motion trackers, biopotential devices, audio and video recording, and, especially, wearable sensors, such as smart glasses, hats, insole sensors of ground-reaction force and smartphones. According to the review by Deb et al., wearable sensors are especially promising. Smartphones are equipped with IMUs that consist of a 3-axis accelerometer, a 3-axis gyroscope and a digital magnetometer, and that are comparable in sensitivity to research-grade biomechanical instrumentation. 
The major hypothesis of the present study was that gait characteristics in patients with PD are responsive to the conditions of either one session of DI or a course of repeated DI sessions. To address this, we obtained up-sampled acceleration signals from smartphone-based 100 Hz IMU sensors attached to the subject\u2019s head during a 13 m TUG test, before and after a single session of DI and a program of seven DI sessions. Altogether, data from six PD subjects were collected in the study. The six subjects with PD participated in the study after providing their informed consent. Their anthropometric and clinical data and the medication they use are presented in . The on-Earth microgravity was modeled using the conditions of a \u201cdry\u201d immersion (DI). This method of DI has already been presented in detail in our earlier papers [8,9,10]. The tub was filled with fresh, thermally comfortable water stabilized at T = 32 \u00b0C. The water in the tub was periodically filtered and aerated to prevent bacterial contamination. The water surface was covered with a large, square waterproof film (3 \u00d7 4 m2), which was wrapped around the subject\u2019s body. The DI session was conducted at 9:30 AM, in the condition of \u201con-medication\u201d, in order to synchronize the effects of DI and the anti-PD therapy. The subjects usually took their medicines 2 h before the study, at 7:30 AM. Before DI, subjects were instructed to drink 200 mL of water and urinate, due to the strong diuretic effect of DI. The program of DI consisted of seven 45 min DI sessions that were conducted twice a week for 25\u201330 days. The total DI dose during the course was 5\u00bc h. The TUG test was performed in its extended form. Still, its phases were all the same: (1) standing up from a 46 cm-high chair (Sit-to-Stand phase), (2) walking straight, (3) turning by 180\u00b0 (U-turn), (4) walking back, and (5) sitting down (Stand-to-Sit transition with a turn). 
In addition, unlike the classic 3 m version, the 13 m version of the TUG test allowed for an analysis of the subject\u2019s steps (gait), because subjects performed up to 20 steps in one direction, which is sufficient for analyzing gait. During the TUG test, the acceleration and rotation rate were measured with the sensor module in the smartphone Xiaomi Mi4. The obtained signals were further processed offline. The subject was instructed to sit still and look forward before and after the test. In a motionless state, the shape of the accelerometer signal is formed by the current projections of the gravity vector, measurement noise and the existing zero-G offsets, as well as the head tremor. For the gyroscope in a motionless state, the signal includes the sensor noise as well as the head tremor; however, it is characterized by the constancy of the mean value (measurement offset). The beginning and end of the movement are characterized by a change in the x-, y- and z-components of the acceleration vector due to the inclination, as well as the presence of linear accelerations while standing up and sitting down. The start and end points of the TUG test were selected manually by analyzing the change in the mean value due to the rotation of the head and body during inclination while standing up and sitting down. The internal phases of the TUG test and step moments were determined automatically. For the steps in a straight-line walk (15\u201317 more or less uniform steps in the middle of both the Gait-Go and Gait-Come phases), a set of gait features was calculated. Altogether, subjects performed 35\u201340 steps in both directions, of which the 30\u201335 steps that were in the middle of the walk were analyzed. 
The sampling rate of the inertial sensors, both the accelerometer and the gyroscope, of the Xiaomi Mi4 smartphone used in the present study was 100 Hz (the period between data samples was \u0394t = 10 ms), which can be regarded as neither especially accurate nor fast. The smartphone was fixed on the back of the head of a subject with an elastic band and, additionally, a tight-knitted hat; subjects felt comfortable with this kind of fixation and the smartphone never fell out of its position. Values for the acceleration and angular velocity were collected as a time-stamped data stream. Thus, the measurements were accompanied by timestamps from the smartphone\u2019s operating system timer. For further analysis, accelerometer and gyroscope measurements with a time-stamp difference of less than 5 ms were considered synchronous. In order to increase the time resolution and achieve a smoother distribution, the time series data were up-sampled to a 10-fold-higher frequency of 1 kHz (\u0394t\u2019 = 1 ms). Since the analyzed signal tended to be periodic, up-sampling using the Fast Fourier Transform could be applied. Furthermore, as long as the measurement signals are real-valued, the real (single-sided) FFT is suitable for conversion into the frequency domain. In the frequency domain, up-sampling means zero-padding at the end of the high-frequency components of the signal. The up-sampling procedure included the following steps: (1) forward Fourier transform of the original signal, X = F(x); (2) zero-padding of F(x) up to the new length; (3) inverse Fourier transform to obtain the up-sampled signal y; (4) scaling of the up-sampled signal y to preserve amplitudes. A simple calibration of the zero offsets of the sensors was performed before the start of the test. The residual biases were on the order of \u00d7 10\u22124 m/s2 for the x-, y- and z-axes of the accelerometer, and \u00d7 10\u22125 deg/s for the gyroscope. Since the test duration is less than 1 min, the bias drifts can be neglected. 
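The four-step FFT up-sampling procedure can be sketched in NumPy; the 2 Hz test sine standing in for an accelerometer trace is an assumption for illustration:

```python
import numpy as np

def fft_upsample(x, factor=10):
    """Up-sample a real, (quasi-)periodic signal: (1) forward real FFT,
    (2) zero-pad the high-frequency end, (3) inverse FFT at the new
    length, (4) rescale amplitudes (irfft normalizes by the new length)."""
    n = len(x)
    X = np.fft.rfft(x)                      # (1) forward real FFT
    n_up = n * factor
    X_pad = np.zeros(n_up // 2 + 1, dtype=complex)
    X_pad[:len(X)] = X                      # (2) zero-pad high frequencies
    y = np.fft.irfft(X_pad, n=n_up)         # (3) inverse FFT
    return y * factor                       # (4) amplitude rescaling

# 1 s of a 2 Hz sine sampled at 100 Hz, up-sampled to 1 kHz
t = np.arange(0, 1, 0.01)
x = np.sin(2 * np.pi * 2 * t)
y = fft_upsample(x, factor=10)
```

For a band-limited periodic signal the interpolated series passes exactly through the original 100 Hz samples, which is the property that makes this safe for the step-timing analysis.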
To do this, we used the measurements obtained from a smartphone placed on a horizontal surface. It was noted that the sensor offsets were probably pre-calibrated by the Android OS. The bias instability and velocity/angle random walk for the smartphone sensors were previously analyzed by us using the Allan variance. The orientation of the smartphone was calculated using the well-known complementary filter proposed by Robert Mahony et al. Automatic detection of a turn was conducted by analyzing the projection of the angular velocity on the vertical axis. No additional filtering of the measurements was performed. If the values of the amplitude and duration of the rotation rate exceeded certain threshold values, a rotation was considered to be detected (recorded). At the first stage, a comparison was made with the threshold value of the rotation rate (10 degrees per second). At the second stage, the rotation duration was estimated; rotations lasting less than 1 s were discarded. If three or more turns were detected in the TUG test, the two longest turns were considered the 1st (at the U-turn phase) and 2nd (prior to sitting down on a chair) turns. According to the available experimental data, this algorithm was successful in 100% of cases for both turn events. Step detection was performed automatically by analyzing the time series of the acceleration vector. Since the typical cadence of stepping is about two steps per second, the measurements were filtered with a forward\u2013backward zero-phase low-pass filter with a cut-off frequency of 3 Hz, which allowed us to obtain the LPF time series data. After that, the peak values of the filtered signal were detected. Moments where the acceleration magnitude reached 11 m/s2 were taken as the approximate time-stamps of the initial contact of the foot with the ground (T\u2019 points). For each step, the revised time-stamp T_step of the heel strike and the corresponding maximum acceleration along the vertical axis were determined by searching for the maximum value in a \u00b140 ms window near the T\u2019 point. Not all steps taken during the TUG test were taken into account for the gait analysis. The following local maxima obtained during the step detection procedure were discarded: the first peak, corresponding to the moment of standing up, and the second peak, corresponding to the moment of the first step; the steps during the first U-turn; and the steps from the moment of the second turn until the end of the whole test. To estimate the duration of the entire TUG20 test and its phases, the following parameters were determined: D1 (the entire TUG test duration): the time from the very beginning of motion (the Sit-to-Stand movement) until the end of the test (sitting down on a chair). D2: the time from the start of the lifting to the moment of the heel strike on the second step. D3: the time to perform a 180\u00b0 turn at the far turning point. D4: the time from the beginning of the second turn until the end of the test. For the analysis of gait stability, only straight-line, uniform steps were taken into account. The duration of each step (dt) was determined as the difference between the consecutive time-stamps of successive steps (step moments). Before calculating the average value, the two points with the largest deviation from the median value of the step duration were discarded. To calculate the cadence mean and standard deviation, the \u201cinstantaneous walking pace\u201d was first estimated for each step (cadence = 60/dt); then, the two outliers were discarded. Usually, these outliers were characteristic of the \u201ctransitional\u201d moments during the TUG test. 
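The step-detection and cadence computations described above (3 Hz zero-phase low-pass filter, 11 m/s2 peak threshold, cadence = 60/dt with two outliers discarded) can be sketched as follows; the synthetic two-steps-per-second signal is an assumption for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_steps(acc_mag, fs=1000.0, threshold=11.0):
    """Forward-backward (zero-phase) 3 Hz low-pass filter, then peak
    picking: peaks above `threshold` approximate heel strikes (T' points)."""
    b, a = butter(4, 3.0 / (fs / 2), btype="low")
    lpf = filtfilt(b, a, acc_mag)           # zero-phase filtering
    peaks, _ = find_peaks(lpf, height=threshold)
    return peaks / fs                       # heel-strike times in seconds

def cadence_stats(step_times):
    """Instantaneous cadence (60/dt) per step; the two step durations
    deviating most from the median are discarded as outliers."""
    dt = np.diff(step_times)
    keep = np.argsort(np.abs(dt - np.median(dt)))[:-2]
    cadence = 60.0 / dt[keep]
    return cadence.mean(), cadence.std()

# Synthetic walk: gravity baseline plus 3 m/s^2 impacts at 2 steps/s
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
acc = 9.81 + 3.0 * np.clip(np.sin(2 * np.pi * 2 * t), 0.0, None)
steps = detect_steps(acc, fs)
mean_cad, std_cad = cadence_stats(steps)
```

On this synthetic trace the detector returns about twenty evenly spaced heel strikes and a cadence near 120 steps/min, matching the two-steps-per-second construction.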
S3: . S4: . S5: the ratio of the average deviation of the two largest outliers of the step duration to the standard deviation of the step duration without taking into account the two largest outliers. S5 was calculated according to the following algorithm: for each outlier, calculate the absolute difference from the median step duration; calculate DOUT as the average of these deviations; calculate DWO_STD as the standard deviation of the step durations without the outliers; calculate LS = DOUT/DWO_STD. The estimates of the probability density functions of the step duration and the acceleration upper/lower peaks were obtained using kernel density estimation (KDE). KDE was computed using the Python scipy.stats.gaussian_kde function. As the P1, P2 and S2 values are related to the width of the target variables\u2019 distributions, they are shown as the full width at half maximum. In the corresponding figure, on the left panel, the red dots denote the minimal acceleration when both feet were touching the floor, and the green dots denote the heel strikes; the open red and green circles represent these dots, and two outlier values are denoted with black crosses. On the right panel, the open black circles represent the individual step durations along the time course. The outlier values, denoted by red crosses (>0.7 s), represent unexpectedly longer steps right prior to the U-turn. Furthermore, note that during the Gait-Come phase (the upper group of open black circles), the length of the steps decreased roughly from 0.68 m to 0.6 m. These data were obtained from Subject 6. To analyze the power (amplitude) characteristics of each step, the following parameters were estimated: P1 (m/s2): the standard deviation of the vertical acceleration at the T_step moments; two outliers were discarded. P1 characterizes the stability (uniformity) of the heel strike during stepping in the Gait-Go and Gait-Come phases. P2 (m/s2): the standard deviation of the vertical acceleration minima that correspond to the weight-transfer phase; two outliers were discarded. 
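A sketch of the KDE-based width estimate, using the scipy.stats.gaussian_kde function named above; the synthetic step durations are illustrative, not measured data:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_fwhm(samples, grid_pts=2000):
    """Kernel density estimate of a 1D sample, summarized as the
    full width at half maximum (FWHM) of the estimated density."""
    kde = gaussian_kde(samples)
    pad = 3 * samples.std()
    xs = np.linspace(samples.min() - pad, samples.max() + pad, grid_pts)
    density = kde(xs)
    above = xs[density >= density.max() / 2]
    return above[-1] - above[0]

rng = np.random.default_rng(0)
step_durations = rng.normal(0.62, 0.03, size=30)  # illustrative ~0.6 s steps
width = kde_fwhm(step_durations)
```

For a roughly Gaussian sample the FWHM is about 2.35 times the standard deviation, inflated slightly by the kernel bandwidth, which is why the text reports widths rather than raw standard deviations only.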
P2 characterizes the stability (uniformity) of the minima values when both feet made contact with the floor during the swing phase of stepping in the Gait-Go and Gait-Come phases. P3: the average difference between the minima and maxima of the vertical accelerations in a series of straight steps. The values of D1\u20134, S1\u20135 and P1\u20133 were compared in the pairs of conditions \u201cbefore\u2013after a single DI session\u201d and \u201cbefore\u2013after a course of DI sessions\u201d with the non-parametric paired Wilcoxon test. After a single DI session, P2 (swing phase) increased from 0.50 to 0.82 m/s2, and P3 increased from 8\u20139 to 10\u201311 m/s2. Unlike a single DI session, a course of DI sessions exerted a significant influence on a few gait parameters, namely, D4 and S5. The purpose of this study was (1) to test the reliability of an assessment of stepping characteristics with the up-sampled IMU-based accelerometer and gyroscope signals of a smartphone placed on the subject\u2019s head, and (2) with the help of the acceleration signal, to study the effect of a single DI session and a program of DI sessions on stepping in subjects with PD during a long version of the TUG test. There are plenty of studies that have demonstrated sufficient reliability of iTUG test technologies based on a smartphone\u2019s IMU to recognize the phases of the TUG test [30,31,32]. Instead, in the present study, we focused on (1) the gait analysis during the Gait-Go and Gait-Come phases, with the help of (2) the extended 13 m TUG test, and (3) with a smartphone fixed to the head. It has been found that the 13 m TUG test returns information about 16\u201321 steps in each direction, of which the 15\u201317 steps in the middle of the Gait-Go and Gait-Come phases of the TUG test were considered to be functionally uniform. 
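The paired before/after comparison can be sketched with SciPy's Wilcoxon signed-rank test; the values below are illustrative, not the study's measured data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired values of one gait parameter (e.g., a phase
# duration in seconds) before and after a course of DI sessions.
before = np.array([4.1, 3.8, 5.0, 4.6, 4.9, 4.3])
after = np.array([3.6, 3.5, 4.2, 4.0, 4.4, 3.9])

# Non-parametric paired test on the signed differences
stat, p = wilcoxon(before, after)
```

With every subject improving in the same direction, the rank sum of the smaller side is zero and the small-sample exact p-value falls below 0.05; with only five or six subjects this is close to the smallest p the test can produce, which illustrates why the study's group-level effects were hard to detect.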
This number of steps is reliable, as data collected from 10\u201320 strides (20 steps) were reported to be sufficient for the reliable characterization of the gait speed and cadence of stepping. From a technical point of view, the obtained parameters and graphical presentation of the TUG test can be regarded as reliable and demonstrative, as they allow for the tracing of individual features of the subject\u2019s gait and the recognition of the subjects\u2019 graphical patterns by eye. Furthermore, the position of the smartphone on the head can be regarded as a reliable site for the collection of information about a human\u2019s gait. This allows for a reduction in the number of IMUs to a single one placed on the head. We found that a single (\u201cacute\u201d) 45 min DI session exerted no effect on the studied parameters of gait across the entire group of subjects with PD, which is opposite to our original hypothesis. On the other hand, in each subject, an individual set of gait characteristics still changed. For example, in Subject 1, the entire profile shifted: P3 increased by 2\u20133 m/s2. As a whole, these changes suggest that after a single DI session, Subject 1 walked faster, performed faster turns and stepped more firmly on the floor. In Subjects 2 and 3, the effect of the DI conditions was negligible, probably due to the relatively good initial values of their gait parameters; for example, in Subject 3, the cadence was 145 steps/min, in comparison to 90\u2013112 steps/min in Subjects 1 and 2. In addition, Subject 1 did not take anti-PD medicine, which means that before the DI session he was in the \u201coff-medication\u201d condition. As a result, the effect of DI was not confounded by the anti-PD therapy. The effect of a program of seven DI sessions was a bit more pronounced. 
At a minimum, the D4 and S5 parameters became significantly larger after a course of DI, and the change in P2 values amounted to an increase after a program of DI. The reaction to the DI conditions was individually significant. Again, Subject 1 presented the most notable improvement in D1, and in almost all other parameters. Subjects 2, 4 and 5 demonstrated a moderate improvement of only some parameters, and Subject 3 demonstrated a notable improvement of gait. The Internet of Things (IoT) is comprised of interconnected devices, machines, and servers with data storage that function through a network [37]. A smartphone is always \u201cat hand\u201d (in the pocket), it is light and cost-effective, and it is already pre-set for data transfer to cloud-based storage. Data collected on the gait of PD subjects with wearable accelerometers are suitable for Artificial Intelligence (AI) or IoT decision support [41]. The TUG20 test accelerometer signals have a repetitive structure and contain gait features. Furthermore, there are two ways in which the methods of AI could be applied. First, AI can be used to collect a database of signals and split this database into two parts: the training and test sets. To increase the adequacy of the model, this approach should be applied after the investigation of more than 100 PD cases, which is difficult in real life. The second way is to investigate the gait features and to understand which features are significant, i.e., to exclude insignificant features and thus decrease the dimension of the model, and then apply these data to clustering. This approach requires fewer studied cases, and we would prefer to follow it in the future. The major limitation of this study was the insufficient number of study subjects and measurements, which did not allow for a more precise analysis of the data to be conducted. Furthermore, control groups were not formed. 
In future studies, we propose that more measurements should be conducted in control groups and in subjects with PD, both under \u201cdry\u201d immersion conditions and non-DI conditions. In conclusion, the data from smartphone-based IMU accelerometers allowed us to compute gait characteristics that are conventionally used in the field of locomotion physiology, such as step duration and cadence. Like other IMU-based analyzing systems, the presented method allowed for the recognition of the phases of the TUG test. The application of an extended version of the TUG test provided a sufficient number of steps to characterize gait, and it allowed for the visualization of the duration of individual steps during the process of locomotion. Furthermore, the presented method appears to be suitable for a fast visual evaluation of stepping patterns in PD subjects. Of note, some of the specific characteristics of Parkinsonism were recognized with the IMU sensors, for example, the unusually long steps that were produced while walking. For the entire group, the conditions of a single 45 min \u201cdry\u201d immersion affected none of the studied gait parameters derived with the help of the smartphone-based IMU sensors; however, in one subject there was a clear increase in cadence, gait speed and turning speed. After a course of repeated DI sessions, some characteristics of the TUG test were improved; however, gait speed did not significantly change. The presented method of gait analysis appears to be suitable for further instrumentation because a smartphone is perfectly suited for integration into IoT-based networks."} +{"text": "Poly(dodecano-12-lactam) is one of the most versatile materials used in the selective laser sintering (SLS) process due to its chemical and physical properties. The present work examined the influence of two SLS parameters, namely, the laser power and hatch orientation, on the tensile, structural, thermal, and morphological properties of the fabricated PA12 parts. 
The main objective was to evaluate the suitable laser power and hatch orientation with respect to obtaining better final properties. The PA12 powders and SLS-printed parts were assessed through their particle size distributions, X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), scanning electron microscopy (SEM), and their tensile properties. The results showed a significant impact of the laser power, while the effect of the hatch orientation is almost unnoticeable when using a high laser power. A more significant determinant of the mechanical properties is the uniformity of the powder bed temperature. The optimum factor levels were achieved at 95% laser power and parallel/perpendicular hatching. Parts produced with the optimized SLS parameters were then subjected to an annealing treatment to induce a relaxation of the residual stress and to enhance the crystallinity. The results showed that annealing the SLS parts at 170 \u00b0C for 6 h significantly improved the thermal, structural, and tensile properties of the 3D-printed PA12 parts. Additive manufacturing (AM), also known as 3D printing or direct digital manufacturing, has become an alternative technology that competes with more mature technologies such as casting and forging in different industrial fields, including the aerospace, automotive, and biomedical fields [2,3,4]. SLS polymers are selected based on the presence of a super-cooling processing window, in which there is a large gap between the crystallization temperature (Tc) and the melting temperature (Tm). Therefore, printing at a temperature slightly below Tm enables the densification of the SLS powder without reaching its melting point, thus limiting the parts\u2019 distortion. Moreover, by maintaining the temperature above Tc, the sintered structure remains in an amorphous phase to prevent rapid crystallization, making the powder material more suitable for the production of the final part. Therefore, the parts need to be maintained within the processing window during the build process and slowly cooled down to room temperature to avoid any deformation and crack formation [22]. Polyamides are the most widely used family of SLS materials. SLS is a complex process that usually requires great effort and control in terms of the powder and of post-processing after fabrication to achieve successful printing and high-quality parts. During the SLS process, the material should be kept at an elevated temperature in the build chamber to avoid any deformation of the printouts. The laser provides the necessary energy to exceed the sintering point, making it possible to form the part with the desired geometry. On the other hand, powder spreading is a crucial step of the SLS process. Controlling the powder quality on the bed affects the quality of the tested parts. The powder should have a good flowability in order to enable the consistent deposition of thin, dense layers of powder. Decreasing the porosity content will increase the mechanical properties. The layer thickness of the SLS process is typically between 100 and 150 \u03bcm. Smooth particles with a high sphericity are thus preferable to obtain parts with a desirable microstructure after sintering [39]. Hofland et al. studied the correlation of the process parameters with the mechanical properties of laser-sintered PA12 parts. The present work aims to analyze the impact of the hatch orientation and laser power on the mechanical, microstructural, and morphological properties of 3D-printed PA12 parts. The effect of the heat treatment on the mechanical properties will also be evaluated and compared to the as-built samples. In this study, a Polyamide 12 (PA12) powder (Precimid1171\u2122) from TPM3D with a density of 0.95 g/cm3 was used, as it is one of the most widely used materials due to its chemical and physical properties. 
The chemical structure of PA12 (PA 2200) is shown in . All samples were printed on a P3200HT SLS system from TPM3D (a Stratasys company) equipped with a 60 W CO2 laser. The software \u201cVisCAM RP\u201d was used to prepare the build volume and slice the models into individual layers before uploading the data to the SLS machine. The main printing parameters used to produce the PA12 samples are shown in . A dynamic image analysis measurement was performed to characterize both the size distribution and the particle shape of the PA12 powder used. This analysis was performed using a Camsizer XT equipped with two digital cameras, including one optimized for the analysis of fine particles. Such a setup enables the measurement of particles ranging between 2 \u00b5m and 8 mm in diameter. Two PA12 powders were analyzed for comparison: one was the as-received powder, while the second was the un-sintered powder taken from the build volume after a single fabrication. The particle size distribution (PSD) of the PA12 powder was identified as a function of percent volume. Furthermore, sphericity was chosen as a shape factor to describe the shape of the particles of the PA12 powder. Fourier-transform infrared spectroscopy (FTIR) was used in this work to analyze the functional groups of the SLS PA12 samples and to collect infrared spectra for the structural analysis. This analysis was carried out using a NICOLET\u2122 IS50 attenuated total reflection (ATR) spectrometer. The conditions of measurement were as follows: spectral region of 4000\u2013400 cm\u22121; spectral resolution of 4 cm\u22121. Differential scanning calorimetry (DSC) is a common tool used for characterizing materials for laser sintering because it determines the crystallinity and quantifies the melting temperature of printed parts. This analysis was carried out on a 6.6 \u00b1 0.1 mg powder sample using a TA Instruments DSC Q20. 
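The degree of crystallinity that DSC yields is conventionally computed from the measured melting enthalpy; a minimal sketch, assuming the commonly cited 209.3 J/g reference enthalpy for a hypothetically 100% crystalline PA12 (an assumption here, check the value used in the text):

```python
def crystallinity(delta_h_m, delta_h_100=209.3):
    """Degree of crystallinity from DSC: Xc = (dHm / dH100%) * 100.
    209.3 J/g is an assumed reference melting enthalpy for fully
    crystalline PA12; substitute the value cited in the study."""
    return 100.0 * delta_h_m / delta_h_100

xc = crystallinity(52.3)  # e.g., a measured melting enthalpy of 52.3 J/g
```

A measured melting enthalpy of about 52 J/g would thus correspond to roughly 25% crystallinity, a typical order of magnitude for semi-crystalline sintered polyamides.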
The measurements were carried out under a nitrogen atmosphere at a flow rate of 50 mL/min. The crystallinity, Xc, was calculated using the equation below: Xc = (ΔHm/ΔH100%) × 100, where ΔHm is the measured melting enthalpy and ΔH100% = 209.3 J·g−1 is the melting enthalpy of fully crystalline PA12. The annealing process was performed as follows: (i) heating ramp of 2 °C min−1 from room temperature to the annealing temperature Ta; (ii) hold at Ta during the annealing time ta (6 h); (iii) cooling ramp of 2 °C min−1 from Ta to room temperature (25 °C); (iv) heating ramp of 10 °C min−1 to 220 °C for characterization. After determination of the appropriate annealing temperature Ta, sample parts were placed directly in a natural convection oven for annealing before mechanical testing. XRD is a powerful tool used to analyze the atomic or molecular structure of materials. XRD was used here to identify the phase constituents of the SLS PA12 powder and samples. Examination of the powder and 3D-printed samples was carried out at different laser powers using an XRD X'PERT PRO MPD. Data were acquired over the range of (2θ) 0–90° with a step size of 0.0017° and a scan rate of 7° min−1. A tensile test was used to establish the tensile properties of the 3D-printed SLS-PA12 specimens, including tensile strength, Young's modulus, and deformation at break. The specimens were designed according to the ASTM D638-14 "Standard Test Method for Tensile Properties of Plastics". Three runs were conducted in series to study the different laser powers and build orientations. Each series comprised six specimens of the D638 type-5 geometry. The load and displacement data recorded by the software were used to calculate the tensile strength, Young's modulus, and deformation at break. The microstructure of both the powder and the fracture surfaces was evaluated by scanning electron microscopy (SEM) using a Quanta 200 ESEM configured with an EDAX (TSL) EDS/EBSD system for phase identification at high pressures. 
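The crystallinity relation above is simple enough to sketch in code. The following is a minimal illustration, assuming the commonly cited reference value of 209.3 J·g−1 for fully crystalline PA12; the helper name `crystallinity_percent` and the printed report are ours, not from the paper.

```python
# Sketch of the DSC crystallinity calculation: Xc = (dHm / dH100%) * 100.
# DELTA_H_100_PA12 is the literature melting enthalpy assumed for 100%
# crystalline PA12; the two fusion enthalpies are the values reported
# later in the text for the as-received and once-used powders.

DELTA_H_100_PA12 = 209.3  # J/g, assumed reference value


def crystallinity_percent(delta_h_m: float,
                          delta_h_100: float = DELTA_H_100_PA12) -> float:
    """Degree of crystallinity in percent from the measured melting enthalpy."""
    return delta_h_m / delta_h_100 * 100.0


for label, dh in [("as-received", 55.62), ("after fabrication", 48.49)]:
    print(f"{label}: Xc = {crystallinity_percent(dh):.1f}%")
```

With the two fusion-heat values quoted in the Results, this reproduces degrees of crystallinity in the mid-20% range, the order of magnitude typical for semi-crystalline PA12.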
As-received PA12 powder, used-once SLS powder, and 3D-printed samples cryogenically fractured in liquid nitrogen were coated with a thin layer of electrically conducting gold (Au) to prevent surface charging. The layer arrangement and powder morphology were observed at an acceleration voltage of 5 kV in high-vacuum mode. The particle size distribution (PSD) has a significant impact on the quality of SLS powder, with an ideal diameter between 20 and 80 µm. However, a large number of small-diameter particles gives the powder a sticky character that limits its application in the SLS process. This particle size decrease was confirmed through a comparison of the percentile values for both powders. The ATR-FTIR spectra of the PA12 samples were recorded to provide information about the infrared bands and their roles. The X-ray diffraction patterns of both PA12 powders are shown in the corresponding figure. PA12 can crystallize within structures of the α and γ phases, where the major γ phase acts as the stable structure. The patterns of the powder after fabrication are almost identical to those of the as-received powder, which confirms the possibility of re-using the powder for SLS after a suitable recycling process. The first thermal transition corresponds to the glass transition (Tg), where the polymer changes to a highly elastic state. The second transition, associated with the melting temperature (Tm), corresponds to the endothermic peak detected at 182 °C. During cooling, a third transition, observed at 151 °C, is attributed to the crystallization (Tc) of PA12, where a rearrangement of the molecular chains takes place to create crystalline lamellae inside the continuous amorphous structure. These results show that both types of PA12 powder are in a semi-crystalline state after the cooling process. 
As shown by the DSC results, crystallization (Tc) must also be avoided as long as possible during processing. The DSC technique was applied to study the glass transition temperature, the melting temperature range, and the degree of crystallinity of the PA12 material. The melting point is 182.7 °C for the as-received powder and 183.3 °C for the powder after fabrication; the respective fusion heat values of 55.62 J·g−1 and 48.49 J·g−1 were measured. After melting, the as-received powder and the powder after fabrication re-solidify with peak crystallization temperatures of 151.9 °C and 149.1 °C, respectively. The solidification temperature is significantly lower than the crystalline melting temperature of PA12; this phenomenon is common in crystalline polymers and is known as super-cooling. The tensile properties were evaluated at different levels of laser power and hatch orientation, and the results are presented in the corresponding figure. While the effects of laser power on tensile strength (TS) are obvious, those linked to hatch orientation are less evident. Isolating them requires a comparison between the 0° and 90° TS values first, followed by the 45° and 90° TS values. As mentioned previously, the parts made at 0° and 90° XY-plane orientation have a nearly identical hatching. This should result in similar TS values, as no other SLS parameters differ, something not observed here. Parts made at 90° XY-plane orientation exhibit TS values 2 to 10% lower compared to those at 0°, with the greatest difference observed at a low laser power. This decrease in TS for the 90° parts could be explained by their position on the build platen during fabrication relative to the 0° parts. It was shown that a powder layer of uniform temperature is needed to achieve builds of multiple parts with similar mechanical properties. 
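For reference, the tensile quantities compared above (TS, Young's modulus, elongation at break) follow from the standard engineering stress-strain relations. The sketch below is illustrative only: the specimen cross-section, gauge length, and load values are assumptions, not data from this study.

```python
# Standard engineering relations behind the reported tensile properties.
# All numeric inputs below are invented for illustration.

def tensile_strength(max_force_n: float, cross_section_mm2: float) -> float:
    """Ultimate tensile strength sigma = F_max / A0 (N/mm^2 == MPa)."""
    return max_force_n / cross_section_mm2


def strain_at_break(elongation_mm: float, gauge_length_mm: float) -> float:
    """Engineering strain at break, as a percentage of the gauge length."""
    return elongation_mm / gauge_length_mm * 100.0


def youngs_modulus(stress_a: float, stress_b: float,
                   strain_a: float, strain_b: float) -> float:
    """Slope of the initial linear stress-strain region (strains as fractions)."""
    return (stress_b - stress_a) / (strain_b - strain_a)


# Illustrative, roughly D638 type-5-sized specimen: 3.18 mm x 2.0 mm section.
sigma = tensile_strength(290.0, 3.18 * 2.0)   # MPa
eps = strain_at_break(1.9, 9.53)              # %
E = youngs_modulus(5.0, 25.0, 0.003, 0.015)   # MPa
```

The exact specimen dimensions and data-reduction details for type-5 specimens are prescribed by ASTM D638-14 itself.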
The parts made at a 45° XY-plane orientation exhibit the overall lowest TS values compared to other orientations made under the same conditions, by as much as 7.5%. As these 45° parts were positioned at the periphery, a gradient in the powder bed temperature could partially explain this. Since the 45° and 90° parts were interspersed at the periphery, both orientations should exhibit similar TS values, which was not the case here. This difference could be attributed to the hatching orientation. While hatching is conducted alternately parallel/perpendicular to the applied load in the case of the 90° parts, hatching in the 45° parts is performed at an angle relative to the applied load, in a fashion similar to a well-known effect in FDM. The elongation at break as a function of the build orientation and laser power was also evaluated. In order to further understand the influence of laser power on the properties of SLS-PA12, the fracture surfaces of the SLS-PA12 samples were analyzed. (a) Differential Scanning Calorimetry. The first thermal transition corresponds to the glass transition temperature (Tg). The second thermal transition, associated with the melting temperature (Tm), is the endothermic peak detected at 176 °C. During cooling, a third, exothermic transition at 146 °C is attributed to the crystallization of PA12, where a re-arrangement of molecular chains takes place to create crystalline lamellae inside the continuous amorphous structure. From these results, it can be concluded that PA12 kept its semi-crystalline character after the cooling process. A DSC analysis of the printed PA12 parts at different laser powers was performed to determine the impact of this controlling parameter on the thermal characteristics of the PA12 samples manufactured by the SLS process, comparing the melting temperature (Tm), crystallization temperature (Tc), and glass transition temperature (Tg) of the PA12 printed at different laser powers. 
The decrease in Tg from 51.7 to 44.07 °C as a result of sintering correlates strongly with the degree of crystallization of the sample. This can be attributed to the gain in mobility of the polymer chains, as they are not partly anchored inside the crystalline domain. However, the degree of crystallization decreased from 46.62%. (b) X-ray Diffraction. Many studies investigating polymeric materials have considered that a heat-treatment post-processing can improve the material properties and crystallinity of SLS parts made from Nylon 12. The printed parts were annealed for six hours at various temperatures to allow for the relaxation of the residual stress generated during the printing process. The annealed parts were analyzed by DSC, using the first heating cycle to characterize the thermal history experienced during the annealing process. During this heating cycle, the material reached its melting temperature (Tm), which was used to characterize the melting enthalpy and then the degree of crystallinity (Xc) generated during the annealing cycle; Tc and all other thermal transitions were measured as well. The Tg value was still higher than that of the unannealed printed parts. It can also be noted that raising the annealing temperature from 130 to 170 °C results in an increase in the heat flow of melting from 65.51 to 76.51 J·g−1. An increase in the relative degree of crystallinity was also observed, from 31.29% to 36.55%. This crystallinity enhancement could be the result of a phenomenon called secondary crystallization, which increases the lamellar fraction of PA12. Moreover, the increase in Tg caused by annealing correlates strongly with the degree of crystallinity of the printed PA12 parts. Annealing thus induces strong intermolecular interactions between polymer chains. 
Thus, this increase is attributed to the loss of mobility of the polymer chains, as they are partly anchored inside the crystalline region. The annealed specimens were then subjected to mechanical testing to record the Young's modulus, tensile strength, and strain at break. From the above results, it is confirmed that high-temperature annealing (170 °C) yields the best improvement in Young's modulus, tensile strength, and elongation at break over the unannealed parts. It can be concluded from these results that the annealing temperature had a higher percent contribution to the mechanical performance than the duration of annealing. This confirms the importance of the annealing process for achieving proper chain crystallization, thus enhancing the mechanical properties of 3D-printed parts. This study evaluated the effect of laser power and hatch orientation on the tensile properties and morphology of the SLS PA12-produced parts. The main objective was to identify the suitable laser power and hatch orientation leading to better mechanical properties and high-quality parts. Different methods were used to study the SLS parts by considering the morphological, structural, and mechanical properties using XRD, FTIR, DSC, tensile testing, and SEM characterizations. The results confirmed the significant impact of laser power, while the effects of hatching were almost unnoticeable when using a high laser power. A more significant condition is the uniformity of the powder bed temperature, a factor that is seldom considered. This needs to be accounted for because of its effects on the mechanical properties. However, the operator has little recourse with respect to these conditions, which are strongly dependent on the quality of the SLS system. Operating at a high laser power minimized the presence of spherical particles normally related to un-melted powder and yielded an improved microstructure. 
It was also observed that reducing the laser power to 75% decreases the mechanical properties, with the parts exhibiting spherical particles and a poor microstructure. Heat treating the SLS-produced PA12 parts showed the positive impact of annealing, especially at 170 °C, on the tensile properties. This can be related to changes in the microstructure of the PA12 parts."} +{"text": "Halyomorpha halys, native to East Asia, has become one of the most damaging agricultural pests worldwide. After first being detected in Europe (in Switzerland), it is now widely spread throughout the European continent and many countries in Eurasia. Since its first appearance in Slovenia in 2017, it has caused extensive damage to fruit and vegetable production. Investigating the biology and behavior under local environmental conditions is the first step towards effective pest control. Information on the number of generations per year is crucial for anticipating critical phases of pest development and for adapting control measures that target the pest's vulnerable life stages. A 3-year study (2019–2021) on the biological parameters of H. halys was performed outdoors in Nova Gorica (western Slovenia), confirming that in the sub-Mediterranean climate this pest has two overlapping generations per year. The net reproductive rates observed over the period studied indicate growing populations. The highest population growth was recorded in 2019, when the net reproductive rate of increase (R0) reached 14.84 for the summer generation and 5.64 for the overwintering generation. These findings reflect the current situation in Slovenia, where the growing populations of H. halys have been causing considerable damage to agricultural crops since 2019. The invasive brown marmorated stink bug Halyomorpha halys, native to East Asia, has become one of the most serious pests for agricultural crops worldwide. 
First detected in Europe (in Switzerland), the insect is now widely found across the European continent and many Eurasian countries. Since its first appearance in Slovenia in 2017, it has caused considerable damage to fruit and vegetable production. Understanding the biology and behavior under the local environmental conditions is of key importance for effective pest management. Knowledge of the voltinism of the species is crucial to anticipate critical phases of pest development and to adapt control measures that target the vulnerable life stages of the pest. A 3-year study (2019–2021) of H. halys biological parameters was performed outdoors in Nova Gorica (western Slovenia), confirming that in the sub-Mediterranean climate this pest has two overlapping generations per year. The net reproductive rates observed in the studied period indicate growing populations. The highest population growth was recorded in 2019, when the net reproductive rate of increase (R0) reached 14.84 for the summer generation and 5.64 for the overwintering generation. These findings match the current situation in Slovenia, where increasing populations of H. halys and severe crop damage have been observed since 2019. In the last decade, the invasive brown marmorated stink bug Halyomorpha halys, native to East Asia, has rapidly become a globally distributed invasive species by stowing away on traded goods and with traveling people. Apparent mortality (qx = dx/lx) was calculated as the fraction of individuals that died in a specific stage (dx) out of the total number of individuals entering the same stage (lx). Real mortality (rx = dx/l0) was calculated as the fraction of individuals that died in a specific stage (dx) out of the total number initially entering the first stage of the life table (l0). The kx-values, which express the intensity of mortality, are the negative logarithm of the complement of the marginal attack rate (mx) for a factor: kx = −log(1 − mx). 
When only one mortality factor occurs in a stage, or where more than one occurs and they act sequentially, the apparent mortality (qx) equals the marginal attack rate (mx). Generation mortality (Ks) is the sum of all mortality-factor k-values and represents the total mortality for the generation: Ks = keggs + kN1 + kN2 + kN3 + kN4 + kN5. The net reproductive rate of increase (R0), which describes the number of times the population increases or decreases from one generation to the next, was calculated for H. halys according to the formula: R0 = realized progeny/number of eggs; for example, R0 (summer generation) = realized progeny/number of eggs in the summer generation (l0 = 760). R0 values for growing populations are greater than 1, whereas values for declining populations are less than 1. The relationships between temperature (x) and development time (y) were determined using a simple linear regression analysis (y = kx + n). The overwintering survival rate of adults was assessed in the 2018/2019, 2019/2020 and 2020/2021 seasons. In all 3 years, single specimens emerged from the overwintering cages at the end of January and in the first week of February; however, they were unable to survive the cold weather conditions. In all 3 years, the noticeable adult emergence began at the end of March, when average daily temperatures exceeded 12 °C, and peaked at the end of April, with average daily temperatures above 15 °C. The overwintering survival rates in the three consecutive years were 39.9%, 40.3% and 36.5%, respectively. One third to one half of the adults that survived the winter died before the mating season. The total overwintering mortality of adults in the 3 years was 78.8%, 71.2%, and 80.8%. Oviposition began when average daily temperatures reached 17 °C; the earliest egg laying was recorded on 11 May 2020. In the other 2 years, the first egg laying was observed at the end of May. The oviposition period of the overwintering generations lasted 12–13 weeks. 
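The life-table quantities defined above (qx, rx, kx, Ks and R0) can be sketched as follows. Only l0 = 760 eggs for the summer generation is taken from the text; the stage-by-stage cohort counts and the realized-progeny figure are hypothetical placeholders.

```python
import math

# Sketch of the life-table bookkeeping described above. The cohort counts
# entering each stage (eggs, N1..N5, adults) are invented for illustration.

def apparent_mortality(dx: int, lx: int) -> float:
    """qx = dx / lx: deaths in a stage over the number entering that stage."""
    return dx / lx


def real_mortality(dx: int, l0: int) -> float:
    """rx = dx / l0: deaths in a stage over the number entering the first stage."""
    return dx / l0


def k_value(mx: float) -> float:
    """kx = -log10(1 - mx), with mx the marginal attack rate."""
    return -math.log10(1.0 - mx)


entering = [760, 600, 430, 300, 210, 150, 110]  # assumed survivors per stage
ks = 0.0
for stage in range(len(entering) - 1):
    dx = entering[stage] - entering[stage + 1]
    qx = apparent_mortality(dx, entering[stage])
    ks += k_value(qx)  # one sequential factor per stage, so mx == qx

# Because the k-values are additive, generation survival is 10**(-Ks),
# which telescopes back to 110/760 for this made-up cohort.
survival = 10 ** (-ks)

# Net reproductive rate of increase: realized progeny / initial eggs.
r0 = 1200 / 760  # hypothetical realized progeny; R0 > 1 means growth
```

The additivity of the k-values is the reason they are preferred over raw mortalities when comparing the contribution of stages across generations.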
In 2020 and 2021 the last eggs were laid in the first half of August, while in 2019 the overwintering generation laid eggs from the end of May until the end of the third week of August. The time elapsed between the exit from overwintering and the first egg laying was 39.43 ± 3.28 days. The shortest time was recorded in 2020, when females needed an average of 33.2 days at an average temperature of 15.9 °C during this period. The pre-oviposition period was generally shorter at higher temperatures. Females of the summer generation required 14.0 ± 1.11 to 18.6 ± 0.58 days to start oviposition after their emergence as adults. The accumulation of degree days was calculated using the lower threshold of 12.2 °C. The overwintering females (n = 10) produced in total 1742 eggs, with an average lifetime fecundity of 174.2 ± 19.07 eggs per female. The females of the summer generation (n = 10) laid in total 760 eggs; the average lifetime fecundity was 76.0 ± 11.73 eggs per female. The females of the overwintering population laid 3–11 egg masses, while the females of the summer generation laid 1–6 egg masses. Mating and oviposition were not observed among newborn adults of the summer generation, and the development of a third generation did not occur. The highest fecundity of our experimental populations was recorded in 2019, when the net reproductive rates of increase (R0) of the summer and overwintering generations were 14.84 and 5.64, respectively. The total generation mortality of the summer generation (89.86%) was high. As H. halys is a recently established invasive pest in Europe, there is still a lack of basic biological knowledge in the newly invaded areas, which is essential for the development of management solutions. Three studies of H. halys biology have been carried out in Europe so far, namely Swiss research that reported the existence of one generation/year in the area of Zurich, and Italian and Russian studies reporting that H. halys develops two generations/year. In our study, H. halys developed two generations/year for 3 consecutive years. The favorable temperature conditions in Nova Gorica (western Slovenia), with monthly average temperatures of 12 °C in April and 16.7 °C in May, allowed for the early emergence of adults from the winter shelters and the early start of oviposition from mid-May to late May. Similarly, an early onset of oviposition has been reported in the province of Modena (Italy), where H. halys populations are also bivoltine. Subsequently, the development of the nymphs was driven by the high summer temperatures of June and July, which accelerated the emergence of adults. The first adults of the summer generation appeared from mid-July onwards, which is also in good agreement with the Chinese and Italian results, where the new-generation adults emerged in the beginning of July. In addition to temperature, which is the main driver of insect development, photoperiod is a crucial factor in determining the developmental pattern and the entry of H. halys into reproductive diapause. The overall overwintering mortality rates observed in the 3-year period ranged from 71 to 81%, slightly lower than the results for northern Italy (86%), but much higher than those recorded in the Swiss survey (39%). Temperature is known to play a critical role in the biology of H. halys, and bivoltine populations of H. halys have been reported to show higher overwintering mortality than univoltine populations. The pre-oviposition period of H. halys in our study averaged 39.43 ± 3.28 days for the overwintering population, with an average of 174.2 ± 19.07 eggs laid per female, and 16.63 ± 1.37 days for the summer generation, with an average of 76.0 ± 11.73 eggs laid per female. 
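Degree-day accumulation against the 12.2 °C lower threshold used above can be sketched with the simple averaging method; the daily mean temperatures below are invented for illustration.

```python
# Sketch of degree-day (DD) accumulation with a lower development threshold.
# The threshold of 12.2 C is from the text; the week of daily mean
# temperatures is an illustrative assumption.

LOWER_THRESHOLD_C = 12.2


def daily_degree_days(mean_temp_c: float,
                      threshold_c: float = LOWER_THRESHOLD_C) -> float:
    """Averaging method: a day contributes max(0, Tmean - Tthreshold) DD."""
    return max(0.0, mean_temp_c - threshold_c)


def accumulate(mean_temps_c: list[float]) -> float:
    """Total degree-days accumulated over a sequence of daily means."""
    return sum(daily_degree_days(t) for t in mean_temps_c)


week = [11.5, 13.0, 15.9, 17.2, 12.2, 10.0, 18.4]  # deg C, assumed
total = accumulate(week)
```

Summing such daily contributions from a biofix date until the total reaches the generation requirement (on the order of 540–590 DD in the populations compared below) is how the degree-day comparisons in the text are made.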
Fecundity was much lower than that observed in the native area in Asia, where a single female can produce over 480 eggs in her lifetime. The importance of temperature for the fecundity of H. halys was also pointed out in a study conducted by Govindan et al., who found that high temperature accelerates the development of H. halys but reduces the fecundity of females, giving an additional explanation for the lower fecundity obtained in our study. Here we present data on development time obtained at naturally fluctuating temperatures under outdoor conditions. Over a 3-year period, the developmental time and the average temperature during the development period were calculated for all adults of the first and second generation. In line with the aforementioned studies, the development time from egg to adult was closely linked with temperature. In general, the development time decreased with increasing temperatures. The shortest development time recorded was 38 days, at an average temperature of 25 °C during the development period. This result is fully consistent with the findings of the laboratory experiments conducted by Nielsen et al., based on the shortest development time and the highest survival obtained in the experiments. The opposite trend was observed in specimens that completed development in autumn: with decreasing temperatures, the development time was extended. The last specimens able to complete development hatched in late August and molted into adults in early November. At the average temperature of 16.8 °C measured during this period, it takes 67 to 81 days to complete development. The lowest temperature at which H. halys was able to complete development, as observed in our study (16.8 °C), is consistent with the findings of Nielsen et al., Haye et al. and Govindan et al. Considering the minimum temperature threshold of 12.2 °C, the degree-day requirement of the Slovenian H. halys populations is comparable to that of the Swiss population (588.24 DD) and to those of the United States, where 538 DD are needed with a minimum temperature threshold of 14.14 °C. Since H. halys was first found in Slovenia in 2017, increasing populations and crop damage have been observed. The net reproductive rates (R0) for the Slovenian populations in 2019 were 14.8 for the first generation and 5.6 for the second generation. Since the research was performed under outdoor conditions, it provided a clear view of temperature-dependent development and population growth. Considering that temperature directly affects insect population dynamics by changing the rates of development, reproduction and mortality, the biological parameters of H. halys are expected to vary within years. Although our study did not include limiting factors, such as the contribution of natural enemy-induced mortality, recent findings suggest that biological control is expected to play an important role in pest suppression over the coming years. Since H. halys has only recently invaded Slovenian territory, there is still a lack of natural enemies that can effectively limit its population growth and spread. So far, the most promising natural enemies of H. halys found in Slovenia are the egg parasitoids Anastatus bifasciatus (Geoffroy) and Trissolcus mitsukurii (Ashmead), which could contribute to H. halys suppression. Trissolcus mitsukurii is a non-native parasitoid and currently has a limited distribution in Europe, but it has a high potential to expand its range and help mitigate the negative effects of H. halys in invaded areas. The management of H. halys remains a very challenging task, and many researchers have worked hard over the past decade to provide alternative management tactics that are both effective and environmentally acceptable. 
In this 3-year research study, key phenological data and information on the biological parameters of H. halys were obtained, which helps to optimize control measures and adapt them, especially in relation to the most vulnerable life stages of the pest. Regardless of the control method used, its effectiveness depends on optimal timing, and this must be taken into consideration both when applying pesticides and when releasing natural enemies. Knowledge of the phenology of local populations makes it possible to predict the occurrence of particular development stages in the field. The results obtained in this work could also be applied in the development of forecasting models as important decision-making tools, contributing to a more efficient management of this globally invasive species. The present study confirmed that Slovenia has favorable climatic conditions for the development of two generations per year, which causes high population growth and increases the risk of damage to agricultural production."} +{"text": "The overall performance for H2/CO2 and H2/CH4 separation transcends the Robeson upper bounds and ranks among the most powerful H2-selective membranes. The versatility of this strategy is demonstrated by synthesizing different types of LA-α-CD-in-COF membranes. Covalent organic framework (COF) membranes have emerged as a promising candidate for energy-efficient separations, but the angstrom-precision control of the channel size in the subnanometer region remains a challenge that has so far restricted their potential for gas separation. Herein, we report an ultramicropore-in-nanopore concept of engineering matreshka-like pore-channels inside a COF membrane. In this concept, α-cyclodextrin (α-CD) is in situ encapsulated during the interfacial polymerization, which presumably results in a linear assembly (LA) of α-CDs in the 1D nanochannels of the COF. 
The LA-\u03b1-CD-in-TpPa-1 membraneshows a high H Approaches including staggered stacking,39 oriented growth,40 and hybridizationwith other microporous nanomaterials42 have beenexplored to reduce the effective pore size of COF membranes towardthe ultramicroporous range, mainly aiming at improving the molecularsieving mechanism. Even so, to realize a tuning of the channel sizewith Angstrom-precision still remains a great challenge, and therefore,the COF membrane performance for gas separation is often limited.Covalent organic frameworks(COFs) are an emerging class of porous crystalline polymers connectedby organic building units through covalent bonds into highly orderedand periodic network structures.44 Such nanocomposite materials often possess synergistic functionalities,providing significantly enhanced properties in comparison to thoseof their individual counterparts.48 COFs having a 1D pore channelor a 3D \u201ccage-like\u201d pore system are ideal host matrixesfor accommodating nanoentities such as metal nanoparticles, quantumdots, organic and metal\u2013organic molecules, biomacromolecules,metal\u2013organic polyhedra, porous organic cages, and metal\u2013organicframeworks (MOFs).52 The encapsulation of these nanoentities in COFs has led to the developmentof functional materials for adsorption and separation, sensing, heterogeneouscatalysis, energy harvesting, and molecular release systems.55 One representative group is the \u03b2-CD (\u03b2-cyclodextrin)-decoratedCOF nanochannels which enable an enantioselective transport of aminoacids.56 Another example is the COF nanocompositemembrane which contains a unit cell-sized MOF, exhibiting a more precisemolecular sieving for selective H2 separation.57 Thus, the construction of hybrid COFs as host\u2013guestcomplex offers a rich playground to design gas separation membranematerials.Host\u2013guest inclusion complexes are an interesting configurationin which a small \u201cguest\u201d molecule is included withinthe 
interior of a porous macromolecular "host" compound. The pore-in-pore strategy provides another way to tune the pore environment of COFs for advanced molecular-separation membranes. Inspired by the host–guest hybrid nanocomposites, in this study we present a pore-in-pore engineering concept of packing linear cyclodextrin (CD) polymers into the 1D nanochannels of a 2D COF to fabricate hierarchically structured COF membranes. These membranes have matreshka-like ultramicropore-in-nanopore channels consisting of fast and H2-preferential transport pathways, which are expected to exhibit an excellent H2 permeance and high selectivity in mixed-gas permeation. The α-CD is a cylinder-shaped macromolecule composed of six glucose units, which not only has a tiny cavity diameter (0.47–0.53 nm) but also a desirable molecular dimension of about 1.37 nm, smaller than the 1D nanopore channels of the TpPa-1 (∼1.8 nm).59 Moreover, the α-CD molecules can be assembled into linear polymers through a preprogrammed cross-linking reaction (linear assembly, LA).61 The packing of linear α-CD polymers into the TpPa-1 nanochannels would create numerous selective transport pathways for the targeted gas components in the resulting membrane. This strategy is demonstrated via the preparation of the LA-α-CD-in-TpPa-1 membrane, which was treated with an alkaline solution to allow the reaction between the encapsulated α-CD molecules and the epoxide ring of ECH, resulting in glyceryl bridges between neighboring α-CD molecules in the confined space of the 1D COF channel. It should be noted that reactions between ECH and non-reacted COF monomers might also happen, because the COF structure is rather difficult to get 100% perfect during membrane formation. These potential reactions would immobilize α-CDs onto the COF pore walls and promote the formation of linearly oriented α-CDs inside the COF pores. 
Finally, the LA-\u03b1-CD-in-TpPa-1membrane was obtained after a rinsing treatment.To realizethe pore-in-pore concept, \u03b1-cyclodextrin (\u03b1-CD) is selectedas the building block for ultramicropore channels inside the 2D ketoenamine-linked1,3,5-triformylphloroglucinol (Tp) 2O3 diskbecame crimson after the COF synthesis due to the characteristic colorof TpPa-1 mapping , indicating no detectable LA-\u03b1-CD-in-TpPa-1formed in the bulk ceramic substrate.The white \u03b1-Al surface 1a. MoreoFigure S1) and the prepared self-supporting COF layer , mainly due to the thinner thicknessof about 1.5 \u03bcm and much smaller amount compared to the Al2O3 disk which led to the dominating diffractionsignals of the Al2O3 corundum substrate.63 There are no obvious characteristicdiffraction signals of \u03b1-CD in the LA-\u03b1-CD-in-TpPa-1 membrane, probably due to the lowcontent (about \u223c11.8 vol%) and molecular-level distributionof \u03b1-CD molecules assembled inside the 1D nanochannels of TpPa-1.Otherwise, the diffraction peaks of the \u03b1-CD aggregates withthe cage-type structure65 should have been detectedas we have proven by studying a powder mixture of \u03b1-CD and TpPa-1. Attenuated Total Reflectance-FourierTransformed InfraRed (ATR-FTIR) spectra . The intensity of these two peaks in the LA-\u03b1-CD-in-TpPa-1membrane is relatively weak because of the low content of LA-\u03b1-CDin the TpPa-1 layer. Spatial arrangement of the LA-\u03b1-CD in themembrane could be exposed by fluorescence spectroscopy on the basisof the host\u2013guest interaction between LA-\u03b1-CD (and/or\u03b1-CD) and a fluorescence probe such as rhodamine B. 
It followsfrom Figure S5 that red fluorescence can be clearlyobserved on the whole surface and cross-section of the LA-\u03b1-CD-in-TpPa-1membrane as compared to the \u03b1-CD-free TpPa-1 membrane, evidentlyindicating the existence of LA-\u03b1-CD (and/or \u03b1-CD).As shown in Figure S6b corresponding to Figure S6a) both display the evenly distributedether linkages (C\u2013O\u2013C) over the LA-\u03b1-CD-in-TpPa-1membrane as compared to the TpPa-1 powders and TpPa-1 membrane without \u03b1-CD , confirming the uniformity of LA-\u03b1-CD (and/or\u03b1-CD) incorporated inside the TpPa-1 layer on a macroscopicscales. The full XPS survey spectra indicate the presence of nitrogen, carbon, and oxygen in the LA-\u03b1-CD-in-TpPa-1membrane. The high-resolution spectrum of deconvoluted N1s shows the characteristicenergy peaks of TpPa-1 matrix. From the high-resolution spectra ofO1s . It is worth noting thatthe BET surface areas of both TpPa-1 and LA-\u03b1-CD-in-TpPa-1 arecomparable to that of the reported TpPa-1 powders and TpPa-1 membranesin the literature.67 However, due to the large amounts of staggered and thus interruptedpore channels, the nitrogen adsorption is affected, and the LA-\u03b1-CD-in-TpPa-1and TpPa-1 layers usually show relatively low BET values comparedto powder samples . Notably,the experimental pore size distribution of the LA-\u03b1-CD-in-TpPa-1layer is concentrated in the ultramicroporous region of 0.3\u20130.5nm compared with the \u03b1-CD-free TpPa-1 . This finding could serve as compelling evidence for theLA of \u03b1-CD molecules to be the linear channel-type \u03b1-CDstructure in the compact TpPa-1 layer. These results collectivelydemonstrate the successful generation of the ultramicropore-in-nanoporeLA-\u03b1-CD-in-TpPa-1 membrane with the designed matreshka-likepore-channel structure (Raman and X-ray photoelectron spectroscopy (XPS)were performedto further detect the membrane structure. 
Surface and cross-sectionalRaman mappings .However, inside the confined space of the 1D channels of 2D COF, itcould be rationally inferred that the most possible case is the statisticallychannel-type linear arrangement in which \u03b1-CD molecules areconnected together and stacked on top of each other (head-to-heador head-to-tail orientation) by cylindrical columns . Moreover, the \u03b1-CD molecules in the linearLA-\u03b1-CD polymers may not be strictly aligned. In addition, determiningthe number of \u03b1-CD molecules assembled is indeed interestingbut also extremely difficult mainly due to the tiny size and contentwhich is often below the detection limit. We have tried a varietyof methods such as high-resolution scanning tunneling microscopy (STM)to figure out the specific shape of LA-\u03b1-CD in the membrane,but the results were negative owing to the low image contrast causedby the similar elemental composition to the TpPa-1. Despite this,whatever the length , the pore-in-porestructures induced by various channel-type LA-\u03b1-CD will be conduciveto creating selective transport pathways in the 1D pore channels ofCOF for the targeted gas molecules such as H2 through themembrane.It is worth mentioning that there are several possiblecases ofstructural arrangement for the LA-\u03b1-CD molecules . The fluxes of the single gases H2, CO2, CH4, C3H6, and C3H8 as well as equimolar (1:1) binarymixtures of H2 with CO2, CH4, C3H6, and C3H8 were testedat room temperature (298 K) and 1 bar, respectively. It can be seenthat the LA-\u03b1-CD-in-TpPa-1 membrane shows a single componentH2 permeance of 3077.3 GPU, which is much higher than thoseof the other gases . Thisdemonstrates the superior H2-permselective properties ofthe LA-\u03b1-CD-in-TpPa-1 membrane, which is expected to displaya desirable separation performance in mixed-gas permeation. 
The real selectivities (or separation factors) of the LA-\u03b1-CD-in-TpPa-1 membrane for equimolar H2/CO2, H2/CH4, H2/C3H6, and H2/C3H8 gas pairs can reach 34.8, 38.1, 144.6, and 169.2, respectively. A significant improvement in separation selectivity is observed as compared to that of the TpPa-1 membrane without LA-\u03b1-CD. Meanwhile, the LA-\u03b1-CD-in-TpPa-1 membrane still has a high H2 permeance of \u223c3000 GPU during the mixed-gas permeation. This result further suggests the formation of an ultramicroporous structure with fast and selective transport channels for H2 in the LA-\u03b1-CD-in-TpPa-1 membrane, thereby leading to only a small mass transfer resistance. We also investigated the effect of the \u03b1-CD concentration in the precursor solution on the membrane performance. As the \u03b1-CD concentration increases, the H2 permeance decreases gradually. Simultaneously, the separation selectivity of H2/CH4 went up first and then started to descend when the \u03b1-CD concentration reached 2.1 mg/mL. A higher concentration of \u03b1-CD makes the COF layer more rigid and brittle, which is prone to crack defects during the solvent evaporation and drying process, resulting in a drop of selectivity. Gas-separation performance was measured following the Wicke\u2013Kallenbach method; the separation of H2/CO2 and H2/CH4 mixtures was scarcely deteriorated. Considering practical applications, gas permeation was also conducted at higher temperature to investigate the thermal stability of our membrane. We find a gradual decline of selectivity in H2/CO2 separation with the increase of testing temperature, which is due to the stronger activated diffusion of CO2 compared to H2 . Damage to the membrane structure, in terms of a thermal decomposition of TpPa-1 and \u03b1-CD, can be excluded from the data. The membrane remains stable at 180 \u00b0C, and no performance degradation was observed after a continuous operation over 70 h .
In contrast, the \u03b1-CD-in-TpPa-1 membrane without LA chain formation through ECH treatment was also prepared and subjected to a long-term gas-permeation study. Both the H2 permeance and the separation selectivity of the H2/CO2 mixture decline during the testing process . A shifting and joggling of the unbound \u03b1-CDs inside the COF nanopores interrupts and blocks the selective gas transport through the \u03b1-CDs, which slowly form nonselective diffusion channels in the membrane. This finding also indicates that an LA is critical and necessary to obtain a robust membrane for selective H2 separation following our pore-in-pore concept. In addition, the LA-\u03b1-CD-in-TpPa-1 membrane exhibits a good running stability during a 60 h long continuous gas-permeation measurement. With increasing feed pressure, the permeances of both H2 and CO2 increased, and meanwhile the H2/CO2 separation factor decreased first from 33 at 1 bar to 17 at 1.2 bar and then leveled off with the further increase of feed pressure . The decrease of separation selectivity is probably related to the flexibility of the channel-type LA-\u03b1-CD structure, which might slightly dilate the transport channels formed between the inner wall of the COF pore and the exterior wall of LA-\u03b1-CD under pressurization, causing the penetration of more CO2 molecules. It is noteworthy that cases such as structural damage and defects could be reasonably ruled out because the LA-\u03b1-CD-in-TpPa-1 membrane exhibited an excellent pressure resistance even at 6 bar in a cross-flow nanofiltration test by using water and Na2SO4 aqueous solution as the feed, respectively . Moreover, the rejection rate can reach >99% toward various water-soluble organic dyes with a molecule size larger than 1.1 nm and also >80% toward monovalent, divalent, and trivalent ions with hydrated ion diameters ranging from 0.6 to 1 nm .
However, under the same conditions, the TpPa-1membrane shows low rejections for some dyes such as acid fuchsin (29.2%),rose bengal (36.8%), and methylene blue (8.3%) and also exhibits lowrejection for some salts such as NaCl (48.3%). The obtained pore-sizedistributions by rejectingneutral solutes of polyethylene glycol (PEG) with different molecularweight verified the ultramicropores (about 0.44 nm) of the LA-\u03b1-CD-in-TpPa-1membrane. In addition, the possibility of LA-\u03b1-CD\u03b1-CD-in-TpPa-1instead being a mixed matrix COF membrane with \u03b1-CD inclusionscan be rationally excluded based on the effective nanofiltration performance. These resultsfurther indicate the compacted structure and narrowed pore size inthe LA-\u03b1-CD-in-TpPa-1 membrane after incorporation of LA-\u03b1-CD.Despite a decline, the value of H2/CO2 separationselectivity keeps above 15 with the pressurization, and also the separationselectivity could be recovered to the previous value of about 30 afterrelease of the pressure. These findings suggest the good pressure-resistantproperty of the host\u2013guest LA-\u03b1-CD-in-TpPa-1 membrane.Besides temperature, feed pressureis also an important factorinfluencing the membrane performance. For our LA-\u03b1-CD-in-TpPa-1membrane, the permeances of both H2/CO2 and H2/CH4 versus H2 permeance for our LA-\u03b1-CD-in-TpPa-1membrane and other types of membranes (see the detailed comparisonin Tables S3 and S4). In contrast to othermembranes, the LA-\u03b1-CD-in-TpPa-1 membrane exhibits high valuesin terms of both permeance and selectivity, demonstrating an anti-trade-offphenomenon. The overall performance far exceeds the Robeson upper-boundlimits for H2/CO2 and H2/CH4 mixtures,68 respectively, and belongsto the most powerful COF membranes. 
The excellent performance togetherwith the good stability could also provide circumstantial evidenceof fast and H2-preferential transport pathways within thelinear \u03b1-CD polymer-tailored COF nanochannels.Figures S27\u2013S29) by using a similarsynthesis protocol as for the LA-\u03b1-CD-in-TpPa-1 membrane. Thegas permeation measurements were conducted at room temperature byusing an equimolar H2/CH4 mixture as feed . It can be seen that all of the membranesexhibit a very good H2 permeance higher than 2800 GPU.The H2/CH4 separation selectivities of the LA-\u03b1-CD-in-TBPa-1membrane, LA-\u03b1-CD-in-TBBD membrane, and LA-\u03b1-CD-in-TpBDmembrane could reach 30.6, 20.6, and 21.5, respectively. Moreover,their comprehensive performances are also competitive and higher thanmost of the existing COF gas membranes without \u03b1-CDs,30 illustrating the potential and broad applicability of this pore-in-porestrategy.To demonstratethe versatility of the pore-in-pore design concept,we prepared three other types of LA-\u03b1-CD-in-COF membranes . The simulationsshow two preferential transport pathways for H2 moleculesin the LA-\u03b1-CD-in-TpPa-1 channels, and at the LA-\u03b1-CDs surface .These strong adsorptive interactions retard the transport of CO2 molecules and result in a low CO2 permeation,simultaneously narrowing this gap for the mobile H2 molecules.As shown in 2 molecules (18 out of 30 molecules)in the feed chamber could pass through the LA-\u03b1-CD-in-TpPa-1membrane to the permeate chamber within 1000 ps, accompanied by only13.3% of the CO2 molecules (4 out of 30 molecules), demonstratinga faster H2 permeation. Further, the H2 diffusioncoefficient could be calculated from the linear fitting of the MSDin 2 diffusivity. Thisfinding signifies that the simulated separation selectivity is 1.8times the experimental measurement (about 35). On the contrary, inan TpPa-1 channel without LA-\u03b1-CD , the transport of CO2 molecules atthe pore wall is retarded. 
However, at the center of the pore channelsthe diffusion of CO2 molecules is not affected. In thiscase, a large proportion of CO2 molecules together withH2 molecules can pass through the membrane. As expected,47% of the H2 molecules (14 out of 30 molecules) passedthrough but were accompanied by 43% of the CO2 molecules(13 out of 30 molecules) at the same time. Here, the calculated H2 diffusion coefficient is about 23 times larger than the CO2 one , which is 1.6times the corresponding experimental measurement (D(H2)/D(CO2) \u2248 14.5).However, a difference in performance between experimental measurementand MD simulation can be expected since the modeling structure cannotbe strictly consistent with that of the real membrane structure onthe macroscopic scale. For example, the low content of \u03b1-CDand the not strictly aligned ultramicropore structure will probablyresult in relatively low H2/CO2 selectivitiesin the experimental measurements. Despite this, the quotients of measuredselectivity and simulated selectivity for the LA-\u03b1-CD-in-TpPa-1membrane (\u223c1.8) and TpPa-1 membrane (\u223c1.6) are veryclose to each other. Meanwhile, the result of the simulated selectivityfor the LA-\u03b1-CD-in-TpPa-1 membrane being higher than that forthe TpPa-1 membrane is basically in agreement with the experimentalmeasurements. Likewise, the H2 molecules permeate fasterthrough the LA-\u03b1-CD-in-TpPa-1 channels along the same pathwaysmentioned above in the presence of CH4 molecules, as comparedto the permeation process inside the TpPa-1 channels . 
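The diffusion coefficients compared above are obtained from the slope of the MSD-versus-time line. As an illustration only (not the authors' code; function names and data are ours), the 3D Einstein relation MSD(t) = 6Dt can be applied like this:

```python
# Illustrative sketch: estimate a 3D diffusion coefficient from MSD data via
# the Einstein relation MSD(t) = 6*D*t, i.e. D = slope/6.
# Synthetic data below, not trajectories from the paper.

def diffusion_coefficient(times, msd):
    """Least-squares slope of MSD vs. time through the origin, divided by 6."""
    num = sum(t * m for t, m in zip(times, msd))
    den = sum(t * t for t in times)
    return num / den / 6.0

# Synthetic MSD growing as 6*D*t with D = 2.0 (arbitrary units):
ts = [1.0, 2.0, 3.0, 4.0]
msd = [6.0 * 2.0 * t for t in ts]
print(diffusion_coefficient(ts, msd))  # 2.0
```

The ratio of two such coefficients, e.g. D(H2)/D(CO2), gives the diffusion selectivity discussed in the text.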
The simulations indicate that the performance improvement of the pore-in-pore membrane is mainly attributed to the competitive diffusion mechanism in the confined ultramicropore channels rather than to rigid size-sieving effects. Furthermore, these results imply that increasing the LA-\u03b1-CD content within the COF nanopores and maintaining the ordered structure of the host\u2013guest confinement are crucial for the membrane discrimination accuracy in molecular-selective gas separations. Apparently, the engineering of pore-in-pore channels by packing of linear CD polymers contributes to the improvement in the gas-separation performance of the COF membrane. To elucidate the separation mechanism on a microscopic scale, molecular dynamics (MD) simulations were carried out to investigate the permeation behavior of equimolar H2 gas mixtures through the membrane (Figure S31). The membrane retards the diffusion of bulky molecules such as CO2 and CH4 but renders preferential transport pathways for the H2 molecules. Owing to the competitive diffusion mechanism in the confined ultramicropore channels, the linear assembly (LA)-\u03b1-CD-in-TpPa-1 membrane displays an ultrahigh H2 permeance and significantly enhanced separation selectivity for gas mixtures as compared to the starting TpPa-1 membrane without \u03b1-CD inclusion. The excellent overall performances combined with a high stability recommend the pore-in-pore COF membranes for advanced H2 purification and production. In consideration of a certain generalizability, this research complements the existing design strategies of ultramicroporous COF membranes and could facilitate their advancement in the field of precise molecular separations.
The preparation of linear arrangementsof guest species in the nanochannels of COFs is also of interest forthe preparation of functional host\u2013guest materials.We have developed a pore-in-pore strategyfor packing linear \u03b1-cyclodextrin(\u03b1-CD) polymers into the COF nanochannels to engineer COF membraneswith a matreshka-like pore structure suitable for gas separation.The formation of ultramicropore-in-nanopore channels retards the diffusionof bulky molecules such as COp-phenylenediamine (Pa-1) , benzidine (BD) ,1,3,5-triformylphloroglucinol (Tp) , 1,3,5-triformylbenzene(TB) , acetic acid , \u03b1-cyclodextrin, 3-aminopropyltriethoxysilane , epichlorohydrin , ethanol , sodium hydroxide , hydrochloricacid , polyethylene glycols with various molecularweights . Sodiumchloride , sodium sulfate , sodium carbonate ,and gadolinium chloride were providedby Sinopharm Chemical, China. Dyes including chrome black T, methylblue, acid fuchsin, congo red, rose bengal, and methylene blue (MEB)were supplied by Shanghai Macklin Biochemical Technology, China. Porousasymmetric \u03b1-Al2O3 disks as substrateswere purchased from Fraunhofer IKTS, Germany.All chemicals and materialswere used as received without further purification: 2O3 substrate was activated by usingHCl aqueous solution (1 M), and then the surface was modified withAPTES (2 mM in toluene) at 110 \u00b0C for 2 h under argon. The LA-\u03b1-CD-in-TpPa-1membrane was synthesized via an \u03b1-CD-embedded interfacial polymerizationfollowed by a linear assembly (LA). First, 24 mg of Pa-1 and 20 mg of \u03b1-CD were dissolved into 12 mL ofultrapure water, and then 2 mL of acetic acid aqueous solution (3M) was added to form the solution A. After that, 31.5 mg of Tp wasdissolved into 14 mL of toluene to form solution B. 
It should be notedthat both solution A and solution B were prepared under argon atmosphere.Before interfacial polymerization, the amino-\u03b1-Al2O3 substrate was thoroughly immersed into solution A untilit was saturated. Subsequently, the amino-\u03b1-Al2O3 substrate was fixed in between solution A and solution Bwith a homemade device, which was then horizontally placed at 120\u00b0C for 72 h to allow the growth of the \u03b1-CD-in-TpPa-1 layeronto the amino-\u03b1-Al2O3 substrate. Aftercooling, the formed \u03b1-CD-in-TpPa-1 membrane was washed withwater and ethanol and dried at 120 \u00b0C overnight. Finally, the\u03b1-CD-in-TpPa-1 membrane was soaked into the prepared ECH alkalinesolution solution) at 50 \u00b0Cfor 8 h to cross-link the packed \u03b1-CDs. This process is namedas LA in this study. The LA-\u03b1-CD-in-TpPa-1 membrane was obtainedafter a thorough rinsing by ultrapure water. It should be pointedout that the \u03b1-CD used was structurally stable in the weak acidicenvironments during the interfacial polymerization, and the membranewas stable in the alkaline environments during the cross-linking ofthe packed \u03b1-CDs because of the short immersion time.The porous\u03b1-AlFor comparison, the TpPa-1 membrane without \u03b1-CD and the \u03b1-CD-in-TpPa-1membrane were synthesized by using a similar procedure as mentionedabove. Three other types of LA-\u03b1-CD-in-COF membranes were preparedby using a similar protocol as for the LA-\u03b1-CD-in-TpPa-1 membrane.The main difference is that the LA-\u03b1-CD-in-TBPa-1 membrane wassynthesized from 24 mg of Pa-1, 20 mg of \u03b1-CD, and 24 mg ofTB; the LA-\u03b1-CD-in-TBBD membrane was synthesized from 27.6 mgof BD, 20 mg of \u03b1-CD, and 16 mg of TB; the LA-\u03b1-CD-in-TpBDmembrane was synthesized from 27.6 mg of BD, 20 mg of \u03b1-CD,and 21 mg of Tp.\u20131, a voltage of 40 kV, and current of 40 mA. 
X-rayphotoelectron spectroscopy (XPS) spectra were recorded on a ThermoScientific K-Alpha+ spectrometer using Al K\u03b1 radiation as theenergy source at a voltage of 15 kV and current of 15 mA. The pressurein the instrumental chamber was about 5 \u00d7 10\u20139 mbar. The binding energies were calibrated on C 1s (284.8 eV). Attenuatedtotal reflectance-Fourier transformed infrared spectra (ATR-FTIR)in a wavenumber region of 400\u20134000 cm\u20131 witha resolution of 0.4 cm\u20131 were obtained by usinga spectrometer (Agilent Technologies Cary 630 FTIR). The ATR-FTIRspectrum of the membrane samples was obtained from the powders shavedoff from the selective layer on the \u03b1-Al2O3 substrate. Raman spectra were acquired by a Horiba LabRAM HR Evolutioninstrument using an Ar+ laser at 514.5 nm. Fluorescence imaging wasrecorded on an Olympus IX71 microscope at an excitation wavelengthof 546\u2013560 nm. Before measurement, membrane samples were submergedin an aqueous rhodamine B solution and subjected to the treatmentsincluding rinsing with water and drying. N2 adsorption\u2013desorptionmeasurements were performed at 77 K on a Micromeritics ASAP2460 surfacearea and pore distribution analyzer instrument. Before the adsorptionexperiments the powdered samples were vacuum degassed at 120 \u00b0Cfor 10 h. The isotherms were analyzed by using the Brunauer\u2013Emmett\u2013Tellermethod and the t-plot micropore volume method.Micromorphologies of the membranesand powdered samples were observed by using a JEOL JSM-6700F instrumentwith a cold field emission gun operating at 2 kV and 10 mA. Beforemeasurement, all samples were coated with a 15 nm thick gold layerin vacuo to reduce the charging effects. Energy-dispersive X-ray spectroscopy(EDXS) mapping and elemental analysis of the membrane cross sectionwere conducted on the scanning electron microscopy (SEM) at 15 kV,10 mA and 15 mm lens distance. 
For transmission electron microscopy(TEM) measurements, a small drop of aqueous solution of mashed membrane,LA-\u03b1-CD, or \u03b1-CD was dripped on an ultrathin carbon supportfilm and dried, and then the specimen was observed with a JEM2100Fmicroscope. X-ray diffractometer (XRD) patterns were recorded on aBruker D8 Advance diffractometer at room temperature, and each XRD pattern was acquiredfrom 3\u00b0 to 35\u00b0 of the diffraction angles at a rate of 0.01\u00b0sFigure S13) sealed with rubber O-rings. N2 was used as the sweep gas set at 50 mL min\u20131 during the measurement process, and the pressures at both sideswere kept at about 1 bar. It should be noted that the feed pressureis always a little bit larger than the permeate side to avoid thepossibility of back flow of the sweep gas. Before gas permeation,an on-stream activation was carried out at 393 K to get rid of potentialsolvent molecules inside the pores of the membrane by using an equimolarH2/CO2 mixture as the feed. For the measurementof single gas permeation, both feed and sweep flow rates were setat 50 mL min\u20131. For the measurement of mixed-gaspermeation, a series of equimolar (1:1) binary gas mixture includingH2/CO2, H2/CH4, H2/C3H6, and H2/C3H8 were applied to the feed side of the membrane, andthe feed flow rate of each gas was kept constant at 25 mL min\u20131. A calibrated gas chromatograph (HP 6890B) was usedto detect the component concentration on the permeate side of themembrane when the measurement system ran stable.The prepared membrane wasplaced inside a laboratory-made gas-permeation apparatus (i (Pi) was calculated as follows (Ni represents the permeation rate of component i (mol s\u20131), A is the effectivemembrane area (m2), and \u0394pi is the partial pressure difference of component i (Pa). Fi denotesthe molar flux of component i (mol m\u20132 s\u20131). Permeance of each membrane was calculatedby the average of five data points. 
The GPU is the unit of gas permeance(1GPU = 3.3928 \u00d7 10\u201310 mol m\u20132 s\u20131 Pa\u20131).The permeanceof component follows 11where Ni (Pi) divided by the permeance of component j (Pj) in thesingle-gas permeation. The real selectivity of an equimolar binary gas mixture (or separation factor)was calculated as follows . A dye aqueous solution (100 mg/L) or a salt aqueous solution(1000 mg/L) as the feed was circulated by using a plunger pump, andthe operating pressure was set at 4 bar. The measurement began whenthe filtration system reached a steady state after a period of operation.The water flux (L m\u20132 h\u20131) andpermeance (L m\u20132 h\u20131 MPa\u20131) were calculated by normalizing the permeate volume collected duringthe time t. The concentration of dyes and salts inthe feed (Cif) and permeate(Cip) was monitored bya UV\u2013vis detector (UV BlueStar A) and a conductivity meter(KEDIDA CT3030), respectively. Accordingly, the rejection of the dyes or saltswas calculated as follows composed of nine-layered nanosheets with a thickness of approximately2.7 nm along the z-axis was conducted as shown in Figure S31a. Based on the most possible caseof statistically channel-type linear arrangement that we have analyzed,the atomic structure of LA-\u03b1-CD-in-TpPa-1 is built by incorporatinga linear \u03b1-CD polymer consisting of three cross-linked \u03b1-CDsinto one nanochannel of the TpPa-1, as illustrated in Figure S31b. To some extent, it represents theelementary mass transfer unit in the 1.5 \u03bcm-thick LA-\u03b1-CD-in-TpPa-1membrane. 
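The permeance, selectivity, and rejection definitions in this section reduce to simple arithmetic. The sketch below is our own illustration (variable names and sample values are not from the paper): permeance P_i = F_i/\u0394p_i with the GPU conversion, ideal selectivity P_i/P_j, real selectivity (y_i/y_j)/(x_i/x_j), and rejection R = (1 \u2212 C_p/C_f) \u00d7 100%:

```python
# Sketch of the permeance/selectivity/rejection arithmetic described above.
# All names and sample values are illustrative, not data from the paper.

GPU = 3.3928e-10  # 1 GPU in mol m^-2 s^-1 Pa^-1

def permeance_gpu(flux_mol_m2_s, delta_p_pa):
    """Permeance P_i = F_i / dp_i, where F_i = N_i / A is the molar flux,
    converted to GPU."""
    return flux_mol_m2_s / delta_p_pa / GPU

def ideal_selectivity(p_i, p_j):
    """Ratio of single-gas permeances P_i / P_j."""
    return p_i / p_j

def separation_factor(y_i, y_j, x_i, x_j):
    """Real selectivity of a binary mixture: (y_i/y_j) / (x_i/x_j),
    with y = permeate and x = feed molar fractions."""
    return (y_i / y_j) / (x_i / x_j)

def rejection(c_feed, c_permeate):
    """Nanofiltration rejection R = (1 - C_p/C_f) * 100%."""
    return (1.0 - c_permeate / c_feed) * 100.0
```

For example, a dye fed at 100 mg/L and detected at 0.5 mg/L in the permeate gives `rejection(100.0, 0.5)` = 99.5%, and an equimolar feed (x_i = x_j = 0.5) reduces the separation factor to the permeate ratio y_i/y_j.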
MD simulation was carried out by a Materials Studio software6.0 package with a COMPASS force field.70 A typicalsimulation box with a dimension of 45.1 \u00c5 \u00d7 45.1 \u00c5\u00d7 139.4 \u00c5 was established and separated into two chambersby the TpPa-1 layer or LA-\u03b1-CD-in-TpPa-1 layer from the middle.An equimolar mixture of H2/CO2 or H2/CH4 (30 molecules for each component) was added to theleft chamber, and a vacuum was applied on the right. An NVT ensemble was employed forsimulation, and system optimizing was implemented before diffusionsimulation. The initial velocities were random, and the Andersen thermostatwas employed to maintain a constant simulation temperature of 298.0K. The MD simulation was performed for 1 ns with a time step of 1fs using the Forcite module. The diffusion coefficient is relatedto the mean square displacement (MSD) and simulation time. The diffusioncoefficients of the H2, CO2, and CH4 were calculated from the slope of the straight line fitted fromthe MSD versus simulation time .The eclipsed atomicstructure of TpPa-1 ("} +{"text": "Pancreatic ductal adenocarcinoma remains one of the most serious malignancies and a leading cause of cancer-related deaths worldwide. There are no effective screening methods available so far, even for high-risk individuals. At the time of diagnosis, impaired glucose metabolism is present in about 3/4 of all patients. Several types of diabetes mellitus can be found in pancreatic cancer; however, type 2, pancreatic-cancer-associated type 3c, and diabetes mellitus associated with non-malignant diseases of the exocrine pancreas (with a reduction or loss of islet-cell mass) are the most frequent ones. This paper proposed a distinct approach to older subjects with new-onset diabetes mellitus with possible pancreatic cancer. 
It could improve the current unsatisfactory situation in diagnostics and subsequent poor outcomes of treatment of pancreatic ductal adenocarcinoma. Background: Pancreatic ductal adenocarcinoma (PDAC) is associated with a very poor prognosis, with near-identical incidence and mortality. According to the World Health Organization Globocan Database, the estimated number of new cases worldwide will rise by 70% between 2020 and 2040. There are no effective screening methods available so far, even for high-risk individuals. The prognosis of PDAC, even at its early stages, is still mostly unsatisfactory. Impaired glucose metabolism is present in about 3/4 of PDAC cases. Methods: Available literature on pancreatic cancer and diabetes mellitus was reviewed using the PubMed database. Data from a national oncology registry (on PDAC) and information from a registry of healthcare providers were obtained. Results: New-onset diabetes mellitus in subjects older than 60 years should be an incentive for a prompt and detailed investigation to exclude PDAC. Type 2 diabetes mellitus, diabetes mellitus associated with chronic non-malignant diseases of the exocrine pancreas, and PDAC-associated type 3c diabetes mellitus are the most frequent types. Proper differentiation of particular types of new-onset diabetes mellitus is a starting point for a population-based program. An algorithm for the subsequent steps of the workup was proposed. Conclusions: The structured, well-differentiated, and elaborately designed approach to the elderly with a new onset of diabetes mellitus could improve the current situation in diagnostics and subsequent poor outcomes of therapy of PDAC. Pancreatic cancer remains one of the most serious malignancies and a leading cause of cancer-related deaths worldwide . According to the PubMed database (https://pubmed.ncbi.nlm.nih.gov) (accessed on 9 April 2023), there are nearly 4200 records on \u201cpancreatic cancer + diabetes mellitus\u201d.
Our aim was not to provide a comprehensive literature review or to try to create a meta-analysis based on the very heterogeneous studies available so far. We focused instead on providing an explanation of the different types of diabetes mellitus associated with pancreatic cancer and outlining a possible future basis for PDAC screening and/or diagnosis at the early stage of the disease.At the time of PDAC diagnosis, impaired glucose metabolism is present in about 3/4 of all patients ,10,11,12First of all, it is important to briefly review the current terminology, with particular emphasis on the difference between screening, early diagnosis, and diagnosis at an early stage of PDAC.Screening refers to the use of simple tests across a healthy population to identify those individuals who have a disease but have been asymptomatic in relation to that particular disease so far. Based on the existing evidence, average-risk population screening can only be advocated for cervical, breast, and colorectal cancer .Many different methods and possible markers for PDAC screening have been proposed, including microRNA and other non-coding RNAs, cell-free DNA (cfDNA), circulating tumor DNA (ctDNA), lipidomic profiling and other metabolomics, combined spectroscopy analyses , exosomes, S100 proteins, CEMIP and other proteomics, different enzymatic activities , or multiple metabolites ,52,53,54Average-risk sporadic PDAC is responsible for about 90% of all cases of pancreatic cancer. The rest comprises high-risk subjects with a family history of PDAC (at least two first-degree relatives with PDAC), underlying inherited diseases and/or patients with known genetic mutations investigated due to another indication ,65,66,67A unique prospective long-term follow-up study enrolled 354 individuals at high risk for PDAC (based on genetic factors or family history). Overall, 24 of 354 patients (7%) had neoplastic progression (14 PDAC and 10 high-grade dysplasia) over a 16-year period. 
Magnetic resonance imaging (MRI) and endoscopic ultrasound (EUS) were the two major methods used for follow-up ,65,66,68. The Japanese Pancreas Society, AGA, and ASGE have recently published guidelines on screening and early diagnosis of PDAC in high-risk subjects ,69,70,71. European evidence-based guidelines are available for the diagnosis and management of pancreatic cystic neoplasms . The preferred staging system used for all pancreatic cancers is the TNM classification of the combined American Joint Committee on Cancer and the Union for International Cancer Control . Early detection does not necessarily mean diagnosis of early-stage PDAC. Early diagnosis programs aim at reducing the proportion of patients who are diagnosed at a late stage. They have three main components: increased awareness of the first signs of cancer; improved accessibility and affordability of diagnosis and treatment services; and improved referral from the first to secondary and tertiary levels of care . Great attention has been paid to the relationship between diabetes mellitus and pancreatic cancer. Many large studies have addressed this issue. A Nurses\u2019 Health Study and a Health Professionals Follow-Up Study (with repeated assessments over 30 years) enrolled 112,818 women and 46,207 men. In total, 1116 incident cases of pancreatic cancer were identified. New-onset diabetes mellitus (HR 2.97) accompanied by weight loss was associated with a substantially increased risk of developing pancreatic cancer. Older age, previous \u201chealthy weight,\u201d and no intentional weight loss elevated this risk further . Major shortcomings of a substantial number of studies are caused by the fact that they do not distinguish (define) particular types of diabetes mellitus, mainly type 2, from others.
Notably, \u201cpancreatogenic diabetes mellitus\u201d ,88,89 is one such term. According to the American Diabetes Association (ADA), diabetes mellitus is classified into four major groups: type 1, type 2, specific types, and gestational diabetes mellitus . Regarding a practical clinical approach, the current pathophysiology basis should be maintained, as it is determinative for both diagnosis and therapy. Thus, several types of diabetes mellitus can be found in PDAC; however, type 2, PDAC-associated type 3c, and diabetes mellitus associated with non-malignant diseases of the exocrine pancreas (with a reduction or loss of islet-cell mass) are the most frequent ,97 . Type 2 diabetes mellitus is characterized by insulin resistance as the first event, resulting in hyperinsulinemia ,94. Based on a meta-analysis, therapy of type 2 diabetes using metformin could reduce the risk of PDAC by 18% compared with other treatments . PDAC-associated type 3c diabetes mellitus has a typical onset in older subjects (over 60 years of age) . This type is characterized by a decreased secretion of insulin and increased insulin resistance ,101. The pathogenesis of pancreatic cancer-associated diabetes has not been fully elucidated yet; see References ,101. Insulin resistance in PDAC is presumed to occur at the postreceptor level. In searching for the exact mechanism, islet amyloid polypeptide (IAPP) was proposed as a putative mediator. The serum level of IAPP was higher in pancreatic cancer patients than in the diabetic population or healthy controls . One-third to one-half of type 3c diabetes mellitus cases might be resolved, or at least improved, after successful surgical resection of PDAC, despite the reduction of islet-cell mass resulting from the surgery . This type of diabetes mellitus is characterized by decreased insulin secretion and normal insulin sensitivity.
It is a secondary complication of different advanced chronic non-malignant diseases of the pancreas, usually preceded by or associated with exocrine pancreatic insufficiency, with a reduced or lost islet-cell mass. We strive to propose a set of investigations that would be feasible, accessible, and affordable on a broad population basis (not just for research purposes). The diagnosis of PDAC-associated type 3c diabetes, when the pancreatic disease is not overt, can be challenging. Therefore, new-onset diabetes in subjects over the age of 60 and associated with weight loss should prompt consideration of this type of diabetes. Rapid progression of hyperglycemia or an early need for insulin should also prompt consideration of this diagnosis. Diagnosis of PDAC-associated type 3c diabetes mellitus is based on a decrease in insulin production and an increase in insulin resistance. Several insulin resistance indices have been proposed [133,134]. HOMA-IR is calculated from fasting insulin and fasting glucose (see https://www.omnicalculator.com/health/homa-ir, accessed on 27 June 2023); the normal value of HOMA-IR is <2.5. QUICKI = 1/[log(I0) + log(G0)], where I0 is the fasting insulin (mU/mL) and G0 is the fasting glucose concentration (mg/dL); the normal value of QUICKI is >0.45. An increased glucagon/insulin ratio is another marker of insulin resistance. At present, there is no feasible, effective, and reliable population-based program for the diagnostics of PDAC. Continuously rising incidence and ongoing poor outcomes in PDAC treatment mean that a new strategic approach is required urgently. A program based on a distinct approach to subjects with new-onset diabetes mellitus of different types might be one way, at least according to data from the Czech Republic. The incidence of pancreatic cancer in the Czech Republic is the 3rd highest in Europe, with 2332 (2018) and 2466 (2020) new cases per year. The median age at diagnosis was 70 (interquartile range 63–78).
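The two insulin-resistance indices mentioned here can be computed directly from fasting values. A minimal sketch in Python: the QUICKI formula and the cut-offs (<2.5 for HOMA-IR, >0.45 for QUICKI) follow the text, while the HOMA-IR divisor of 405 (glucose in mg/dL) is the standard Matthews formulation and is stated here as an assumption, since the text does not spell the formula out.

```python
import math

def homa_ir(fasting_insulin, fasting_glucose):
    """HOMA-IR = (I0 * G0) / 405, with I0 in mU/mL and G0 in mg/dL.
    The divisor 405 is the standard formulation (assumed; not given in the text)."""
    return fasting_insulin * fasting_glucose / 405.0

def quicki(fasting_insulin, fasting_glucose):
    """QUICKI = 1 / [log(I0) + log(G0)] (base-10 logs), as cited in the text."""
    return 1.0 / (math.log10(fasting_insulin) + math.log10(fasting_glucose))

# Example: fasting insulin 10 mU/mL, fasting glucose 90 mg/dL
print(round(homa_ir(10, 90), 2))  # 2.22 -> below the 2.5 cut-off
print(round(quicki(10, 90), 3))
```

Under the thresholds quoted in the text, a HOMA-IR above 2.5 or a QUICKI below 0.45 would point toward insulin resistance.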
IncidenLess than 20% of newly diagnosed PDAC patients undergo surgical resection with curative intent ,144. AboNew-onset diabetes mellitus, together with significant weight loss, serves as an important early discriminating factor . A fullyShould transabdominal ultrasound be recommended as the first step in the vast majority of new-onset diabetes mellitus cases, such an approach needs to be supported by a feasibility estimation and capacity balance sheet. According to the Institute of Health Information and Statistics of the Czech Republic, about 3,000,000 transabdominal ultrasound investigations are performed in the Czech Republic per year, with a slightly increasing trend within the past five years . Even inLast but not least, it is necessary to mention the irreplaceable role of multidisciplinary teams. All PDAC cases must be evaluated and appraised by a multidisciplinary team (MDT) that can decide the further course, follow-up, and planned therapy.We are aware of the possible limitations of our concept. A prospective multicenter study is needed to validate the efficacy of this approach. Prompt access to CT imaging (pancreatic protocol) and/or MRI and/or EUS (with fine-needle aspiration/biopsy) might be a limiting factor. Our concept does not cover younger patients and/or subjects without diabetes mellitus or individuals with prediabetes .A structured, well-differentiated, and elaborately designed approach to older patients (over 60 years of age) with new-onset diabetes mellitus could improve the current unsatisfactory situation in diagnostics and subsequent poor outcomes of treatment of PDAC. Our concept includes such an algorithm. Further studies are warranted."} +{"text": "Arabidopsis thaliana as queries. These 286 NLR genes contained at least one NBS domain and LRR domain. 
Phylogenetic and N-terminal domain analysis showed that these NLRs could be separated into four subfamilies (I-IV) and that their promoters contained many cis-elements responsive to defense and phytohormones. In addition, transcriptome analysis showed that 22 NLR genes were up-regulated after infestation by the green peach aphid (GPA) and showed different expression patterns. This study clarified the NLR gene family and its potential functions in the aphid resistance process. The candidate NLR genes might be useful in illustrating the mechanism of aphid resistance in peach. Resistance genes (R genes) are a class of genes that confer immunity to a wide range of diseases and pests. In planta, NLR genes are essential components of the innate immune system. Currently, genes belonging to the NLR family have been found in a number of plant species, but little is known about them in peach. Here, 286 NLR genes were identified in the peach genome by using their homologous genes in Arabidopsis thaliana as queries. The online version contains supplementary material available at 10.1186/s12870-023-04474-7. The plant innate immunity system ensures normal growth during pathogen infection. NRG1 (DQ054580.1) in tobacco and ADR1 (AT1G33560.1) in Arabidopsis thaliana can both regulate the accumulation of the defense hormone salicylic acid during the immune response, and ADR1 can also act as an “auxiliary NBS-LRR” to transduce specific NBS-LRR receptors during ETI. NLR subclasses specific to Physcomitrella patens, Marchantia polymorpha, and Sphagnum fallax, respectively, have been described, but the functions of these bryophyte-specific NLR subclasses have not yet been explored [8–11]. NLR genes are the most important R genes in plants.
The pro1 AT1G3350.1 in ArG1 DQ05450.1 in toexplored \u201311.Arabidopsis thaliana, it was identified that the NBS domain usually contains 8 conserved motifs [tomato I-2 resulted in continuous activation [RPM1 and other NLR genes in Arabidopsis thaliana showed that the proteins were inactivated [Listeria and Streptococcus, can integrate into host cells by encoding proteins with LRR domains [The NB-ARC structure (NBS) domain, belongs to the signal transducing ATPase multi-structural domain (STAND) superfamily , which hd motifs , includid motifs . Kinase tivation . In the ctivated . The leuctivated . Therefo domains .Rpi-blb2 confer broad-spectrum resistance to pathogen isolates in potato [Mi-1.2 is similar to Rpi-blb2, it has specific resistance to root knot nematodes and aphids in tomato [Adnr1 [RMES1 locus which contains five NLR genes on sorghum genome were predicted, and proven resistance to Melanaphis sacchari [Dp-fl locus, which confers resistance to Dysaphis plantaginea contains 19 genes acting as R-genes, 2 of which are NLRs in Malus pumila [Plants rely on NLR protein to respond to invasive pathogens and activate the immune response, so as to obtain resistance to bacteria, viruses, nematodes and pests . In prevn potato . Mi-1.2 n tomato . In gramo [Adnr1 . The RMEsacchari . In addis pumila .Myzus persicae, GPA) is the most harmful pest during peach production. It can stab and suck the new shoots and leaves, resulting in curling leaves, growth limitations. It can also secrete honeydew to spread viruses between species. In the last decades, several genetic loci conferring resistance to aphids have been identified and mapped on peach genome. Most of genes belong to the resistance genes encoding NLR proteins [Rm3 locus were identified [Peach is the fourth largest deciduous fruit crop in the world and has valuable nutrition . Green Pproteins . In peacentified . HoweverIn this study, we analyzed the NLR gene family in peach. 
A total of 286 NLR genes were identified, and their chromosome location, phylogenetic relationships, gene structure, conserved domains and promoter cis-elements were analyzed. Transcriptome analysis showed that the expression of 22 identified NLR genes was significantly up-regulated after GPA infestation. The results provide a basis for further study on the function of NLR genes in aphid resistance. NLR genes in Arabidopsis thaliana were used as queries to identify candidate genes in peach using the NCBI-Blastp toolkit. Then, their protein domains were further analyzed, especially the number of NBS and LRR domains. Finally, 286 NLR genes were selected in this study, each showing at least one NBS and one LRR domain. These NLR genes are unevenly distributed on the peach chromosomes, most of which are present on chr.1 (14.3%), chr.2 (25.52%) and chr.8 (27.27%). Protein length ranged up to 2026 (Prupe.7G065400), with an average length of 1055. The molecular weight ranged from 48041.58 Da (Prupe.4G236500) to 230015.7 Da (Prupe.7G065400), with an average of 120157.56 Da. The isoelectric point of these NLR proteins ranged from 5.14 (Prupe.8G110800) to 9.61 (Prupe.3G040800), with an average value of 6.99, indicating that peach NLRs are mostly neutral proteins. The four subfamilies included 153, 104, 11 and 18 peach NLR genes, respectively. A phylogenetic tree was constructed using protein sequences of peach NLRs and 20 reported NLR genes. According to the differences in the N-terminal domain, subfamilies I-III were mainly characterized as CNL, TNL and RNL, respectively, although some NLRs without an N-terminal domain were also clustered in subfamilies I and II (Fig.). By further comparison, the simplest gene (Prupe.4G236500) has 3 exons and no UTRs, much simpler than the longest (Prupe.2G118000) (4 exons and 3 UTRs). Gene structure analysis of the NLR gene family showed that peach NLR genes contained many exons and UTRs, and there were significant differences among subfamilies. The average numbers of exons and UTRs of these NLR genes were 4.69 and 4.47, respectively.
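The selection criterion used here (retain only candidates carrying at least one NBS and one LRR domain) amounts to a simple filter over a domain-annotation table. A sketch, where the annotation tuples are hypothetical stand-ins for parsed Blastp/Pfam output (the two retained IDs are real peach gene IDs from the text; the rejected ones are invented):

```python
# (gene_id, domain) pairs, e.g. parsed from a Pfam domain scan (hypothetical data).
annotations = [
    ("Prupe.4G236500", "NB-ARC"), ("Prupe.4G236500", "LRR"),
    ("Prupe.7G065400", "NB-ARC"), ("Prupe.7G065400", "LRR"), ("Prupe.7G065400", "TIR"),
    ("Prupe.XX000001", "NB-ARC"),  # NBS only -> rejected
    ("Prupe.XX000002", "LRR"),     # LRR only -> rejected
]

def select_nlr_genes(pairs):
    """Keep genes with at least one NBS (NB-ARC) and at least one LRR domain."""
    domains = {}
    for gene, dom in pairs:
        domains.setdefault(gene, set()).add(dom)
    return sorted(g for g, d in domains.items() if "NB-ARC" in d and "LRR" in d)

print(select_nlr_genes(annotations))  # ['Prupe.4G236500', 'Prupe.7G065400']
```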
Besides, the numbers in subfamily I was mostly less than II Table , while mPrupe.1G389500/Prupe.7G138500, Prupe.1G541300/Prupe.8G077100, Prupe.2G055200/Prupe.2G066600, Prupe.2G057100/Prupe.2G068000, Prupe.2G040500/ Prupe.2G053700, Prupe.2G043000/ Prupe.2G504200, Prupe.2G045200/Prupe.2G055200, Prupe.2G055200/Prupe.2G068900, Prupe.2G057100/Prupe.2G068000) were identified, indicating duplication was a major mode of gene expansion . A total of 1289 results were predicted, including 20% in cytoplasm, 17% in plasma membrane, 15% in chloroplast, and relatively few in other organelles , which had the closest phylogenetic relationship with three types Arabidopsis thaliana NLR genes respectively were cloned into the pCAMBIA1300 vector fused with GFP reporter. The results showed that all three types of NLR were localized both in nucleus and cytoplasm . Totally, 14 types of cis-elements were mainly enriched in these promoters, including 5 types of plant hormones response elements , 3 types of stress response elements and 6 types of growth related response elements . Among the total elements, plant hormone elements accounted for 35.3%, stress response elements accounted for 48.6%, and growth related elements accounted for 16.1%. In addition, heat map showing the number of cis-elements in each NLR genes was further constructed, and the results showed that the most enriched cis-element was light. Hormone associated element were also greatly enriched in these promoters, such as MeJA, ABA, SA, which indicated that NLR might participate in stress triggered signaling pathways. However, no significant differences in the number and distribution of promoter elements between different subfamilies were found showed much higher expression levels than the others , and then rapidly decreased to the normal level. Only a few genes showed lower expression . 
The expression of most genes not infected by aphids did not change significantly with time, only 3 genes were highly expressed , Oryza sativa (508), Glycine max (429), Solanum tuberosum (438), Populus (416), Gossypium spp (355) [TIR-LRR, NBSCC-LRR and NBS-LRR , 45. Phyine max 49, Solanupp (355) . AccordiPopulus 46, GossypThe localization of plant NLR proteins might be associated with the localization of effectors . In geneMost of plant immune responses are accompanied with the release of phytohormones . SalicylPrupe.1G217900, Prupe.1G545200, Prupe.2G060400, Prupe.4G224500, Prupe.5G019000, Prupe.7G065500, Prupe.7G138600, Prupe.8G023100 and Prupe.8G023800 were highly expressed after 3\u00a0h of GPA infection. Prupe.2G022500, Prupe.2G283300, Prupe.5G025600 and Prupe.7G139100 were highly expressed after 6 or 12\u00a0h GPA infection. Tissue specific expression analysis showed that peach NLR genes was mainly expressed in root, leaf and stem, indicating their roles in disease and insect resistance . Using NLR genes in Arabidopsis thaliana as queries, the homologous NLR genes in peach were identified using Blastp tools in NCBI and the NBC and LRR domains were checked manually to get the final set of peach NLR genes. Structural domains were analyzed using Pfam (http://pfam-legacy.xfam.org/) [The protein sequences of NLR genes in am.org/) . Physicoam.org/) .https://phytozome-next.jgi.doe.gov/) and chromosome length information were download from JGI . The chromosome annotation of peach NLR gene family members were extracted using TBtools and mapped on chromosome [The peach genome annotation file [http://gsds.gao-lab.org/) [https://meme-suite.org/meme/tools/meme) was used to predict and analyze the conserved protein motifs [Arabidopsis thaliana were analyzed using MCScanX software and mapped using TBtools [The phylogenetic tree of peach NLRs was constructed using MEGA11 software, and was viewed using evolview online website (olview/) . Chromosab.org/) . Gene stn motifs . 
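Relative expression levels from qRT-PCR experiments such as these are conventionally computed with the 2^(−ΔΔCt) method (Livak and Schmittgen); the sketch below uses hypothetical Ct values and assumes that standard method was applied here.

```python
def relative_expression(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Fold change by the 2^(-ddCt) method: normalize the target gene to a
    reference gene within each sample, then compare treated vs. control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for an NLR gene after aphid infestation vs. control
print(relative_expression(22.0, 18.0, 25.0, 18.0))  # 8.0 -> ~8-fold up-regulation
```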
Three peach NLR genes representing the main types (TNL, CNL and RNL) were selected according to the phylogenetic tree. Their CDS sequences were obtained from NCBI and cloned into the pCAMBIA1300 vector fused with a GFP reporter under the CaMV 35S promoter. Then, the recombined constructs were transferred into Agrobacterium tumefaciens GV3101 for transient overexpression in tobacco leaves using previously described methods. The promoter sequences of 286 peach NLR genes (2 kb upstream of the 5’UTR) were downloaded from the Genome Database for Rosaceae (https://www.rosaceae.org/) and submitted to the PLANTCARE database for promoter element prediction. First-strand cDNA was synthesized using the PrimeScript first-strand cDNA synthesis kit. Real-time quantitative polymerase chain reaction (qRT-PCR) was performed on the ABI7500 system using SYBR premix ExTaq with the following procedure: 95 °C for 5 min, followed by 45 cycles at 95 °C for 10 s, 58 °C for 10 s and 72 °C for 20 s. The relative expression level was calculated by the 2^(−ΔΔCT) method. Below is the link to the electronic supplementary material. Supplementary Material 1"}
{"text": "The reason for the enhancing tribological performance was that, on the one hand, PEEK fibers have a high strength and modulus which can enhance the specimens at lower temperatures; on the other hand, molten PEEK at high temperatures can also promote the formation of secondary plateaus, which are beneficial for friction. The results in this paper can lay a foundation for future studies on intelligent RBFM. Resin-based friction materials (RBFM) are widely used in the fields of automobiles, agricultural machinery and engineering machinery, and they are vital for safe and stable operation. In this paper, polyether ether ketone (PEEK) fibers were added to RBFM to enhance its tribological properties.
Specimens were fabricated by wet granulation and hot-pressing. The relationship between the intelligent reinforcement PEEK fibers and tribological behaviors was investigated with a JF150F-II constant-speed tester according to GB/T 5763-2008, and the worn surface morphology was observed using an EVO-18 scanning electron microscope. The results showed that PEEK fibers can efficiently enhance the tribological properties of RBFM. The specimen with 6 ωt% PEEK fibers obtained the optimal tribological performance: the fade ratio was −6.2%, which was much higher than that of the specimen without the addition of PEEK fibers; the recovery ratio was 108.59%; and the wear rate was the lowest, 1.497 × 10⁻⁷ cm³ × (N × m)⁻¹. Phenolic resins are widely used in the industries of adhesives, flame retardant materials and friction materials because of their excellent acid resistance, mechanical properties and high temperature resistance [2]. PEEK is a kind of semi-crystalline polymer that has an excellent temperature resistance (the glass transition temperature is 143 °C and the melting point is 343 °C). PEEK has good tribological properties. As a reinforcement, however, crosslinking solidification between PEEK and the phenolic matrix has a negative influence on tribological properties and temperature perception, thus limiting its further application. In addition, GO/Ti3AlC2/Cu composites with a sandwich structure have been reported: during friction, GO and Ti3AlC2 synergistically promote the formation of a continuous, compact and lubricating tribo-layer on the worn surface and enhance the wear resistance. An effective method for the structure design of composites is the granulation technique. This paper presents an intelligent friction material that can regulate the microstructure of the friction interface through the perception of temperature.
The specimens are fabricated by step feeding, wet granulation and particle coating technology and hot-pressing, which can physically isolate the phenolic resin from PEEK fibers to prevent crosslinking solidification. Specimens were subjected to tribological tests and worn surface characterization to study the relationship between the tribological behavior and the PEEK fiber content, which could provide data information for product development in industry and lay a foundation for the development of intelligent tribological materials.The content of RBFM in this paper is shown in PEEK fibers were treated with Silane Coupling Agent . The composition of the SCA solution is shown in The first step was the mixing of raw materials. Fibers including PEEK fibers, Sepiolite fibers and Compound Mineral fibers were thrown into an Electrical Blender for 3\u20135 min to increase dispersion. After dispersion, all the other compositions except for phenolic resin were thrown into a Compact Rake Blender for 8\u201310 min to obtain the mixture of the raw materials.The second step was wet granulation, which can separate PEEK fibers and phenolic resin physically to avoid crosslinking solidification. The third step was hot-pressing. Granules were molded for 10 min at 160 \u00b0C under 45 MPa by a hot compression machine , according to our previous study ,20,21. TThe tribological performance of RBFM was tested using a Constant-Speed Tester according to GB/T 5763-2008. 
The COF μ of the specimens was calculated according to Equation (1), and the wear rate according to Equation (2), where f represents the friction force (N); F_N is the normal force (N); r is the distance between the rotation center and the sample center; n is the number of revolutions (5000); A is the surface area (A = 625 mm²); and d1 and d2 are the initial and final thickness of the sample, respectively.
(1) μ = f/F_N
The tribological test was divided into two stages: a fade test and a recovery test. The fade ratio (Fade_F) and recovery ratio (Recovery_F) are calculated according to Equation (3) and Equation (4), respectively, where μCF100°C and μCF350°C represent the COF in the fade test at 100 °C and 350 °C, respectively, and μR100°C represents the COF in the recovery test at 100 °C. The microscopic morphology of the worn surfaces was observed at 20 kV. The microscopic morphology of the PEEK fiber and the results of the fade, recovery and wear-rate tests are shown in the corresponding figures. The wear rate of the specimen with 6 ωt% PEEK fibers was 1.497 × 10⁻⁷ cm³ × (N × m)⁻¹, which was 51.1% lower than that of RBFM-1. In summary, the addition of 6 ωt% PEEK fibers provides the best compromise of tribological properties. On one hand, the addition of 6 ωt% PEEK fibers leads to a higher fade resistance and recovery performance, which can ensure the stability of the braking; on the other hand, it can also significantly improve the service life of RBFM and ensure the safety of braking. Friction is determined by the contact area condition, which is composed of hard materials and the compaction of wear debris around them [47]. The microscopic morphology of the worn surfaces of RBFM-1 and RBFM-2 is shown in the corresponding figures. In summary, intelligent RBFM were fabricated by wet granulation, which could perceive the friction interface temperature and regulate the microstructure of the friction interface during braking.
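The COF and ratio calculations can be sketched directly. Equation (1) (μ = f/F_N) is from the text; the fade- and recovery-ratio definitions below are assumptions inferred from the reported values (a fade ratio of −6.2% and a recovery ratio of 108.59%), since Equations (3) and (4) are garbled in the extracted text.

```python
def cof(friction_force, normal_force):
    """Equation (1): mu = f / F_N (both in N)."""
    return friction_force / normal_force

def fade_ratio(mu_cf100, mu_cf350):
    """Assumed form of Equation (3): percent change of COF from the
    100 C to the 350 C stage of the fade test."""
    return (mu_cf350 - mu_cf100) / mu_cf100 * 100.0

def recovery_ratio(mu_cf100, mu_r100):
    """Assumed form of Equation (4): COF recovered at 100 C as a
    percentage of the fade-test COF at 100 C."""
    return mu_r100 / mu_cf100 * 100.0

# Hypothetical COF values illustrating the trend reported for the 6 wt% PEEK specimen
print(round(fade_ratio(0.40, 0.3752), 2))      # -6.2
print(round(recovery_ratio(0.40, 0.4344), 2))  # 108.6
```

A near-zero or negative fade ratio combined with a recovery ratio above 100% is exactly the stable-COF behavior the paper reports for the 6 ωt% specimen.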
Specimens with 6 ωt% PEEK fibers had the best fade resistance and the lowest wear rate, and the stability of the COF was greatly improved. The main reason is that the high strength and modulus of the PEEK fibers enhanced the strength of the RBFM at lower temperatures, and the molten PEEK at high temperatures could effectively bind wear debris, thus promoting the formation of secondary plateaus, which provided stable and continuous friction. Thus, PEEK fibers are a promising intelligent reinforcement of RBFM. In the near future, a further study will be conducted to investigate the mechanical properties of PEEK fiber-reinforced RBFM."}
{"text": "Single use ureteroscopes are a technological innovation that have become available in the past decade and gained increased popularity. To this end, there are now an increasing number of both benchside and clinical studies reporting outcomes associated with their use. Our aim was to deliver a narrative review in order to provide an overview of this new technology. A narrative review was performed to gain an overview of the history of the technology's development and equipment specifications, and to highlight potential advantages and disadvantages. Findings from preclinical studies highlight potential advantages in terms of the design of single use ureteroscopes, such as the lower weight and more recent modifications such as pressure control. However, concerns regarding plastic waste and environmental impact remain unanswered. Clinical studies reveal them to have non-inferior status for outcomes such as stone-free rate. However, the volume of evidence, especially in terms of randomised trials, remains limited. From a cost perspective, study conclusions are still conflicting, and centres are recommended to perform their own micro-cost analyses. Most clinical outcomes for single use ureteroscopes currently match those achieved by reusable ureteroscopes, but the data pool is still limited.
Areas of continued debate include their environmental impact and cost efficiency. All study types were eligible for inclusion. Bibliographic databases searched included PubMed/MEDLINE, Google Scholar and Scopus. Reference lists and relevant grey literature such as conference abstracts were also searched. Search terms included ‘single use’, ‘disposable’, ‘ureteroscopy’, ‘retrograde intra-renal surgery’ and ‘minimally invasive surgery’. The results have been summarised in a narrative format with the following key areas identified: history and development, equipment specifications, equipment properties/findings from clinical studies, cost, environmental impact and future perspectives. 3 One of the earliest reports of flexible URS was by Marshall et al. in 1964, where an impacted ureteric calculus was visualised. 4 The lighting system is LED in the majority of the models, with the source integrated in the handle, but there are a few exceptions, such as the Shaogang, which has an external fibre optic cable attachment. 4.1 Within a short time period, numerous modifications have been introduced. Most models are now available with the option of the articulated lever executing either standard or reverse deflection according to preference. Similarly, some models such as WiScope are available in left- and right-handed versions. Certain models now have an autolock function that can be applied at the surgeon's discretion, for example, once deflected in the lower pole. Such ergonomic improvements are welcomed given that 39% of endourologists have been reported to experience orthopaedic problems in their hand and/or wrist. Of note, too, is the development of SU semi-rigid ureteroscopes such as the RIWO D-URS™, which has a hybrid function in that the tip is flexible.
It has an outer diameter of 9 Fr, and a special feature is that it houses three channels consisting of an outflow channel, a working channel (3.6 Fr) for accessories and a dedicated channel for the laser fibre (1.6 Fr). To date, formal studies reporting its use in a clinical setting are lacking. 4.2 Essential scope features include manoeuvrability, optical characteristics, working channel flow, deflection (including when the working channel is occupied) and durability. A number of in vitro simulation studies have been performed assessing these properties. Dragos et al. compared four different SU models with four of the main RU models in use. 4.3 To date, there have only been three randomised controlled trials (RCTs) comparing SU versus RU ureteroscopes. The first was a trial by Qi et al. in 126 patients. A second trial (n = 49) found that outcomes were comparable for all the same variables with the exception of SFR, which favoured the SU group. More recently, Ali et al. recorded results from a randomised study of 242 patients. 4.4 Reprocessing is a time-intensive process that combines both manual and machine-automated steps, and although protocols vary between sterile processing departments, drying alone can take over 3 h. 4.5 Expenditure associated with this technology is one of the main reasons for a slowed uptake across many parts of the world, especially those with fewer resources. Results of cost comparison studies reveal varying estimates. A new development is that updated coding reimbursement for SU devices has been adopted in certain areas such as the Medicare Hospital Outpatient Prospective Payment System (OPPS). Given it fulfils requirements for an innovative device, it qualifies for a transitional pass-through payment, which equates to an additional reimbursement. 4.6 There is a CO2 footprint associated with transport of new endoscopes, those sent for repair and at the time of disposition.
Factory locations for manufacture and repair are often located in another country. The healthcare sector currently accounts for 4.4% of global greenhouse gas emissions. Borofsky et al. reported their multi-institutional pilot experience in the United States of a partnership with a medical waste company that aimed to salvage metal and electronic components while electricity was generated from steam produced during incineration of medical waste. 5 Although reported repair rates vary, it is known that surgeon experience and investment in the education of operational staff can affect the longevity of RU ureteroscopes. Endoscopic combined intra-renal surgery (ECIRS) is also argued as a scenario in which to consider SU models. However, a recent study found that use of accessories such as baskets rendered more damage to ureteroscopes than performing ECIRS. 6 Other SU endoscopes used in the medical setting have seen high growth rates in recent years. Sales of SU bronchoscopes and rhinoscopes have increased by 124% and 441% per year, respectively. 7 SU ureteroscopes have favourable physical characteristics, including recent modifications and low weight, that translate to certain ergonomic advantages for the surgeon. Clinical outcomes match those of RU models. However, both the economic and environmental sustainability warrant further research. Further studies are also needed to evaluate if SU models result in lower infection rates and to determine durability and the issue of device failure intra-operatively. Patrick Juliebø-Jones: Conception; data collection; analysis; writing of draft and revision. Eugenio Ventimiglia: Conception; data collection; editing and writing of manuscript. Bhaskar K. Somani: Data collection; editing and writing of manuscript; supervision. Mathias Sørstrand Æsøy: Data collection; editing and writing of manuscript. Peder Gjengstø: Data collection; editing and writing of manuscript.
Christian Beisland: Conception; data collection; editing and writing of manuscript; supervision. \u00d8yvind Ulvik: Conception; data collection; editing and writing of manuscript; supervision.\u00d8yvind Ulvik has acted as a consultant for Olympus. The other authors have nil to declare."}